Adds an alternative static jpg loader.
The jpg loader copied the jpeg decoding implementation from this open-source
repo: https://github.com/richgel999/jpeg-compressor
That open-source project is in the public domain, so there is no restriction
on copying it.
note: jpgd.cpp is a modified version (decompress_jpeg_image_from_stream was changed
to return BGRA).
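The channel-order change boils down to swapping the red and blue bytes of each
decoded pixel. A minimal sketch of that idea (hypothetical helper name, not the
actual jpgd.cpp diff):

```cpp
#include <cstdint>
#include <cstddef>

// Swap R and B in-place so an RGBA scanline becomes BGRA.
static void rgbaToBgra(uint8_t* scanline, size_t pixels)
{
    for (size_t i = 0; i < pixels; ++i) {
        uint8_t* px = scanline + i * 4;
        uint8_t r = px[0];
        px[0] = px[2];   // blue moves into the first channel
        px[2] = r;       // red moves into the third channel
    }
}
```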
meson's find_library() throws an error when the package cannot be found.
A static png library has been added, so the build should still pass when the package is not found.
Therefore, the unnecessary find_library() call is removed.
Copied only the necessary decoding functions from the lodepng open-source project.
See: https://lodev.org/lodepng/
additional changes:
- disabled crc checks to keep the size optimal.
- converted the format bgr -> rgb for our png example.
We still don't have a concrete plan for the image formats;
we should fix the conversion methods between bgra <-> rgba.
@Issue: https://github.com/Samsung/thorvg/issues/594
1. Remove the macro function. The existing macro
   was just doing a meaningless nested `return false`.
2. Extract the logic that finds the type into a function (see the sketch after this list).
3. The SimpleXMLType::Error case is not actually used;
   in case of invalid XML, `return false` immediately.
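A minimal sketch of the refactor idea (the helper name and the enumerator list
are illustrative, not the actual tvgXmlParser code): type detection lives in one
small function, and invalid XML fails fast with `return false`.

```cpp
enum class SimpleXMLType { Open, Close, Data, Processing, Doctype, Comment };

// Identify the token type at 'itr'; return false right away for invalid XML.
static bool getXMLType(const char* itr, const char* itrEnd, SimpleXMLType& type)
{
    if (itr + 1 >= itrEnd) return false;                     // truncated/invalid input
    if (itr[1] == '/') type = SimpleXMLType::Close;
    else if (itr[1] == '?') type = SimpleXMLType::Processing;
    else if (itr[1] == '!') type = SimpleXMLType::Doctype;   // simplified: also covers comments
    else type = SimpleXMLType::Open;
    return true;
}
```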
This patch has two purposes:
1. revise the loader infrastructure to support both statically and externally linked loaders.
2. add a template for static jpg/png loaders alongside the external jpg/png loaders.
Our default loaders prefer static linking; the external loaders are only available
when their dependent libraries are found on the build system.
You might wonder why we keep the external loaders as well:
they might be faster than the static loaders, since the popular libraries are likely to be well maintained
and finely optimized.
Thus in this patch, meson tries to enable the external loaders first
and checks whether their dependencies were found;
if that fails, it falls back to the default static loaders.
After this patch, we need contributions for the actual static jpg/png loader implementations.
@Issue: https://github.com/Samsung/thorvg/issues/594
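A rough compile-time sketch of the fallback idea (the macro name is hypothetical,
not an actual ThorVG build symbol, and the real selection happens in meson):
prefer the external loader when its dependency was found, otherwise fall back to
the bundled static loader.

```cpp
#include <cstdio>

#if defined(HAVE_EXTERNAL_JPG)   // would be defined by the build when a system jpeg library is found
static const char* jpgLoaderKind() { return "external (system jpeg library)"; }
#else
static const char* jpgLoaderKind() { return "static (bundled decoder)"; }
#endif

int main()
{
    printf("jpg loader: %s\n", jpgLoaderKind());
    return 0;
}
```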
"runtime error: load of misaligned address 0x7fb67895c815 for type 'unsigned int', which requires 4 byte alignment"
This is not actually a functional issue, but we can resolve it with an easy workaround
so that we don't have to see this report repeatedly.
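The usual workaround, sketched below (not necessarily the exact ThorVG change),
is to read the value through memcpy instead of dereferencing a potentially
misaligned pointer:

```cpp
#include <cstdint>
#include <cstring>

// memcpy has no alignment requirement, so the sanitizer stays quiet
// and the compiler still emits a plain load on common targets.
static inline uint32_t readU32(const uint8_t* p)
{
    uint32_t v;
    memcpy(&v, p, sizeof(v));
    return v;
}
```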
Currently, Paint::bounds() returns the coordinates in their raw (untransformed) state;
the values are not very useful if the paint object has transformed children.
Thus, we extend the feature with an additional "transformed" parameter
so that, on demand, the coordinates are returned after the transformation is applied.
This is also necessary for the tvg format, since we need the exact view size of the scene.
The previous api is deprecated and a new api is introduced to replace it.
@APIs:
+ Result Paint::bounds(float* x, float* y, float* w, float* h, bool transformed) const noexcept;
- Result Paint::bounds(float* x, float* y, float* w, float* h) const noexcept;
@Issues: https://github.com/Samsung/thorvg/issues/746
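A usage sketch for the new API listed above (the surrounding setup is illustrative):

```cpp
#include <thorvg.h>
#include <cstdio>

void printBounds(const tvg::Paint* paint)
{
    float x, y, w, h;
    // true -> bounds after the paint's transformation is applied;
    // false -> the raw, untransformed coordinates (previous behavior).
    if (paint->bounds(&x, &y, &w, &h, true) == tvg::Result::Success) {
        printf("transformed bounds: %.2f %.2f %.2f %.2f\n", x, y, w, h);
    }
}
```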
Since the arc flags can only take the values 0 or 1, we reported
an error when a float value was given.
But since the EBNF grammar allows such notation, we misread some paths.
Removing the condition that prevents giving a float as a flag solves
this problem and is in agreement with the w3 specs.
This reserved count was simply missed.
Aside from that, the tvg_loader logic is not well organized (its behavior is hard to follow);
we can refine it by recovering the data tree structure in reverse order.
@Issues: https://github.com/Samsung/thorvg/issues/768
By choosing the compress option, tvg tries to compress the data to reduce the binary size.
Since compression is a double-edged sword, we provide an option so that users
can select it on demand. In general, compression is better than no compression.
After profiling, we decided to use Guilherme R. Lampert's encoder/decoder.
Here are the profiling results:
test.tvg: 296037 -> 243411 bytes (-17%)
tiger.tvg: 54568 -> 50622 bytes (-7%)
image-embedded.tvg: 2282 -> 1231 bytes (-46%)
@Issue: https://github.com/Samsung/thorvg/issues/639
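A usage sketch (this assumes the compress option is exposed as a boolean
parameter on tvg::Saver::save(); check the actual header for the exact signature):

```cpp
#include <thorvg.h>
#include <memory>

void exportScene(std::unique_ptr<tvg::Paint> scene)
{
    auto saver = tvg::Saver::gen();
    if (!saver) return;
    // assumed signature: the last argument enables the LZW compression on demand
    saver->save(std::move(scene), "scene.tvg", true);
    saver->sync();
}
```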
About compression method:
Lempel–Ziv–Welch (LZW) encoder/decoder by Guilherme R. Lampert
This is the compression scheme used by the GIF image format and the Unix 'compress' tool.
The main differences in this implementation are that End Of Input (EOI) and Clear Codes (CC)
are not stored in the output, and the max code length is 12 bits, vs 16 in compress.
EOI is simply detected by the end of the data stream, while CC happens if the
dictionary gets filled. Data is written/read from bit streams, which handle
byte-alignment for us in a transparent way.
The decoder relies on the hardcoded data layout produced by the encoder, since
no additional reconstruction data is added to the output, so they must match.
The nice thing about LZW is that we can reconstruct the dictionary directly from
the stream of codes generated by the encoder, so this avoids storing additional
headers in the bit stream.
The output code length is variable. It starts with the minimum number of bits
required to store the base byte-sized dictionary and automatically increases
as the dictionary gets larger (it starts at 9-bits and grows to 10-bits when
code 512 is added, then 11-bits when 1024 is added, and so on). If the dictionary
is filled (4096 items for a 12-bit dictionary), the whole thing is cleared and
the process starts over. This is the main reason why the encoder and the decoder
must match perfectly, since the lengths of the codes will not be specified with
the data itself.
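To make the growth/clear behavior concrete, here is a compact encoding sketch
(illustration only, not the Lampert codec used by tvg): codes start at 9 bits,
the width grows with the dictionary, and the dictionary is rebuilt once it holds
4096 entries. A real encoder would additionally pack each emitted code into the
bit stream using the current 'codeBits' width.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

static std::vector<uint16_t> lzwEncode(const std::vector<uint8_t>& in)
{
    constexpr int MaxBits = 12;
    constexpr size_t MaxCodes = 1u << MaxBits;             // 4096 entries
    std::map<std::string, uint16_t> dict;
    auto reset = [&dict]() {
        dict.clear();
        for (int i = 0; i < 256; ++i) dict[std::string(1, char(i))] = uint16_t(i);
    };
    reset();

    int codeBits = 9;                                      // minimum width once codes exceed 255
    std::vector<uint16_t> out;
    std::string w;

    for (uint8_t c : in) {
        std::string wc = w + char(c);
        if (dict.count(wc)) { w = wc; continue; }
        out.push_back(dict[w]);                            // emit the longest known prefix
        if (dict.size() >= MaxCodes) {                     // dictionary full -> clear and start over
            reset();
            codeBits = 9;
        } else {
            dict[wc] = uint16_t(dict.size());              // register the new sequence
            if (dict.size() > (1u << codeBits) && codeBits < MaxBits) ++codeBits;
        }
        w.assign(1, char(c));
    }
    if (!w.empty()) out.push_back(dict[w]);                // flush the pending prefix
    return out;
}
```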
This change protects against negative values being stored in the unsigned int
RenderRegion.x/y. It fixes a problem where a paint became invisible when the
ClipPath bounds were negative.
@issue: #704
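An illustrative sketch of the guard (hypothetical helper and simplified struct
layout, not the actual ThorVG code): clamp the signed bounds to zero before
storing them in the unsigned RenderRegion fields, so a negative clip origin
cannot wrap around to a huge value.

```cpp
#include <algorithm>
#include <cstdint>

struct RenderRegion { uint32_t x, y, w, h; };    // simplified layout for the sketch

static RenderRegion toRegion(int32_t x, int32_t y, uint32_t w, uint32_t h)
{
    RenderRegion r;
    r.x = static_cast<uint32_t>(std::max(x, 0)); // a negative origin would otherwise wrap
    r.y = static_cast<uint32_t>(std::max(y, 0));
    r.w = w;
    r.h = h;
    return r;
}
```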