We are introducing the FontLoader, which differs slightly
from the ImageLoader in terms of features. To support both
properly, we have separated the loader functionality into
FontLoader and ImageLoader. This lets the LoadModule be
adapted optimally to each case.
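Below is a minimal sketch of that split with assumed class and member names
(not the actual ThorVG declarations): a shared LoadModule base, with the
image- and font-specific features living in separate subclasses.

#include <cstdint>

//Shared base for everything a loader has in common (assumed shape).
struct LoadModule
{
    virtual ~LoadModule() = default;
    virtual bool open(const char* data, uint32_t size) = 0;
};

//Image-oriented features: decoded surface size, colorspace, ...
struct ImageLoader : LoadModule
{
    float w = 0.0f, h = 0.0f;
    bool open(const char*, uint32_t) override { return true; }   //decode the header
};

//Font-oriented features: glyph outlines, metrics, ...
struct FontLoader : LoadModule
{
    bool open(const char*, uint32_t) override { return true; }   //parse the font tables
};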
A loader cache is applied to conserve memory:
if the input data already has an active loader,
the cache promptly returns that loader instead of creating a new one.
This saves a significant amount of memory for duplicated resources.
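A simplified illustration of the cache lookup, reusing the hypothetical
LoadModule/ImageLoader types from the sketch above (real code would key on a
hash of the data rather than copying it):

#include <cstdint>
#include <memory>
#include <string>
#include <unordered_map>

struct LoaderCache
{
    std::unordered_map<std::string, std::shared_ptr<LoadModule>> active;

    std::shared_ptr<LoadModule> get(const char* data, uint32_t size)
    {
        std::string key(data, size);                    //illustrative key: the raw bytes
        auto it = active.find(key);
        if (it != active.end()) return it->second;      //duplicated resource: reuse the live loader
        auto loader = std::make_shared<ImageLoader>();  //otherwise create and register a new one
        if (!loader->open(data, size)) return nullptr;
        active.emplace(std::move(key), loader);
        return loader;
    }
};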
binary diff: -400 bytes
I've added a new parameter, const string& resourcePath, to load external images in lottie.
Result load(const char* data, uint32_t size, const string& mimeType, bool copy, const string& resourcePath)
Note: tvgLoadModule will get a new overridden method `open`; this does not affect anything other than animation.
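A rough sketch of how a loader might consume the new argument; the class and
member names here are hypothetical, not the actual ThorVG internals.

#include <cstdint>
#include <string>

//Hypothetical loader: the overridden open() keeps resourcePath so that
//external image references inside the lottie data can later be resolved
//relative to that directory.
struct LottieLoader
{
    std::string resourcePath;   //base directory for external assets

    bool open(const char* data, uint32_t size, const std::string& rpath, bool copy)
    {
        resourcePath = rpath;
        return parse(data, size, copy);
    }

    bool parse(const char*, uint32_t, bool) { return true; }   //stub for the sketch
};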
Issue: #1793
- The TVG binary format now consistently compresses the data.
- Removed redundant internal properties as part of this change.
Please note that this change will break compatibility with the TVG file format from version 1.0 onward.
Issue: https://github.com/thorvg/thorvg/issues/1372
These new APIs enable users to easily modify the motion scene.
The data structure of the paints has been changed from an array to a list
(see the usage sketch after the API list below).
@APIs:
std::list<Paint*>& Canvas::paints() noexcept;
std::list<Paint*>& Scene::paints() noexcept;
@Deprecated:
Result Canvas::reserve(uint32_t size) noexcept;
Result Scene::reserve(uint32_t size) noexcept;
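A usage sketch of the new list-based API; it assumes that edits made through
the returned list are picked up on the next update().

#include <thorvg.h>

//Tweak the live paints of a canvas in place instead of rebuilding the scene.
void dimScene(tvg::Canvas* canvas)
{
    for (auto* p : canvas->paints()) {   //std::list<tvg::Paint*>&
        p->opacity(128);                 //modify a live paint directly
    }
    canvas->update();                    //reflect the changes on the next draw
}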
@Issue: https://github.com/thorvg/thorvg/issues/1203
When parsing a binary stored in a char buffer,
the interpreter can access misaligned memory when it reads the buffer through a typed pointer.
To prevent that, copy the bytes with memcpy into a properly aligned tvg object and pass that instead.
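The pattern behind the fix, in general form (the exact call site in the
interpreter differs):

#include <cstdint>
#include <cstring>

//Read a uint32_t from an arbitrary offset of a char buffer.
uint32_t readU32(const char* ptr)
{
    //Dereferencing reinterpret_cast<const uint32_t*>(ptr) may perform a
    //misaligned load (undefined behavior); memcpy has no alignment requirement.
    uint32_t value;
    std::memcpy(&value, ptr, sizeof(value));
    return value;
}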
Introducing the gradient transform() APIs and changing the gradient
algorithms made it possible to apply the shape's transformation
before saving the tvg file, in case the shape (or its stroke)
has a fill.
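A conceptual sketch of the idea (not the actual saver code), assuming the
Fill::transform() setter introduced here and a gradient that carries no
transform of its own yet:

#include <thorvg.h>

//Before serializing, push the shape's matrix into its gradient fill so the
//saved gradient already lives in the shape's final coordinate space.
static void bakeFillTransform(tvg::Shape* shape, const tvg::Matrix& shapeMatrix)
{
    if (auto* fill = const_cast<tvg::Fill*>(shape->fill())) {
        fill->transform(shapeMatrix);
    }
}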
This reverts commit cd5116b053.
Ah, this breaks the Stress example because Picture::duplicate() is not available...
Need to reconsider and come back to this later.
* common: added colorSpace() function
This patch introduces the colorSpace() function for the SW and GL engines.
* infra: change LoadModule::read() into LoadModule::read(uint32_t colorspace)
This patch changes LoadModule::read() into LoadModule::read(uint32_t colorspace); see the sketch after this list.
* picture: implement passing the colorspace into loaders
This patch implements passing the colorspace into the loaders.
The loader's read() is now called on the first update.
* external_jpg_loader: support colorspaces
* external_png_loader: support colorspaces
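A rough sketch of the reshaped interface with simplified, assumed names (not
the actual ThorVG declarations):

#include <cstdint>

//The target colorspace travels from Picture's first update() down to the loader.
struct LoadModule
{
    virtual ~LoadModule() = default;
    virtual bool read(uint32_t colorSpace) = 0;   //was: read()
};

struct ExternalPngLoader : LoadModule
{
    bool read(uint32_t colorSpace) override
    {
        //decode directly into the channel order the engine requested
        //(for instance ABGR8888 vs ARGB8888) instead of a fixed format
        return decode(colorSpace);
    }
    bool decode(uint32_t) { return true; }        //stub for the sketch
};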
"runtime error: load of misaligned address 0x7fb67895c815 for type 'unsigned int', which requires 4 byte alignment"
This is not actually an issue, but we can resolve it with an easy workaround,
since we don't need to keep seeing this report.
This reserved count was simply missed.
Aside from that, the tvg_loader logic is not well organized (hard to follow);
we can refine it by recovering the data tree structure in reverse order.
@Issues: https://github.com/Samsung/thorvg/issues/768
By choosing the compress option, tvg tries to compress the data to reduce the binary size.
Since compression is a double-edged sword, we provide an option so users can
select it on demand (a usage sketch follows the profiling numbers below).
In most cases, though, compression is the better choice.
After profiling, we decided to use Guilherme R. Lampert's encoder/decoder.
Here are the profiling results (file sizes in bytes):
test.tvg: 296037 -> 243411 (-17%)
tiger.tvg: 54568 -> 50622 (-7%)
image-embedded.tvg: 2282 -> 1231 (-46%)
@Issue: https://github.com/Samsung/thorvg/issues/639
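For reference, here is how the option surfaces at the API level. This is a
usage sketch; the Saver signature shown reflects the public API as I
understand it and its defaults may differ.

#include <memory>
#include <thorvg.h>

void exportScene(std::unique_ptr<tvg::Paint> scene)
{
    auto saver = tvg::Saver::gen();
    saver->save(std::move(scene), "scene.tvg", true);   //true: compress the tvg payload
    saver->sync();                                       //block until the file is written
}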
About compression method:
Lempel–Ziv–Welch (LZW) encoder/decoder by Guilherme R. Lampert
This is the compression scheme used by the GIF image format and the Unix 'compress' tool.
The main differences in this implementation are that End Of Input (EOI) and Clear Codes (CC)
are not stored in the output, and the max code length is 12 bits, vs 16 in compress.
EOI is simply detected by the end of the data stream, while CC happens if the
dictionary gets filled. Data is written/read from bit streams, which handle
byte-alignment for us in a transparent way.
The decoder relies on the hardcoded data layout produced by the encoder, since
no additional reconstruction data is added to the output, so they must match.
The nice thing about LZW is that we can reconstruct the dictionary directly from
the stream of codes generated by the encoder, so this avoids storing additional
headers in the bit stream.
The output code length is variable. It starts with the minimum number of bits
required to store the base byte-sized dictionary and automatically increases
as the dictionary gets larger (it starts at 9 bits and grows to 10 bits when
code 512 is added, then 11 bits when 1024 is added, and so on). If the dictionary
is filled (4096 items for a 12-bit dictionary), the whole thing is cleared and
the process starts over. This is the main reason why the encoder and the decoder
must match perfectly, since the lengths of the codes are not stored with
the data itself.
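To make the widening rule concrete, here is a small illustrative helper (not
the actual encoder/decoder code) that tracks the current code width and
signals when the dictionary must be cleared; running the same rule on both
sides is what keeps encoder and decoder in sync.

#include <cstdint>

struct CodeWidth
{
    uint32_t bits = 9;         //the 256 literal byte codes plus new entries need 9 bits at first
    uint32_t nextCode = 256;   //first dictionary index after the literal byte codes

    //Register one new dictionary entry; returns true when the 12-bit
    //dictionary (4096 items) is full and everything must be cleared.
    bool add()
    {
        if (nextCode == 4096) { bits = 9; nextCode = 256; return true; }
        if (nextCode == (1u << bits)) ++bits;   //512 -> 10 bits, 1024 -> 11, 2048 -> 12
        ++nextCode;
        return false;
    }
};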