There are multiple wrong size calculations during the tvg optimization.
This patch fixes them.
1. Picture needs to return the current desired size, because
the saved geometry is transformed; the final size should be
recovered as the base size from the loader.
2. ClipPath missed multiplying the parent's transform; this is fixed.
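A minimal sketch of fix 2, using an illustrative matrix type and multiply
helper (assumptions for illustration, not ThorVG's actual internals):

    // Illustrative 3x3 row-major matrix; assumption, not the real type.
    struct Matrix { float e11, e12, e13, e21, e22, e23, e31, e32, e33; };

    static Matrix multiply(const Matrix& a, const Matrix& b)
    {
        return {
            a.e11 * b.e11 + a.e12 * b.e21 + a.e13 * b.e31,
            a.e11 * b.e12 + a.e12 * b.e22 + a.e13 * b.e32,
            a.e11 * b.e13 + a.e12 * b.e23 + a.e13 * b.e33,
            a.e21 * b.e11 + a.e22 * b.e21 + a.e23 * b.e31,
            a.e21 * b.e12 + a.e22 * b.e22 + a.e23 * b.e32,
            a.e21 * b.e13 + a.e22 * b.e23 + a.e23 * b.e33,
            a.e31 * b.e11 + a.e32 * b.e21 + a.e33 * b.e31,
            a.e31 * b.e12 + a.e32 * b.e22 + a.e33 * b.e32,
            a.e31 * b.e13 + a.e32 * b.e23 + a.e33 * b.e33
        };
    }

    // The clipper must be transformed by its parent as well:
    // clipTransform = multiply(parentTransform, clipTransform);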
@Issue: https://github.com/Samsung/thorvg/issues/752
Both functions are implemented using 128-bit registers.
avxRasterTranslucentRect is around 5x faster than cRasterTranslucentRect (i7-8700 CPU, Coffee Lake).
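For reference, a minimal sketch of the blending idea with 128-bit registers
(function and buffer names are illustrative, and >> 8 approximates / 255;
this is not the exact production code):

    #include <immintrin.h>
    #include <cstdint>

    // Blend a constant premultiplied ARGB color over one span,
    // 4 pixels per iteration: dst = color + dst * (255 - colorAlpha).
    static void blendSpanSse(uint32_t* dst, int len, uint32_t color)
    {
        const uint32_t ialpha = 255 - (color >> 24);
        const __m128i vColor = _mm_set1_epi32((int)color);
        const __m128i vIa = _mm_set1_epi16((short)ialpha);
        const __m128i zero = _mm_setzero_si128();

        int i = 0;
        for (; i + 4 <= len; i += 4) {
            __m128i d = _mm_loadu_si128((const __m128i*)(dst + i));
            // widen 8-bit channels to 16-bit and scale by the inverse alpha
            __m128i lo = _mm_mullo_epi16(_mm_unpacklo_epi8(d, zero), vIa);
            __m128i hi = _mm_mullo_epi16(_mm_unpackhi_epi8(d, zero), vIa);
            // >> 8, repack to bytes, then add the source color (saturating)
            d = _mm_packus_epi16(_mm_srli_epi16(lo, 8), _mm_srli_epi16(hi, 8));
            _mm_storeu_si128((__m128i*)(dst + i), _mm_adds_epu8(vColor, d));
        }
        for (; i < len; ++i) {    // scalar tail
            const uint32_t d = dst[i];
            const uint32_t rb = ((d & 0x00ff00ffu) * ialpha) >> 8 & 0x00ff00ffu;
            const uint32_t ag = ((d >> 8 & 0x00ff00ffu) * ialpha) & 0xff00ff00u;
            dst[i] = color + (rb | ag);
        }
    }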
This patch adds the saveTvg() function to thorvgwasm.cpp.
The function saves a tvg file using the File System API.
To enable the filesystem, the build flag -s FORCE_FILESYSTEM=1 was added.
The resulting thorvg-wasm.js grows from about 68kB to about 125kB.
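The binding looks roughly like this (a sketch: mPicture and the output name
are illustrative, and it assumes ThorVG's Saver API writing into Emscripten's
in-memory filesystem):

    #include <thorvg.h>
    #include <memory>

    bool saveTvg(tvg::Picture* mPicture)
    {
        auto saver = tvg::Saver::gen();
        if (!saver) return false;
        auto dup = std::unique_ptr<tvg::Paint>(mPicture->duplicate());
        if (saver->save(std::move(dup), "file.tvg") != tvg::Result::Success)
            return false;
        saver->sync();    // the bytes now live in the wasm filesystem
        return true;
    }

On the JS side the saved bytes can then be read back (e.g. with
FS.readFile("file.tvg")), which is why FORCE_FILESYSTEM is required.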
Picture must return the boundary info: (0, 0, w, h).
We assume that it has a designated picture size.
Aside from this issue, the bounds() API must be reviewed;
its behavior is quite troublesome: unless the result is
transformed, its information is of little use.
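As a usage sketch, the expectation is (file name illustrative):

    #include <thorvg.h>

    void checkBounds()
    {
        auto picture = tvg::Picture::gen();
        picture->load("test.svg");          // has a designated picture size
        float x, y, w, h;
        picture->bounds(&x, &y, &w, &h);    // expected: x == 0, y == 0,
                                            // w/h == the designated size
    }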
@Issue: https://github.com/Samsung/thorvg/issues/741
Changes:
Added the neonRasterTranslucentRect implementation. Rendering was tested on
32 Lottie files: ~18.1 FPS was measured without NEON vs. ~20.1 FPS with NEON.
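The blend itself looks roughly like this with 64-bit NEON lanes (a sketch
with illustrative names, where >> 8 approximates / 255; not the exact patch):

    #include <arm_neon.h>
    #include <cstdint>

    // dst = color + dst * (255 - colorAlpha), 2 pixels per iteration.
    static void blendSpanNeon(uint32_t* dst, int len, uint32_t color)
    {
        const uint32_t ialpha = 255 - (color >> 24);
        const uint8x8_t vIa = vdup_n_u8((uint8_t)ialpha);
        const uint8x8_t vColor = vreinterpret_u8_u32(vdup_n_u32(color));

        int i = 0;
        for (; i + 2 <= len; i += 2) {
            uint8x8_t d = vreinterpret_u8_u32(vld1_u32(dst + i));
            uint16x8_t m = vmull_u8(d, vIa);                    // dst * ialpha
            uint8x8_t out = vqadd_u8(vColor, vshrn_n_u16(m, 8));
            vst1_u32(dst + i, vreinterpret_u32_u8(out));
        }
        for (; i < len; ++i) {    // scalar tail
            const uint32_t d = dst[i];
            const uint32_t rb = ((d & 0x00ff00ffu) * ialpha) >> 8 & 0x00ff00ffu;
            const uint32_t ag = ((d >> 8 & 0x00ff00ffu) * ialpha) & 0xff00ff00u;
            dst[i] = color + (rb | ag);
        }
    }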
By choosing the compress option, tvg tries to compress the data to reduce the binary size.
Since compression is a double-edged sword, we provide an option so users
can select it on demand. In most cases, compression is the better choice.
After profiling, we decided to use Guilherme R. Lampert's encoder/decoder.
Here is the profiling result:
test.tvg: 296037 -> 243411 (-17%)
tiger.tvg: 54568 -> 50622 (-7%)
image-embedded.tvg: 2282 -> 1231 (-46%)
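Usage sketch of the option (how the flag is exposed on Saver::save() is an
assumption here; the path is illustrative):

    #include <thorvg.h>
    #include <memory>

    void saveCompressed(std::unique_ptr<tvg::Paint> paint)
    {
        auto saver = tvg::Saver::gen();
        if (!saver) return;
        saver->save(std::move(paint), "tiger.tvg", true);   // true: compress
        saver->sync();
    }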
@Issue: https://github.com/Samsung/thorvg/issues/639
About compression method:
Lempel–Ziv–Welch (LZW) encoder/decoder by Guilherme R. Lampert
This is the compression scheme used by the GIF image format and the Unix 'compress' tool.
The main differences in this implementation are that End Of Input (EOI) and Clear Codes (CC)
are not stored in the output, and the max code length is 12 bits, vs. 16 in compress.
EOI is simply detected by the end of the data stream, while CC happens if the
dictionary gets filled. Data is written/read from bit streams, which handle
byte-alignment for us in a transparent way.
The decoder relies on the hardcoded data layout produced by the encoder, since
no additional reconstruction data is added to the output, so they must match.
The nice thing about LZW is that we can reconstruct the dictionary directly from
the stream of codes generated by the encoder, so this avoids storing additional
headers in the bit stream.
The output code length is variable. It starts with the minimum number of bits
required to store the base byte-sized dictionary and automatically increases
as the dictionary gets larger (it starts at 9-bits and grows to 10-bits when
code 512 is added, then 11-bits when 1024 is added, and so on). If the dictionary
is filled (4096 items for a 12-bit dictionary), the whole thing is cleared and
the process starts over. This is the main reason why the encoder and the decoder
must match perfectly: the code lengths are not stored alongside
the data itself.
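A tiny sketch of that width rule (illustrative, not the actual Lampert code):

    // 9 bits cover the base 256-entry dictionary plus the first new codes;
    // widen whenever the next code index would no longer fit, and clear
    // once all 4096 12-bit codes are used.
    constexpr int MaxBits = 12;
    constexpr int MaxCodes = 1 << MaxBits;    // 4096 entries

    void onCodeAdded(int& count, int& bits)
    {
        if (++count == MaxCodes) {            // dictionary full: clear
            count = 256;                      // back to the byte-sized base
            bits = 9;
        } else if (count == (1 << bits)) {    // 512 -> 10 bits, 1024 -> 11 ...
            ++bits;
        }
    }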
Calling picture->load() after it had already been called once resulted in a
segmentation fault or a memory leak (depending on whether a vector (svg, tvg)
or a raster (jpg, png, raw) file was loaded).
This patch checks whether an image has already been loaded; if so, load()
returns InsufficientCondition.
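Behavioral sketch after the patch (file names illustrative):

    #include <thorvg.h>

    void reload()
    {
        auto picture = tvg::Picture::gen();
        picture->load("first.svg");              // Result::Success
        auto r = picture->load("second.png");    // Result::InsufficientCondition
        (void)r;                                 // instead of a crash or a leak
    }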
@issue: fixes #719
If the last contour dispatch is treated as a Close command while the actual
command is not Close, the close tag is written for an open contour.
In this case, stroke rendering is buggy.
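A sketch of the intended fix (the contour structure and stroker hooks are
assumptions, not the real stroker code):

    #include <cstdint>

    void strokeBegin(uint32_t idx);    // assumed hooks, declared for the sketch
    void strokeLineTo(uint32_t idx);
    void strokeClose();
    void strokeEnd();

    struct Contour { uint32_t first, last; bool closed; };

    // Only emit the close tag when a Close command was actually recorded;
    // otherwise finish the contour as an open path.
    static void dispatch(const Contour& c)
    {
        strokeBegin(c.first);
        for (auto i = c.first + 1; i <= c.last; ++i) strokeLineTo(i);
        if (c.closed) strokeClose();
        else strokeEnd();
    }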
This optimization strategy is to merge shapes.
If two shapes are on the same layer and have the same properties except for
their paths, we can merge them into one; this helps to build a simpler
scene-tree, reduces runtime memory, and speeds up rendering.
As far as I checked with tiger.svg, this removes 142 shape nodes and
decreases the binary size: 60537 -> 54568.
Overall, this patch reduces binary size by ~4% on average across our example svgs.
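The merge rule as a sketch (the helper functions are assumptions):

    #include <thorvg.h>

    bool samePropertiesExceptPath(const tvg::Shape* a, const tvg::Shape* b);
    void removeNode(tvg::Shape* s);

    // If two sibling shapes differ only in path data, fold the second
    // shape's path into the first and drop the second node.
    void tryMerge(tvg::Shape* prev, tvg::Shape* curr)
    {
        if (!samePropertiesExceptPath(prev, curr)) return;
        const tvg::PathCommand* cmds = nullptr;
        const tvg::Point* pts = nullptr;
        auto cmdCnt = curr->pathCommands(&cmds);
        auto ptsCnt = curr->pathCoords(&pts);
        prev->appendPath(cmds, cmdCnt, pts, ptsCnt);    // merge geometry
        removeNode(curr);                               // one node fewer
    }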
This change protects against negative values in the unsigned ints
RenderRegion.x/y. It fixes a problem where a paint became invisible
if the ClipPath bounds were negative.
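Sketch of the guard (field types assumed):

    #include <cstdint>

    struct RenderRegion { uint32_t x, y, w, h; };    // x/y are unsigned

    // Compute the region from signed bounds, clamping negatives to zero so
    // a negative clip origin can no longer wrap to a huge unsigned value.
    static RenderRegion toRegion(int32_t x, int32_t y, int32_t w, int32_t h)
    {
        RenderRegion r;
        r.x = (uint32_t)(x > 0 ? x : 0);
        r.y = (uint32_t)(y > 0 ? y : 0);
        r.w = (uint32_t)(x < 0 ? (w + x > 0 ? w + x : 0) : w);
        r.h = (uint32_t)(y < 0 ? (h + y > 0 ? h + y : 0) : h);
        return r;
    }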
@issue: #704
This optimizes binary size by skipping a scene if it has only one child.
Though the size reduction is trivial (avg 0.4% across our example svgs, as far as I checked),
it also reduces loading work and runtime memory.
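Sketch of the skip (the serializer helpers are assumptions):

    #include <thorvg.h>

    bool isScene(const tvg::Paint* p);
    size_t childCount(const tvg::Paint* p);
    const tvg::Paint* onlyChild(const tvg::Paint* p);
    void serializePaint(const tvg::Paint* p);

    // While serializing, a scene that carries nothing but a single child
    // is skipped and its child is written directly.
    void serialize(const tvg::Paint* p)
    {
        while (isScene(p) && childCount(p) == 1) p = onlyChild(p);
        serializePaint(p);
    }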