picture must return the boundary info: 0, 0, w, h.
We assume that it has a designated picture size.
Aside from this issue, the bounds() api must be reviewed; its current behavior is quite problematic:
unless the transform is applied to the result, the information is of little use.
@Issue: https://github.com/Samsung/thorvg/issues/741
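A minimal sketch of the expected behavior using the public bounds() API; the file name is hypothetical:

    #include <thorvg.h>

    //Illustrative only: a loaded picture should report its designated size.
    void checkPictureBounds()
    {
        auto picture = tvg::Picture::gen();
        if (picture->load("example.svg") != tvg::Result::Success) return;  //"example.svg" is hypothetical

        float x, y, w, h;
        picture->bounds(&x, &y, &w, &h);  //expected: x == 0, y == 0, w/h == designated picture size
    }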
Changes:
Added the neonRasterTranslucentRect implementation. Rendering was tested on
32 Lottie files: ~18.1 FPS was measured without NEON and ~20.1 FPS with NEON.
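For reference, a simplified scalar sketch of the per-pixel work that the NEON path vectorizes; the helper names and layout are illustrative, not the actual ThorVG internals:

    #include <cstdint>

    //Blend a premultiplied source color over the destination, keeping
    //(255 - source alpha) of the destination (ialpha).
    static inline uint32_t alphaBlend(uint32_t c, uint32_t a)
    {
        return ((((c >> 8) & 0x00ff00ff) * a) & 0xff00ff00) +
               ((((c & 0x00ff00ff) * a) >> 8) & 0x00ff00ff);
    }

    void rasterTranslucentRect(uint32_t* buffer, uint32_t stride, uint32_t x, uint32_t y,
                               uint32_t w, uint32_t h, uint32_t color, uint32_t ialpha)
    {
        for (uint32_t row = 0; row < h; ++row) {
            auto dst = buffer + (y + row) * stride + x;
            for (uint32_t col = 0; col < w; ++col) {
                dst[col] = color + alphaBlend(dst[col], ialpha);
            }
        }
    }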
By choosing the compress option, tvg tries to compress the data to reduce the binary size.
Since compression is a double-edged sword, we provide an option so that users
can select it on demand. In general, compression is preferable to non-compression.
After profiling, we decided to use Guilherme R. Lampert's encoder/decoder.
Here is the profiling result:
test.tvg: 296037 -> 243411 (-17%)
tiger.tvg: 54568 -> 50622 (-7%)
image-embedded.tvg: 2282 -> 1231 (-46%)
@Issue: https://github.com/Samsung/thorvg/issues/639
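A usage sketch, assuming the compress flag is exposed as the last argument of Saver::save(); the output file name is illustrative:

    #include <thorvg.h>
    #include <memory>

    void saveCompressed(std::unique_ptr<tvg::Paint> paint)
    {
        auto saver = tvg::Saver::gen();
        saver->save(std::move(paint), "output.tvg", true);  //true: compress the tvg data
        saver->sync();                                      //wait until saving is done
    }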
About compression method:
Lempel–Ziv–Welch (LZW) encoder/decoder by Guilherme R. Lampert
This is the compression scheme used by the GIF image format and the Unix 'compress' tool.
The main differences in this implementation are that End Of Input (EOI) and Clear Codes (CC)
are not stored in the output and the max code length is 12 bits, vs 16 in compress.
EOI is simply detected by the end of the data stream, while CC happens if the
dictionary gets filled. Data is written/read from bit streams, which handle
byte-alignment for us in a transparent way.
The decoder relies on the hardcoded data layout produced by the encoder, since
no additional reconstruction data is added to the output, so they must match.
The nice thing about LZW is that we can reconstruct the dictionary directly from
the stream of codes generated by the encoder, so this avoids storing additional
headers in the bit stream.
The output code length is variable. It starts with the minimum number of bits
required to store the base byte-sized dictionary and automatically increases
as the dictionary gets larger (it starts at 9 bits and grows to 10 bits when
code 512 is added, then 11 bits when 1024 is added, and so on). If the dictionary
is filled (4096 items for a 12-bit dictionary), the whole thing is cleared and
the process starts over. This is the main reason why the encoder and the decoder
must match perfectly: the code lengths are not stored with the data itself.
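A condensed sketch of that code-width rule; the constants follow the text above (9-bit start, 12-bit max), the helper itself is illustrative:

    //Called whenever the encoder/decoder adds a new dictionary entry.
    constexpr int StartBits = 9;
    constexpr int MaxBits = 12;

    void onCodeAdded(int& nextCode, int& codeBits)
    {
        ++nextCode;                                    //a new dictionary entry was created
        if (nextCode == (1 << codeBits) && codeBits < MaxBits) {
            ++codeBits;                                //e.g. 9 -> 10 bits once code 512 is added
        }
        if (nextCode == (1 << MaxBits)) {              //4096 entries: clear and start over
            nextCode = 256;                            //base byte-sized dictionary (no EOI/CC codes)
            codeBits = StartBits;
        }
    }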
Calling picture->load() after it had already been called once resulted in a
segmentation fault or a memory leak (depending on whether a vector (svg, tvg)
or raster (jpg, png, raw) file was loaded).
This patch checks whether the image has already been loaded. If so, load()
returns InsufficientCondition.
@issue: fixes #719
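A minimal sketch of the added guard; the LoaderState struct and its 'loaded' member are illustrative, not the actual Picture internals:

    #include <thorvg.h>

    struct LoaderState
    {
        bool loaded = false;

        tvg::Result load(const char* path)
        {
            if (loaded) return tvg::Result::InsufficientCondition;  //a second load() is rejected
            loaded = true;
            //... dispatch to the matching vector/raster loader here ...
            return tvg::Result::Success;
        }
    };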
If the last dispatched contour carries a Close command but the actual command
is not Close, the close flag is written for an open contour.
In this case, stroke rendering is buggy.
So this optimization strategy is to merge shapes.
If two shapes are on the same layer and share the same properties except for their paths,
we can integrate the two shapes into one. This helps to build up a simpler
scene-tree, reduces the runtime memory, and speeds up rendering.
As far as I checked with tiger.svg, it removes 142 shape nodes and
decreases the binary size: 60537 -> 54568.
Overall, this patch reduces the binary size of our example svgs by 4% on average.
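A sketch of the merge rule using the public Shape API; the property checks shown are only a subset and the helpers are illustrative:

    #include <thorvg.h>

    //Two shapes qualify for merging when everything but the path matches.
    bool mergeable(const tvg::Shape* a, const tvg::Shape* b)
    {
        uint8_t r1, g1, b1, a1, r2, g2, b2, a2;
        a->fillColor(&r1, &g1, &b1, &a1);
        b->fillColor(&r2, &g2, &b2, &a2);
        if (r1 != r2 || g1 != g2 || b1 != b2 || a1 != a2) return false;
        if (a->strokeWidth() != b->strokeWidth()) return false;
        //... compare the remaining properties (fill rule, gradient, dash, ...) ...
        return true;
    }

    //Integrate the second shape's path into the first one.
    void merge(tvg::Shape* dst, const tvg::Shape* src)
    {
        const tvg::PathCommand* cmds; const tvg::Point* pts;
        auto cmdCnt = src->pathCommands(&cmds);
        auto ptsCnt = src->pathCoords(&pts);
        dst->appendPath(cmds, cmdCnt, pts, ptsCnt);
    }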
This change protects against negative values in the unsigned ints of
RenderRegion.x/y. It fixes a problem of invisible paint when the ClipPath
bounds were negative.
@issue: #704
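A sketch of the protection; the Region struct here is illustrative, not the actual RenderRegion:

    #include <cstdint>
    #include <algorithm>

    struct Region { uint32_t x, y, w, h; };

    //Clamp negative clip bounds to 0 before storing them in unsigned fields.
    Region toRegion(int32_t x, int32_t y, int32_t w, int32_t h)
    {
        Region r;
        r.x = static_cast<uint32_t>(std::max(x, 0));
        r.y = static_cast<uint32_t>(std::max(y, 0));
        r.w = static_cast<uint32_t>(std::max(w + std::min(x, 0), 0));  //shrink by the clipped part
        r.h = static_cast<uint32_t>(std::max(h + std::min(y, 0), 0));
        return r;
    }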
This optimizes the binary size by skipping a scene if it has only one child.
Though the size reduction is trivial (avg 0.4% as far as I checked with our example svgs),
we can reduce the loading work and the runtime memory as well.
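A sketch of the shortcut; the Node type and writer are illustrative, not the actual tvg saver:

    #include <vector>

    struct Node
    {
        std::vector<Node> children;
        bool hasOwnProperties = false;   //opacity, composition, transform, ...
    };

    void serialize(const Node& node)
    {
        if (node.children.size() == 1 && !node.hasOwnProperties) {
            serialize(node.children[0]);                //skip the redundant scene wrapper
            return;
        }
        //... write the node header here, then recurse into the children ...
        for (const auto& child : node.children) serialize(child);
    }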
Skip saving transform data by accumulating the transforms through the scene tree
and then applying the final transform to the path points.
Assuming each transform takes 36 bytes, in the worst case the cost grows linearly
with the paint node count if every paint has a transform.
Fundamentally, this patch saves that memory and keeps transforms only for bitmap pictures.
It also helps to reduce the loading/rendering workload, since no transform work
is needed after the conversion.
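A sketch of the idea; the helpers are illustrative: multiply the transforms while walking down the tree, then bake the result into the path points so the shape itself needs no stored Matrix:

    #include <thorvg.h>
    #include <cstdint>

    tvg::Point apply(const tvg::Matrix& m, const tvg::Point& p)
    {
        return {p.x * m.e11 + p.y * m.e12 + m.e13,
                p.x * m.e21 + p.y * m.e22 + m.e23};
    }

    void bakeTransform(tvg::Point* pts, uint32_t cnt, const tvg::Matrix& accumulated)
    {
        for (uint32_t i = 0; i < cnt; ++i) pts[i] = apply(accumulated, pts[i]);
    }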
Added error string printing on jpg image loading failure.
The error message helps developers find the corrupted jpg file.
The error message is not printed when opening from char* data, since in that
case the loaders are tried one by one.
The tvg binary format might break compatibility if any major features have been changed.
This is allowed only when the major version is upgraded.
Even in that case we still need to support backward compatibility,
so we can provide multiple binary interpreters and choose the proper one
based on the format version of the tvg binary being loaded.
Thus, you can add further interpreters if it becomes necessary in the future.
Our policy is to derive from the TvgBinInterpreterBase class so that every
interpreter runs through the same interface.
For example, if the major version is upgraded to 1.x, you can implement TvgBinInterpreter1.
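A sketch of that policy; the method names and version check are illustrative:

    #include <memory>
    #include <cstdint>

    struct TvgBinInterpreterBase
    {
        virtual ~TvgBinInterpreterBase() = default;
        virtual bool run(const char* data, uint32_t size) = 0;   //parse one tvg binary
    };

    struct TvgBinInterpreter1 : TvgBinInterpreterBase            //for a future 1.x format
    {
        bool run(const char* data, uint32_t size) override { /* parse 1.x data */ return true; }
    };

    std::unique_ptr<TvgBinInterpreterBase> chooseInterpreter(uint8_t versionMajor)
    {
        if (versionMajor >= 1) return std::make_unique<TvgBinInterpreter1>();
        return nullptr;   //the 0.x interpreter (omitted here) handles the current format
    }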
Separate the simd implementations into per-file units to make them easier to maintain.
Now that we have avx, c, and neon implementation bases,
we can add implementations to each of them.
Changes:
- Prepare a neon version of the ALPHA_BLEND API.
- Use ALPHA_BLEND_NEON in _translucentRle
Notes:
- _translucentRle with neon support reduces the execution time of this
function by ~300% (measured on a uint32_t 400 x 400 buffer).
- The API was tested on an ARMv7l device with a GCC 9.2 based toolchain. Results
on other devices could be different.
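An illustrative NEON sketch of the per-pixel blend, not the exact ThorVG code: one vmull_u8 multiplies all four channels of a pixel by the 8-bit alpha:

    #include <arm_neon.h>
    #include <cstdint>

    static inline uint32_t alphaBlendNeon(uint32_t c, uint8_t a)
    {
        uint8x8_t  color = vreinterpret_u8_u32(vdup_n_u32(c));   //the pixel's channels, duplicated
        uint16x8_t prod  = vmull_u8(color, vdup_n_u8(a));        //widen: channel * alpha
        uint8x8_t  out   = vshrn_n_u16(prod, 8);                 //narrow back: >> 8 per channel
        return vget_lane_u32(vreinterpret_u32_u8(out), 0);
    }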
A tvg file stores the version info of the thorvg that generated it,
in order to compare it with the runtime thorvg version when it's loaded.
For this, thorvg builds the current version symbol at the initialization step,
which the tvg loader can refer to.
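A sketch of the idea; the encoding and the compatibility rule are illustrative only:

    #include <cstdint>

    constexpr uint32_t versionCode(uint32_t major, uint32_t minor, uint32_t micro)
    {
        return major * 10000 + minor * 100 + micro;    //e.g. 0.5.0 -> 500
    }

    bool compatible(uint32_t fileVersion, uint32_t runtimeVersion)
    {
        return fileVersion <= runtimeVersion;          //illustrative rule: loader is at least as new as the file
    }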
This reduces the tvg binary format size by converting PathCommand to a more compact representation.
This optimization increases the compression rate by 12% with our examples:
195,668 => 174,071 (sum of all tvgs converted from svgs)
@Issues: https://github.com/Samsung/thorvg/issues/639
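A sketch of the compaction; the 1-byte flag values are illustrative, not the actual tvg format constants:

    #include <thorvg.h>
    #include <cstdint>

    //Store a PathCommand as a single byte instead of the full enum width.
    uint8_t toTvgBinFlag(tvg::PathCommand cmd)
    {
        switch (cmd) {
            case tvg::PathCommand::Close:   return 0;
            case tvg::PathCommand::MoveTo:  return 1;
            case tvg::PathCommand::LineTo:  return 2;
            case tvg::PathCommand::CubicTo: return 3;
        }
        return 0;
    }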
* Apply bilinear interpolation to images
Apply bilinear interpolation when drawing images with a transformation.
If the mapped coordinate is a floating-point value,
bilinear interpolation is performed using the adjacent pixel values.
Interpolation is skipped when the color values to be interpolated
are identical or in the scale-down case.
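An illustrative bilinear sampler for a 32-bit image buffer, not the exact ThorVG code: the four neighbours are blended by the fractional part of the mapped coordinate:

    #include <cstdint>
    #include <algorithm>

    static uint32_t channel(uint32_t c, int shift) { return (c >> shift) & 0xff; }

    uint32_t bilinear(const uint32_t* img, uint32_t w, uint32_t h, float fx, float fy)
    {
        auto x = static_cast<uint32_t>(fx), y = static_cast<uint32_t>(fy);
        auto x2 = std::min(x + 1, w - 1), y2 = std::min(y + 1, h - 1);
        float dx = fx - x, dy = fy - y;

        uint32_t out = 0;
        for (int shift = 0; shift <= 24; shift += 8) {   //blend each channel separately
            float top = channel(img[y * w + x], shift) * (1 - dx) + channel(img[y * w + x2], shift) * dx;
            float bot = channel(img[y2 * w + x], shift) * (1 - dx) + channel(img[y2 * w + x2], shift) * dx;
            out |= static_cast<uint32_t>(top * (1 - dy) + bot * dy) << shift;
        }
        return out;
    }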
If an svg contained multiple 'defs' tags, only the first one was handled correctly.
Every subsequent 'defs' tag was skipped and its inner elements were parsed as if
they were outside the defs.
A pixel-based image picture doesn't work with the size() method.
Obviously, we missed implementing it properly.
This patch corrects it.
@Issue: https://github.com/Samsung/thorvg/issues/656
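A usage sketch that this fix makes consistent for raster pictures; the file name is hypothetical:

    #include <thorvg.h>

    void resizePicture()
    {
        auto picture = tvg::Picture::gen();
        if (picture->load("photo.png") != tvg::Result::Success) return;
        picture->size(200, 200);     //now also applies to pixel-based (png/jpg/raw) pictures
        float w, h;
        picture->size(&w, &h);       //query back the designated size
    }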