Scene effects are typically applied to modify
the final appearance of a rendered scene,
such as adding a blur effect.
Each effect takes a different number of parameters
to control its visual properties. The Scene::push() interface
uses variadic arguments to accommodate various cases.
Users should refer to the SceneEffect API documentation
and pass the parameters exactly as required for the specific
effect type. For instance, GaussianBlur expects four parameters,
in order:
- sigma(float)[greater than 0]
- direction(int)[both: 0 / horizontal: 1 / vertical: 2]
- border(int)[extend: 0 / wrap: 1]
- quality(int)[0 ~ 100]
For example: scene->push(SceneEffect::GaussianBlur, 5.0f, 0, 0, 100);
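A short usage sketch following the parameter list above (scene creation per the standard ThorVG C++ API):

```cpp
auto scene = tvg::Scene::gen();
// ... push child paints into the scene ...

// sigma 5.0, direction both (0), border extend (0), quality 100
scene->push(tvg::SceneEffect::GaussianBlur, 5.0f, 0, 0, 100);

// drop all previously applied effects
scene->push(tvg::SceneEffect::ClearAll);
```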
New Experimental APIs:
- SceneEffect::ClearAll
- SceneEffect::GaussianBlur
- Result Scene::push(SceneEffect effect, ...);
Example:
- examples/SceneEffect
issue: https://github.com/thorvg/thorvg/issues/374
Streaming model for massive vertex and index creation: minimizes memory allocations, range checks, and other conditional logic.
- Reduced the number of segment length calculations (sqrt) and bbox updates (min and max): distances and bboxes are updated over a whole buffer and only when necessary. For shapes without strokes, computing distances is not necessary at all, and the bbox can be updated only at the final stage of the geometry workflow rather than at each stage.
- Use stack memory instead of heap: it is more cache-friendly, does not fragment memory, and allocates faster (a weak point of the previous implementation).
- Cache point distances and the whole path length; update them only when necessary.
- Validate geometry consistency only at the final stage of the path life cycle. This is friendlier for data streaming: no extra conditions or branches.
- Use binary search for stroke trimming (see the sketch after this list).
- Pre-cache circle geometry for caps and joints.
- Refactored the stroke element generation functions: the code is more readable and modifiable in general, and can be fixed easily if geometry issues are found.
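To illustrate the binary-search trimming idea above, a minimal sketch (our own illustration, not ThorVG's code): with cached cumulative segment lengths, the segment containing a target arc length is found in O(log n) instead of a linear scan.

```cpp
#include <cstddef>
#include <vector>

// cum[i] = cached path length up to and including segment i (non-decreasing).
// Returns the index of the first segment whose cumulative length >= target.
size_t findSegment(const std::vector<float>& cum, float target) {
    size_t lo = 0, hi = cum.size();
    while (lo < hi) {
        size_t mid = (lo + hi) / 2;
        if (cum[mid] < target) lo = mid + 1;
        else hi = mid;
    }
    return lo;   // trimming then splits only this segment
}
```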
Emscripten 3.1.66 includes support for WebGPU's premultiplied canvas.
Surface configurations have been updated with premultiplied alpha mode to support this feature.
see: c82a307c61
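For reference, a hedged sketch of configuring a WebGPU surface with premultiplied alpha via the standard webgpu.h C API (the `device`, `surface`, `w`, and `h` handles are assumed to exist; exact fields follow the webgpu.h header):

```cpp
#include <webgpu/webgpu.h>

WGPUSurfaceConfiguration config{};
config.device = device;                              // assumed WGPUDevice
config.format = WGPUTextureFormat_BGRA8Unorm;
config.usage  = WGPUTextureUsage_RenderAttachment;
config.width  = w;
config.height = h;
config.alphaMode = WGPUCompositeAlphaMode_Premultiplied;  // the new mode
config.presentMode = WGPUPresentMode_Fifo;
wgpuSurfaceConfigure(surface, &config);
```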
Corrected the alpha interpolation order during blending.
This also corrected the hard mix blending result in the guitar sample.
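For context, a sketch of the conventional source-over operator with premultiplied alpha, where color and alpha must both be interpolated with the same (1 - srcAlpha) weight; this is the standard math, not ThorVG's exact code:

```cpp
// Premultiplied source-over: the destination term is weighted by the
// complement of the source alpha, for color and alpha alike.
struct RGBA { float r, g, b, a; };   // premultiplied channels in [0, 1]

RGBA srcOver(RGBA src, RGBA dst) {
    float ia = 1.0f - src.a;
    return { src.r + dst.r * ia,
             src.g + dst.g * ia,
             src.b + dst.b * ia,
             src.a + dst.a * ia };
}
```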
issue: https://github.com/thorvg/thorvg/issues/2704
Move instance, adapter, and device creation from the renderer to the application.
This is necessary for web integration, because browsers have their own mechanism for creating hardware handles.
This change makes the WebGPU canvas more universal for both system and web applications.
issue: https://github.com/thorvg/thorvg/issues/2410
The anti-aliased outline color was incorrectly blended
at the multiply option.
The fix can be observed in the example:
'examples/lottie/resources/guitar.json'
To support this, the RenderMethod::blend() method introduced
a `bool direct` parameter for figuring out the intermediate composition.
- used hardware blending stage for scene blending
- used AABB for scene blending (see the sketch after this list)
- reduced the number of offscreen buffer copies
- reduced the number of render pass switches
- used render pipelines abilities to convert offscreen pixel format to screen pixel format
- removed unused shaders
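To illustrate the AABB-based scene blending above, a hedged sketch (our own illustration): the blend pass is restricted to the intersection of the scene's bounding box with the screen, instead of a full-screen quad.

```cpp
// Illustrative only: clamp the blend region to the scene AABB.
struct BBox { int x, y, w, h; };

// Returns false when the scene is fully offscreen, so the pass can be skipped.
bool blendRegion(const BBox& scene, int screenW, int screenH, BBox& out) {
    int x0 = scene.x > 0 ? scene.x : 0;
    int y0 = scene.y > 0 ? scene.y : 0;
    int x1 = scene.x + scene.w < screenW ? scene.x + scene.w : screenW;
    int y1 = scene.y + scene.h < screenH ? scene.y + scene.h : screenH;
    if (x0 >= x1 || y0 >= y1) return false;
    out = {x0, y0, x1 - x0, y1 - y0};
    return true;
}
```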
Spec out this incomplete experimental feature.
It is still a promising one; we will reintroduce
it officially after the 1.0 release.
size: -2kb
issue: https://github.com/thorvg/thorvg/issues/1372
Deep shader refactoring for the following purposes:
* used a pre-calculated gradient texture instead of per-pixel gradient map computation (see the sketch after this list)
* used HW wrap samplers for the fill spread setting
* unified the gradient shader types
* used a single shader module for composition instead of a single module per composition type
* used a single shader module for blending per fill type (solid, gradient, image) instead of a single module per blend type
* much easier to add new composition and blend equations
* removed std::string usage
* shader code is more readable
* removed run-time bind group creation - performance boost
* blend and composition shaders decomposed - performance boost
* shader modules and pipeline layouts generalized - less memory usage
* shared single stencil buffer used - less memory usage
* bind groups usage simplified
* general context API simplified and generalized
* all rendering logic moved into new composition class
* ready for hardware MSAA (in next steps)
* ready for direct mask application (in next steps)
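To illustrate the pre-calculated gradient texture above, a minimal sketch (our own, not ThorVG's actual code): gradient stops are baked into a 256x1 RGBA ramp once, so the shader samples a texture instead of evaluating the stops per pixel.

```cpp
#include <cstdint>
#include <vector>

struct GradientStop { float offset; uint8_t r, g, b, a; };

static uint8_t lerpU8(uint8_t a, uint8_t b, float k) {
    return (uint8_t)(a + (b - a) * k + 0.5f);
}

// Bake sorted stops into a 256x1 RGBA8 ramp, ready for texture upload.
std::vector<uint8_t> bakeGradientRamp(const std::vector<GradientStop>& stops) {
    std::vector<uint8_t> ramp(256 * 4);
    for (int i = 0; i < 256; ++i) {
        float t = i / 255.0f;
        const GradientStop* lo = &stops.front();
        const GradientStop* hi = &stops.back();
        for (const auto& s : stops) {            // stops sorted by offset
            if (s.offset <= t) lo = &s;
            if (s.offset >= t) { hi = &s; break; }
        }
        float span = hi->offset - lo->offset;
        float k = span > 0.0f ? (t - lo->offset) / span : 0.0f;
        ramp[i * 4 + 0] = lerpU8(lo->r, hi->r, k);
        ramp[i * 4 + 1] = lerpU8(lo->g, hi->g, k);
        ramp[i * 4 + 2] = lerpU8(lo->b, hi->b, k);
        ramp[i * 4 + 3] = lerpU8(lo->a, hi->a, k);
    }
    return ramp;
}
```

The HW wrap samplers then map the fill spread settings (pad/repeat/reflect) onto the sampler's address mode, with no per-pixel branching in the shader.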
[issues 1479: ClipPath](#1479)
Supports ClipPath composition.
Clip path is the only composition type that does not ignore the blend method.
It combines the composition and blend approaches using a compute shader.
Deprecate the `identifier()` APIs, replacing them with `type()`.
ThorVG is going to introduce an instance `id()`,
which could be confused with the `identifier()` methods.
The new type() method also reduces memory size
by removing unnecessary type data.
New Experimental C APIs:
- enum Tvg_Type
- Tvg_Result tvg_paint_get_type(const Tvg_Paint* paint, Tvg_Type* type)
- Tvg_Result tvg_gradient_get_type(const Tvg_Gradient* grad, Tvg_Type* type)
New Experimental C++ APIs:
- enum class Type
- Type Paint::type() const
- Type Fill::type() const
- Type LinearGradient::type() const
- Type RadialGradient::type() const
- Type Shape::type() const
- Type Scene::type() const
- Type Picture::type() const
- Type Text::type() const
Deprecated C APIs:
- enum Tvg_Identifier
- Tvg_Result tvg_paint_get_identifier(const Tvg_Paint* paint, Tvg_Identifier* identifier)
- Tvg_Result tvg_gradient_get_identifier(const Tvg_Gradient* grad, Tvg_Identifier* identifier)
Deprecated C++ APIs:
- enum class Identifier
- uint32_t Paint::identifier() const
- uint32_t Fill::identifier() const
- static uint32_t Picture::identifier()
- static uint32_t Scene::identifier()
- static uint32_t Shape::identifier()
- static uint32_t LinearGradient::identifier()
- static uint32_t RadialGradient::identifier()
Removed Experimental APIs:
- static uint32_t Text::identifier()
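A short usage sketch of the new type() API (`paint` is assumed to be a valid tvg::Paint*):

```cpp
// Runtime type dispatch via the new type() method.
if (paint->type() == tvg::Type::Shape) {
    auto shape = static_cast<tvg::Shape*>(paint);
    shape->fill(255, 0, 0);   // shape-specific handling
}
```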
issue: https://github.com/thorvg/thorvg/issues/1372
Skip rendering a shape if its opacity is 0, or if the fill color of both the shape and its strokes equals 0 (see the sketch below).
This behavior matches the sw renderer and fixes visual artifacts in the referenced animations.
This rule also fixes composition results for the AlphaMask and InvAlphaMask methods.
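A hedged sketch of the skip rule (parameter names are illustrative, not ThorVG's fields): nothing visible would be produced, so the draw can be dropped early.

```cpp
#include <cstdint>

// True when the paint contributes no visible pixels and can be skipped.
bool skipRendering(uint8_t opacity, uint8_t fillAlpha, uint8_t strokeAlpha) {
    if (opacity == 0) return true;               // fully transparent paint
    return fillAlpha == 0 && strokeAlpha == 0;   // no visible fill or stroke
}
```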
This changes the public API of the WebGPU canvas interface to provide the necessary native window handles for different platforms:
API Change:
- Result target(void *instance, void *surface, uint32_t w, uint32_t h) noexcept;
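A hedged usage sketch: the application owns the WebGPU handles and hands them to the canvas. `createNativeSurface()` is a hypothetical helper wrapping the platform-specific WGPUSurface creation; the rest uses the standard webgpu.h and ThorVG APIs.

```cpp
#include <thorvg.h>
#include <webgpu/webgpu.h>

// The application creates the hardware handles itself (a descriptor can be
// passed instead of nullptr where the implementation requires one).
WGPUInstance instance = wgpuCreateInstance(nullptr);
WGPUSurface surface = createNativeSurface(instance);  // hypothetical helper

auto canvas = tvg::WgCanvas::gen();
canvas->target(instance, surface, 800, 600);
```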
When only the stencil buffer is in use, color target writes must be turned off; otherwise the color buffer gets filled with an unexpected color whenever the fragment shader does not return a value.
This happens with the OpenGL implementation of WebGPU, which is used on Linux.
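A hedged sketch of the fix via the standard webgpu.h pipeline setup: for a stencil-only pass, keep the color attachment but disable all channel writes so no fragment output lands in the color buffer.

```cpp
#include <webgpu/webgpu.h>

WGPUColorTargetState colorTarget{};
colorTarget.format = WGPUTextureFormat_BGRA8Unorm;
colorTarget.writeMask = WGPUColorWriteMask_None;   // no color writes
```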
[issues 1479: Masking, InvMasking, LumaMasking, InvLumaMasking](#1479)
Computes composition and blending in a single pass instead of two full-screen passes.
[issues 1479: lottie](#1479)
- optimized the stroking algorithm by avoiding vector normalizations
- use render mesh heaps (pools) to reduce re-creation of WebGPU instances
- use a single polyline instance for standard operations such as trim and split
- generate strokes with dash patterns on the fly
- generate mesh objects on the fly during path decoding
- merge strokes into a single mesh per shape
[issues 1479: lottie](#1479)
Vertex, index, and uniform buffers are now updated instead of recreated (see the sketch below).
Pools were implemented for mesh objects and render shape data.
This improves performance by 30-40% in heavy animation scenes.
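A hedged sketch of the update-instead-of-recreate idea via the standard webgpu.h API (`buffer`, `capacity`, `queue`, `device`, `data`, and `size` are assumed surrounding state): write into the existing buffer when the data fits; allocate a larger one only on growth.

```cpp
if (buffer && size <= capacity) {
    wgpuQueueWriteBuffer(queue, buffer, 0, data, size);
} else {
    if (buffer) wgpuBufferRelease(buffer);
    WGPUBufferDescriptor desc{};
    desc.size = size;                 // note: write sizes must be 4-byte aligned
    desc.usage = WGPUBufferUsage_Vertex | WGPUBufferUsage_CopyDst;
    buffer = wgpuDeviceCreateBuffer(device, &desc);
    capacity = size;
    wgpuQueueWriteBuffer(queue, buffer, 0, data, size);
}
```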
[issues 1479: lottie](#1479)
To optimize blend operations, the hardware pipeline blend stage is used for the following blend methods (see the sketch after this list):
BlendMethod::SrcOver
BlendMethod::Normal
BlendMethod::Add
BlendMethod::Multiply
BlendMethod::Darken
BlendMethod::Lighten
Compute shaders are used for the other blend types.
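A hedged sketch mapping two of the listed methods onto fixed-function blend states via the standard webgpu.h API, assuming premultiplied colors:

```cpp
#include <webgpu/webgpu.h>

WGPUBlendState srcOver{};                 // BlendMethod::SrcOver / Normal
srcOver.color = { WGPUBlendOperation_Add, WGPUBlendFactor_One,
                  WGPUBlendFactor_OneMinusSrcAlpha };
srcOver.alpha = srcOver.color;

WGPUBlendState add{};                     // BlendMethod::Add
add.color = { WGPUBlendOperation_Add, WGPUBlendFactor_One,
              WGPUBlendFactor_One };
add.alpha = add.color;
```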
[issues 1479: antialiasing](#1479)
Anti-aliasing implementation
Implements anti-aliasing as a post-process using compute shaders over the original render target at 2x scale (see the sketch below).
The scale can be adjusted as an external setting.
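A hedged CPU analogue of the 2x resolve step (our own illustration of the technique, not ThorVG's shader): average each 2x2 block of the supersampled RGBA8 image into one output pixel with a plain box filter.

```cpp
#include <cstdint>

void downsample2x(const uint8_t* src, uint32_t dstW, uint32_t dstH,
                  uint8_t* dst) {
    uint32_t srcW = dstW * 2;
    for (uint32_t y = 0; y < dstH; ++y) {
        for (uint32_t x = 0; x < dstW; ++x) {
            for (int c = 0; c < 4; ++c) {               // r, g, b, a
                uint32_t i0 = ((2 * y) * srcW + 2 * x) * 4 + c;
                uint32_t i1 = i0 + 4;                    // right neighbor
                uint32_t i2 = i0 + srcW * 4;             // below
                uint32_t i3 = i2 + 4;                    // below-right
                dst[(y * dstW + x) * 4 + c] =
                    (uint8_t)((src[i0] + src[i1] + src[i2] + src[i3] + 2) / 4);
            }
        }
    }
}
```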
Before these changes, all surfaces were painted using a full-screen overlay, no matter how large the object was rendered. This approach was redundant and required reorganization. Now all objects are rendered using an overlay equal to the object's own bounding box, which reduces the cost of filling the surface.
Surfaces and images were also divided into different entities, which reduces the pressure on memory.
Geometry data for rendering and geometry data for calculations in system memory were also logically separated.
For further feature development, we need off-screen buffers that will allow us to implement composition and blending, as well as to read data back from the framebuffer into system memory. Separating the framebuffer into its own entity makes it possible to create several instances, switch between them, and blend them according to given rules.
Currently we have only a single render target instance, which holds a handle for drawing into a surface such as a native window.
The new approach allows (see the sketch after this list):
- offscreen rendering
- render pass handling
- switching between render targets
- ability to render images, strokes and shapes into independent render targets
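As an illustration of the separated render-target entity, a hypothetical sketch using the standard webgpu.h types (the struct and field names are ours, not ThorVG's): each target owns its texture and view, so a pass can attach either the screen or an offscreen target and switch between them.

```cpp
#include <webgpu/webgpu.h>

struct RenderTarget {                    // illustrative, not ThorVG's type
    WGPUTexture texture{};
    WGPUTextureView view{};
};

// Bind an offscreen target as the color attachment of a render pass.
WGPURenderPassColorAttachment color{};
color.view = offscreen.view;             // `offscreen` is a RenderTarget
color.loadOp = WGPULoadOp_Clear;
color.storeOp = WGPUStoreOp_Store;
```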