Currently, we rely on ShapeTessellator being able to get a BitmapHandle
without a RenderBackend. With the upcoming BitmapData refactor,
we will always need a RenderBackend to get a BitmapHandle, which creates
borrow-checker issues in ShapeTessellator (which is stored in a
RenderBackend).
To solve this, we split BitmapSource.bitmap into two methods -
BitmapSource.bitmap and BitmapSource.bitmap_handle. ShapeTessellator
continues to use BitmapSource.bitmap, and uses the u16 bitmap id
instead of a BitmapHandle. The BitmapSource.bitmap_handle method
is used inside each render backend to convert the id to a BitmapHandle,
avoiding borrow-checker issues.
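As a rough sketch, the split might look like this (the exact signatures,
and the `BitmapInfo` return type, are assumptions based on the description
above, not necessarily the final trait):

```rust
pub trait BitmapSource {
    // Used by ShapeTessellator: needs only per-bitmap metadata,
    // keyed by the u16 bitmap id, so no RenderBackend is involved.
    fn bitmap(&self, id: u16) -> Option<BitmapInfo>;

    // Used inside each render backend to resolve the id into a
    // backend-specific handle.
    fn bitmap_handle(&self, id: u16, backend: &mut dyn RenderBackend) -> Option<BitmapHandle>;
}
```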
This PR fixes a number of interconnected bugs:
* We weren't consistently uploading a dirty BitmapData to the render
backend before drawing to/from it.
* BitmapData.draw should *not* add a fill color - it should draw over
the current contents of the BitmapData.
* After drawing to a non-transparent BitmapData, we need to manually
set the opacity back to 255 for each pixel (the drawing process
takes transparency into account, but the opacity information is
thrown away at the end).
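For the last point, a minimal sketch of the alpha reset (assuming the
pixels are stored as RGBA bytes; illustrative, not the exact code):

```rust
// Force every pixel back to fully opaque after the draw.
for pixel in pixels.chunks_exact_mut(4) {
    pixel[3] = 255; // alpha channel, assuming RGBA byte order
}
```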
Change `Bitmap::new()` to accept a `ruffle_render::bitmap::Bitmap`
directly, instead of `width`, `height` and `bitmap_handle`. As a
consequence, all `RenderBackend::register_bitmap_*` methods are no
longer necessary - we can use `ruffle_render::utils::*` to obtain
a `ruffle_render::bitmap::Bitmap` right before calling `Bitmap::new()`.
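The new call pattern looks roughly like this (`decode_define_bits_jpeg`
stands in for whichever `ruffle_render::utils` decoder applies to the tag
at hand; treat the surrounding names as assumptions):

```rust
// Decode the tag data into a ruffle_render::bitmap::Bitmap first...
let bitmap = ruffle_render::utils::decode_define_bits_jpeg(&jpeg_data, None)?;
// ...then pass it to Bitmap::new directly, instead of
// width/height/bitmap_handle.
let bitmap_object = Bitmap::new(context, id, bitmap)?;
```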
And make it generic, as a first step towards making it a general-purpose
data structure for the whole codebase. Some potential replacements are:
* `BoundingBox` in `render/src/bounding_box.rs`.
* `BoxBounds` in `core/src/html/dimensions.rs`.
* Parameters to a bunch of `BitmapData` methods in
`core/src/bitmap/bitmap_data.rs`.
Main changes:
* Merge `ColorTransformParams` into `ColorTransformObject`, as it's only relevant for AVM1.
* Make `BitmapData::color_transform` work with a generic `ColorTransform`, which uses fixed-point
arithmetic.
Note that Ruffle still calculates color transforms slightly differently from Flash. This is probably
caused by inaccuracy of the current `ColorTransformObject` to `ColorTransform` conversion and/or the
`ColorTransform` application logic itself. Since this requires further research, it'll be fixed in a
future PR.
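For reference, a hedged sketch of the per-channel application with
fixed-point arithmetic (Flash's multipliers are 8.8 fixed-point, so 256
represents 1.0; the exact rounding and clamping Ruffle uses may differ,
per the note above):

```rust
fn transform_channel(channel: u8, mult_fixed8: i16, add: i16) -> u8 {
    // mult_fixed8 is 8.8 fixed-point: 256 == 1.0.
    let result = (i32::from(channel) * i32::from(mult_fixed8)) / 256 + i32::from(add);
    // Clamp into the valid channel range.
    result.clamp(0, 255) as u8
}
```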
We now check if a BitmapData has been disposed by checking
for a zero width or height (which cannot happen otherwise).
As a result, we no longer need the 'disposed' field on the AVM1
BitmapData object.
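In sketch form (field access is illustrative):

```rust
impl BitmapData {
    /// A disposed BitmapData has zero width and height, which cannot
    /// occur otherwise, so the dimensions double as the disposal flag.
    pub fn disposed(&self) -> bool {
        self.width == 0 || self.height == 0
    }
}
```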
* avm2: Implement `BitmapData.draw` for `wgpu` backend
This method requires us to have the ability to render directly to a
texture. Fortunately, the `wgpu` backend already supports this in
the form of `TextureTarget`. However, the rendering code required
some refactoring in order to avoid creating duplicate `wgpu` resources.
The current implementation blocks on copying the pixels back
from the GPU to the CPU, so that we can immediately set them in
the Ruffle `BitmapData`. This is likely very inefficient, but will
work for a first implementation.
In the future, we could explore allowing the CPU image data and GPU
texture to be out of sync, and only synchronized when explicitly
necessary (e.g. on `getPixel` or `setPixel` calls).
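The blocking readback is essentially the standard `wgpu` buffer-mapping
dance; a minimal sketch (wgpu's mapping API has changed across versions,
and `device`/`output_buffer` are assumed to be set up by the
`TextureTarget` plumbing):

```rust
let buffer_slice = output_buffer.slice(..);
let (sender, receiver) = std::sync::mpsc::channel();
buffer_slice.map_async(wgpu::MapMode::Read, move |result| {
    let _ = sender.send(result);
});
// Block until the GPU work is done and the buffer is mapped.
device.poll(wgpu::Maintain::Wait);
receiver.recv().unwrap().expect("buffer mapping failed");
let pixels: Vec<u8> = buffer_slice.get_mapped_range().to_vec();
output_buffer.unmap();
```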
* Rename `with_offscreen_backend` to `render_offscreen` and use Bitmap
* Don't panic when backend doesn't implement `render_offscreen`
Fixes some issues with our winding # calculation which would cause
incorrect results for hitTest.
* The convention for handling an intersection at endpoints was
not the same between lines and bezier curves.
* The bezier curve winding # function was not properly handling
some cases where the curve was strictly y-monotonic.
* Simplify the code a bit so that ray-curve intersections are
returned in a consistent order based on upward/downward crossing.
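A minimal sketch of the line-segment case under that convention (not
Ruffle's actual code; a horizontal ray is cast in +x from `point`):

```rust
fn winding_for_segment(point: (f64, f64), a: (f64, f64), b: (f64, f64)) -> i32 {
    let (px, py) = point;
    let ((x0, y0), (x1, y1)) = (a, b);
    // Half-open test so a shared endpoint is counted by exactly one
    // of the two segments meeting there.
    if (y0 <= py) != (y1 <= py) {
        // x coordinate where the segment crosses y == py.
        let t = (py - y0) / (y1 - y0);
        let x = x0 + t * (x1 - x0);
        if x > px {
            // Upward crossing contributes +1, downward -1.
            return if y1 > y0 { 1 } else { -1 };
        }
    }
    0
}
```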
It was only used to make structs `#[derive(gc_arena::Collect)]`, and
generally it doesn't make much sense that `render` needs to be GC-aware.
So instead annotate `render` fields in `core` with `#[collect(require_static)]`.
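For example (a hypothetical `core` struct; `#[collect(require_static)]`
skips tracing for a field whose type is `'static`):

```rust
use gc_arena::Collect;

#[derive(Collect)]
#[collect(no_drop)]
pub struct BitmapGraphic<'gc> {
    // GC-managed field, traced as usual.
    movie: Object<'gc>,
    // Render-backend type: 'static, never traced, so `render`
    // itself no longer needs to know about gc_arena.
    #[collect(require_static)]
    handle: BitmapHandle,
}
```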
Previously, the viewport height and width were stored in
both `Stage` and the `RenderBackend`. Any changes to the viewport
dimensions (e.g. due to window resizing) needed to be updated in both
places to keep our handling of the viewport consistent.
This PR adds a new `ViewportDimensions` type, which holds the
width, height, and scale factor. It is stored inside the
`RenderBackend` impl, and is retrieved using the newly added
method `RenderBackend.get_viewport_dimensions`. After a `Player`
has been constructed, any code that needs access to the viewport
dimensions will ultimately go through this method.
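The type itself is small; roughly (field names inferred from the
description):

```rust
pub struct ViewportDimensions {
    pub width: u32,
    pub height: u32,
    pub scale_factor: f64,
}
```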
Unfortunately, `Stage` needs to use the viewport dimensions
in `build_matrices`. Therefore, any code modifying the viewport
dimensions should go through `player.set_viewport_dimensions`,
which ensures that the stage matrices are rebuilt after the render
backend is updated.
Each render backend keeps track of a stack of BlendModes,
which are pushed and popped by 'core' as we render objects
in the display tree. For now, I've just implemented BlendMode.ADD,
which maps directly onto a blend mode supported by each backend.
All other blend modes (besides 'NORMAL') will produce a warning
when we try to render using them. This may produce a very large amount
of log output, but it's simpler than emitting each warning only once,
and will help to point developers in the right direction when they
get otherwise inexplicable rendering issues (due to a blend mode
not being implemented).
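In sketch form, the backend-facing API looks something like this (method
names are assumptions based on the description):

```rust
pub trait RenderBackend {
    // Called by 'core' as it enters/leaves a display object with a
    // non-default blend mode.
    fn push_blend_mode(&mut self, blend: BlendMode);
    fn pop_blend_mode(&mut self);
    // ...
}
```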
The wgpu implementation is by far the most complicated, as we need
to construct a `RenderPipeline` for each possible
`(BlendMode, MaskState)`. I haven't been able to find any documentation
about the maximum supported number of (simultaneous) WebGPU render
pipelines - if this becomes an issue, we may need to register them
on-demand when a particular blend mode is requested.
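Should that ever become a problem, the on-demand fallback could be as
simple as a lazily filled cache (a hedged sketch; `make_pipeline` and the
key types are placeholders):

```rust
use std::collections::HashMap;

fn pipeline_for<'a>(
    cache: &'a mut HashMap<(BlendMode, MaskState), wgpu::RenderPipeline>,
    device: &wgpu::Device,
    key: (BlendMode, MaskState),
) -> &'a wgpu::RenderPipeline {
    // Build the pipeline only the first time this combination is hit.
    cache.entry(key).or_insert_with(|| make_pipeline(device, key))
}
```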