When removing a clip, first check whether it has an unload event listener
anywhere in its hierarchy.
If it does, enqueue the removal to happen on the next frame by moving the clip to a negative depth.
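A loose sketch of this check-then-defer flow; `Clip` and `Event` here are hypothetical stand-ins, not Ruffle's real display-object types:

```rust
#[derive(Clone, Copy, PartialEq)]
enum Event {
    Unload,
}

struct Clip {
    depth: i32,
    listeners: Vec<Event>,
    children: Vec<Clip>,
}

impl Clip {
    /// Does this clip, or any clip below it, listen for `Unload`?
    fn hierarchy_has_unload_listener(&self) -> bool {
        self.listeners.contains(&Event::Unload)
            || self
                .children
                .iter()
                .any(Clip::hierarchy_has_unload_listener)
    }
}

/// Returns true if the clip can be removed immediately; otherwise parks it
/// at a negative depth so the removal (and its unload handlers) runs on the
/// next frame.
fn remove_clip(clip: &mut Clip) -> bool {
    if clip.hierarchy_has_unload_listener() {
        clip.depth = -1 - clip.depth; // hypothetical "pending removal" convention
        false
    } else {
        true
    }
}
```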
This is done by:
- using the global constant pool instead of a fresh empty one:
  - OK, as no call-site is directly executing arbitrary bytecode that
    could care about the contents of the constant pool.
- pre-allocating the global scope object in the `Avm1` context
- using the global scope directly instead of allocating a local scope:
  - OK, because no call-site is directly defining locals on the
    returned Activation's scope.
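A minimal sketch of the allocation-avoiding construction this list describes, with hypothetical `Rc`-based types standing in for Ruffle's GC-managed ones:

```rust
use std::rc::Rc;

/// Hypothetical stand-in; the real scope type is GC-managed.
struct Scope;

struct Avm1 {
    // Pre-allocated once and shared by every "from nothing" activation.
    global_scope: Rc<Scope>,
    global_constant_pool: Rc<Vec<String>>,
}

struct Activation {
    scope: Rc<Scope>,
    constant_pool: Rc<Vec<String>>,
}

impl Avm1 {
    /// Cheap construction: clone two `Rc`s instead of allocating a fresh
    /// local scope and an empty constant pool.
    fn activation_from_nothing(&self) -> Activation {
        Activation {
            scope: Rc::clone(&self.global_scope),
            constant_pool: Rc::clone(&self.global_constant_pool),
        }
    }
}
```

This shortcut is only sound under the constraints noted above: no caller defines locals on the returned scope or executes bytecode that reads the constant pool.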
The desktop player now takes a `--spoof-url` argument, which overrides
the movie URL provided to ActionScript. This does not affect non-root
movies loaded through `Loader`.
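For illustration, such a flag might be declared roughly like this with `clap`'s derive API (clap 3-style; the struct and field names beyond `--spoof-url` are hypothetical, not the desktop player's actual options code):

```rust
use clap::Parser;
use url::Url;

/// Hypothetical subset of the desktop player's CLI options.
#[derive(Parser)]
struct Opt {
    /// Overrides the movie URL exposed to ActionScript for the root movie.
    #[clap(long)]
    spoof_url: Option<Url>,

    /// Path or URL of the movie to play.
    input: String,
}

fn main() {
    let opt = Opt::parse();
    if let Some(url) = &opt.spoof_url {
        println!("root movie URL will be reported as {url}");
    }
    println!("loading {}", opt.input);
}
```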
This is currently somewhat buggy: `homestuck_02791.swf` stops at 12% for some reason. I tried handing it both the compressed and uncompressed lengths, with no luck.
Backends that need synchronous preload behavior now explicitly ask for it as follows:
* `tests` - repeatedly call `preload` in a loop with an exhausted execution limit to stress-test the chunked preload
* `exporter`, `scanner` - synchronous/unlimited preload to match prior behavior
These may change in the future.
"Actions" are abstract; here we use them to count bytes loaded (as a proxy for execution time). The AVM code could potentially be adapted to count operations executed instead. A sketch of both preload patterns follows.
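This is a self-contained sketch of the two patterns above, with hypothetical `ExecutionLimit` and `Clip` types standing in for Ruffle's real ones:

```rust
/// Hypothetical stand-in for the execution limit: a budget of "actions",
/// spent here as bytes loaded (a proxy for execution time).
struct ExecutionLimit {
    remaining: usize,
}

impl ExecutionLimit {
    /// No budget at all: each `preload` call performs at most one chunk.
    fn exhausted() -> Self {
        Self { remaining: 0 }
    }

    /// Effectively unlimited: `preload` runs to completion synchronously.
    fn none() -> Self {
        Self { remaining: usize::MAX }
    }

    /// Spend `bytes` of budget; returns true once the limit is reached.
    fn spend(&mut self, bytes: usize) -> bool {
        self.remaining = self.remaining.saturating_sub(bytes);
        self.remaining == 0
    }
}

const CHUNK: usize = 4096; // hypothetical bytes preloaded per step

struct Clip {
    loaded: usize,
    total: usize,
}

impl Clip {
    /// Preload until done or out of budget; returns true when complete.
    fn preload(&mut self, limit: &mut ExecutionLimit) -> bool {
        while self.loaded < self.total {
            self.loaded += CHUNK;
            if limit.spend(CHUNK) {
                break;
            }
        }
        self.loaded >= self.total
    }
}

fn main() {
    // `tests` pattern: an exhausted limit per call forces maximal chunking.
    let mut clip = Clip { loaded: 0, total: 10 * CHUNK };
    let mut calls = 0;
    while !clip.preload(&mut ExecutionLimit::exhausted()) {
        calls += 1;
    }
    assert_eq!(calls, 9); // nine calls return false; the tenth completes

    // `exporter`/`scanner` pattern: one synchronous, unlimited call.
    let mut clip = Clip { loaded: 0, total: 10 * CHUNK };
    assert!(clip.preload(&mut ExecutionLimit::none()));
}
```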
Our AVM2 `SharedObject` support is now *almost* equivalent
to our AVM1 `SharedObject` support. We implement serialization
and deserialization for primitives, arrays, and `Object` instances
with local properties. We also implement serialization for `Date`,
but not `Xml` (since our AVM2 `Xml` class is just a stub at the moment).
This is enough to make 'This is the only level too' save level
progress to disk.
Currently, we always serialize to AMF3. When we implement
the `defaultObjectEncoding` and `objectEncoding` properties, we'll need
to adjust this.
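A hedged sketch of the recursive value conversion described above; `Avm2Value` and `AmfValue` are hypothetical stand-ins, not Ruffle's (or flash-lso's) real types:

```rust
use std::collections::BTreeMap;

enum Avm2Value {
    Null,
    Bool(bool),
    Number(f64),
    String(String),
    Array(Vec<Avm2Value>),
    Object(BTreeMap<String, Avm2Value>),
}

enum AmfValue {
    Null,
    Bool(bool),
    Number(f64),
    String(String),
    Array(Vec<AmfValue>),
    Object(BTreeMap<String, AmfValue>),
}

/// Convert the supported AVM2 values to their AMF equivalents;
/// unsupported values (e.g. functions) would yield `None` here.
fn to_amf(value: &Avm2Value) -> Option<AmfValue> {
    Some(match value {
        Avm2Value::Null => AmfValue::Null,
        Avm2Value::Bool(b) => AmfValue::Bool(*b),
        Avm2Value::Number(n) => AmfValue::Number(*n),
        Avm2Value::String(s) => AmfValue::String(s.clone()),
        // Arrays and objects recurse over their elements/local properties.
        Avm2Value::Array(items) => AmfValue::Array(
            items.iter().map(to_amf).collect::<Option<Vec<_>>>()?,
        ),
        Avm2Value::Object(props) => AmfValue::Object(
            props
                .iter()
                .map(|(k, v)| Some((k.clone(), to_amf(v)?)))
                .collect::<Option<BTreeMap<_, _>>>()?,
        ),
    })
}
```

Encoding the resulting tree as AMF3 bytes on `flush` would then be a separate, final step.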
* avm2: Implement `BitmapData.draw` for `wgpu` backend
This method requires us to have the ability to render directly to a
texture. Fortunately, the `wgpu` backend already supports this in
the form of `TextureTarget`. However, the rendering code required
some refactoring in order to avoid creating duplicate `wgpu` resources.
The current implementation blocks on copying the pixels back
from the GPU to the CPU, so that we can immediately set them in
the Ruffle `BitmapData`. This is likely very inefficient, but will
work for a first implementation.
In the future, we could explore allowing the CPU image data and GPU
texture to be out of sync, and only synchronized when explicitly
necessary (e.g. on `getPixel` or `setPixel` calls).
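For illustration, a hedged sketch of that blocking GPU-to-CPU readback, assuming a wgpu 0.13-era API; the `read_pixels` helper is ours, not the actual backend code, and `texture` is an RGBA8 render target created with `COPY_SRC` usage:

```rust
use std::num::NonZeroU32;

fn read_pixels(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    texture: &wgpu::Texture,
    width: u32,
    height: u32,
) -> Vec<u8> {
    // Rows must be padded to COPY_BYTES_PER_ROW_ALIGNMENT for the copy.
    let unpadded = width * 4;
    let align = wgpu::COPY_BYTES_PER_ROW_ALIGNMENT;
    let padded = (unpadded + align - 1) / align * align;

    let buffer = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("BitmapData.draw readback"),
        size: padded as u64 * height as u64,
        usage: wgpu::BufferUsages::COPY_DST | wgpu::BufferUsages::MAP_READ,
        mapped_at_creation: false,
    });

    let mut encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
    encoder.copy_texture_to_buffer(
        wgpu::ImageCopyTexture {
            texture,
            mip_level: 0,
            origin: wgpu::Origin3d::ZERO,
            aspect: wgpu::TextureAspect::All,
        },
        wgpu::ImageCopyBuffer {
            buffer: &buffer,
            layout: wgpu::ImageDataLayout {
                offset: 0,
                bytes_per_row: NonZeroU32::new(padded),
                rows_per_image: None,
            },
        },
        wgpu::Extent3d { width, height, depth_or_array_layers: 1 },
    );
    queue.submit(Some(encoder.finish()));

    // Block until the copy is complete and the buffer is mapped.
    let slice = buffer.slice(..);
    slice.map_async(wgpu::MapMode::Read, |result| result.unwrap());
    device.poll(wgpu::Maintain::Wait);

    // Strip the row padding while copying out.
    let data = slice.get_mapped_range();
    let mut pixels = Vec::with_capacity((unpadded * height) as usize);
    for row in data.chunks(padded as usize) {
        pixels.extend_from_slice(&row[..unpadded as usize]);
    }
    drop(data);
    buffer.unmap();
    pixels
}
```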
* Rename `with_offscreen_backend` to `render_offscreen` and use `Bitmap`
* Don't panic when backend doesn't implement `render_offscreen`