It's fine that we timed out on WebGL; GL objects can be deleted early because the driver keeps them alive until any outstanding GPU work has finished. Moreover, the way we emulate read mappings on WebGL allows us to execute `map_buffer` earlier than on other backends, since `getBufferSubData` is synchronous with respect to previously enqueued GL commands.
Relying on this behavior breaks the clean abstraction `wgpu-hal` tries to maintain, and we should find ways to improve this.
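To make the behavior described above concrete, here is a minimal sketch of emulating a read mapping on WebGL, assuming the `glow` crate. The helper name, the use of `COPY_READ_BUFFER`, and the omission of all error handling are illustrative; this is not `wgpu-hal`'s actual implementation. The point is that because `getBufferSubData` is synchronous with respect to previously enqueued GL commands, the copy into host memory can be issued as soon as the map is requested.

```rust
use glow::HasContext;

/// Illustrative helper (not wgpu-hal's actual code): emulate a read mapping
/// on WebGL by copying the buffer's contents into host memory.
/// Because WebGL's `getBufferSubData` is synchronous with respect to
/// previously enqueued GL commands, the bytes returned here already reflect
/// any GPU work submitted before this call.
unsafe fn emulate_read_mapping<C: HasContext>(
    gl: &C,
    buffer: C::Buffer,
    size: usize,
) -> Vec<u8> {
    let mut host_copy = vec![0u8; size];
    gl.bind_buffer(glow::COPY_READ_BUFFER, Some(buffer));
    // Copies the buffer contents once all prior commands touching `buffer`
    // have completed, so the "mapping" can be handed out immediately.
    gl.get_buffer_sub_data(glow::COPY_READ_BUFFER, 0, &mut host_copy);
    gl.bind_buffer(glow::COPY_READ_BUFFER, None);
    host_copy
}
```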
https://github.com/gfx-rs/wgpu/pull/6413 added the following comment:
> Vulkan, D3D12 and (optionally) Metal require us to keep objects used in active submissions alive until the GPU has finished working with them. `wgpu-hal` inherits this rule from those backends but OpenGL doesn't have this requirement, and due to limitations on the web where a blocking wait can't be issued we make use of this property in `wgpu-core`.
>
> The 2nd issue is that `map_buffer` is synchronous with respect to the other previously enqueued GL commands, which we also take advantage of. The other backends and the OpenGL/OpenGL ES targets of our GL backend will map buffers immediately (even if the GPU is still using them).
>
> We might be able to resolve this with just an `is_webgl` getter on the HAL device and document this in HAL.
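A sketch of what that escape hatch could look like. This is purely illustrative: `is_webgl`, the simplified `Device` trait, and the call site below stand in for the idea and are not part of wgpu-hal's existing API.

```rust
/// Hypothetical sketch of the proposed getter: the HAL device reports whether
/// the backend is WebGL, so wgpu-core can keep its WebGL-specific shortcuts
/// behind an explicit, documented check instead of relying on them implicitly.
trait Device {
    /// Returns true if this device targets WebGL, where:
    /// - objects may be freed before the submissions using them have finished
    ///   (the driver keeps them alive), and
    /// - buffer read mappings are synchronous with previously enqueued
    ///   GL commands.
    fn is_webgl(&self) -> bool {
        false // every backend except the WebGL target would use the default
    }
}

struct WebGlDevice;

impl Device for WebGlDevice {
    fn is_webgl(&self) -> bool {
        true
    }
}

// Illustrative wgpu-core-like call site: only take the shortcut when the
// backend explicitly advertises it.
fn can_destroy_before_submission_completes(device: &dyn Device) -> bool {
    device.is_webgl()
}

fn main() {
    let device = WebGlDevice;
    assert!(can_destroy_before_submission_completes(&device));
}
```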