Open greggman opened 1 week ago
@kainino0x
Hi, Thanks for the feedback!
This proposal is interesting, but it does look like a mirror version of the `transferToGPUTexture` proposal, with both having similar limitations. The two proposals could even exist in parallel. Your proposal is nicer for users who are WebGPU-first, but not for Canvas2D users. In particular, your proposal doesn't allow presenting the results to a normal 2D canvas: `CanvasRenderingContext2D.createFromWebGPUDevice(...)` would need to return a new type of offscreen canvas context which cannot be used to present frames. In contrast, the `transferToGPUTexture` API allows Canvas2D content to be presented by a WebGPU context, or WebGPU content to be presented by a Canvas2D context.
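For concreteness, the two directions look roughly like this. Both `transferToGPUTexture`/`transferBackFromGPUTexture` and `createFromWebGPUDevice` are proposals, so the exact shapes below are assumptions, not shipped APIs:

```js
// Sketch only; all of these names are from proposals, not shipped APIs.

// Direction A (transferToGPUTexture): Canvas2D owns the canvas, WebGPU borrows it.
const ctx2d = canvas.getContext('2d');
const tex = ctx2d.transferToGPUTexture({ device }); // proposed API
// ...render into `tex` with WebGPU...
ctx2d.transferBackFromGPUTexture();                 // proposed API; the canvas can present again

// Direction B (createFromWebGPUDevice): WebGPU-first; the 2D context draws via a device.
const offscreen2d = CanvasRenderingContext2D.createFromWebGPUDevice(device); // proposed API
// This context has no canvas of its own, so it cannot present frames directly.
```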
Regarding the issues you are bringing up:
> The canvas might not be compatible with the WebGPU device. In this case, a copy is needed to transfer the canvas's contents into the WebGPU device and back.
Note that in Chromium's current implementation of `transferToGPUTexture`, only the first transfer might cause a copy. On this first transfer, we set a flag at the document level telling the canvases to create WebGPU-compatible textures from now on, which won't require a copy. We are thinking about adding a hint at context creation to declare that the context will transfer textures to WebGPU. This way, we can create the right Canvas2D texture from the start, avoiding the copy on the initial call to `transferToGPUTexture`.
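A creation-time hint could look like the sketch below. `willTransferToGPU` is a hypothetical option name, since the hint is only being considered:

```js
// Hypothetical: a creation hint so Canvas2D allocates a WebGPU-compatible
// texture from the start, avoiding the copy on the first transfer.
const ctx2d = canvas.getContext('2d', { willTransferToGPU: true }); // hypothetical option
const tex = ctx2d.transferToGPUTexture({ device });                 // proposed API; no copy needed
```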
> The canvas might be the wrong texture format. (app can't choose like it can with WebGPU)
This is an issue in both proposals. If we transfer from Canvas2D to WebGPU, we need to configure WebGPU to use the Canvas2D texture format. If we transfer from WebGPU to Canvas2D, we need to make sure the WebGPU texture was created with the right format in the first place, or else Canvas2D won't be able to use it. Because Canvas2D is the more restrictive of the two, it makes sense for the restrictive Canvas2D to create the texture and the more flexible WebGPU to accept it (Postel's law: "Be liberal in what you accept, and conservative in what you send").
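In code, that division of responsibility might look like this. `transferToGPUTexture` is the proposed API; the pipeline-creation part is standard WebGPU, shown here only to illustrate that WebGPU can adapt to whatever format it receives:

```js
// Canvas2D (the restrictive side) picks the format; WebGPU accepts it.
const tex = ctx2d.transferToGPUTexture({ device }); // proposed API
// WebGPU is flexible enough to build a pipeline around whatever format it got:
const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex:   { module: shaderModule },
  fragment: { module: shaderModule, targets: [{ format: tex.format }] },
});
```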
> The canvas format might change on each transfer requiring the app to be built around checking and remaking resources on demand. (Mozilla confirmed this is an issue)
I'm curious to hear more about this. Why would the canvas format change on each transfer? Wouldn't that make it even more of a headache if Canvas2D wasn't the one creating the texture? You'd need WebGPU to create textures with a different format on every transfer. Could you clarify?
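To make the cost concrete, the "check and remake on demand" pattern being described would look something like this in plain JS (no browser APIs; `makeResources` is a stand-in for recreating pipelines and bind groups):

```javascript
// Sketch: rebuild GPU resources only when the incoming texture format changes.
function createFormatCache(makeResources) {
  let lastFormat = null;
  let resources = null;
  return function get(format) {
    if (format !== lastFormat) { // format changed: rebuild everything
      lastFormat = format;
      resources = makeResources(format);
    }
    return resources;
  };
}

let builds = 0;
const get = createFormatCache((format) => ({ format, id: ++builds }));
get('bgra8unorm');
get('bgra8unorm');   // same format: cached, no rebuild
get('rgba8unorm');   // format changed: rebuild
console.log(builds); // 2
```

If the format really could differ on every transfer, every consumer of the texture would need this kind of indirection, which is the headache being asked about.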
> Setting the `canvas.width`/`height` after transferring to WebGPU is strange.
Can you expand on what you find strange here? Changing the size of the canvas resets the canvas to its default state, as if newly created. It was the de facto and only way to reset a canvas before the `reset()` API was added. If the canvas is reset, it makes sense to reset all state and release all resources. Thus, calling `reset()` aborts the transfer and destroys the GPUTexture.
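The proposed lifetime rule can be sketched like this (again, `transferToGPUTexture` is the proposal under discussion, not a shipped API):

```js
// Resizing returns the canvas to its default state, so it also ends an
// in-progress transfer, the same as calling reset().
const tex = ctx2d.transferToGPUTexture({ device }); // proposed API
canvas.width = 1024; // resets the canvas to its default state...
// ...which aborts the transfer and destroys `tex`, exactly as reset() would.
```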
IIUC, there is a proposal to allow canvas to `transferToWebGPU`. This seems like it has several issues.
Instead of starting with canvas2d and transferring to WebGPU, would it be better to start with WebGPU and transfer to canvas 2d?
Advantages:
- `webgpuContext.getCurrentTexture()`: if you want a texture displayed in a canvas, then transfer it to the context.

I'm not saying the above API is perfect. I'm just trying to throw out the idea that it might be more useful to start at WebGPU instead of starting at Canvas2D. In particular, being able to render to any texture, not just a "canvas texture", seems useful. It also seems like a more interesting model: a `Canvas2DRenderingContext` is just an API that draws to a texture, any texture, not just canvas-blessed textures.
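A rough sketch of that WebGPU-first model, under heavy assumptions: `createFromWebGPUDevice` is the name mentioned above, `setTarget` is a purely hypothetical method invented here to illustrate "draws to any texture", and the rest is standard WebGPU:

```js
// Hypothetical sketch: a 2D context that is "just an API that draws to a texture".
const device = await adapter.requestDevice();
const texture = device.createTexture({
  size: [512, 512],
  format: 'bgra8unorm',
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
});
const ctx2d = CanvasRenderingContext2D.createFromWebGPUDevice(device); // proposed API
ctx2d.setTarget(texture); // hypothetical: point the 2D API at any texture
ctx2d.fillRect(0, 0, 256, 256);
// To display the result, hand it to a WebGPU canvas context, e.g. by copying
// into webgpuContext.getCurrentTexture() during a frame.
```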