munrocket opened this issue 3 years ago
For readPixel we need to change the context usage, for example to { usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.COPY_SRC }. Here are some additional samples with 'toDataURL', 'toBlob', and 'imageBitmap': https://github.com/gpuweb/cts/blob/c84e2748ec4ed1daff7cb836c5c7d1a01664be28/src/webgpu/web_platform/canvas/readbackFromWebGPUCanvas.spec.ts
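A minimal sketch of that configuration, assuming an already-initialized GPUDevice device and a <canvas> element (names are illustrative):

```js
const canvas = document.querySelector('canvas');
const context = canvas.getContext('webgpu');
context.configure({
  device, // an already-initialized GPUDevice (assumed)
  format: navigator.gpu.getPreferredCanvasFormat(),
  // RENDER_ATTACHMENT for drawing, plus COPY_SRC so the canvas
  // texture can be used as the source of a copy
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.COPY_SRC,
});
```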
These first two examples don't appear to be doing a (WebGL 1.0) "readPixels" style operation.
I don't know what in this file you're pointing to. Is it getBufferData? That is async, using copyBufferToBuffer and mapAsync(READ). It's reading raw buffer data, not texture (pixel) data.
This is also async, using copyTextureToBuffer + mapAsync(READ).
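A minimal sketch of that copyTextureToBuffer + mapAsync(READ) pattern, assuming a device and a texture created with COPY_SRC usage:

```js
// bytesPerRow must be aligned to 256 bytes for texture-to-buffer copies
const bytesPerRow = Math.ceil((texture.width * 4) / 256) * 256;
const readback = device.createBuffer({
  size: bytesPerRow * texture.height,
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
});
const encoder = device.createCommandEncoder();
encoder.copyTextureToBuffer(
  { texture },
  { buffer: readback, bytesPerRow },
  [texture.width, texture.height],
);
device.queue.submit([encoder.finish()]);
await readback.mapAsync(GPUMapMode.READ);
// RGBA8 pixel data; each row is padded out to bytesPerRow
const pixels = new Uint8Array(readback.getMappedRange().slice(0));
readback.unmap();
```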
> For readPixel we need to change the context usage, for example to { usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.COPY_SRC }.
This is actually a Chromium bug that I believe has been fixed. COPY_SRC is not supposed to be needed to read data out of a WebGPU canvas.
Note that reading data out of a WebGPU canvas is NOT the normal (or efficient) way of getting data out of WebGPU - that would be mapAsync(READ).
Hmm, interesting. Reading canvas back is useful for screenshots or video capture.
A month ago I tried to find any example of this. TF.js/Babylon.js were the first projects where I tried to find something :)
I haven't tried mapAsync, but the simple solution with COPY_SRC from Marcin Ignac solves the screenshot issue.
Reading from canvas didn't work in Chromium at all until pretty recently. And then we only fixed the COPY_SRC thing after that. Can you try Marcin's example without COPY_SRC and see if it works?
> Hmm, interesting. Reading canvas back is useful for screenshots or video capture.
Yes, definitely. There are a bunch of use cases where it makes sense - needing to output image data to other APIs (like WebGL, 2D canvas, video capture), especially if they might be able to avoid reading the data all the way back to JS. And it makes sense for screenshots, where you already have the data in the canvas anyway.
It works for me. But I need to invoke .toBlob() right after device.queue.submit([commandEncoder.finish()]). For some reason invoking it from anywhere else gives me an empty picture.
> It works for me. But I need to invoke .toBlob() right after device.queue.submit([commandEncoder.finish()]). For some reason invoking it from anywhere else gives me an empty picture.
I think this should be expected? If you do .toBlob() before the submit, the texture contents will be empty because submit is what actually writes to the texture.
It is expected. See here: https://gpuweb.github.io/gpuweb/#abstract-opdef-get-a-copy-of-the-image-contents-of-a-context
.toBlob() will see the current contents of the [[currentTexture]] at the time it is called. If called after the texture has been presented (i.e. after the canvas is presented to the screen), then it will be null.
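A minimal sketch of that ordering, assuming a rAF loop and a hypothetical render(encoder) helper that draws to context.getCurrentTexture():

```js
function frame() {
  const encoder = device.createCommandEncoder();
  render(encoder); // hypothetical helper: draws to context.getCurrentTexture()
  device.queue.submit([encoder.finish()]);
  // Still inside the rAF callback: [[currentTexture]] holds this frame,
  // so the snapshot is taken before the canvas is presented.
  canvas.toBlob((blob) => console.log('captured', blob.size, 'bytes'));
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```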
Filed #363 about 3d textures.
Regarding readbacks from canvas, maybe a nice simple sample would be an animated demo with a screenshot button that captures the canvas into a blob and puts the blob in an <img> tag.
An important bit here would be which frame is captured - it's always the current drawing buffer, if there is one, and otherwise whatever's currently on the screen. It would be ideal to somehow show which frame was captured - maybe render at a very low framerate and render an incrementing counter that can be seen in the screenshot. And have two buttons: "capture now" which captures in between frames, and "capture at end of next frame" which captures inside rAF (after rendering, before returning).
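A minimal sketch of such a sample, assuming hypothetical button and <img> elements plus the same rAF loop shape as above:

```js
const img = document.querySelector('img');
const showBlob = (blob) => { img.src = URL.createObjectURL(blob); };

// "capture now": between frames, so it snapshots whatever is currently presented
document.querySelector('#capture-now').onclick = () => canvas.toBlob(showBlob);

// "capture at end of next frame": taken inside rAF, after rendering, before returning
let captureNextFrame = false;
document.querySelector('#capture-next').onclick = () => { captureNextFrame = true; };

function frame() {
  const encoder = device.createCommandEncoder();
  render(encoder); // hypothetical helper; could also draw the incrementing counter
  device.queue.submit([encoder.finish()]);
  if (captureNextFrame) {
    captureNextFrame = false;
    canvas.toBlob(showBlob); // captures this frame's drawing buffer
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```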
Also note that this sample will test some corner cases of browsers, so we should make sure it actually works.
These examples can be useful.
Some code with readPixel is here, but I don't understand yet how to do it.