Open stevexbritton opened 3 months ago
I've demonstrated with a simplified example that using a video frame with a THREE.CanvasTexture
works as expected: https://jsfiddle.net/3k1q0fez/
So there must be an issue in your app-level code. Please use the forum or Stack Overflow to search for the root cause. If it turns out to be an issue in the engine, we can reopen the issue.
Hi, thank you for such a quick response. Unfortunately your simplified example is too naive, so I have taken the liberty of making a small modification to your jsfiddle to demonstrate the problem. I have replaced your VideoFrame creation code with the example code provided in the MDN documentation (https://developer.mozilla.org/en-US/docs/Web/API/VideoFrame/VideoFrame) and, crucially, added the setting of the displayWidth and displayHeight properties with values different from the codedWidth and codedHeight values, a much more likely scenario when working with real VideoFrames. You will now see, when you run your jsfiddle, that the code no longer works.
Also, if I change my example to ask for a camera with video dimensions that do not require cropping the source image (1280x720 for my 16" Mac), my code works correctly. I think this demonstrates there is no app-level issue with my code; the issue is with Three not correctly handling VideoFrames where the display size is different from the coded size. I hope this is enough evidence for you to re-open this issue. Thank you.
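For reference, the failing configuration can be built the way the MDN example does it, with display dimensions that deliberately differ from the coded ones. This is a hedged sketch, not the exact fiddle code; the buffer helper and the sizes are illustrative, and the VideoFrame constructor call is guarded so the snippet is harmless outside a browser:

```javascript
// Build a solid-red RGBA pixel buffer for a coded frame size.
function makePixels(codedWidth, codedHeight) {
  const data = new Uint8Array(codedWidth * codedHeight * 4); // RGBA, zero-filled
  for (let i = 0; i < data.length; i += 4) {
    data[i] = 255;     // R
    data[i + 3] = 255; // A (opaque)
  }
  return data;
}

const codedWidth = 320;
const codedHeight = 240;
const pixels = makePixels(codedWidth, codedHeight);

if (typeof VideoFrame !== 'undefined') {
  const frame = new VideoFrame(pixels, {
    format: 'RGBA',
    codedWidth,
    codedHeight,
    timestamp: 0,
    // Display size deliberately different from the coded size --
    // the case this thread is about:
    displayWidth: 160,
    displayHeight: 120,
  });
  // frame.codedWidth is 320 while frame.displayWidth is 160.
}
```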
Do you mind sharing the updated fiddle?
I'm sorry, I haven't used jsfiddle before. Is this what you want: https://jsfiddle.net/5jbs3oaq/13/
So the root cause is that the coded width/height differs from the display width/height.
This totally explains the issue, of course, since the dimensions of the texture and its buffer size do not match.
Would it be correct to always use codedWidth and codedHeight? The bit that handles the dimensions for video frames looks like so right now:
My understanding is that when you ask navigator.mediaDevices.getUserMedia for video of a certain dimension, the camera returns images of a certain size, the VideoFrame's codedWidth & codedHeight, and this may be cropped down to the requested size, which is the VideoFrame's displayWidth & displayHeight. So the texture size needs to be the displayWidth & displayHeight, but the data to copy is a window within the VideoFrame data, which I believe is defined by the VideoFrame's visibleRect property.
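A sketch of inspecting that mismatch on a real camera frame. The browser-only part assumes Chrome's MediaStreamTrackProcessor; the needsCrop helper is my own name, and the 1088-vs-1080 figures in its comment are just a typical example of codec row padding:

```javascript
// Pure helper: does the frame's coded size differ from its display size?
// Cameras/codecs often pad rows (e.g. 1920x1088 coded for 1920x1080 display).
function needsCrop(frame) {
  return frame.codedWidth !== frame.displayWidth ||
         frame.codedHeight !== frame.displayHeight;
}

// Browser-only part: pull one frame off a camera track and inspect it.
async function logFrameGeometry() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1920, height: 1080 },
  });
  const track = stream.getVideoTracks()[0];
  const reader = new MediaStreamTrackProcessor({ track }).readable.getReader();
  const { value: frame } = await reader.read();

  console.log(frame.codedWidth, frame.codedHeight);     // full buffer size
  console.log(frame.displayWidth, frame.displayHeight); // presentation size
  console.log(frame.visibleRect);                       // window into the buffer
  console.log('needs crop:', needsCrop(frame));

  frame.close();
  track.stop();
}
```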
Would it be possible to extract the effective frame data on app level based on visibleRect and put the data into a buffer for a data texture? The dimensions of the data texture would be displayWidth and displayHeight.
If this works, maybe we can try to integrate this into the renderer.
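A sketch of that app-level workaround, assuming an RGBA frame (real camera frames are often NV12/I420 and would need a format conversion first); frameToDataTexture and rgbaByteLength are hypothetical names, and note that visibleRect and the display size can still differ when the frame is meant to be scaled rather than just cropped:

```javascript
// Copy only the visible window out of the VideoFrame and wrap it in a
// THREE.DataTexture sized to the display dimensions.
async function frameToDataTexture(frame) {
  const rect = frame.visibleRect;
  const byteLength = frame.allocationSize({ rect }); // bytes for that rect
  const data = new Uint8Array(byteLength);
  await frame.copyTo(data, { rect }); // copies just the visible window

  const texture = new THREE.DataTexture(
    data,
    frame.displayWidth,
    frame.displayHeight,
    THREE.RGBAFormat
  );
  texture.needsUpdate = true;
  return texture;
}

// Pure helper: expected byte length of a tightly packed RGBA rect.
function rgbaByteLength(width, height) {
  return width * height * 4;
}
```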
I believe VideoFrames, like ImageBitmaps, just hold references to the data, which can be passed to and stored directly in the GPU, and the VideoFrame data coming from video devices is indeed held on the GPU. Therefore, copying to the CPU to create a DataTexture would be slow. Is the data copied from the GPU to the CPU when creating a CanvasTexture from an ImageBitmap?
Is the data copied from the GPU to the CPU when creating a CanvasTexture from an ImageBitmap?
No, since the image bitmap data is already on the CPU side. This should also be true for video frames, imo.
Sorry, did you mean GPU, not CPU? I'm saying the data is already on the GPU side, and I'm hoping it doesn't have to be copied to the CPU to create the texture only to be copied back up to the GPU.
Description
When using a CanvasTexture for a Scene background, it works when the CanvasTexture is created from an ImageBitmap, but not when it's created from a VideoFrame. The error "GL_INVALID_VALUE: Offset overflows texture dimensions." is reported when renderer.render() is called.
Reproduction steps
Code
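The original code isn't reproduced here; the setup was along these lines (a hedged sketch, with an RGBA frame built in JS rather than taken from a camera, and all browser-only calls guarded):

```javascript
// Minimal sketch of the failing path: a CanvasTexture created from a
// VideoFrame whose display size differs from its coded size, used as the
// scene background. Sizes are illustrative.
function makeRgba(width, height) {
  return new Uint8Array(width * height * 4).fill(255); // opaque white
}

if (typeof VideoFrame !== 'undefined' && typeof THREE !== 'undefined') {
  const frame = new VideoFrame(makeRgba(640, 480), {
    format: 'RGBA',
    codedWidth: 640,
    codedHeight: 480,
    displayWidth: 320,  // differs from the coded size
    displayHeight: 240,
    timestamp: 0,
  });

  const renderer = new THREE.WebGLRenderer();
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera();
  scene.background = new THREE.CanvasTexture(frame);

  // "GL_INVALID_VALUE: Offset overflows texture dimensions." is reported here:
  renderer.render(scene, camera);
}
```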
Live example
Screenshots
No response
Version
1.66.1
Device
Desktop
Browser
Chrome
OS
MacOS