diarmidmackenzie / aframe-multi-camera

A-Frame components implementing multiple cameras
MIT License

Rendering to <canvas> element? #3

Open · floe opened this issue 1 year ago

floe commented 1 year ago

Just tested your component with my a-frame 1.4.1 scene and it works like a charm, kudos!

However, for my somewhat esoteric use case, I'd like to render the output of the second camera to a canvas element (just like in https://jgbarah.github.io/aframe-playground/camrender-01/ ). Unfortunately, that doesn't seem to do anything. I verified that aframe-multi-camera itself works, using an extra plane, but I need something I can create a MediaStream object from, and that has to be a canvas.
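Roughly what I'm aiming for, as a sketch (element ids are from my setup below; the `<video>` preview is just for illustration):

```js
// Grab the canvas the second camera should render into,
// and turn its contents into a MediaStream.
const canvas = document.querySelector("#canvas3");
const stream = canvas.captureStream(30); // capture at 30 fps

// For example, preview the stream in a <video> element.
const video = document.createElement("video");
video.srcObject = stream;
video.play();
document.body.appendChild(video);
```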

My setup:

<script src="https://cdn.jsdelivr.net/gh/diarmidmackenzie/aframe-multi-camera@latest/src/multi-camera.min.js"></script>

...

<a-scene cursor="rayOrigin: mouse">
      <a-assets>
        ...
        <canvas id="canvas3"></canvas>
      </a-assets>

...

      <a-entity id="second-cam" secondary-camera="output:screen; outputElement:#canvas3; sequence:before" position="0 1.6 -1" rotation="0 180 0"></a-entity>

When I change the second camera options to "output:plane; outputElement:#testplane; sequence:before", I get the expected result rendered to the plane, but with the code above, the canvas stays unchanged. Any ideas about how to fix this?

Thanks!

diarmidmackenzie commented 1 year ago

Hi, rendering to a canvas would require a second WebGL context (i.e. a second THREE.WebGLRenderer).

One of the things I deliberately tried to do with this set of components was to avoid the need for multiple WebGL contexts, as described here: https://diarmidmackenzie.github.io/aframe-multi-camera/#single-vs-multiple-webgl-contexts

In your case, it sounds as though you actively want an additional canvas, and hence you'll need an additional WebGL context, since each THREE.WebGLRenderer targets exactly one canvas element.
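As a minimal sketch of what that second context would look like (assumed names throughout; this is not code from these components):

```js
// Sketch: a second THREE.WebGLRenderer bound to its own canvas,
// which means its own WebGL context.
const sceneEl = document.querySelector("a-scene");
const canvas = document.querySelector("#canvas3");

const renderer = new THREE.WebGLRenderer({ canvas });
renderer.setSize(300, 200, false); // pick whatever resolution you need

// A dedicated camera, parented to the entity so it follows its transform.
const camera = new THREE.PerspectiveCamera(45, 300 / 200, 0.1, 1000);
document.querySelector("#second-cam").object3D.add(camera);

// Re-render the same scene from this camera into the second canvas each frame.
sceneEl.addEventListener("loaded", () => {
  (function loop() {
    renderer.render(sceneEl.object3D, camera);
    requestAnimationFrame(loop);
  })();
});
```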

In that case, I think you would be better off using the code from jgbarah's camrender.js, rather than trying to adapt these components?

Is there a reason that doesn't work for you?

floe commented 1 year ago

I've fiddled around a bit more and found that the canvas element actually gets rendered to (if I move it out of a-assets and make it visible as a standalone element, it shows the second camera view). But the MediaStream I get from canvas.captureStream() still shows a blank element. So I'll try the approach from camrender.js next, thanks for your quick response!

floe commented 1 year ago

Update: yes, it works with camrender.js, with the caveat that the canvas needs to be initialized before captureStream() works (either through THREE.WebGLRenderer, or through canvas.getContext("webgl")).
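In code, the ordering that worked for me is roughly this (sketch; ids as above):

```js
const canvas = document.querySelector("#canvas3");

// captureStream() only produces frames once the canvas has a rendering
// context, so initialize one first -- either directly:
canvas.getContext("webgl");
// ...or indirectly, by handing the canvas to three.js:
// new THREE.WebGLRenderer({ canvas });

const stream = canvas.captureStream(); // now yields live frames
```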

diarmidmackenzie commented 1 year ago

> (if I move it out of a-assets and make it visible as a standalone element, it shows the second camera view). But the MediaStream I get from canvas.captureStream() still shows a blank element

When you do this, I think it is not actually rendering to the 2nd canvas.

Rather, it is rendering to a section of the original canvas that is defined by the boundary of the 2nd canvas.
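Something like this is what's effectively happening on the main renderer (illustrative only, not the component's exact code; `secondaryCamera` stands in for the second camera's THREE object):

```js
// The main A-Frame renderer keeps drawing to the *original* canvas, but
// scissor/viewport confine this render pass to the rectangle that the
// 2nd canvas occupies on screen.
const sceneEl = document.querySelector("a-scene");
const renderer = sceneEl.renderer; // A-Frame's single WebGLRenderer

const secondaryCamera = new THREE.PerspectiveCamera(); // stand-in for the 2nd camera

const rect = document.querySelector("#canvas3").getBoundingClientRect();
const y = renderer.domElement.clientHeight - rect.bottom; // WebGL y runs bottom-up

renderer.setScissorTest(true);
renderer.setScissor(rect.left, y, rect.width, rect.height);
renderer.setViewport(rect.left, y, rect.width, rect.height);
renderer.render(sceneEl.object3D, secondaryCamera);
```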