wizgrav opened this issue 7 years ago
As a first step, OVRUI/src/Control/VREffect.js could be updated to match the capabilities of THREE.VREffect, which supports rendering the scene to a THREE.WebGLRenderTarget. I wrote that support for the latter, and since your version is very similar I could do the same here if you want. The open questions are: where should the render target be created and stored, how should it be passed to VREffect.render() to trigger render-to-texture, and how should it then be handed back to the developer after the scene renders, for further processing?
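For concreteness, a minimal sketch of the pattern being proposed, assuming the three.js r8x-era API that React VR shipped with, where `renderer.render()` accepts a render target as an optional third argument (newer three.js versions use `renderer.setRenderTarget()` instead). The names `renderTarget` and `composePostEffects` below are illustrative placeholders, not existing OVRUI API:

```js
// Create the offscreen target once, sized to the renderer's drawing buffer.
const size = renderer.getSize();
const renderTarget = new THREE.WebGLRenderTarget(size.width, size.height, {
  minFilter: THREE.LinearFilter,
  magFilter: THREE.LinearFilter,
});

// In the frame loop: render the VR scene into the target instead of the
// canvas, then hand the result back to the developer for further processing.
effect.render(scene, camera, renderTarget); // proposed optional argument
composePostEffects(renderer, renderTarget); // hypothetical developer hook
```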
In a scenario where the scene is rendered to a texture instead of directly to the browser window, input schemes are also going to break down. We can explore supporting export-to-texture in a hacky manner, but I'd want a better long-term story for the full React VR experience before officially documenting and supporting it.
I'm with you, hacky is great. Maybe provide a callback slot somewhere that, when defined, would receive the renderer and renderTarget?
This all gets configured at init time, so we can probably just provide some extra options to the VRInstance constructor.
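Something like the following shape, purely as a sketch. Neither `renderToTexture` nor `onRenderFrame` exists in React VR today; they only name the option and callback slot discussed above:

```js
// Hypothetical init-time options on the existing VRInstance constructor.
const vr = new VRInstance(bundle, 'WelcomeToVR', parent, {
  renderToTexture: true, // hypothetical: render the scene offscreen
  onRenderFrame(renderer, renderTarget) {
    // Post-process here, e.g. draw a full-screen quad that samples
    // renderTarget.texture before the frame is presented.
  },
});
vr.start();
```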
Description
Sorry if I've missed something obvious, but how could one get the scene rendered into a texture so as to apply post-processing effects to it?
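For reference, a minimal sketch of the underlying technique in plain three.js (outside React VR, which does not expose it): render the scene into a WebGLRenderTarget, then draw a screen-aligned quad whose shader samples that texture. Assumes an existing `renderer`, `scene`, and `camera`; the grayscale shader is just an example effect:

```js
// Offscreen target the main scene is rendered into.
const target = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);

// Post-processing pass: an orthographic camera and a full-screen quad.
const postScene = new THREE.Scene();
const postCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
const postMaterial = new THREE.ShaderMaterial({
  uniforms: {tDiffuse: {value: target.texture}},
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position.xy, 0.0, 1.0); // quad already in clip space
    }`,
  fragmentShader: `
    uniform sampler2D tDiffuse;
    varying vec2 vUv;
    void main() {
      vec4 c = texture2D(tDiffuse, vUv);
      // Example effect: simple grayscale.
      float g = dot(c.rgb, vec3(0.299, 0.587, 0.114));
      gl_FragColor = vec4(vec3(g), c.a);
    }`,
});
postScene.add(new THREE.Mesh(new THREE.PlaneBufferGeometry(2, 2), postMaterial));

function renderFrame() {
  // three.js r8x-era signature; newer versions use renderer.setRenderTarget().
  renderer.render(scene, camera, target, true); // scene -> texture
  renderer.render(postScene, postCamera);       // texture -> screen with effect
}
```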