erichlof / THREE.js-PathTracing-Renderer

Real-time PathTracing with global illumination and progressive rendering, all on top of the Three.js WebGL framework. Click here for Live Demo: https://erichlof.github.io/THREE.js-PathTracing-Renderer/Geometry_Showcase.html

Headless rendering? #52

Open · SirWyver opened this issue 4 years ago

SirWyver commented 4 years ago

Hi Erich,

truly amazing work you did there! I was wondering, do you think there is a way to support headless rendering, or at least an elegant way of storing the renderings to files?

erichlof commented 4 years ago

Hi @SirWyver Thank you! I must admit I had to Google the term 'headless rendering', so apologies if I'm not quite understanding what your use case would be. But assuming you mean saving and organizing the content on the back-end and just having the three.js path tracing renderer on the client-side front-end, yes, I believe that would be possible.

The CMS could contain all the boilerplate files, like my commonFunctions.js, three.js itself, any model .gltf data stored in .glb format, my pathtracingCommon.js, files like that. As far as my individual demos go, and possibly creating a front-end experience where the user could simply select different scenes, add/delete/manipulate geometry, and have the updated scene displayed relatively quickly (subject to GLSL shader compilation time), yes, I think that is doable.

Each demo, or scene if you like, has 2 crucial files that change from demo to demo, or scene to scene. Take my Cornell Box demo for example: the 2 files it absolutely must have to render that exact scene are Cornell_Box.js and Cornell_Box_Fragment.glsl. The personalized *.js file for each demo (often with the demo name in the title) handles the scene setup and three.js-specific setup, as each scene requires a different argument list for fragment shader setup, and possibly object matrices and stored model data that will be handled through the three.js library (such as the 2 boxes' matrices, i.e. 2 THREE.Object3D()'s, in the Cornell Box scene).
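
To make that division of labor concrete, here is a minimal sketch of the kind of setup such a per-demo .js file performs. To be clear, this is not the actual Cornell_Box.js; the names, values, and uniform layout below are illustrative assumptions, not code from the repo.

```javascript
// Illustrative sketch only -- names, values, and uniform layout are
// hypothetical, not copied from Cornell_Box.js.
import * as THREE from 'three';

// Two Object3D's standing in for the tall and short boxes of the Cornell Box.
const tallBox = new THREE.Object3D();
tallBox.rotation.y = 0.3;
tallBox.position.set(-80, 170, -80);
tallBox.updateMatrixWorld(true);

const shortBox = new THREE.Object3D();
shortBox.rotation.y = -0.4;
shortBox.position.set(90, 85, 60);
shortBox.updateMatrixWorld(true);

// Hand the (inverse) world matrices to the fragment shader as uniforms, so
// rays can be transformed into each box's local space for intersection tests.
// (.invert() requires a recent three.js; older versions used getInverse().)
const pathTracingUniforms = {
  uTallBoxInvMatrix:  { value: new THREE.Matrix4().copy(tallBox.matrixWorld).invert() },
  uShortBoxInvMatrix: { value: new THREE.Matrix4().copy(shortBox.matrixWorld).invert() },
};
```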

The other file, Cornell_Box_Fragment.glsl, is the heart of the path tracer, and each personalized .glsl fragment file like this (again, usually with the demo name in its title) is specifically set up to handle that scene's path tracing. If you look at the SetupScene() function in the .glsl shaders, it usually defines, in a hardcoded manner, the geometry and light info for a simple scene like this. My dream goal would be to have the .js file set up everything with familiar three.js commands such as new THREE.DirectionalLight(), new THREE.PointLight(), or new THREE.CylinderGeometry(), and then just have everything magically sent over and converted to GLSL. This turns out to be much more involved than I first realized, which is why I haven't gotten around to generalizing my API like this yet. So for now, the .js file sets up anything generally related to three.js that the demo needs to know about, and the *.glsl file actually path traces the scene and is optimized for that very scene. Again, I suppose one could generalize an all-purpose "mega-shader" that would generate GLSL on the fly to handle path tracing of any arbitrary scene, but that would be a monumental effort.
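
As a rough illustration of that "magic conversion" idea (nothing like this exists in the repo; the Sphere struct, LIGHT/DIFFUSE constants, and function name below are all made up), a generalizer might traverse the three.js scene graph and emit GLSL text destined for the shader's scene-setup function:

```javascript
// Hypothetical sketch of scene-graph-to-GLSL code generation. Sphere(),
// LIGHT, and DIFFUSE are invented names for a shader-side scene description.
function generateSceneGLSL(scene) {
  const f = (n) => n.toFixed(1); // GLSL float literals need a decimal point
  const lines = [];
  let id = 0;
  scene.traverse((obj) => {
    if (obj.isPointLight) {
      const p = obj.position;
      // Represent the point light as a small emissive sphere in the shader.
      lines.push(`spheres[${id++}] = Sphere(2.0, vec3(${f(p.x)}, ${f(p.y)}, ${f(p.z)}), vec3(${f(obj.intensity)}), LIGHT);`);
    } else if (obj.isMesh && obj.geometry.type === 'SphereGeometry') {
      const p = obj.position;
      lines.push(`spheres[${id++}] = Sphere(${f(obj.geometry.parameters.radius)}, vec3(${f(p.x)}, ${f(p.y)}, ${f(p.z)}), vec3(0.5), DIFFUSE);`);
    }
    // ...boxes, cylinders, triangle meshes, real materials, and so on: this
    // is where the "monumental effort" lives.
  });
  return lines.join('\n'); // to be spliced into the shader's SetupScene() body
}
```

Even this toy version hints at the difficulty: every geometry type, material, and light would need its own GLSL representation and intersection routine.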

There are so many things to focus on and get right in a path tracer that, admittedly, I have not given much thought to how a company or business would actually store this engine on the back-end, where the front-end experience would only give users access to entering and leaving different scenes in real time. But it definitely could be done; it just requires some common infrastructure work and planning for the CMS. The tricky part is the front end: if a user changes things, how does that get sent back to the server, converted into three.js setup (Object3D's), and then ultimately into a GLSL file capable of handling whatever geometry or light info was just changed, in order to present the result back to the end user? I could see the three.js online editor itself as a use-case scenario: instead of rendering in the traditional way, you click 'render' and my path tracer kicks in and renders whatever geometry and lights it sees. That would be awesome! I just don't have the time to refit three.js' editor to do something like that.

I hope I got the gist of what you were asking about. If I missed your use-case scenario, please clarify and I'll do my best to point you in the right direction. :)

SirWyver commented 4 years ago

I see, thanks a lot for this thorough explanation! I agree that coding this "mega-shader" to render arbitrary scenes would require a lot of (probably too much) work. A nice little feature (not explicitly related to this project), though, would be a way to capture the rendered scenes and stream them to disk.

erichlof commented 4 years ago

Hi @SirWyver

Ahh yes, that is a good idea. For instance, if one were doing an animation or short movie clip comprised of a series of individually rendered images, having a way to save each image (or rendered animation frame) to disk would be helpful indeed. I have an idea of how to do it, just thinking it through: maybe you could have a max sample count, a threshold where the image will stop refining, say 1000-2000 samples, hopefully when most of the noise is cleaned up to a suitable movie level. Then once the renderer hits that max sample count, it automatically saves. I'm assuming that because we are working in the browser, there's most likely a Canvas API function to save the entire contents of the canvas into an image file. But is there such a function to save to a common image format, like .jpg or .png? Again, there's probably a utility function somewhere out there on the web or on npm to snapshot the canvas contents, quickly convert them, and save.
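
For what it's worth, here is a minimal sketch of that save-at-max-samples idea. It assumes the WebGL context was created with preserveDrawingBuffer: true (or that the save runs immediately after rendering, before the drawing buffer is cleared), and the sampleCounter and frameNumber variables are hypothetical names for state the render loop would maintain:

```javascript
// Sketch only: sampleCounter, frameNumber, and renderer are assumed to exist
// in the surrounding render loop; they are not names from this repo.
const MAX_SAMPLES = 2000; // threshold where the image stops refining

function saveCanvasAsPNG(canvas, filename) {
  // HTMLCanvasElement.toBlob() encodes the current canvas contents;
  // 'image/jpeg' works too, but PNG is lossless.
  canvas.toBlob((blob) => {
    const url = URL.createObjectURL(blob);
    const a = document.createElement('a');
    a.href = url;
    a.download = filename;
    a.click(); // triggers a browser download of the image
    setTimeout(() => URL.revokeObjectURL(url), 1000); // free the blob URL
  }, 'image/png');
}

// Inside the animation loop, right after rendering the frame:
if (sampleCounter === MAX_SAMPLES) {
  saveCanvasAsPNG(renderer.domElement, `frame_${frameNumber}.png`);
}
```

Since canvas.toBlob() is a standard Canvas API method, no extra npm utility is strictly required for a simple .png or .jpg snapshot.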

passariello commented 4 years ago

Another reason to have individually rendered images is the ability to re-edit them in post-production. At the professional level, TGA, RLA, RLE, or open formats from 32-bit to 48-bit are commonly used. OpenEXR (USD?) and all types of pipeline integration are welcome!

To collect images into a stream, the IFL (image file list) format is used a lot. Bye
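
For readers unfamiliar with it, an .ifl file (as used by 3ds Max and similar tools) is simply a plain-text list of image filenames, one frame per line, for example:

```
frame_0001.png
frame_0002.png
frame_0003.png
```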

