erichlof / THREE.js-PathTracing-Renderer

Real-time PathTracing with global illumination and progressive rendering, all on top of the Three.js WebGL framework. Click here for Live Demo: https://erichlof.github.io/THREE.js-PathTracing-Renderer/Geometry_Showcase.html
Creative Commons Zero v1.0 Universal
1.91k stars 177 forks

OpenGL version? #31

Open zhaishengfu opened 5 years ago

zhaishengfu commented 5 years ago

Wonderful work! Why do you use WebGL and not OpenGL? I think OpenGL may be faster.

erichlof commented 5 years ago

Hi @zhaishengfu Sorry for the late reply - I was on vacation. :)

The main reason I use WebGL 2.0 (which is based on OpenGL ES 3.0) rather than pure OpenGL under Windows is the ease of cross-platform portability. Basically any device with a capable modern browser (which is pretty much everything these days) can run my code. I develop the demos on my Windows laptop, but as soon as I release them, people on Windows desktops, Macs, Linux, tablets, smartphones, etc. can experience the demos as long as they have a browser, whether that is Chrome, Firefox, or Edge.
Pure OpenGL might be a little faster, as you said, but since my code mainly lives in shaders that get compiled down to GPU machine code anyway, I think the trade-off for portability and "write once, run everywhere" is worth it in the end.

Thanks! :-)

zhaishengfu commented 5 years ago

Oh, enjoy your vacation. I understand your reasoning. I have seen someone say that compute shaders can be used to implement ray tracing - do you think that approach is better?

erichlof commented 5 years ago

Hi again! :) Yes, it is probably a little more efficient to run the tight ray tracing loop in a compute shader. If I'm not mistaken, I think that is how you must access NVidia's ray tracing hardware on their RTX cards - through compute shaders. My project does the more traditional full-screen quad trick and uses a vertex and fragment shader. The vertex shader is only a couple of lines of code because it only needs to draw 2 big triangles that form a quad stretching across the entire viewport. All of the real work is then done in the fragment (or pixel) shader, which runs on every pixel covered by those 2 triangles - which ends up being every single pixel of your full screen.
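
As a rough illustration of that full-screen quad trick, here is a minimal sketch (with assumed names, not the repository's actual source): the vertex stage only forwards the clip-space corners of 2 triangles, and the fragment stage then runs for every pixel they cover.

```javascript
// Minimal sketch of the full-screen quad setup (hypothetical names,
// not the repo's actual code). The vertex shader just passes each
// position through unchanged; the 2 triangles together cover the whole
// viewport in normalized device coordinates (NDC), x and y in [-1, 1].
const vertexShaderSource = `
  attribute vec2 position;   // NDC corner of one of the 2 triangles
  void main() {
    gl_Position = vec4(position, 0.0, 1.0);
  }
`;

// Interleaved x, y positions for the 2 triangles forming the quad.
const quadPositions = new Float32Array([
  -1, -1,   1, -1,   1,  1,   // triangle 1 (lower-right half)
  -1, -1,   1,  1,  -1,  1,   // triangle 2 (upper-left half)
]);

console.log(quadPositions.length / 2); // 6 vertices
```

In a real renderer these positions would be uploaded to a vertex buffer and drawn with `gl.drawArrays(gl.TRIANGLES, 0, 6)`, after which the fragment shader does all the path tracing work per pixel.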

To see how to make a raytracer with compute shaders, where you are essentially talking to the individual GPU cores and setting up each individual thread with its own ID, take a look at Sam Lapere's GitHub projects. He shows how to do it with CUDA as well as OpenCL/OpenGL on Windows.

I don't know whether the tight ray tracing loop runs any faster in the compute setup vs. the vertex/fragment full-screen quad approach, but I will say this: WebGL, since it runs in browsers that may or may not be on mobile devices, is capped at 60 fps. That is the best you can hope for when collecting many samples per second to get rid of the initial noise. With compute - NVidia RTX, OpenCL, CUDA - you don't have that cap, and it is possible to run at 140 fps or 200 fps, thereby collecting many more samples per second.

However, this assumes a scene with simple geometry. If you're trying to run a detailed level from a AAA game with millions of triangles, it will probably do no better than 60 fps at this moment in GPU ray tracing (summer 2019). As GPUs get more powerful, I can envision handling any geometry, no matter how complex, at 120 fps - enough to do real-time VR path tracing with minimal noise and enough fps to make it a comfortable experience.
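
The frame rate matters because progressive path tracing averages one noisy sample per pixel per frame, so more frames per second means faster convergence. A toy sketch of that running average (hypothetical names; the actual renderer does this blending on the GPU):

```javascript
// Incremental mean: after n samples, avg_n = avg_{n-1} + (x_n - avg_{n-1}) / n.
// Each rendered frame contributes one new noisy sample per pixel.
function accumulate(average, newSample, sampleCount) {
  return average + (newSample - average) / sampleCount;
}

// At a 60 fps cap you gather 60 samples per pixel per second;
// uncapped at 200 fps you gather 200, so the noise clears over 3x faster.
let avg = 0;
const noisySamples = [1, 0, 1, 0]; // toy radiance samples for one pixel
noisySamples.forEach((s, i) => { avg = accumulate(avg, s, i + 1); });
// avg now approximates the true pixel value, the mean of the samples (0.5)
```

This is why clearing the 60 fps cap pays off: halving the time to reach a given sample count halves the time the image stays noisy after the camera moves.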