erichlof / THREE.js-PathTracing-Renderer

Real-time PathTracing with global illumination and progressive rendering, all on top of the Three.js WebGL framework. Click here for Live Demo: https://erichlof.github.io/THREE.js-PathTracing-Renderer/Geometry_Showcase.html

Multiple OBJ models rendering #9

Closed EtagiBI closed 5 years ago

EtagiBI commented 6 years ago

Hello,

First of all, I would like to thank you for all your efforts! Your project seems to be the only actively maintained project dedicated to Three.js photorealistic rendering.

As for my question, is it possible to render multiple OBJ models with your PTR? My friend and I are working on a Three.js-based room planner as our university project. We have a bunch of textured furniture models with different materials defined via corresponding MTL files. Is this feature supported by PTR? As I can see from your demos, at the moment PTR works with simple shapes only.

erichlof commented 5 years ago

@MEBoo Thanks for hashing through all of this with me. On the issue of scene loading, yes GLTF supports most of the features that I need to read in when importing objects. I might just leave the OBJ+MTL path alone and let it sit on this repo as an example. It was giving me issues because sometimes the loader couldn't find a material .src for some reason and the web page would just be black. I want examples to work every time. Also, the aforementioned lack of PBR creates a lot of problems and guesswork when I'm trying to import it for the path tracer.

I think I'll stick with the GLTF for now unless something better comes along. As you said, if I can just parse the three.Scene, it shouldn't matter what loader is used. However, in terms of ease-of-use and frictionless workflow, I wanted my project to provide a clear way to get objects into the path tracing scene, hence the GLTF loading example. I could have made everybody convert the scene themselves into a Three.Scene in JSON format (which would be simple to parse), but not only is this seemingly cumbersome, the size of a scene described in JSON is not web-friendly like GLTF is (GLTF packs down so you can better transmit it through the internet). So I'm providing a sort of middle-step where I load the user's .gltf scene using the GLTFLoader, and then after three.js turns it into a JSON format Three.Scene() (the format I ultimately need), I can then parse the scene, extracting the bits of info that I need.

Finally on the materials live-edit issue you mentioned, I think it is possible. I don't have any examples currently, but if you were to somehow change the material of a sphere from diffuse to glass inside the path tracing shader, it would update instantly at 60 fps. The only bottleneck when trying to do this with a model made out of triangles is that all the material data has to be on a three.DataTexture that has been loaded onto the GPU. It is possible to change the model's materials by altering the triangle_array[] fields. However, it will not update visually. The changed triangle_array[] texture will have to be re-loaded onto the GPU. But it shouldn't take more than a second or two I would imagine. I have yet to try this, but I might give it a try in the near future just to make sure it works. If a lot of editing is desired, I could even look into somehow incorporating my path tracer into three.js's fully-functioning editor
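For reference, a minimal three.js sketch of the re-upload step described above. The names triangleData and triangleDataTexture are illustrative stand-ins, not the repo's actual identifiers:

import * as THREE from 'three';

const SIZE = 2048;
const triangleData = new Float32Array(SIZE * SIZE * 4); // RGBA float texels holding triangle/material fields
const triangleDataTexture = new THREE.DataTexture(triangleData, SIZE, SIZE, THREE.RGBAFormat, THREE.FloatType);
triangleDataTexture.needsUpdate = true; // first upload to the GPU

function setTriangleMaterialField(texelIndex, channel, value) {
  triangleData[texelIndex * 4 + channel] = value; // edit the CPU-side copy...
  triangleDataTexture.needsUpdate = true;         // ...and ask three.js to re-upload it before the next render
}

As described above, the re-upload is the costly part; the edit itself is just a write into the typed array.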

MEBoo commented 5 years ago

@erichlof yes, in the case of scene import, everyone should use GLTF, not the THREE JSON-encoded format... But when the GLTF is loaded using the default loader provided by THREE, you already have all the objects instanced... no need to parse any JSON string, and no action is required from the user to instance the scene... Actually, only a double parse happens, because you first load the GLTF into THREE and then you traverse the THREE scene... If you provide a middle-step way (like now), then you'll provide a way that is a bit more performant for directly loading a GLTF scene into your path tracer. But what I mean is: there's no need to parse a JSON scene format (the one usually used by three), only to traverse the scene for Meshes and Lights. So someone who wants to load a glTF could simply follow the standard path provided by Three https://github.com/mrdoob/three.js/blob/master/examples/webgl_loader_gltf.html

loader.load( 'models/gltf/DamagedHelmet/glTF/DamagedHelmet.gltf', function ( gltf ) {scene.add( gltf.scene );});

As you can see, only one line of code is required for the user to set up a glTF scene... then you can simply traverse the three scene.
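Expanding that one-liner into a hedged sketch of the load-then-traverse flow being discussed (the preparePathTracingScene call at the end is hypothetical, just marking where the path tracer would take over):

import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const loader = new GLTFLoader();

loader.load('models/gltf/DamagedHelmet/glTF/DamagedHelmet.gltf', (gltf) => {
  scene.add(gltf.scene);

  // walk the three.js scene graph and collect what a path tracer cares about
  const meshes = [];
  const lights = [];
  scene.traverse((obj) => {
    if (obj.isMesh) meshes.push(obj);   // geometry + PBR material (maps, IoR, roughness, ...)
    if (obj.isLight) lights.push(obj);  // point / spot / directional lights
  });

  // preparePathTracingScene(meshes, lights); // hypothetical hand-off to the path tracer
});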

Regarding materials, mine was only an example of a possible use case; it's not important... It was only to talk about the possibility for the user to instance a new PBR material at runtime, or just before starting the path tracer during scene setup. I was talking about supporting both PBR materials, even if glTF doesn't instance the "physical" one.

Thank you for listening ;)

erichlof commented 5 years ago

@MEBoo

Ok thank you for clarifying. I believe I understand the best way forward. I will continue to support and encourage the use of the GLTF loader for getting objects imported, then I just traverse the three.Scene and extract the info. Your editing scenario has sparked my interest: I might throw together a quick non-triangle model demo (i.e. a simple sphere scene) and include a basic HTML button that, once pressed, switches back and forth between materials that are applied to the sphere, in real time. If it goes smoothly, I could apply that to the model loading scene, by using the drop-down controls GUI that three.js uses on most of its demos, in order to provide more buttons/sliders for PBR parameters. Thanks again for your input :)
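A rough sketch of how such a material-switching control might be wired up with dat.GUI (the uniform and counter names, uMaterialType and sampleCounter, are assumptions for illustration, not the demo's actual code):

import { GUI } from 'dat.gui';

// stand-ins for the uniforms/counters a progressive path tracer would actually own
const pathTracingUniforms = { uMaterialType: { value: 2 } };
let sampleCounter = 0.0;

const MATERIALS = { Diffuse: 0, Metal: 1, Glass: 2, ClearCoat: 3 };
const settings = { material: 'Glass' };

const gui = new GUI();
gui.add(settings, 'material', Object.keys(MATERIALS)).onChange((name) => {
  pathTracingUniforms.uMaterialType.value = MATERIALS[name]; // tell the shader which BRDF branch to use
  sampleCounter = 0.0;                                       // restart progressive accumulation for the new material
});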

MEBoo commented 5 years ago

Wow, that would be awesome! If you provide updates to mesh materials at runtime, I think you'll have to optimize performance by transferring only the updated info to the GPU. I don't know if I'm wrong, because I don't know how and when you transfer data to the shader; I have to study :D Consider also the update of a mesh position (or scale, or...) sorry, too much imagination :D

PS: a final consideration about OBJs -> the OBJ standard was created to load a single mesh with its material, not an entire scene with cameras and lights defined. So if someone would like to import an entire SCENE, then he should use a format for scenes, like COLLADA or glTF. Even if an OBJ can contain a group of Meshes, that is not a scene. That said, if someone does import OBJ+MTL, then before adding it to the three.scene he should convert the material, nothing more. The same goes if someone imports a COLLADA scene, etc.

PPS: regarding lights, I don't know if you already import the three.scene lights... but if you consider doing this, then you should read the "power" property of the light, expressed in lumens, and not its intensity property -> because when using PBR materials in three, you should also set renderer.physicallyCorrectLights = true, and because intensity means nothing for a real application like yours
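In three.js terms (as of the versions current at the time of this thread), that point maps to something like the following sketch; light.power is the lumens value a physically based renderer would want to read:

import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.physicallyCorrectLights = true; // three.js then interprets light.power as lumens

const scene = new THREE.Scene();
const bulb = new THREE.PointLight(0xffffff);
bulb.power = 800; // roughly a 60 W incandescent bulb, in lumens
scene.add(bulb);

// when parsing the scene for a path tracer, read power (lumens) rather than the unitless intensity
scene.traverse((obj) => {
  if (obj.isPointLight || obj.isSpotLight) console.log(obj.type, 'power (lm):', obj.power);
});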

Good work!

erichlof commented 5 years ago

@MEBoo Yes I'm not sure either if you can just load relevant data, like a single value to a texture that is already residing on the GPU. I don't know if three.js has any examples of how to handle this. I will have to study too, lol.

Regarding OBJ considerations, thank you for the info on that subject - that makes sense to me now. I'm sort of new to all this three.js importing stuff; in the past I would just use the three.js primitives like three.Box or three.Sphere and build the scene up myself out of different pieces. So your considerations help me. :)

Regarding lights, at the moment, I'm not reading in light information. The demos for model loading currently have a huge light-blue/light-green sphere light that is wrapped around the entire scene, like a sky dome. This helps with converging noise quicker. But I do intend to support point lights (light bulb) and directional lights (the sun) and just parse the three.Scene for light objects. The only light type I might have trouble with is spotlights. At the moment I don't have any demos with spot lights, other than the "classic scene" for "bi-directional path tracing" with the gray room and the egg shaped glass sitting on the little table. This has a spot light shining towards the left, but I had to bi-directionally path trace it, which is expensive in terms of geometry. But we'll see, nothing is impossible right? :-D

MEBoo commented 5 years ago

@erichlof glad to be of a little help! :) yes, nothing is impossible ;)

Regarding "converging noise quicker", from what you said, I think that a closed room/scene is better than an open one, so this should be pointed out to developers once lights are supported.

Regarding spotlights: what about creating a point light inside a cone geometry, with a super reflective material on the inside, like real spotlights? ...Otherwise you have to do some math to simulate rays coming from a spotlight.

erichlof commented 5 years ago

@MEBoo Sorry for the delayed response - I've been hard at work implementing the material switching feature we were discussing a couple of posts back. I'm happy to report: it works great! Even better than expected: Switching Materials Demo. When a different material is chosen, the path tracer instantly traces the new material. The switch happens at 60 fps! I like this style of demo so much that I tossed out the old static materials demos (1-4) and replaced them with this snappier new one! The GUI works perfectly on mobile too ;)

Regarding spotlight handling, yes the point light inside of the cone (or cylinder) should work in theory. I will have to test it out - Ha ha, another demo project I'm already thinking about now! Thanks for the suggestion, I'll keep you posted!

MEBoo commented 5 years ago

hey impressive!!! It's cool to watch indirect light changes real-time!! Awesome!!

Ahaha ok! Maybe we lose 1 bounce with this method? Maybe it would be better to simulate the spotlight firing photons like a cone does... but let's see! Remember to coat the internal cone surface with a reflective mirror :D

MEBoo commented 5 years ago

Hey, here we are at 60 posts... when you complete the three.scene parsing, I think this thread is done... I'll open other threads for different topics, like material maps, lights, etc. ;)

erichlof commented 5 years ago

@MEBoo Sounds good to me! Yes, the spotlights will be made out of reflective metal. If you want, you can open up a new thread about supporting various three.js light types. And yet another thread for supporting various materials/textures by parsing the scene. Yes we kind of took over this thread from the OP (sorry about that @EtagiBI ). I still intend on working to get multiple OBJ (or multiple GLTF models rather) supported. It will require more BVH work, which is the most complicated part of the code-base and most sensitive to change. I'll post to this thread if I have any breakthroughs on multiple objects.

MEBoo commented 5 years ago

@erichlof yes, by "three.scene parsing" I also mean a BVH of BVHs, since without that it's impossible to render a scene with multiple meshes... I'll open the other threads once I can test the scene parsing, so I can try something more complicated with maps and lights myself ;)

See you soon!

MEBoo commented 5 years ago

Hi Erich! I always checked for updates on the bottom of your project homepage, the news section, but only today I saw that you were writing updates on the top :D

Nice to see the planet demo and that you are still working on the project! Merry Christmas ;)

erichlof commented 5 years ago

@MEBoo Hello, Merry Christmas! Yes I have been making updates and changes here and there across the codebase - nothing earth-shattering, but just incremental improvements to the various pieces in order to make the whole project more unified and, in some cases, bug fixing and making things work properly (i.e. mobile swiping camera rotation). I had started down the path of adding different light types to scenes containing BVH's (like I mentioned in an earlier post on this thread), but I had to take a detour because I quickly found out that having a small light source (like a point light or a light bulb) was taking way too long to converge with all the dark noise in the images. This is because I was trying to use old fashioned path tracing and let the rays try and find the light source by chance. The frame rate was still great, 30-60 fps, but the image quality was unacceptable. If you noticed, my current BVH demos have a huge sky-light dome/sphere that makes the models rendering converge very quickly, so I had been skirting around this issue when I was getting my feet wet with BVHs.

Now that I couldn't put off the problem any longer, I decided I must investigate further exactly what is going on in the BVH (hence the new BVH visualizer demo). When I was satisfied with that, I turned to the lighting algorithm. At first I tried what I already had working on my Cornell Box Demo, Geometry Showcase, etc., which is direct light sampling on every diffuse bounce. When I tried the same algo with a BVH introduced into the mix, this sped up the convergence, but tanked the frame rate, and sometimes crashed the WebGL rendering context because the BVH was not only being called 4 times per frame like my current BVH demos, it could be called 8+ times because of the additional direct light sample through all the scene geometry.

Luckily I was reading Peter Shirley's new Ray Tracing in One Weekend series (a great read) just for fun and possible inspiration, when I came upon a single sentence in which he says you can either belong to one camp and sample the light directly on every bounce, aka send shadow rays (as everyone including me is currently doing) or go with the camp in the minority (of which he is a member) and statistically just aim more rays at the small light source and down-weight the contributions accordingly to probability theory. On a whim I tried this approach in the current demos without the BVHs (so I could measure the success rate) and it works great so far! It converges as-fast or almost-as-fast depending on lighting complexity, and the best news is the frame rate stays at a solid 60fps (I don't have to send extra shadow rays on each diffuse bounce) and even on mobile when the frame rate would have been 10fps, it goes up to 20/25fps because of the reduced thread divergence - the rays proceed in a more lock-step fashion.

So that is what I've been working on lately. It's kind of a twisty path I'm on right now, because there's multiple ways of approaching the problem. Sorry it's been quiet lately on the BVH side of things, but I feel I need to explore this avenue of lighting so that when I introduce different light types, multiple objects with their own BVH's, and a BVH for the BVHs, that I can be confident that it won't crash the browser webgl context, as well as produce a nice image quickly. Soon I will post the lighting changes to the entire codebase - they work really well. I'll keep you posted! ;-) Happy Holidays!

MEBoo commented 5 years ago

@erichlof hey, so many problems!! Let me understand: using BVH + sky-light dome there are no performance/quality problems (as we saw in your demo). But when you put a small light in that scene with BVHs, you got slow convergence (and bad quality). So you started checking the BVH algorithm and then the core lighting algorithm.

I didn't expect any issues, since your Cornell Box Demo already uses direct light sampling... so I expected that BVH + direct light wouldn't be a problem :(

But hey... after all, seems like you built a better path tracer now! Hope this new approach is usable and produces better performance/quality for every use case ... from the old Cornell Box to the new polygon scene.

Thanks for your amazing job! Happy Holidays!

erichlof commented 5 years ago

@MEBoo Yes you got it! You understand perfectly! :)

Direct light sampling works most of the time and that's why everyone, including me, usually defaults to that approach. It guarantees that on every bounce you will get a light sample contribution, no matter how small the light source. But it comes at a cost - you must search through all the geometry (BVHs included) again to get that light sample and see if the surface in question is being lit, or is blocked by an occluding geometry object, thus leaving it in shadow. That's why it is sometimes called the 'shadow ray' technique.

For all light types it produces really fast results, if you can afford double the geometry searches. I found out that I couldn't afford it when trying to add the heavy-duty models with BVHs in that geometry search, at least for WebGL shaders in the browser. Hence the slightly different approach of stochastic ray direction choices biased in favor of the light source. If it chooses to sample the light, it does so on the next loop, acting like a shadow ray - if it doesn't choose to sample the light, it bounces randomly and collects the usual diffuse color bleeding from other objects in the scene as a traditional path tracer does. It doesn't matter which it chooses, but you must account for weighting if it does decide to spring towards the small light source.
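A toy, self-contained JavaScript illustration of the estimator math behind "aim more rays at the small light source and down-weight accordingly" (this is not the repo's shader code; the real logic lives in the GLSL path tracing shader). Here a sharply peaked function stands in for a tiny bright light, and samples come from a 50/50 mixture of a uniform distribution and one concentrated near the peak:

function f(x) {                                             // "scene": tiny bright light near x = 0.5
  return Math.abs(x - 0.5) < 0.005 ? 100.0 : 0.01;
}
function mixturePdf(x) {                                    // pdf of the 50/50 mixture at x
  const uniformPdf = 1.0;                                   // U(0, 1)
  const peakedPdf = Math.abs(x - 0.5) < 0.05 ? 10.0 : 0.0;  // U(0.45, 0.55), aimed at the "light"
  return 0.5 * uniformPdf + 0.5 * peakedPdf;
}
function estimate(numSamples) {
  let sum = 0.0;
  for (let i = 0; i < numSamples; i++) {
    const x = Math.random() < 0.5
      ? Math.random()                                       // ordinary random bounce
      : 0.45 + 0.1 * Math.random();                         // "spring toward the light"
    sum += f(x) / mixturePdf(x);                            // down-weight by the mixture pdf -> unbiased
  }
  return sum / numSamples;                                  // converges to the true integral (about 1.01)
}
console.log(estimate(100000));

Dividing by the mixture pdf is the "account for weighting" step; without it, the biased aiming would artificially brighten the light's contribution.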

This random picking allows all the rays to do roughly the same amount of work on each frame, thus keeping divergence low and frame rate high. I believe it should work for BVH models. I'll post a demo soon with a single point light and a triangle model. Stay tuned!

MEBoo commented 5 years ago

@erichlof understood! Hope the new path-tracer maintains the same quality ... regarding noise etc.

I'm tuned ;)

erichlof commented 5 years ago

@MEBoo Yes it should converge to the same quality result. I made the repo-wide changes last night. Hopefully it is a seamless transition and you might not even realize that anything is different under the hood, which is a good thing! The only thing that is different is that it won't crash when I add the BVH with different light types, ha! ;-)

erichlof commented 5 years ago

@MEBoo and all, The following is a copy of my response to question #22

I'm happy to report that the initial test of the new stochastic light sampling technique works great with the BVH: BVH Point_Light_Source Demo. The light source is very small and bright and the rest of the scene is dark. If a normal path tracer was used, the noise would be very slow to converge. But as you can see, with the new approach, the image resolves almost instantly. And the best part is that the cost is the same as that of a traditional path tracer - 4 bounces max through the BVH structure to get the refractive glass surfaces looking correct. I will ramp up the triangle count and try some different light source types, like spot lights and quad lights, but from what I can see so far, things are looking good!

Just wanted to share the promising results on this thread as well! -Erich

MEBoo commented 5 years ago

@erichlof so ... Happy new year!! A new big milestone achieved!! Now let's see what happens with more lights / triangles!

So, will this new tracer method be the default algorithm for every use case? Is it good for the Sky Dome case?

A little issue: I see in the "bvh point light demo" some white points that never appeared before with the old algorithm...

erichlof commented 5 years ago

Hi @MEBoo Yes this new algorithm should work for all general cases. Now with the sky dome, you could just delete the random if statement that makes the rays go towards the light source because the rays will find the dome just fine without assistance. Essentially you'd end up with a traditional path tracer like I have currently on all the BVH demos.

The only case that would be tricky would be like the bidirectional demos, the room with a small table and glass objects on it and the light sources are hidden, inside a casing, or behind a nearly shut door. That would require the bidirectional algo because no amount of rays will be able to find the light sources, they are hidden. But we can cross that bridge later. :-)

Those bright spots are called fireflies and they are a bane to rendering programmers everywhere, ha ha. I will see if I can mitigate the bright spots somehow.

Happy new year!

erichlof commented 5 years ago

@MEBoo I got rid of the bright fireflies! BVH_Point_Light_Source Demo

MEBoo commented 5 years ago

@erichlof You know... you are the man!!! And now? Could you start experimenting with light types? Hand-crafted spotlights? Rect area lights?

A question... using multiple lights in a scene, will the algorithm converge more slowly, or is it exactly the same?

erichlof commented 5 years ago

@MEBoo Yes the next step is different light types. The spot lights will be made by placing a variable size bright spherical bulb (usually white, but can be changed to other spot colors) inside a variable size metal open cylinder (color is usually black metal, but can be changed if desired). The rays that go toward this light source will only get light if they find the bulb inside the opening of the cylinder, or hit the inside of the reflective cylinder and then find the bulb on the next bounce. It should work, in theory. :-D
Rectangle area lights are already working - check out the Billiard table demo and the Lighting Equation Demo (from Kajiya's famous 1986 paper). Dome lighting and large spherical lights already work too - current BVH demos (as you know) and check out Geometry Showcase demo for multiple large sphere lights.

Speaking of that demo, to answer your question about handling multiple lights and multiple different types of lights in the same scene, at first I will do what I did on the Geometry Demo (and Lighting Equation demo with 3 different quad lights) and treat them Monte Carlo style, or randomization - basically it's like a roulette wheel; on every animation frame, you spin the wheel and have as many slots as there are light sources. Where the ball lands is which light source is 'the winner' for that series of 4 bounces. On the next animation frame you do it again for the upcoming loop of 4 bounces. Eventually all light sources will be the winner. Now this has worked with 3 or 4 light sources, however I have yet to try with unlimited light sources (like welding particles), or vastly different types all mixed together, say a dome, sphere lights, quad area light, and a spot light in an interior modern home scene at the same time.
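A minimal sketch of that roulette-wheel selection (names here are illustrative): one light is chosen uniformly at random per frame, and its contribution is scaled by the number of lights so the accumulated average over many frames matches sampling every light:

function sampleOneLightPerFrame(lights, directLightFrom) {
  const i = Math.floor(Math.random() * lights.length); // spin the wheel
  return lights.length * directLightFrom(lights[i]);   // scale to keep the estimate unbiased
}

// usage with a stand-in radiance function
const sceneLights = [{ radiance: 1.0 }, { radiance: 3.0 }, { radiance: 0.5 }];
const contributionThisFrame = sampleOneLightPerFrame(sceneLights, (l) => l.radiance);

Scaling by lights.length is what keeps the roulette approach unbiased even though only one light is visited per frame.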

My prediction is that it will work, but the amount of noise and the time to converge that noise will go up a little with each addition of a complex lighting scheme. This is because less and less rays get devoted to each light source in the big list of light sources. You can't get to all of them each frame, otherwise the framerate would suffer too much.

erichlof commented 5 years ago

@MEBoo I just got Spot Lights working! Here's a demo: Spot Light Source Demo As you can see, the casing of the spotlight is cylindrical and depending on which side you're looking at, it is black on the outside (or could be any color like gray or white, whatever you want) and reflective metal on the inside. The bulb is made out of a sphere, which is white in the demo, but you could easily change it to colored spot lights.

The new stochastic sampling works great on these highly contrasting lighting scenes! Normally if you had darkness in the background and a very bright spot on the scene subject, the noise would have been unbearable. But with this new technique, it is able to search through the BVH inside a WebGL browser shader, running at 60 fps, and converges almost instantly!

MEBoo commented 5 years ago

AWESOME!! No other words... I think the most complex objectives are almost done for this project!

MEBoo commented 5 years ago

Hi @erichlof ... I see many updates.. what happened?

erichlof commented 5 years ago

Hi @MEBoo
I improved the sampling of the hemisphere above the diffuse surface that needs to be lit. This results in smoother lighting with less noise. More importantly, I found a way to better sample the light sources by adding a diffuseColorBleed variable to most of the demos. This allows the end user to dial up and down the amount of gathered diffuse color bouncing vs. the amount of direct lighting shadows. It ranges from 0.0 to 0.5 and if it is 0.5, you get exactly what I had in the past for all the demos, basic path tracing with full color bleeding. But if you wish to dial it down, the convergence goes way faster at the cost of a slight loss of color bleeding (which in practice isn't even that noticeable).
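One plausible reading of that dial, as a heavily hedged sketch (this is a guess at the behavior described above, not the repo's actual implementation): at 0.5 every diffuse bounce gathers indirect color as before, and lowering the value spends more of those bounces on direct light sampling instead:

// purely illustrative — diffuseColorBleed semantics guessed from the description above
function chooseDiffuseStrategy(diffuseColorBleed /* 0.0 .. 0.5 */) {
  const pBleed = diffuseColorBleed * 2.0; // map 0..0.5 onto a 0..1 probability
  return Math.random() < pBleed ? 'gather-indirect-bounce' : 'sample-light-directly';
}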

Also, I updated most of the demos because I am trying to unify the path tracing algorithm so that a similar plan of action or algo can work for the various demos and their individual lighting needs. The only demos this doesn't apply to are the outdoor environments and the 2 bi-directional scenes from Eric Veach's paper (these need different strategies).

Click on the Geometry Showcase and Quadric Geometry demos and you should notice they run faster, converge more quickly, and provide a smoother experience!

MEBoo commented 5 years ago

wow! even better! what's next?

EtagiBI commented 5 years ago

@erichlof, I'm amazed by your progress! By the way, have you already switched to GLTF/GLB? These new formats are well optimised, so it's possible to render more complex models without any speed losses.

erichlof commented 5 years ago

@MEBoo I'm about to start working on multiple models (finally, which was the title of this epic thread in the first place, LOL!), and a BVH for the BVH's after that. On the side, I have been revisiting the bi-directional scenes and seeing if there are any improvements to be made. This is because with the BVH's inside a room or house for example, most of the light sources are hidden inside cove lighting, recessed lighting panels, underneath lamp shades, etc. Even though I got the spot lights and point lights working fast with the BVH recently, those demos are just a tad idealistic so far as real world architecture and lighting plans go.

Those demos have exposed point and spot lights, and the older demos have huge spheres hanging in the air or big quad area lights (like the museum demos). Things work well in those idealistic lighting conditions, but once you try to render an apartment or bathroom with recessed lighting, the noise returns big time. The only solution to this indoors problem that I can see at the moment is bi-directional path tracing, so I am revisiting some of that old code to see if I missed any optimizations.

erichlof commented 5 years ago

@EtagiBI Thank you! Yes I believe that GLTF will be the best way forward. All the GLTF models I have downloaded from places like Sketchfab, everything just works. I haven't heard about GLB yet, is that Binary? If so, I believe some of the models like the Damaged Helmet (here in the models folder of this repo) do have a binary section that describes the vertices and indices. This speeds everything up, from downloading, to loading into memory. So, if that is the case, then yes I will be using that format going forward.

MEBoo commented 5 years ago

@erichlof nice new demo!!

erichlof commented 5 years ago

@MEBoo Thanks! I've been busy working on moveable BVHs. I've had a breakthrough; I'm putting together a small demo to show the new functionality. Will post it soon!

MEBoo commented 5 years ago

@erichlof Hey, just saw the moveable BVHs with models!! Don't know how you did it, but it's very fast! I see that the scene is accumulating samples where there is no movement... correct? The result, even with only one or a few samples, is awesome!

So a real spotlight, low light, a hi-poly model with texture + other maps, on a BVH updated and moved at runtime!!

erichlof commented 5 years ago

@MEBoo Yes I was very excited to see it actually working for the first time! The secret is that I treat the BVH like I do the boxes in the Cornell box scene. If you notice in that old demo, the mirror box and the short diffuse box are slightly turned. How I achieved this was by taking advice from old ray tracing pros: instead of transforming the box and trying to trace an arbitrarily rotated object (which is hard and expensive), you just transform the ray by the opposite (or inverse matrix, to be exact), which puts the turned box essentially back to facing straight-on, then trace the 'non-rotated' object, which is easy to do. Then you rotate the normals and hit data back into world space with the desired rotation matrix. So on a whim, I tried this with the entire BVH, 7000+ boxes, transformed the ray by the inverse of the rotated root node of the BVH, and then traced as normal!
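A three.js-flavored sketch of that inverse-transform trick (using current three.js API; the intersectLocal callback is a stand-in for whatever routine tests the untransformed geometry or BVH, and is expected to return { point, normal } as THREE.Vector3s or null):

import * as THREE from 'three';

function intersectTransformed(rayWorld, objectMatrixWorld, intersectLocal) {
  const inv = new THREE.Matrix4().copy(objectMatrixWorld).invert();
  const rayLocal = rayWorld.clone().applyMatrix4(inv);   // move the ray into the object's local space

  const hit = intersectLocal(rayLocal);                  // trace the 'non-rotated' object
  if (!hit) return null;

  hit.point.applyMatrix4(objectMatrixWorld);             // bring the hit point back to world space
  const normalMatrix = new THREE.Matrix3().getNormalMatrix(objectMatrixWorld);
  hit.normal.applyMatrix3(normalMatrix).normalize();     // and rotate the normal back as well
  return hit;
}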

Now, I've been struggling with figuring out how to do this with a mesh skeleton, bones and animation, which actually move the mesh vertices in real-time on the GPU vertex shader. I thought I could go down the bone hierarchy and transform the ray by the inverse of each bone, but it turns out to be a little more complicated than that, because of weighting and skinning deformations and such. But I will post my findings if I get something working with simple animations.

About the accumulation of samples, it's actually just 1 sample over and over again, but I let the background scenery (that which is not actively moving) 'bleed' a little more from the previous frame. So there is a little more motion blur effect on the ground and walls, but it is not distracting because those things are static. 1 old sample bleeds more into the new sample, so it's like having 2 samples for a split second I guess. On the dynamic objects, or when the camera is moving, I manually turn down the 'bleeding' from the previous frame, in order to minimize distracting motion blur that would occur if I did nothing about it. It is a delicate balance between smooth motion blur which covers up distracting noise, and moving objects which you want to be more crisp and clear without too much distracting motion blur. :)
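A small sketch of the blend-weight idea described above (the uniform name uPreviousFrameBlend is an assumption, not the demo's real identifier):

// stand-in for the uniforms the screen-output shader would actually own
const screenOutputUniforms = { uPreviousFrameBlend: { value: 0.9 } };

function updateBlendWeight(cameraIsMoving) {
  // keep ~90% of the previous frame when everything is static (smooth, low noise),
  // drop to ~50% while the camera or an object moves so dynamic parts stay crisp
  screenOutputUniforms.uPreviousFrameBlend.value = cameraIsMoving ? 0.5 : 0.9;
}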

MEBoo commented 5 years ago

@erichlof nice... understood!

mmm don't know why you are working on IK/animations ... it's a big world apart! But you could check what is already done in threejs https://threejs.org/examples/?q=skin#webgl_animation_skinning_blending

don't know if it is GPU based...

erichlof commented 5 years ago

@MEBoo Yeah that's exactly what I want to have path traced inside the engine! It is in fact GPU based - the bones and animation data are stored as a GPU data texture (kind of like my BVH data texture) and then the joints' rotations and offsets are read in the vertex shader and affect the vertices of the mesh. I'm not sure how to trace all of that though, there could be as many as 200 bone matrices which are each 4x4 floats. That's a lot of inverses to do! I'm not sure if it'll work out in the end, but it's worth investigating. In the meantime, I'm about to refactor all the demos and get rid of the duplicate code from .html file to .html file. This change will make the demo collection less error prone and easier to maintain. :-)

MEBoo commented 5 years ago

@erichlof Hi!! Any feature update other than codebase refactor?

erichlof commented 5 years ago

@MEBoo Hi! I have been working on loading HDR equi-rectangular images and using those as the sky backgrounds. It's going well so far, the new GLTF model viewer #27 will benefit most from these backgrounds. Sorry it's been a little quiet lately, I had to read up on HDR images, how they work, how three.js handles them, etc. But user n2k3 and I should have a working demo soon!

I am going to try adding multiple model files to the Difficult lighting demo, the one with the slightly cracked open door and the 3 objects on the coffee table. In the original, those are supposed to be 3 Utah teapots with 1000 triangles each and different materials for each one. The current demo has ellipsoids, but I always wanted to have the 3 classic models in there, and now I have the means to add them I think. That will be step 1 to getting multiple BVH objects in. Step 2 will be a BVH for the BVH's! :)

erichlof commented 5 years ago

Hi @MEBoo and @EtagiBI Well, good news and not-so-good news - The good news is I successfully loaded multiple OBJs (now GLTFs in the new refactor), which was the original topic of this now epic thread! Here's a little preview:

[image: multipleobj]

It is finally starting to look like the original classic scene by Eric Veach in his seminal paper! I realize now that I could have done some trickery with offsetting the casting rays (or how instancing is done in ray tracing) since all of the objects have the same shape, and I just might do that for the final demo - but this is actually doing it the hard way for proof of concept: it loads a teapot of 4,000 triangles, makes its BVH, uploads it to the GPU for path tracing, then loads another teapot of 4,000 triangles, makes its BVH, uploads to GPU, then loads yet another teapot of 4,000 triangles, makes its BVH, uploads to GPU. So in the end, we have 12,000+ triangles spread between 3 models, each with their own BVH and materials, as you can see in the image.

Now for the not-so-good news: If you look at the top left corner framerate, it has gone down by half. This demo used to run on my admittedly humble laptop at 50 fps, now it is at 25 fps. Still real time and interactive and amazing that all this is happening on a freakin' browser, but nonetheless not as fast as I was hoping for. It is safe to say, that adding more objects would eventually grind the shader to a halt. Speaking of shaders, the other not-so-good news is that on first start-up compilation, it crashes my webgl context and results in a black image. I have to reload the webpage, then it usually compiles the second time (not sure why that is). This is not only annoying for me to have to keep doing every time I change something in the code and debug, but for the end user - I don't want to crash everybody's webpage, then ask them to reload a second time - just to get it to work so they can see the cool demo.

So I will continue exploring ways of first of all getting it to compile on the first time every time, and then increasing the framerate (which is less crucial, but would be nice). As always I'll keep you guys updated. I just wanted to share the initial success (tinged with a little failure, lol) and finally progress this epic thread! Sorry it has taken this long to get to this point, but other avenues I have gone down have helped get this multiple OBJs feature started and hopefully improved! :-)

MEBoo commented 5 years ago

Nice news and milestone 🥇 !!! Finally we can close this "issue" thread 😁

3 questions: 1) is the code format independent? I mean... as you did before, are we able to send any THREE.js Mesh to the GPU (pre-loaded from GLTF / OBJ / MyUltimateOptimizedFormat)? So are you parsing the THREE.js scene and dynamically building BVHs?

2) About the compiler bug: I can't understand how a shader could compile one time and not another! I've already seen something like this happen, but I simply can't understand how :)

3) About the performance loss: are these frame drops due to multiple BVHs or to too many polygons? I mean, if you use a single BVH with 3 models inside and 4000×3 polygons, do you get the same frame drop?

After all, I think this is the tech of the future... but you can't dream of having a real-time production application for now. Still, we can now have a "background" client-side photo-realistic rendering engine 😉

erichlof commented 5 years ago

@MEBoo Hi! About the 3 questions,

1: Well yes and no. Somewhere along the way, I think it was a couple of months ago, I decided to support .gltf and .glb (gltf in binary format for faster transmission) and remove the examples of the .OBJ files and other formats. The reason is twofold: first, the .OBJ is heavier and less compressed than .glb. And second, .OBJ is an old format, so even though I can extract the three.js data from the three.js created mesh when it loads, three.js does not know how to insert PBR materials into that old format, and there's no way for authors to define those types of materials in the old format when they create them in 3dsMax, Maya or Blender. GLTF on the other hand natively supports textures of all types like metalness maps, and physical materials like glass with IoR specified by the author, which I in turn absolutely need to load into my path tracer. I know this decision might leave out some models that we have lying around, but the good news is that free websites like ClaraIO are able to convert any file type into GLTF for faster web transmission and native PBR support. In fact, you can load [insert your favorite format here] into clara, then ADD free pbr materials that are ray-tracing friendly, then save the whole thing, and hit 'Export All' gltf 2.0 and you're done. That's exactly what I did for 90% of the demo models on this repo, they were originally in another format. This decision makes my life a little easier by reducing the corner cases and code size of handling the three.js Mesh after it has been loaded by an arbitrary unknown-in-advance format. This way I can either intercept the gltf data myself (it is in a human-readable format, the material stuff anyway) or wait further down the pipeline and get everything from three.js's correctly created Mesh with ray-tracing friendly materials and specifications (which is what I'm currently doing). Of course you could try this whole process with three.js's FBXLoader for example with some minor modifications to my demo code, but then again, I want to only think about 1 format that is built for the web, works with three.js, supports animations, and has modern detailed material specifications.

2: I ran into the 1st-time fail, 2nd-time pass compilation problem back when I created the CSG museum demos a while ago. That's why there are 4 separate demos. Initially I had all 14 CSG models in the same museum room, but it wouldn't compile at all. Then I reduced it by half to 6 or 7, then it compiled on the 2nd time only. Then I split it further into 4 demos with 3 or 4 objects each, and it compiles every time. I think it has to do with the number of 'if' statements you have in the GPU code. CPUs love 'if' statements, GPUs - not so much! If you have too many branches, it crashes. It must not like all the 'if ray hit bounding box' checks on all the models - some parts of the screen have to traverse the BVHs, and some parts of the screen get lucky and hit an easily-reflected surface or wall, which also partly explains the framerate drop - GPU thread divergence.

3: Which ties into the performance drop - yes I think it is because of different models, GPU divergence and branch statements. I don't believe the triangle count has much to do with it. Take a look at the BVH_Visualizer - it handles 100,000 triangles at 60 fps on my humble machine (of course if you fly into the dragon model, the frame rate goes down, but for the most part, it doesn't even break a sweat). So there are a couple of things to try in the near future: A BVH for the BVH's (but in this simple case of 3 similar teapot objects, I'm not sure if that will help any), and like you mentioned, combine all 3 teapots into a super-teapot type shape and place a BVH around 12,000 triangles. That might work better. Also, in my last post I mentioned 'trickery' - you can actually do a modulus on the ray origin and treat it as multiple rays, and therefore it would return hits for 3 objects (like copies for free), even though you only load 1 teapot into the scene. This is a little more advanced and just a tad deceitful (ha), but something I want to try eventually - for example, a forest with thousands of trees seen from a helicopter view.

MEBoo commented 5 years ago

@erichlof 1) yes, yes, I know the history; it's actually written in this thread! I only asked if you did the scene parser, since someone could edit material properties at runtime, and the way to do this is to load the imported mesh into THREE and then apply a material. Or, just another example, someone could aggregate objects/meshes to build a scene... so the best thing would be a real scene parser that parses meshes (geometries and PBR materials), completely abstracting the objects' source/format

2) understood :/ What I don't understand, since I haven't checked your code, is: are the objects coded in the GPU shader? Aren't they sent to the shader from JS? So why was the code so "object"-dependent in the museum demo? Does the BVH here have that many "if"s?

Wow... hope you will find a way. Thanks for the info and the work!

erichlof commented 5 years ago

@MEBoo

  1. Ahh, OK, you meant a pure scene parser. Yes I suppose we could do that, but the only issue is that as the size and number of the models grow, it gets more error-prone for the end user to manually assign materials to selected groups of triangles inside the model. Take for instance the Damaged Helmet model in my 'Animated BVH Model' demo: let's say that an author modeled that helmet in Maya and had no material specified (white), then saved it to .obj or whatever format, doesn't matter; then yes, the user could load all 15,000 triangles and three.js would create a mesh object. Currently I am parsing the scene by extracting the child.geometry.material property from three.js, which it in turn got from the file. So, inside the GPU path tracer, it would see vec3(1,1,1) rgb white diffuse and render it as such. But if the user wants the face mask to be glass, then they want IoR of 1.5, then they want metal on the outside with a roughness of 0.1, it would be difficult to intercept the loading/creation of the three.js mesh object and manually put those in there. In fact, I don't even think you can select a group of triangles out of 15,000, say, and assign metal to triangles 2,287 to 3,013. That would be super error prone, which brings the problem further back, meaning the author would have to name the materials in Maya and assign the physical properties such as color, reflection amount, IoR in a helpful visual way inside of Maya. Then it would make sense to just dump all of that into an exported gltf file that we can read verbatim without any chance of errors or guesswork. That's how I see the problem, but maybe I'm missing something that you would want in future versions. Please let me know :)

  2. Yes, if you look at the shader code for the CSG demos, there are tons of 'if' statements because of the multiple possibilities a ray could intersect a hollow, solid, additive, subtractive, or intersection overlapping pair of shapes. So as the number of objects grew in the demo room, the 'if' statements started to pile up. Now in the case of BVHs, yes there are potentially many 'if' statements for each set of bounding boxes and their triangles. But thanks to binary trees it ends up being an O(log n) search where n is the number of triangles. That works perfectly for 1 model, no matter how big, because you essentially halve the problem at each step. But if you have 2 BVHs, the right part of the screen's rays might have to go down one BVH tunnel and work on pruning it, while the left part of the screen's rays will have to prune a completely different BVH - again thread divergence rears its ugly head. And GPUs sometimes have to execute both paths of 'if' statements and then mask out the result that they didn't need in the end - super wasteful, but that's how things work I guess in GPU land. One more thing is that I've read that branches that involve possible texture lookups are generally bad practice. However, the only way to get the ray tracing on the GPU is through data textures, and searching through (or not, for some parts of the screen) those big 2048x2048 textures for bounding box data. Also, if they locate an intersected triangle, they need to go to another big texture and look up that triangle's vertex data, like UV, material, and color, etc. What I might try first is deferring the texture lookup until the last possible moment inside the bounces rendering loop and see if that helps mitigate the crashing/performance problems.

erichlof commented 5 years ago

@MEBoo Oh I think i know what you mean now - you wanted to be able to change a material on the fly? Like for instance, with the helmet model, changing one of its children.geometry.material to glass instead of metal? Or blue instead of red color? If I'm understanding correctly then yes, it is possible, but the only requirement is that the user must have default materials and the materials must be assigned to the exact triangles of the original mesh. So using the helmet model again for example, in Maya they would have to say "child0: face mask, child1: metal top of helmet, child2: hoses and connections on bottom of helmet. " Then three.js would correctly assign a child.geometry.material to each of those children parts of the model (even though they might be default and all the same and boring at load time), and then the end user could say "now that the model has loaded, I want child1.geometry.material to be smoother more mirror-like metal, and child0.geometry.material.IoR = 1.4", or something to that effect. Am I understanding your functionality request correctly? If so, the only problem is that I preload the triangle data (colors, materials, uv, index of refraction, reflectivity, etc) as a 2048x2048 data texture. If the user changes something on that triangle list, I would need a way of efficiently looking up the triangles in question and then updating the floating point numbers corresponding to their options. It's not impossible, I just haven't tried intercepting it like that while the path tracer is reading that same texture every frame.

MEBoo commented 5 years ago

@erichlof yes, I already do this (material change) in my little project... In THREE, materials / meshes / groups have names! So I can parse the scene and do whatever I want. But not only this use case: people use THREE to compose a scene, OK, pre-loading meshes from various sources, but then composing everything together. That is why I suggest your project read/parse the entire THREE scene and render it as is. At least for the first render; then the user should be able to update the scene and re-send the changes to your shader...

erichlof commented 5 years ago

@MEBoo Ok thanks for clarifying. Yes at the moment the loadModel and init functions just take the created three.scene and go through its children, saving the materials and geometry. So it is doing almost fully what you suggest. All my examples use gltf loader but as long as the user has a three.scene, that should be sufficient. In the future, just for example purposes, I might make a small demo that loads a pure three.scene in JSON format, (whatever the output is of the three.js editor that mrdoob created and maintains). That will show that the type of loader does not matter, you just need a three.scene file in the end. :-)

n2k3 commented 5 years ago

That is why I suggest your project read/parse the entire THREE scene and render it as is. At least for the first render; then the user should be able to update the scene and re-send the changes to your shader...

@MEBoo that will be the functionality of the glTF viewer I'm working on, which will be merged into this repository when it's done. Currently it loads multiple glTF models into a Three.js scene, then you can call another function that will read the scene and prepare all models for path tracing. In the near future I'll make it so that users can drag & drop new model(s) into the viewer to replace the old ones and call the same function for the new models. Once that feature is done, you could use just the prepare function for your own scene (and not use any of the glTF loading stuff, if your models use a different file format).

MEBoo commented 5 years ago

@n2k3 @erichlof You are awesome people 😁

erichlof commented 5 years ago

@MEBoo @EtagiBI @n2k3

Success on multiple fronts! Not only is it compiling correctly and loading every time (without crashes), but I also got the frame rate up to about 30 fps, which is still real time, and I even applied a hammered metal material to the steel teapot on the left! Now it looks almost exactly like Eric Veach's original rendering for his Bi-Directional path tracing thesis. :)

[image: multipleteapotobjects]

MEBoo, I got it compiling every time by still loading the 3 teapots separately (so it could be any type of model, any number of models, different models from each other, different amounts of triangles for each model, whatever you want), but then before uploading to the GPU, I merged the triangle data into one texture that still fits comfortably in a 2048x2048 three.js DataTexture. That way the shader doesn't have to read from 3 different geometry data textures (which was causing the crashing and slower frame rate), but just reads from a larger 'uber' scene data texture.
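A sketch of that merge step (the packing/layout here is illustrative, not the repo's exact triangle format): each model's flattened triangle data is copied back-to-back into one Float32Array, so the shader only ever samples a single 2048x2048 data texture:

import * as THREE from 'three';

function buildSceneDataTexture(perModelTriangleData /* array of Float32Array, one per model */) {
  const SIZE = 2048;                                  // 2048 x 2048 RGBA float texels
  const merged = new Float32Array(SIZE * SIZE * 4);
  let offset = 0;
  for (const data of perModelTriangleData) {
    merged.set(data, offset);                         // pack each model's triangles back-to-back
    offset += data.length;                            // (total must stay within the texture)
  }
  const tex = new THREE.DataTexture(merged, SIZE, SIZE, THREE.RGBAFormat, THREE.FloatType);
  tex.needsUpdate = true;                             // upload the single 'uber' texture once
  return tex;
}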

I guess it's fitting that this post is the 100th post of this epic thread! I think we can safely close out this topic. ;-D