knightcrawler25 / GLSL-PathTracer

A toy physically based GPU path tracer (C++/OpenGL/GLSL)
MIT License

New features #15

Closed — CedricGuillemet closed this issue 2 years ago

CedricGuillemet commented 5 years ago

What I plan to do:

What I'd like to have:

To experiment:

knightcrawler25 commented 5 years ago

Some hacky progress on the scene BVH. Surprisingly, it works quite nicely even though I'm updating the entire BVH texture after updating the top-level BVH. I also noticed some issues with normal mapping that I'll have to fix at some point.

[animated GIF attachment]

CedricGuillemet commented 5 years ago

OMG that's amazing! Is it merged already? I'll integrate imguizmo quickly :D

knightcrawler25 commented 5 years ago

It's not merged yet; I was playing around and got it to work for now. I'm trying to figure out how to properly do partial updates on the 2D texture, since the BVH data is contiguous. I'll try to do that soon and merge the changes in :)
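
Not the repository's code, just a minimal sketch of one way to do a row-aligned partial update with `glTexSubImage2D`, assuming the contiguous BVH array is packed row-major into an RGBA32F texture of width `texWidth` (the function name, parameters, and layout are assumptions):

```cpp
#include <GL/glew.h>     // any GL loader works; GLEW assumed here
#include <glm/glm.hpp>

// Upload only the texture rows touched by a dirty texel range
// [firstTexel, lastTexel) of the contiguous BVH array.
void UpdateBVHTextureRows(GLuint tex, int texWidth, const glm::vec4* bvhData,
                          size_t firstTexel, size_t lastTexel)
{
    int firstRow = static_cast<int>(firstTexel / texWidth);
    int lastRow  = static_cast<int>((lastTexel - 1) / texWidth);
    int numRows  = lastRow - firstRow + 1;

    glBindTexture(GL_TEXTURE_2D, tex);
    // A contiguous span of the linear buffer maps to a block of full rows,
    // so only those rows are re-uploaded instead of the whole texture.
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, firstRow,            // x/y offset in the texture
                    texWidth, numRows,      // full-width rows
                    GL_RGBA, GL_FLOAT,
                    bvhData + static_cast<size_t>(firstRow) * texWidth);
}
```

If only the top-level BVH nodes change, the dirty range stays small, so the upload should be much cheaper than re-sending the whole texture.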

CedricGuillemet commented 5 years ago

Another thing I'm thinking about: rendering is done one tile at a time, then a fullscreen pass draws the result. This gives us something like 1000 fps on some scenes, where a big portion of the resources is spent drawing to the screen. We can be smarter: draw as many tiles as we can until we've used ~30 ms, then draw the result. Same on the web: the framerate is locked at 60 Hz, which means a lot of GPU time goes unused.
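
A hedged sketch of that idea, with `RenderTile` and `PresentResult` as placeholders rather than the project's API: keep rendering tiles until a frame-time budget is spent, then do the fullscreen present pass once.

```cpp
#include <chrono>

void RenderTile(int tileIndex);   // placeholder: one progressive tile pass
void PresentResult();             // placeholder: fullscreen draw of the accumulator

// Render as many tiles as fit into a ~30 ms budget, then present once.
// Note: GL calls are asynchronous, so a strict GPU budget would need a sync
// (glFinish) or GPU timer queries; this keeps the sketch simple.
void RenderFrame(int numTiles, int& nextTile)
{
    using Clock = std::chrono::steady_clock;
    const auto budget = std::chrono::milliseconds(30);
    const auto start  = Clock::now();

    do
    {
        RenderTile(nextTile);
        nextTile = (nextTile + 1) % numTiles;
    } while (Clock::now() - start < budget);

    PresentResult();
}
```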

knightcrawler25 commented 5 years ago

I merged in the changes for updating the scene BVH and materials/transforms. I added a quick and dirty test to see if it's working: you can use 'm' to switch between materials and 'r' to rotate the head. I'm not too sure about the renderer modifying a scene variable, though; maybe there is a better way. [animated GIF attachment]

CedricGuillemet commented 5 years ago

That's amazing! I'll try to integrate the gizmo tonight. Thanks again :)

knightcrawler25 commented 5 years ago

For the gizmo, would you need the ability to ray cast against the scene BVH for picking objects?

CedricGuillemet commented 5 years ago

For now, I'll use the mesh array and an imgui list. But yes, I'd like to do picking. One thing leading to another, it might be interesting to extend the rendering to AOVs (arbitrary output variables): one or more non-iterative passes to an offscreen buffer that output object ID, screen-space position, normals, albedo, depth, and so on. This would allow picking, using the Intel denoiser, or even exporting to software like Nuke for compositing. We also need to do a pass on the shaders; I feel like there is a lot of copy/paste, and managing an #include directive should be on the list too. So many things to do. I'll post a screenshot with the gizmo in the coming hour.
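
One hedged way to set up such an AOV pass on the GL side is multiple render targets: an FBO with one color attachment per AOV that a single fragment shader fills in one pass. The attachment count, formats, and function name below are assumptions, not the project's setup.

```cpp
#include <GL/glew.h>

// Build an FBO with one color attachment per AOV; a fragment shader can then
// write all of them in one pass via layout(location = N) outputs.
GLuint CreateAOVFramebuffer(int width, int height, GLuint aovTex[3], GLuint& depthTex)
{
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    // 0: object ID + screen-space position, 1: normals, 2: albedo
    const GLenum internalFmt[3] = { GL_RGBA32F, GL_RGBA16F, GL_RGBA16F };
    glGenTextures(3, aovTex);
    for (int i = 0; i < 3; ++i)
    {
        glBindTexture(GL_TEXTURE_2D, aovTex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, internalFmt[i], width, height, 0,
                     GL_RGBA, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                               GL_TEXTURE_2D, aovTex[i], 0);
    }

    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);

    // Tell GL the fragment shader writes three color outputs.
    const GLenum drawBufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                                 GL_COLOR_ATTACHMENT2 };
    glDrawBuffers(3, drawBufs);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;   // incomplete FBO; caller handles the error
    return fbo;
}
```

The object-ID target then doubles as a picking buffer (read the texel under the cursor), and the normal/albedo targets can feed the Intel denoiser or be exported for compositing.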

CedricGuillemet commented 5 years ago

[screenshot attachment] Apparently, the mesh matrix is not in world space. I'll fix that tomorrow.

knightcrawler25 commented 5 years ago

Rather than keeping the path-traced progressive mode, I was thinking a simple rasterized mode (GGX, IBL) might be better, because right now the pixelated 1 spp progressive mode doesn't offer much value other than keeping the renderer interactive. The video shows what I'm thinking of trying; it is still path traced, but something similar and rasterized might be better for scene edits. What do you think? Magicavoxel seems to do something similar and smoothly blends between the two modes. [animated GIF attachment] For comparison, how it works currently: [animated GIF attachment]

CedricGuillemet commented 5 years ago

Is it possible to use the path tracer without bounces and only use the albedo? I mean, use a simplified path trace, so simplified that it can run at full or half resolution with a sharpen filter.
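
A rough sketch of the half-resolution side of this idea, under the assumption of a half-size FBO and a placeholder `TraceSimplifiedPass`: trace the single-bounce pass at half size, then upscale with a filtered blit (a sharpen or bilateral pass could replace the blit later).

```cpp
#include <GL/glew.h>

void TraceSimplifiedPass();   // placeholder: 1 ray/pixel, albedo + direct light only

// Trace the simplified pass at half resolution, then upscale to the window.
void RenderHalfResPreview(GLuint halfResFBO, int winW, int winH)
{
    glBindFramebuffer(GL_FRAMEBUFFER, halfResFBO);
    glViewport(0, 0, winW / 2, winH / 2);
    TraceSimplifiedPass();

    // Bilinear upscale to the default framebuffer; a sharpen filter pass
    // would go here instead of (or after) the blit.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, halfResFBO);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, winW / 2, winH / 2,
                      0, 0, winW, winH,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);
}
```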

knightcrawler25 commented 5 years ago

The shaders have been simplified somewhat while still trying to retain most of the look of the scene. Just using the albedo without the lights didn't look nice, especially on the Cornell box/Ajax scenes; I'll try to make it similar to what Blender EEVEE does. At full screen, the BVH traversal and ray/triangle intersection tests cost a lot, and having no control over the workload in a fragment shader doesn't help much. I tested the same thing in a compute shader and it works much better there (but that wouldn't be usable in a browser); I'll push that branch later for future use after making some changes. There is also a hacked-up rasterized mode in the Rasterization-Test branch which you can play around with to see how it feels (lights and the background are not drawn right now, and area lights are treated as point lights). Also, I read that not having a license would mean the repo is not open source/free, so I have added an MIT license so that people are free to use the code.

knightcrawler25 commented 5 years ago

I think the rasterized preview would probably be good for performance even with complex scenes, but at the cost of using twice the GPU memory for meshes. With LTC area lights and the metallic/roughness-based material it should be a decent replacement for the performance-intensive progressive mode (the frame time goes up quite a lot even with half-res progressive path tracing on the panther scene, and it would only get worse if models are tessellated for displacement mapping). An experiment: https://www.youtube.com/watch?v=oPsp8EcIWCQ I'm now slightly unsure which direction to take.

CedricGuillemet commented 5 years ago

I've seen a lot of production scenes render faster with a path tracer than with rasterization, and betting on the future means betting on PT. Yes, I've been thinking about the double memory consumption as well. I still think 1 ray per pixel (in a lightweight shader) with a simplified lighting computation is the way to go. There are many techniques to try on top of that: upscaling with a sharpen kernel, bilateral filtering, ... Is it possible to profile the shader? Determining the hotspot in the shader might help us improve the 'quick' 1-ray rendering.
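
On the profiling question, GPU timer queries give per-pass timings without an external tool. A minimal sketch, with the measured pass as a placeholder (isolating hotspots inside a single shader still usually means toggling code paths and comparing these timings, or using a graphics debugger):

```cpp
#include <GL/glew.h>
#include <cstdio>

void RenderPathTracePass();   // placeholder: the tile/fullscreen path-trace draw

// Measure how long the path-trace pass takes on the GPU with a timer query.
void ProfilePathTracePass()
{
    GLuint query;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    RenderPathTracePass();
    glEndQuery(GL_TIME_ELAPSED);

    // Blocks until the result is ready; in practice you would double-buffer
    // queries and read last frame's result to avoid the stall.
    GLuint64 elapsedNs = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsedNs);
    printf("Path trace pass: %.3f ms\n", elapsedNs / 1.0e6);

    glDeleteQueries(1, &query);
}
```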

knightcrawler25 commented 5 years ago

Ah, in that case I won't spend time on the rasterizer. Maybe this weekend I'll profile the shader and simplify things further. For displacement, I found a nice technique that FStorm and Octane/Vray use: https://fstormrender.ru/manual/displacement/ I'm not exactly sure how FStorm manages that level of quality with ray marching, but this is the best paper I found on the technique: http://tevs.eu/files/i3d08_lowres.pdf Even Mitsuba has an implementation of it. I'll drop the pre-tessellation stuff and try to get this method in.

CedricGuillemet commented 5 years ago

Sweet! I'm moving back to France in a week. I'll have more time to do some experiments then.

knightcrawler25 commented 5 years ago

It turns out my changes to the BVH traversal for the move from buffer textures to 2D textures are causing a massive drop in performance on moderately large scenes. The older code that I had lying around is 1.5-2.5x faster. I'll try to fix that and also get a stackless traversal method in, which might be better for low-end devices.
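
For reference, a hedged CPU-style sketch of one common stackless flavor, the "miss link" (threaded) traversal: nodes are laid out in depth-first order and each node stores the index to jump to when its subtree is skipped or finished. The node layout, `Ray`, `IntersectAABB`, and `IntersectTriangle` are placeholders, not the repository's actual format.

```cpp
#include <vector>
#include <glm/glm.hpp>

struct Ray { glm::vec3 origin, dir; };   // placeholder ray type

// Placeholder intersection helpers (not the repository's functions).
bool IntersectAABB(const Ray& ray, const glm::vec3& bmin, const glm::vec3& bmax, float tMax);
bool IntersectTriangle(const Ray& ray, int primIndex, float& tHit);

struct BVHNode
{
    glm::vec3 boundsMin, boundsMax;
    int leftFirst;   // inner node: index of the left child; leaf: first primitive
    int primCount;   // 0 for inner nodes
    int missLink;    // node to jump to when this subtree is skipped or finished (-1 = done)
};

// Walk a BVH laid out in depth-first order with precomputed miss links,
// so no traversal stack is needed.
int TraverseStackless(const std::vector<BVHNode>& nodes, const Ray& ray, float& tHit)
{
    int hitPrim = -1;
    int nodeIdx = 0;
    while (nodeIdx != -1)
    {
        const BVHNode& node = nodes[nodeIdx];
        if (!IntersectAABB(ray, node.boundsMin, node.boundsMax, tHit))
        {
            nodeIdx = node.missLink;              // skip this whole subtree
        }
        else if (node.primCount > 0)              // leaf: test its primitives
        {
            for (int i = 0; i < node.primCount; ++i)
                if (IntersectTriangle(ray, node.leftFirst + i, tHit))
                    hitPrim = node.leftFirst + i;
            nodeIdx = node.missLink;
        }
        else
        {
            nodeIdx = node.leftFirst;             // descend into the left child
        }
    }
    return hitPrim;
}
```

The same structure maps to GLSL with texelFetch for the node reads, which is why it is attractive for low-end devices: no per-thread stack array is needed.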

Tetsujinfr commented 3 years ago

Hi, I was wondering: are the great discussion and efforts above on pause, or is the discussion continuing somewhere else?

knightcrawler25 commented 2 years ago

@Tetsujinfr: Some of the features have already been added, but some things like displacement wouldn't be viable for this repo. Also, my focus has been on other things lately, like volumetric rendering.