sketchbooks99 / PRayGround

GPU ray tracing framework using NVIDIA OptiX 7 and 8
MIT License

Q&A about pathtracing example #15

Closed bipul-mohanto closed 1 year ago

bipul-mohanto commented 1 year ago

Hi @sketchbooks99!

Can I ask you a few questions for clarification about your pathtracing example?

sketchbooks99 commented 1 year ago

@bipul-mohanto

Yes, of course!

bipul-mohanto commented 1 year ago

Thanks a lot. I will ask later. So far your code base is really clean.

bipul-mohanto commented 1 year ago

Hi!

Hope you are doing great. Your pathtracing example is pretty self-explanatory. However, due to my limited knowledge, I would like to have a discussion about the example. Later, I may add a wiki page to your repository, which I believe could help other beginners.

In the device-side programs (shaders):

sketchbooks99 commented 1 year ago

Hi @bipul-mohanto

Thank you for your deep understanding and suggestion regarding the wiki!

I think it's a great idea to add a wiki page for the pathtracing example, and I would greatly appreciate your contribution.

Furthermore, I have been updating the code to include examples and a basic library. As such, I will update the wiki page you created with any changes I make to the pathtracing example.

If you're interested in the ongoing development of my projects, please feel free to check out the libdev branch. I have made many more changes there than in the main branch.

bipul-mohanto commented 1 year ago

Your blogs on qiita.com really helped me a lot, so I think I should contribute something back, although I'm not sure how much I can do.

I have a question: in your pathtracing example, you use an arbitrary number of samples per pixel. https://github.com/sketchbooks99/PRayGround/blob/4bf899d00d78e22ab905de3513c811d61c30e74c/examples/pathtracing/app.cpp#L79

Would it be possible to use 1 sample for a 2x2 pixel block, or maybe 2 samples for a 3x3 pixel block?

https://github.com/sketchbooks99/PRayGround/blob/4bf899d00d78e22ab905de3513c811d61c30e74c/examples/pathtracing/cuda/raygen.cu#L23

sketchbooks99 commented 1 year ago

Yes. If you'd like to trace fewer rays than there are pixels, you should store the obtained radiance at multiple positions in the result buffer.

So, you should modify the following parts. In my implementation, the bitmap stores pixels in row-major order. https://github.com/sketchbooks99/PRayGround/blob/master/examples/pathtracing/cuda/raygen.cu#L174-L188

Furthermore, you can change the total number of ray-tracing threads on the GPU here. Currently I specify the same number of threads as there are pixels, so you should use a smaller launch size here. https://github.com/sketchbooks99/PRayGround/blob/master/examples/pathtracing/app.cpp#L469-L478
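For illustration, here is a minimal raygen sketch of that idea, assuming a row-major result buffer and a launch-parameter struct roughly like the example's (the actual struct and field names in PRayGround may differ): the host launches (width / 2) x (height / 2) threads, and each traced radiance is splatted into a 2x2 pixel block.

```cuda
#include <optix.h>

// Assumed launch parameters for this sketch; the example's real struct
// and field names may differ.
struct LaunchParams
{
    unsigned int width;
    unsigned int height;
    float3*      result_buffer;  // row-major, width * height entries
};
extern "C" __constant__ LaunchParams params;

// One thread per 2x2 pixel block: the host would call optixLaunch with
// (width / 2, height / 2) instead of (width, height).
extern "C" __global__ void __raygen__block2x2()
{
    const uint3 idx = optixGetLaunchIndex();

    // Top-left pixel of the 2x2 block handled by this thread.
    const unsigned int px = idx.x * 2;
    const unsigned int py = idx.y * 2;

    // ... build a camera ray through the block center and trace it,
    //     accumulating `radiance` as in the original raygen program ...
    float3 radiance = { 0.0f, 0.0f, 0.0f };

    // Splat the single traced radiance into all four pixels of the block,
    // clamping at the image border in case width or height is odd.
    for (unsigned int dy = 0; dy < 2; ++dy)
    {
        for (unsigned int dx = 0; dx < 2; ++dx)
        {
            const unsigned int x = (px + dx < params.width)  ? px + dx : params.width  - 1;
            const unsigned int y = (py + dy < params.height) ? py + dy : params.height - 1;
            params.result_buffer[y * params.width + x] = radiance;
        }
    }
}
```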

bipul-mohanto commented 1 year ago

Thanks @sketchbooks99 for your suggestion. I have probably already done what you suggested. As you may remember, I previously extended your framework to a large display setup. Another thing I did was subdivide the entire scene into regions and apply a different sample count to each region (see the attached figure, look at the kettle).

However, I am stuck at a minimum of 1 sample per pixel; I want something like variable rate shading (VRS). In VRS, shading is evaluated once per pixel block instead of once per pixel. I am thinking of something similar with 2x2, 2x4, or 4x4 pixel blocks (the numbers are arbitrary). I know it should be possible by considering the neighborhood, but I am stuck on the implementation. Any suggestions?

[Attached figure: fov_path_3bounce_64_4_1_13fps]

sketchbooks99 commented 1 year ago

@bipul-mohanto Perhaps the simplest way to accomplish this is to use an additional buffer that specifies the number of samples depending on the pixel location.

If the sampling rate is fixed, you can do this by preparing a bitmap and uploading it to the GPU through the launch parameters.

If you'd like to update the buffer dynamically, you may need to launch a kernel that generates the sampling-rate buffer before the ray-tracing kernel.
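As a rough illustration of that idea, here is a hypothetical sketch in which a per-pixel sample-count buffer is added to the launch parameters and read in the raygen program; the struct and field names are assumptions, not PRayGround's actual API.

```cuda
#include <optix.h>

// Assumed launch parameters: a per-pixel sample-count buffer next to the
// result buffer. All field names here are hypothetical.
struct LaunchParams
{
    unsigned int  width;
    unsigned int  height;
    unsigned int* sample_count_buffer;  // samples per pixel, filled on the host
                                        // or by a separate CUDA kernel
    float3*       result_buffer;        // row-major, width * height entries
};
extern "C" __constant__ LaunchParams params;

extern "C" __global__ void __raygen__variable_spp()
{
    const uint3 idx = optixGetLaunchIndex();
    const unsigned int pixel = idx.y * params.width + idx.x;

    // Number of samples requested for this pixel (e.g. 1, 4, 16, ...).
    const unsigned int spp = params.sample_count_buffer[pixel];

    float3 accum = { 0.0f, 0.0f, 0.0f };
    for (unsigned int s = 0; s < spp; ++s)
    {
        // ... jitter the camera ray, trace it, and add the returned
        //     radiance to `accum` as in the original raygen program ...
    }

    if (spp > 0)
    {
        const float inv = 1.0f / static_cast<float>(spp);
        const float3 avg = { accum.x * inv, accum.y * inv, accum.z * inv };
        params.result_buffer[pixel] = avg;
    }
}
```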

Another possible approach is to use the VRS features NVIDIA provides for graphics APIs. To use them, you would first generate G-buffers from the camera rays, storing surface information (e.g. position, normal, texcoords) at the first intersection at your preferred shading rate. Then you can trace the rest of the scene at arbitrary shading rates by starting the ray tracing from the positions stored in the G-buffer.

https://developer.nvidia.com/vrworks/graphics/variablerateshading
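Purely as a conceptual sketch (this pass does not exist in PRayGround, and every type and field name below is hypothetical), the two-pass idea might look something like this:

```cuda
#include <optix.h>

// Minimal G-buffer entry written by a primary-visibility pass.
struct GBufferEntry
{
    float3 position;   // world-space hit position of the camera ray
    float3 normal;     // shading normal at the hit point
    float2 texcoord;   // surface texture coordinates
    int    hit;        // 0 if the camera ray missed the scene
};

struct LaunchParams
{
    unsigned int  gbuffer_width;
    unsigned int  gbuffer_height;
    GBufferEntry* gbuffer;  // filled by the first (primary visibility) pass
};
extern "C" __constant__ LaunchParams params;

// Second pass, launched at the desired (lower) shading rate: each thread
// reads one G-buffer entry and continues the path from the stored surface
// point instead of from the camera.
extern "C" __global__ void __raygen__from_gbuffer()
{
    const uint3 idx = optixGetLaunchIndex();
    const GBufferEntry entry =
        params.gbuffer[idx.y * params.gbuffer_width + idx.x];

    if (!entry.hit)
        return;  // camera ray missed; nothing to shade here

    // ... sample a direction around entry.normal, trace from entry.position,
    //     and write the shaded result back at the chosen rate ...
}
```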

However, my library doesn't support graphics API features for scene rendering, so you would need to implement that part yourself :-(

To be honest, I have never implemented VRS, so I'm not sure whether these suggestions will work.