mitsuba-renderer / mitsuba3

Mitsuba 3: A Retargetable Forward and Inverse Renderer
https://www.mitsuba-renderer.org/

Intersecting a single mesh of a larger scene #1337

Closed: EgeCiklabakkal closed this issue 1 month ago

EgeCiklabakkal commented 1 month ago

Summary

I am looking for an easy way to run ray_intersect with only one of the meshes in my scene during rendering.

System configuration

System information:

- OS: Ubuntu 22.04.4 LTS
- CPU: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz
- GPU: NVIDIA RTX A5000
- Python: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
- NVidia driver: 550.107.02
- CUDA: 12.3.107
- LLVM: 15.0.7
- Dr.Jit: 0.4.6
- Mitsuba: 3.5.2
- Is custom build? True
- Compiled with: GNU 11.4.0
- Variants: scalar_rgb cuda_rgb cuda_ad_rgb llvm_ad_rgb

Description

I am trying to implement a simple subsurface scattering integrator. For each point I hit on an object with a special custom BSDF, I would like to sample a point on the tangent plane at the hit point and then project that sampled point back onto the object by casting rays. However, during projection I don't want those projection rays to intersect the whole scene, but only the object in question.
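Roughly, the projection step looks like this (a simplified sketch of what I have, assuming si is the surface interaction at the entry point and search_radius is a parameter of my custom BSDF):

    import mitsuba as mi
    # mi.set_variant('cuda_ad_rgb') has been called earlier

    def project_onto_surface(scene, si, sampler, search_radius):
        # Sample a point on the tangent plane around the entry point
        disk = mi.warp.square_to_uniform_disk(sampler.next_2d()) * search_radius
        p_plane = si.p + si.sh_frame.to_world(mi.Vector3f(disk.x, disk.y, 0))

        # Cast a probe ray from above the tangent plane back towards the surface.
        # Ideally this would only test the object that si.shape belongs to,
        # but scene.ray_intersect() traverses the whole scene.
        ray = mi.Ray3f(p_plane + search_radius * si.sh_frame.n, -si.sh_frame.n)
        return scene.ray_intersect(ray)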

I tried intersecting against the shape directly (assuming si holds the surface interaction: si.shape), but then I get:

RuntimeError: [Shape] OBJMesh::ray_intersect_preliminary(): not implemented!

I tried creating two scenes, one being the whole scene and the other containing only the object of interest, and ran ray_intersect on the second scene when doing the projections, but then I get:

jit_eval(): more than one OptiX pipeline was used within a single kernel, which is not supported. Please split your kernel into smaller parts (e.g. using `dr::eval()`). Disabling the ray tracing operation to avoid potential undefined behavior.
jit_eval(): more than one OptiX shader binding table was used within a single kernel, which is not supported. Please split your kernel into smaller parts (e.g. using `dr::eval()`). Disabling the ray tracing operation to avoid potential undefined behavior.
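In code, the attempt looks roughly like this (simplified; scene.xml and sss_object.obj stand in for my actual files, and both intersections happen inside the integrator's rendering loop, i.e. within one megakernel):

    import mitsuba as mi
    mi.set_variant('cuda_ad_rgb')

    scene_full = mi.load_file('scene.xml')
    scene_single = mi.load_dict({
        'type': 'scene',
        'object': {'type': 'obj', 'filename': 'sss_object.obj'},
    })

    # Primary rays are traced against the full scene ...
    ray = mi.Ray3f(mi.Point3f(0, 0, -5), mi.Vector3f(0, 0, 1))
    si = scene_full.ray_intersect(ray)

    # ... while the projection rays should only see the single object.
    # When both intersections end up in the same kernel, the warnings
    # above are printed and the second ray tracing call is disabled.
    proj_ray = mi.Ray3f(si.p + mi.Vector3f(0, 1, 0), mi.Vector3f(0, -1, 0))
    si_proj = scene_single.ray_intersect(proj_ray)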

I'm not sure if it is possible to split my kernel, as the projection step is inside the rendering loop.

Lastly, I saw in the file scene_optix.inl:

    /* When another scene is passed via props, the new scene should re-use
       the same configuration, pipeline and update the shader binding table
       rather than constructing a new one from scratch. This is necessary for
       two scenes to be ray traced within the same megakernel. */

I also tried this, but I feel like it is meant for rendering one scene after the other rather than for what I am trying to do.

Is it possible to do what I want using the Python bindings? Have I missed something? Could I maybe have multiple acceleration structures, one for the whole scene and another for a single mesh, and run ray_intersect against either of them while rendering?

njroussel commented 1 month ago

Hi @EgeCiklabakkal

This is indeed possible, but not through the Python bindings. Unfortunately, something like this requires careful handling of OptiX's configuration and that is only exposed in C++.

We actually have something like this already implemented in Mitsuba. In order to importance sample a textured area emitter, we first sample the texture to get UV coordinates, which we then need to map back to a point on the mesh. To perform that last step, we have a second scene (acceleration structure) that holds only the mesh we're interested in and trace a ray within it. This is all in Mesh::eval_parameterization() (called by AreaLight::sample_position()).
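For reference, the result of that machinery is reachable from Python as Shape.eval_parameterization(), which maps a UV coordinate back to a surface interaction on that one mesh (a minimal sketch, assuming scene.xml is your scene and its first shape is a mesh with texture coordinates):

    import mitsuba as mi
    mi.set_variant('cuda_ad_rgb')

    scene = mi.load_file('scene.xml')
    mesh = scene.shapes()[0]  # the mesh of interest

    # Maps a UV coordinate to a point on this mesh only; internally this
    # traces a ray against a dedicated single-mesh acceleration structure.
    si = mesh.eval_parameterization(mi.Point2f(0.25, 0.75))
    print(si.is_valid(), si.p)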

A simpler, albeit potentially more expensive, solution is to keep intersecting the scene until you hit your shape. You can also use the shape's AABB to exit that loop early.
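Something along these lines (an untested sketch; the shape comparison and the re-spawn epsilon are assumptions that may need adjusting for your setup):

    import drjit as dr
    import mitsuba as mi

    def intersect_only(scene, ray, target_shape, max_steps=8, eps=1e-4):
        ray = mi.Ray3f(ray)
        si = scene.ray_intersect(ray)
        # Keep going as long as we hit something other than the target shape
        active = si.is_valid() & ~dr.eq(si.shape, target_shape)
        for _ in range(max_steps):
            # Re-spawn the ray just past the previous hit and intersect again
            ray.o = dr.select(active, si.p + ray.d * eps, ray.o)
            si_next = scene.ray_intersect(ray, active=active)
            si = dr.select(active, si_next, si)
            active &= si.is_valid() & ~dr.eq(si.shape, target_shape)
        return si

The AABB test mentioned above could additionally clear active once the re-spawned ray has left the shape's bounding box, instead of relying only on the fixed step bound.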

EgeCiklabakkal commented 1 month ago

I see, thank you very much!