rnd-team-dev / plotoptix

Data visualisation and ray tracing in Python, based on the OptiX 7.7 framework.
https://rnd.team/plotoptix

Shoot single ray and retrieve the face it hits #11

Closed · robinenrico closed this issue 3 years ago

robinenrico commented 4 years ago

I would like to shoot rays with a specific origin and angle and retrieve the face it hits from a mesh in the scene. Is this currently possible?

robertsulej commented 4 years ago

Shooting single rays is not supported now. If your use case needs this (or maybe a list of rays to shoot together for better performance), it would be straightforward for me to add such a camera mode. That may be an interesting option. Reading hit details is already handled and is also easy to extend (run e.g. this example and hover the mouse over the object).

Write something more on what you are working on so I can add a more functional option.

robinenrico commented 4 years ago

Thanks for your fast reply. Ideally, I would love to shoot a bundle of rays from a specified origin under a given angle, and for each ray I would like to know the face ID that it hits. In other words, you would shoot a bundle of x rays toward an object in 3D and receive the faces/triangles that are affected.

I guess that for my use case it might be possible to work with the existing implementation. Do you mean by your last sentence that I am already able to retrieve the faces of the 3D object that the light hits? E.g. in your example you have a pyramid where some faces are lit up and others are not; is it possible to retrieve the IDs of those faces, without doing this graphically by hovering over them?

robertsulej commented 4 years ago

Now you can use 2D image coordinates (the pixel x and y index in the output array) to get the object at that position, plus the 3D hit point and its distance.

This can be done without the GUI, with the NpOptiX class, but there is no convenience function yet. You need to adapt the code from _gui_get_object_at() to check which object (if any) is at a given 2D position, and from _gui_get_hit_at() to get the 3D hit point and distance. The Tk GUI decodes it in a few lines here. The methods used there, self._optix.get_hit_at() and self._optix.get_object_at(), as well as the dictionary self.geometry_names, are available without the GUI in NpOptiX.

Use the pinhole camera, e.g. as in the example code mentioned above:

optix.setup_camera("cam1", cam_type="Pinhole", eye=[1, -6, 4], target=[0, 0, 0], fov=25)
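A rough headless sketch of putting this together is shown below, for illustration only: the exact signatures and return values of the low-level get_object_at() / get_hit_at() calls are assumptions (check the Tk GUI code linked above for the actual decoding), the callback wiring is simplified, and the scene setup is omitted.

from plotoptix import NpOptiX

def read_hit(rt: NpOptiX) -> None:
    # called when accumulation is done; query one pixel of the output array
    x, y = 320, 240
    handle = rt._optix.get_object_at(x, y)          # assumed signature and return value
    hx, hy, hz, dist = rt._optix.get_hit_at(x, y)   # assumed signature and return values
    # rt.geometry_names (mentioned above) can translate the handle to a geometry name
    print("object handle:", handle, "hit:", (hx, hy, hz), "distance:", dist)

rt = NpOptiX(on_rt_accum_done=read_hit, width=640, height=480)
rt.setup_camera("cam1", cam_type="Pinhole", eye=[1, -6, 4], target=[0, 0, 0], fov=25)
# ...add geometry and lights here, then:
rt.start()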

I'll add convenience methods to NpOptiX to get this info. Also, what is available now is the vertex id, not the face id, so I'll need to add something to access the face id from the library underneath the Python API.

A camera mode with a list of rays to shoot sounds interesting. I need to think a bit about how to organize the inputs, and will add that as well. The next release should be ~next weekend, with some more additions.

aluo-x commented 4 years ago

Ideally it would expose something very similar to trimesh's intersects_location API. Basically, it takes as input a numpy array of ray origins (shape (n, 3)) and ray directions (shape (n, 3)); it returns the (m, 3) intersection locations, where m is the number of rays that hit the mesh, the indices of those m rays, and the indices of the triangles they hit, or empty arrays if no rays hit. I believe only the first hit is reliable, as the multi-hit option is known to give invalid results (search for the related bugs in the trimesh bug tracker).
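For reference, a minimal example of the call described above (first hit only; with pyembree installed trimesh switches to the faster embree-backed intersector, but the interface is the same):

import numpy as np
import trimesh

mesh = trimesh.creation.icosphere()        # any Trimesh object

origins = np.zeros((1000, 3))              # (n, 3) ray origins, here the sphere center
directions = np.random.randn(1000, 3)      # (n, 3) ray directions
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# locations: (m, 3) hit points; index_ray / index_tri: indices of the m hitting rays and the hit triangles
locations, index_ray, index_tri = mesh.ray.intersects_location(
    origins, directions, multiple_hits=False
)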

My use case is having this ray tracer in the training loop of a neural network. The trimesh intersector is CPU-accelerated using embree, but unfortunately it does not support batched processing. I plan on investigating plotoptix, looking at the overhead (context setup) and at potential batched processing.

The downside of the embree route is that it requires a GPU -> CPU transfer, semi-sequential per-mesh ray tracing (already vectorized, so multiprocessing yields very limited speedups), and another CPU -> GPU transfer.

The upside is that the overhead for context setup is pretty low, which is often not the case for GPU-based libraries...

Edit: To clarify, I literally use the ray tracer in an intermediate layer of the network. Gradients/losses are dealt with separately, so I can use basically any ray-tracing library that allows for specification of individual rays.

robinenrico commented 4 years ago

I totally agree with @aluo-x. I previously used the trimesh package for finding ray intersections with triangles. However, I noticed it was too slow, as I need to shoot billions of rays in my evolutionary optimization algorithm. I am currently studying the trimesh implementation, which is already vectorized, and trying to build a PyTorch version of its intersects_location functions, since the vectorized code seems reasonably easy to move to the GPU with PyTorch. I already have the ray-triangle intersection running on the GPU with PyTorch. However, I still need to figure out a way to implement the KD-tree or R-tree on the GPU, as this will dramatically reduce the number of triangles to evaluate. I am not sure if this will be possible in full, but I hope to involve the GPU as much as possible.
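For illustration, a brute-force batched ray-triangle test of the kind described above (every ray against every triangle, no KD-tree/BVH yet) could look roughly like this in PyTorch; this is a generic sketch of the Möller-Trumbore test, not the actual code discussed in this thread:

import torch

def ray_triangle_intersect(origins, dirs, v0, v1, v2, eps=1e-8):
    # origins, dirs: (n, 3) rays; v0, v1, v2: (m, 3) triangle vertices.
    # Returns an (n, m) hit mask and (n, m) distances t (inf where there is no hit).
    n, m = origins.shape[0], v0.shape[0]
    e1 = (v1 - v0)[None].expand(n, m, 3)             # triangle edge 1
    e2 = (v2 - v0)[None].expand(n, m, 3)             # triangle edge 2
    d = dirs[:, None].expand(n, m, 3)
    s = origins[:, None] - v0[None]                  # (n, m, 3)

    p = torch.cross(d, e2, dim=-1)
    det = (p * e1).sum(-1)                           # (n, m)
    parallel = det.abs() < eps
    inv_det = 1.0 / torch.where(parallel, torch.ones_like(det), det)

    u = (s * p).sum(-1) * inv_det                    # barycentric u
    q = torch.cross(s, e1, dim=-1)
    v = (d * q).sum(-1) * inv_det                    # barycentric v
    t = (e2 * q).sum(-1) * inv_det                   # distance along the ray

    hit = (~parallel) & (u >= 0) & (v >= 0) & (u + v <= 1) & (t > eps)
    return hit, torch.where(hit, t, torch.full_like(t, float("inf")))

The closest triangle per ray is then t.argmin(dim=1) for rays where hit.any(dim=1) is True; as noted above, a spatial acceleration structure is what makes this scale beyond brute force.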

aluo-x commented 4 years ago

@robinenrico Very cool! Your task seems to be much more compute intensive than mine; I limit myself to maybe ~10K rays max. I have also considered using Taichi to perform the ray tracing via CUDA, as it can integrate with PyTorch (with gradients) and claims higher performance, but I haven't really had time to investigate.

I would be very interested in seeing your batched ray intersection code if the code is public.

robertsulej commented 4 years ago

Leaving the plotoptix API aside... OptiX may be what you are looking for. Ray-triangle intersections and the BVH are well optimized and run on RT cores if available.

robinenrico commented 4 years ago

> Leaving the plotoptix API aside... OptiX may be what you are looking for. Ray-triangle intersections and the BVH are well optimized and run on RT cores if available.

Thanks for the idea, Robert. I looked into this and didn't find a good way to shoot single rays to find the triangle intersection. If this is supported in some way, it would be amazing for many applications. Do you know a way this can be done using OptiX?

aluo-x commented 4 years ago

So, one possible approach without going into the nitty-gritty of OptiX and CUDA kernels: mitsuba2 (with OptiX/RTX support) provides a Python API with functions to compute per-ray mesh intersections; these return an Interaction object which stores the intersection point p and the triangle index. Unfortunately the documentation is a bit sparse, so I'm not sure if you can batch-submit multiple rays.

Also @robertsulej apologies for taking the thread off track from plotoptix. I can also discuss with @robinenrico offline if this is too distracting.

aluo-x commented 4 years ago

Seems like you can! From mitsuba2's documentation:

rays, weights = sensor.sample_ray_differential(
    time=0,
    sample1=sampler.next_1d(),
    sample2=pos * scale,
    sample3=0
)

# Intersect rays with the scene geometry
surface_interaction = scene.ray_intersect(rays)

After computing the surface intersections for all the rays, we then extract the depth values:

# Given intersection, compute the final pixel values as the depth t
# of the sampled surface interaction
result = surface_interaction.t

# Set to zero if no intersection was found
result[~surface_interaction.is_valid()] = 0

For GPU usage I would probably use gpu_rgb instead of packet_rgb.
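For completeness, selecting the variant in mitsuba2 looks roughly like this (the scene file name is only a placeholder):

import mitsuba
mitsuba.set_variant("gpu_rgb")      # or "packet_rgb" for the vectorized CPU backend

from mitsuba.core.xml import load_file

scene = load_file("scene.xml")      # placeholder scene file
sensor = scene.sensors()[0]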

No batched processing, and I think no multi-GPU support? I suspect the context setup cost is also pretty high. But you may be able to re-use the GPU context and just modify the meshes. If you want to discuss this offline, I can be reached at afluo [at] andrew.cmu.edu

robertsulej commented 4 years ago

@aluo-x no worries :)

@robinenrico You need to write a ray generation program. You can control all the ray parameters there, and how many rays to shoot. Then you need a very simple "closest hit" program which writes the face id and hit coordinates to the ray payload. That part is simple, but putting it all together into the pipeline is rather tedious. Docs are here, and code samples are included in the SDK.

robinenrico commented 4 years ago

@robertsulej Thanks for your reply, I will take a look into it, although I'm not really an expert in CUDA, so this will probably be a tedious process.

@aluo-x Mitsuba does indeed seem to do the trick. The example looks really clean and understandable, but I will look into it in more detail tomorrow. Regarding multi-GPU, it does not seem to be supported, but I think we can work around this by dividing the workload explicitly over different GPUs and calling the full example for each GPU separately. This is not as clean, but probably does the trick for our use cases. In my case, I have my main evolution running on the CPU, and if I find a way to explicitly specify which GPU Mitsuba should use for the ray tracing, then I will be able to set up n programs in parallel, each with its own rays and GPU. Maybe you are able to do something similar, as your gradients do not depend on the ray-tracing results, but I'm not sure how this will affect your overall performance since your main program already uses the GPU. I will contact you at the email you mentioned to discuss this further.

robertsulej commented 4 years ago

I still need to update/debug the Linux binaries, but the Windows version is already usable on GitHub. The new release should be on PyPI today or tomorrow. Sorry for the delay, commercial work was pressing...

I guess you may already have found a good package for your problem. Anyway, you gave me an interesting idea. The implementation fits the plotoptix framework, and it was also possible to skip a major part of the calculations if, e.g., only the face id is needed. You can check this example for fast face id reading, and this one for accessing the other output data.

aluo-x commented 4 years ago

Very cool! I just took a look at the face id reading example, and it looks extremely clean. The custom projection API would also be very useful.

For now, I'm using an embree-based solution with object-based parallelization. I understand that someone wrote a Python hook into OptiX Prime called pyoptixprime; I've reached out to the author, but he thinks more time is needed before a public release.

robertsulej commented 4 years ago

Great! 👍

I'm not sure what the plans are for OptiX Prime. It is available in the OptiX 6.5 release, but not in 7.0/7.1. Fortunately, 6.5 supports RTX features, so for a short-term project it looks OK. But for longer-lived code, it may happen that Prime is discontinued.

robertsulej commented 3 years ago

If you have more ideas, feel free to share them :) Accessing the face index seems usable now, so I'm closing the issue.