Is it possible to cast rays with different origins and directions? The CustomProj camera mode handles custom directions, but all the rays have the same origin.
Sure, I have a placeholder for this, just did not have a use case yet. I can add such a mode.
Please write what your application is; I am curious, but I can also figure out a better interface this way.
There is also one performance note: rays with different origins may suffer from a data divergence problem. But apart from that, the camera mode is absolutely doable.
There are a few things that custom origins would be useful for. One idea is visualizing distances between two deformable sheets. The ray origins would lie on a curvilinear grid on the first sheet, with each ray's direction being the sheet's normal at that point. As the first sheet deforms, the rays move, and with them the distances to the second sheet.
Another idea is testing whether another piece covers a portion of a mesh. Similar setup as above: each vertex on the mesh is a ray origin, and the normal at that point is the direction. This time, the result for each ray is a binary "covered or not".
As for the interface, may I suggest two textures of dimensions [width, height, 3], one for ray origins and the other for normalized ray directions? This way the interface remains flexible for other applications and is conceptually easy to understand.
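To make the idea concrete, here is a minimal sketch of the two textures for the deformable-sheet case, in plain NumPy (the array names and the height-first shape are my choices, not a proposal for the final plotoptix API):

```python
import numpy as np

h, w = 100, 100  # resolution of the ray grid

# Height-field sheet z = f(x, y) sampled on a regular grid.
x, y = np.meshgrid(np.linspace(-1, 1, w), np.linspace(-1, 1, h))
z = 0.2 * np.sin(3 * x) * np.cos(2 * y)

# Ray origins: one per grid point, texture of shape (height, width, 3).
origins = np.stack([x, y, z], axis=-1).astype(np.float32)

# Ray directions: sheet normals (-dz/dx, -dz/dy, 1), normalized to unit
# length, texture of shape (height, width, 3).
dz_dy, dz_dx = np.gradient(z, y[:, 0], x[0, :])
normals = np.stack([-dz_dx, -dz_dy, np.ones_like(z)], axis=-1)
directions = (normals / np.linalg.norm(normals, axis=-1, keepdims=True)).astype(np.float32)
```

For the coverage test from the second idea, the same two textures would be filled from the mesh vertices and vertex normals; only the per-ray readout changes.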
Great, interesting! Textures for origins and targets are pretty much what I have in the code, waiting for a purpose. I'll finish it over the next few days.
Thank you! I am excited to see how much faster plotoptix will be compared to the current implementation.
After thinking about it some more, I see a potential 'gotcha' with the new camera mode's interface. The fact that the camera's eye and target vectors default to 'best fit' and 'center of geometries', respectively, will cause the output image to be very different from what is expected if a user assumes that the rays are defined w.r.t. the global frame (as they are with other APIs like trimesh).
Do you think it would make sense to change the default eye and target vectors for the new camera mode, or to keep the defaults as is and make a note in the documentation?
(This is assuming the ray origins and directions are defined w.r.t. the camera frame and not the global frame.)
The 'best fit' and 'center of geometries' make sense only when the eye or target is not specified as a texture. I have to update that in the docs, thanks!
Ray angles provided in the texture in the custom projection mode are in the camera frame. But the mode with ray targets in the texture uses the world frame (maybe I should underline this in the documentation as well). I would use the same, the world frame, for the ray origins. I think eye-target-up is not very well defined when rays are free to go from any point in all crazy directions.
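To illustrate the difference between the two conventions, a small sketch (the helper function is just for illustration, not part of plotoptix): a direction given in the camera frame has to be rotated into the world frame through the eye-target-up basis, while world-frame origins and directions need no such transform, so eye/target/up carry no extra information in that mode.

```python
import numpy as np

def camera_basis(eye, target, up=(0.0, 1.0, 0.0)):
    """Orthonormal camera axes (right, up, forward) expressed in world coordinates."""
    eye, target, up = (np.asarray(v, dtype=np.float64) for v in (eye, target, up))
    fwd = target - eye
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right /= np.linalg.norm(right)
    cam_up = np.cross(right, fwd)
    return right, cam_up, fwd

right, cam_up, fwd = camera_basis(eye=[0, 0, 5], target=[0, 0, 0])

# A ray direction expressed in the camera frame...
d_cam = np.array([0.1, 0.2, 1.0])
d_cam /= np.linalg.norm(d_cam)

# ...and the same direction in the world frame; with free per-ray origins
# there is no natural eye/target pair to define such a basis at all.
d_world = d_cam[0] * right + d_cam[1] * cam_up + d_cam[2] * fwd
```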
I had a quick look at the trimesh description - it is using Embree, right? Staying on the CPU has many advantages when you want to expose more of the guts of the traced rays. But let's see how fast an OptiX-based implementation can be.
You can try this example. There is some code showing what the two surfaces are, which is not really needed for the distance calculation. To make it fast, just use the flat shading material, as noted in the notebook.
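Roughly, the flat-material setup looks like the sketch below (placeholder geometry only; the notebook builds the two actual sheets, so the exact code there differs):

```python
import numpy as np
from plotoptix import NpOptiX
from plotoptix.materials import m_flat  # flat shading: no secondary rays, fast

rt = NpOptiX(width=1000, height=1000, start_now=False)
rt.setup_material("flat", m_flat)

# Placeholder for the target surface the distances are measured to;
# the notebook replaces this with the real second sheet.
pts = np.random.uniform(-1, 1, size=(10000, 3)).astype(np.float32)
rt.set_data("target_surface", pos=pts, r=0.02, mat="flat")

rt.start()
```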
What a wonderful way to start the new year; CustomProjXYZtoXYZ works!
A quick implementation already shows a 10x performance improvement over my CPU implementation for 1000x1000 rays - this includes the OptiX setup time. This will only improve as OptiX context reuse and other optimizations are implemented.
Very glad to hear that!!! Happy New Year! :)