Closed · @Li-Jiren closed this issue 2 years ago
Hi @Li-Jiren!
I'm not quite sure I fully understand what you're trying to achieve. What would the output look like in your setup? Would it just be a masked version of your input?
If so, I think it would be simpler to avoid the `projector` emitter entirely. Instead, you could write a simple custom integrator that serves as a visibility test (take a look at the depth integrator) and then apply its output as a mask to your input image.
Hi @njroussel,
Thank you for your reply!
I'm not sure what you mean by 'input image'. Is it the pattern image? I didn't use any other images, and the object is a model in the scene rather than an image. Also, I actually need a projector-like emitter that can be moved around in space. Anyway, please allow me to clarify my goal.
What I'm trying to achieve is something like this image:
Put a projector at a certain place in the scene and project a pattern (like the first image of this issue) into the space. There is an object in the space (like the bunny), so rays from the projector intersect the object. A camera is used to observe the scene.
So yes, it should be a masked version of the object.
The problem is: if a ray is emitted from projector pixel (X, Y), the pattern value at that pixel is 0.8, and the ray intersects the object at world coordinate (Xo, Yo, Zo), then the value at point (Xo, Yo, Zo) as seen by the camera is not 0.8; it is affected by the distance from the projector to the point and by the incident angle.
The reason is that the pattern values are interpreted as light intensity when the pattern is used as the bitmap of the projector (and of course light intensity decreases with distance and is affected by the incident angle because of the BSDF).
Is it possible to give each ray a fixed value that it "carries", unaffected by distance and incident angle, project it onto the object it intersects, and then read this value back in the camera?
Thank you again for your help!
This would definitely require your own custom integrator.
An important note: the `projector` emitter behaves similarly to a `perspective` camera (a pinhole model). From your sketch, it seems that you want the emitted light's direction to be perpendicular to the surface of your "texture emitter". This is not how the `projector` plugin emits light.
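For intuition, a pinhole projector maps a 3D point into its pattern the same way a perspective camera maps a point onto film: transform the point into the projector's local frame, divide by depth, and scale by the field of view. A minimal sketch in plain NumPy (not Mitsuba's actual implementation; the pose and FOV values here are made up):

```python
import numpy as np

def projector_uv(p_world, to_projector, fov_x_deg, aspect=1.0):
    """Map a world-space point to (u, v) in [0, 1]^2 of a pinhole
    projector's pattern, or None if the point lies outside the frustum."""
    # Transform into the projector's local frame (projector looks down +z)
    p_local = to_projector @ np.append(p_world, 1.0)
    x, y, z, _ = p_local
    if z <= 0:                          # behind the projector
        return None
    # Perspective divide, then scale by the tangent of the half-FOV
    tan_half = np.tan(np.radians(fov_x_deg) / 2)
    u = 0.5 + 0.5 * (x / z) / tan_half
    v = 0.5 + 0.5 * (y / z) / (tan_half / aspect)
    if not (0 <= u <= 1 and 0 <= v <= 1):
        return None                     # outside the projected pattern
    return float(u), float(v)

# Projector at the origin looking down +z, 60 degree horizontal FOV
world_to_proj = np.eye(4)
print(projector_uv(np.array([0.0, 0.0, 2.0]), world_to_proj, 60.0))
# A point on the optical axis lands at the pattern center: (0.5, 0.5)
```

Points off to the side hit the pattern away from its center, which is why the projected texels spread out over the scene instead of arriving perpendicular to any surface.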
Hi @njroussel,
I'm sorry, I made a mistake. I don't really need it to be perpendicular. Something like this also works for me:
But I guess I still need to implement an integrator? Is it possible for a projector to cast a fixed value? If I have to, could you please give me some specific advice on how to trace the value of a light source? Maybe use an area light instead of a projector? But it seems that we cannot assign a pattern to an area light.
Hi @njroussel,
Sorry for bothering you again, but could you please give me some specific advice on how to trace the true value of a light source? (Or, besides using the pattern as a light source, are there other ways to project a pattern onto an object?) Maybe I should use an area light instead of a projector? But it seems we cannot use a pattern as an area light. Maybe use `emitter.sample_position()` and `scene.ray_intersect()` (like the depth integrator) to locate the intersection point on the object?
Thank you!
Hi @Li-Jiren ,
In your case, it seems that a proper light-transport simulation is not what you need, in which case I would recommend you write your own integrator plugin, tailored to your needs.
You don't necessarily have to use an emitter for the projector, if I understood correctly, since all you need is to look up a texture value given a ray direction / 3D position.
Your integrator would work as follows, if your projector is defined on a surface:
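The suggested lookup approach can be sketched outside of Mitsuba with plain ray geometry. This is a hedged toy example, not the actual plugin: a unit sphere stands in for the scene object, the "projector" is a pinhole point with its pattern on the plane z = -2, and all names and numbers are made up:

```python
import numpy as np

# 8x8 grayscale gradient standing in for the projected pattern
PATTERN = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
PROJECTOR = np.array([0.0, 0.0, -5.0])   # projector pinhole position

def hit_sphere(o, d):
    """Smallest positive t where o + t*d hits the unit sphere, else None."""
    b = np.dot(o, d)
    disc = b * b - (np.dot(o, o) - 1.0)
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-6 else None

def pattern_value(p):
    """Trace from the surface point p toward the projector and look up
    the pattern where that ray crosses the plane z = -2 ([-1, 1]^2)."""
    d = PROJECTOR - p
    d = d / np.linalg.norm(d)
    t = (-2.0 - p[2]) / d[2]             # ray/plane intersection
    if t <= 0.0:
        return 0.0                       # projector not in front of the point
    hit = p + t * d
    u, v = (hit[0] + 1.0) / 2.0, (hit[1] + 1.0) / 2.0
    if not (0.0 <= u < 1.0 and 0.0 <= v < 1.0):
        return 0.0                       # outside the pattern
    # (a real integrator must also check the object does not occlude this ray)
    return float(PATTERN[int(v * 8), int(u * 8)])

def render_pixel(cam_o, cam_d):
    """Camera ray -> object intersection -> raw pattern value, with no
    distance or cosine attenuation applied."""
    t = hit_sphere(cam_o, cam_d)
    return 0.0 if t is None else pattern_value(cam_o + t * cam_d)

cam = np.array([0.0, 0.0, -5.0])         # camera on the projector's side
print(render_pixel(cam, np.array([0.0, 0.0, 1.0])))
```

The key point is that the pattern value is written to the camera pixel unmodified, so the output is exactly the mask the question asks for rather than a physically attenuated intensity.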
Hi @Speierers,
I see, so I need to put my pattern in the scene space (between the object and the projector point) as a texture, instead of using a projector.
Thank you!
Hi @Speierers,
I encountered a small problem, maybe it is a bug. When tracing the directions between different points on the object and the projector point, some of the directions hit other parts of the object itself, so they cannot reach the pattern texture.
For these rays, I want to set an invalid value, so I tried to use `SurfaceInteraction.shape` to see which shape these rays hit. However, the following error occurred:
```
TypeError                                 Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_22908\3595017093.py in <module>
     56 result[~surface_interaction.is_valid()] = 0.0
     57
---> 58 shapes = projector_interaction.shape
     59
     60 validation = projector_interaction.is_valid()

TypeError: Unable to convert function return value to a Python type! The signature was
    (self: mitsuba.render.SurfaceInteraction3f) -> enoki::DiffArray<enoki::CUDAArray<mitsuba::Shape<enoki::DiffArray<enoki::CUDAArray<float> >,mitsuba::Color<enoki::DiffArray<enoki::CUDAArray<float> >,3> > const * __ptr64> >
```
I am using the `gpu_autodiff_rgb` variant. Is this a bug?
Thank you!
To avoid self-shadowing, you can try using `si.spawn_ray()`.
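The idea behind `spawn_ray` is to nudge the ray origin slightly off the surface so the ray does not immediately re-intersect the geometry it started on. A rough sketch of that idea in plain Python (the constant epsilon here is a simplification; Mitsuba uses a scale-dependent offset):

```python
import numpy as np

def spawn_ray(p, n, d, eps=1e-4):
    """Return an (origin, direction) pair whose origin is nudged off the
    surface point p along the normal n, on the side that the direction d
    leaves from, so the ray cannot re-hit its own surface at t ~ 0."""
    side = 1.0 if np.dot(n, d) >= 0.0 else -1.0
    origin = p + side * eps * n
    return origin, d / np.linalg.norm(d)

p = np.array([0.0, 0.0, 0.0])   # surface point
n = np.array([0.0, 0.0, 1.0])   # surface normal
d = np.array([0.0, 0.0, 1.0])   # direction toward the projector
o, d_unit = spawn_ray(p, n, d)
print(o)                         # origin lifted to z = 1e-4, just off the surface
```

Without this offset, floating-point error can place the intersection point fractionally below its own surface, which is exactly the self-shadowing artifact described above.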
Also, you should try porting your code to Mitsuba 3, as we have fixed a few anomalies in the new version, which might solve your problems.
Hi @Speierers,
I was using `si.spawn_ray_to(projector_point)` to create the rays and then `scene.ray_intersect(rays)` to get their intersections with the pattern-texture rectangle. (I also tried `rectangle.ray_intersect()`, but other bugs occurred there.) Still, there was a self-shadowing problem. I guess the result should be the same if I use `si.spawn_ray()` instead, because `spawn_ray()` should return the same ray as `si.spawn_ray_to()`?
And I will try Mitsuba 3 and re-run what I tried before to see if it works properly now. Thank you!
Yes, both methods should prevent self-occlusion. Let me know if the problem persists in Mitsuba 3 (please open a separate issue on the mitsuba3 repo if that is the case).
I'm trying to project the following pattern (valued from 0 to 1) onto an object and read the value reflected by the object in the camera:
I tried to use the pattern as the bitmap of a projector emitter:
The result is like:
Obviously, it is a map of the light intensity reflected by the object rather than the true values of the pattern (the areas I circled in the image are too dark because of the distance and the incident angle of the rays).
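For context on why those regions darken: the irradiance a surface point receives from a small source scales with cos θ / d², i.e. the inverse-square law combined with the foreshortening cosine of a diffuse BSDF. A quick back-of-the-envelope check (plain Python; the numbers are purely illustrative):

```python
import math

def relative_irradiance(pattern_value, distance, incidence_deg):
    """Irradiance reaching the surface, relative to a reference point at
    distance 1 with normal incidence: E = value * cos(theta) / d^2."""
    return pattern_value * math.cos(math.radians(incidence_deg)) / distance ** 2

# The same 0.8 pattern value lands very differently on the object:
print(relative_irradiance(0.8, 1.0, 0.0))    # nearby, head-on
print(relative_irradiance(0.8, 2.0, 60.0))   # farther and grazing: 8x dimmer
```

So even modest changes in distance and angle can account for the dark circled regions, which is why no BSDF tweak alone recovers the original pattern values.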
Is it possible to trace the original value of the projector pattern in the camera by changing the BSDF/materials/reflectance?
I found this issue https://github.com/mitsuba-renderer/mitsuba2/issues/456 and I think it is similar to my problem to some extent. Maybe I also have to implement my own integrator for this task?
Thank you!