mitsuba-renderer / mitsuba2

Mitsuba 2: A Retargetable Forward and Inverse Renderer

Refraction rendering generates sparse spots and irradiance meter is not aware of point/area emitter #196

Closed: yaying-00 closed this issue 4 years ago

yaying-00 commented 4 years ago

I am working on a project about optimizing the height profile of a transparent glass panel that refracts a light source to generate certain caustic patterns. I believe this is similar to Fig. 1(c) in the paper Mitsuba 2: A Retargetable Forward and Inverse Renderer. During my exploration, the refraction rendering and the irradiance meter did not behave as I expected.

  1. I created a screen behind the glass panel and used a perspective sensor to look at the screen. This should be similar to Fig. 1(c) in that paper. However, I got very sparse light spots on the screen. I used a glass lens to illustrate this problem. I expected to get a smooth light spot with a blurry edge on the screen, but the output image looks like this:

    [Images: lens_env, lens_noenv]
    Left: I used museum.exr to create an environment emitter here to show my setup. Right: I commented out the environment emitter.

    In the path tracer section of the documentation, it is suggested that glass counts as an occluder, which makes such scenes hard to render. Maybe this is the reason? How should I set up the scene to achieve a clean rendering result like Fig. 1(c) in the paper?

    The code for reproduction is “lens.py”. It can be found in the zip file attached.

  2. Is there a sensor type that can measure the incident power per pixel over a given plane? Such a sensor could then be placed at the position of the screen so that the caustic pattern is obtained directly, instead of capturing the light reflected off the screen.

    I understand that a perspective pinhole camera has an infinitely small aperture, so if I place one at the position of the screen and point it at the light source, the rendered image only contains the light that passes through that infinitely small aperture. The irradiance meter only gives the irradiance averaged over all triangular meshes of the object. I did not find such a sensor type in the documentation.

  3. The irradiance meter does not seem to be aware of point/area emitters.

    [Images: onlyenv, env_area, env_point]

    I created a square to serve as the irradiance meter. The figures illustrate the position and shape of the irradiance meter. In the first scene, there is only an environment emitter. In the second scene, a spherical area emitter is added. In the third scene, a point emitter is added. The irradiance meter should take the area/point emitter into account and report a larger value in the second and third scenes. However, it gives the same result in all three scenes (a rough sketch of the setup is shown below). I attached the code “irradiance_meter.py” in the zip file below.
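Roughly, the setup looks like this in scene-XML form (all values here are illustrative, not the exact ones from irradiance_meter.py): a rectangle carries the irradiancemeter sensor, and the emitters are added on top of the environment emitter.

<shape type="rectangle">
    <!-- the square that acts as the irradiance meter -->
    <sensor type="irradiancemeter">
        <sampler type="independent">
            <integer name="sample_count" value="512"/>
        </sampler>
        <film type="hdrfilm">
            <integer name="width" value="1"/>
            <integer name="height" value="1"/>
            <rfilter type="box"/>
        </film>
    </sensor>
</shape>

<!-- scene 1: environment emitter only -->
<emitter type="envmap">
    <string name="filename" value="museum.exr"/>
</emitter>

<!-- scene 2 additionally wraps a sphere shape in an area emitter;
     scene 3 instead adds a point emitter: -->
<emitter type="point">
    <point name="position" x="0" y="0" z="2"/>
    <spectrum name="intensity" value="10"/>
</emitter>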

refraction_and_irradiancemeter.zip

schunkes commented 4 years ago

I have been looking into your problems a little and can provide some insight on the first part for now: I think you wildly underestimated the number of samples per pixel it takes to render images as smooth as the ones in the Mitsuba 2 paper.

I repositioned the perspective camera in your scene and rendered it again in `scalar_mono` mode: [Image: output]

Here is the sensor transform, if you want to recreate what I have done:

<transform name="to_world">
    <lookat origin="0.0, 0.0, 5.0"
            target="0.0, 0.0, 10.0"
            up="0.0, 1.0, 0.0"/>
</transform>

Also, I reduced the emitter's radiance to 50 so that it wouldn't just appear white on the screen but show some gradient.

This is rendered at 32768 SPP! I had to reduce the image size to 128x128 to get a reasonable render time, but I might launch it again tonight with 512x512... Even at 4096 SPP the image was still very grainy!
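If you describe the sensor in XML (like the transform above), the sample count and film size are set roughly like this:

<sensor type="perspective">
    <!-- transform as shown above -->
    <sampler type="independent">
        <integer name="sample_count" value="32768"/>
    </sampler>
    <film type="hdrfilm">
        <integer name="width" value="128"/>
        <integer name="height" value="128"/>
    </film>
</sensor>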

A big reason for that is that your lens is made of two OBJs, which means there are many refraction events to handle.

However, it might not be necessary to use an SPP count that high for the optimization itself. I have not performed any computations like this myself, so I cannot give a definitive answer. The Mitsuba 2 paper mentions an SPP of 256 for the RGB caustic problem. I suppose this is something you have to experiment with, but given the optimization times the RGL team mentions in their materials, I would guess they used an SPP closer to 256 than to 32768.

schunkes commented 4 years ago

And here is my comment for the third part of your question:

Again I repositioned the camera to give a more complete view of the scene, with the following transform:

<transform name="to_world">
    <lookat origin="4, 0, 8" target="0, 0, 0" up="0, 1, 0"/>
</transform>

[Images: output_sphere, output_point]

You are completely right that the irradiance meter cannot see the point emitter directly! But you missed one detail! In the two images I uploaded, you can see the scene with the spherical light source on the left and with the point light source on the right.

The point emitter is not visible to the perspective camera either! You can only see the light from the point emitter that is scattered off the rectangular screen.

The reason for that is found in the code for the integrators and the emitters:

The path integrator has the following section:

// ---------------- Intersection with emitters ----------------
if (any_or<true>(neq(emitter, nullptr)))
    result[active] += emission_weight * throughput * emitter->eval(si, active);

Here you can see that for rays that directly hit an emitter, that emitter's eval() method is called.

If we check the eval() method in the point emitter, we see this:

Spectrum eval(const SurfaceInteraction3f &, Mask) const override { return 0.f; }

The point emitter always returns 0 if you evaluate it directly!

The reason is that the probability of a ray intersecting a single point in continuous space (which is what we are trying to simulate, even though the numerical precision of computers is finite) is exactly zero!

In conclusion: you can never directly see a point emitter with any sensor in Mitsuba 2, and that is physically correct! Its contribution only enters through explicit emitter sampling (next-event estimation) at the surfaces it illuminates.

However, this should not be a problem for your optimization, since you are not interested in recording the light that reaches the irradiancemeter directly from the emitter, but only in the light that is scattered by the glass object you are shaping.

Side note: if for some reason you cannot work with the irradiancemeter, it might be an option to place a screen at the location you want to optimize for and place a perspective camera such that the screen exactly fills its field of view. However, I have no experience with these optimization tasks, and this might not work for reasons I am unaware of.
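A minimal sketch of that alternative, assuming the screen is Mitsuba's default rectangle shape (it spans [-1, 1] x [-1, 1] in the xy-plane and faces +z): a camera one unit in front of it with a 90 degree field of view and a square film frames exactly that square.

<sensor type="perspective">
    <!-- at distance 1, a 90 degree field of view spans exactly [-1, 1] -->
    <float name="fov" value="90"/>
    <transform name="to_world">
        <lookat origin="0, 0, 1" target="0, 0, 0" up="0, 1, 0"/>
    </transform>
    <film type="hdrfilm">
        <integer name="width" value="512"/>
        <integer name="height" value="512"/>
    </film>
</sensor>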

I hope I can help you progress in your study with these posts :)

schunkes commented 4 years ago

Unfortunately, I cannot provide much help on the second part of the question, since I do not have enough knowledge about radiometric quantities. However, I believe it should be possible to compute the incident power on a surface, since the geometry of the scene, the sizes of all objects, and the emitted radiance of the emitter are known.
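For what it's worth, the basic relations should be something like: a uniform irradiance E (in W/m²) over a plane of area A gives an incident power of Φ = E · A, and an isolated point source of intensity I (in W/sr) at distance r produces an irradiance E = I / r² at normal incidence. Please double-check this though, since as I said I am not confident about radiometric quantities.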

merlinND commented 4 years ago

Hi,

Thanks @schunkes for taking the time to look into this problem!

I can give more detail about the first point: this type of scene is very difficult to render with path tracing. That is why an extremely high sample count was needed in @schunkes's experiment to get a meaningful image.

In the paper, we use a light tracer (which starts paths from the light source) and a collimated area light source (all light rays leave perpendicular to the emitter surface, which approximates the behavior of a distant light source) to make integration easier. That is why we get reasonably converged images with 256 spp. Unfortunately, we have not had time to port the light tracer and light source to the latest Mitsuba 2 master (there were significant code improvements between the release of the paper and the release of Mitsuba 2, and we did not have the chance to port everything over).

yaying-00 commented 4 years ago

Hi,

Thanks @schunkes @merlinND for looking into this!

@merlinND Are you planning to release that new light tracer & light source in the future? If yes, is there a rough timeline? It would be a great help to my project.

@schunkes Thanks for pointing out that the point emitter cannot be seen directly. I still have questions about the area emitter, since the irradiance meter does not seem to recognize it either. I changed the screen in my lens.py to be an irradiance meter, and it gave the same result whether I commented out the lens object or not. Thus I believe the irradiance meter is also not aware of the light emitted by the area emitter and refracted by a dielectric material.

leroyvn commented 4 years ago

Hi @yaying-00, I'm afraid the irradiance meter is just as aware of area emitters as any other sensor: the directions it uses to create its rays are chosen without accounting for the emitters in the scene. An important thing to remember is that emitters do not actually "generate" light in the physically intuitive sense within the Monte Carlo ray tracing method: the integrator instead searches for a path connecting an emitter and a sensor.

A refracting interface imposes roughly as strong a constraint as a direct line of sight when it comes to sampling the right direction for a sensor to "see" an emitter: if the set of directions that will eventually let you connect the sensor and the emitter is too "small," you have a very low chance of sampling a path yielding nonzero radiance. In practice, the irradiancemeter plugin "blindly searches" for a direct line of sight to the area emitter (its sample method does not importance sample the angular domain based on emitter location), so if the emitter's angular size as seen from the sensor is too small, the sensor won't "see" the emitter unless you draw a very large number of samples.

To convince yourself, I suggest trying to make your area light larger. When it covers a significant portion of the angular region visible to your irradiance meter, I expect the number of samples required to record a nonzero flux to decrease.
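For instance (the center, radius and radiance below are purely illustrative, not taken from irradiance_meter.py), increasing the sphere's radius directly increases the solid angle it subtends as seen from the meter:

<shape type="sphere">
    <point name="center" x="0" y="0" z="2"/>
    <!-- a larger radius means a larger angular size from the meter's point of view -->
    <float name="radius" value="1.5"/>
    <emitter type="area">
        <spectrum name="radiance" value="10"/>
    </emitter>
</shape>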

yaying-00 commented 4 years ago

Hi @leroyvn. I see the point. Thank you!

merlinND commented 4 years ago

@yaying-00 I am planning to port and release the light tracer & collimated area light source in the future, but unfortunately not for several months (until the end of my internship). I would gladly help with a PR in that direction, though!