mitsuba-renderer / mitsuba2

Mitsuba 2: A Retargetable Forward and Inverse Renderer

[❔ Other Question] No direct hits of emitter on sensor #455

Closed MaximilianBader closed 3 years ago

MaximilianBader commented 3 years ago

Summary

No direct ray hits between emitter and sensor, which does not reflect physical reality.

System configuration

Platform: Ubuntu 20.04 (Windows WSL2)
Compiler: Clang
Python version: 3.9.5
Mitsuba 2 version: own forked repo with extension (https://github.com/MaximilianBader/mitsuba2)
Compiled variants: "scalar_rgb", "scalar_spectral", "scalar_mono", "packet_mono"

Description

I'm trying to render a scene consisting of a single spherical area emitter, one scattering interface, and a single sensor element, while recording the travel time of the respective rays. Rendering the scene with my forked repository (https://github.com/MaximilianBader/mitsuba2) and a custom Python script, I find that I do not receive any direct ray hits from the emitter to the sensor, but only a single ray hit via the scattering surface. In physical reality, however, I would certainly expect a direct light propagation path from emitter to sensor. Do you have any suggestions or advice on how to overcome this issue?

Steps to reproduce

Python script:

import os
import numpy as np
import h5py
from pathlib import Path
import matplotlib.pyplot as plt
from scipy.io import savemat
import enoki as ek
import mitsuba

# Set the desired mitsuba variant
mitsuba.set_variant('scalar_mono')

from mitsuba.core import Float, UInt32, UInt64, Vector2f, Vector3f
from mitsuba.core import Bitmap, Struct, Thread
from mitsuba.core.xml import load_file
from mitsuba.render import ImageBlock

# Relative path to the scene XML file, appended to the FileResolver's search path & loaded into Mitsuba
filename = r'mitsuba_scenes/example_us_scene_3.xml'
Thread.thread().file_resolver().append(os.path.dirname(filename))
scene = load_file(filename)

# Instead of invoking the integrator's full render() entry point, we drive the
# sampling loop manually: fetch the integrator, sensor, film and sampler
integrator = scene.integrator()
sensors = scene.sensors()
film = sensors[0].film()
sampler = sensors[0].sampler()
film_size = film.crop_size()
spp = 250

# Seed the sampler
total_sample_count = ek.hprod(film_size) * spp

print("total_sample_count: " + str(total_sample_count))

diff_scale_factor = ek.rsqrt(sampler.sample_count())

if sampler.wavefront_size() != total_sample_count:
    sampler.seed(0, total_sample_count)

pos = Vector2f(0,0)
scale = Vector2f(1.0 / film_size[0], 1.0 / film_size[1])

num_hitting_rays_direct = 0
weight_sum_rays_direct = 0
num_hitting_rays_zero_weight_direct = 0
num_hitting_rays_reflected = 0
weight_sum_rays_reflected = 0
num_hitting_rays_zero_weight_reflected = 0
total_weight_rays = 0

# probe parameters
delay_before_recording = 3
num_of_cropped_samples_at_sinogram_start = 1
frequency_sampling = 4e7
speed_of_sound_above_membrane = 1530
speed_of_sound_below_membrane = 1465
filt_cutoff_min = 1e5
filt_cutoff_max = 12e6

# impulse response of transducer (already filtered, sampled and zero-centered)
impulse_response_file = Path(r'/mnt/e/Max/02-projects/202009-OA_accurate_model_SPMR_refl_correct/10-OA_modeling_ray_tracing_trial_210629/oa_impulse_response_Acuity_MRI_Mammazentrum_noSIR_noEIR.mat')
with h5py.File(impulse_response_file.resolve(), 'r') as file:
    impulse_response = np.squeeze(np.array(file['IR']))
impulse_response_energy = np.matmul(np.transpose(impulse_response),impulse_response)
print("impulse_response_energy: " + str(impulse_response_energy))
impulse_response_half_length = np.floor_divide(len(impulse_response),2)

# Define time vector for sampling
num_time_samples = 2030
time_samples = np.transpose((np.arange(start=num_of_cropped_samples_at_sinogram_start,stop=num_time_samples)+delay_before_recording)/frequency_sampling)
sinogram = np.zeros((len(time_samples),len(sensors)))

i = 0
while i < total_sample_count:
    # Sample a position within the (single) pixel; do not accumulate across
    # iterations, since 'pos * scale' must stay within the [0, 1]^2 film domain
    pos = sampler.next_2d()

    # Sample single ray starting from the camera sensor
    ray, weight = sensors[0].sample_ray_differential(
        time=0,
        sample1=sampler.next_1d(),
        sample2=pos * scale,
        sample3=sampler.next_2d()
    )

    ray.scale_differential(diff_scale_factor)

    (spectrum, last_interaction_point, covered_distances, active) = \
        integrator.sample_with_length_and_origin(scene, sampler, ray, active=True)
    if active:
        for index_list in range(0,len(covered_distances)):
            if len(covered_distances[index_list]) > 1:
                num_hitting_rays_reflected += 1
                weight_sum_rays_reflected += spectrum[index_list]
                if spectrum[index_list] == 0:
                    num_hitting_rays_zero_weight_reflected += 1
                else:
                    travel_time_ray = covered_distances[index_list][0]/speed_of_sound_above_membrane+np.sum(covered_distances[index_list][1:])/speed_of_sound_below_membrane
                    index_travel_time_ray = int(travel_time_ray*frequency_sampling)-delay_before_recording-num_of_cropped_samples_at_sinogram_start
                    scaling_factor_signal_ray = spectrum[index_list]/impulse_response_energy
                    # Only add the signal once, and only if it fits into the sinogram
                    if index_travel_time_ray+impulse_response_half_length < sinogram.shape[0]:
                        sinogram[index_travel_time_ray+np.arange(start=-impulse_response_half_length,stop=impulse_response_half_length+1),0] += scaling_factor_signal_ray*impulse_response
                    else:
                        print("Signal delayed too much: travel_time_ray=" + str(travel_time_ray) + ", travel_time_index=" + str(index_travel_time_ray))
                if num_hitting_rays_reflected == 1:
                    print('reflected ray:')
                    print('  weight 1e-6:', spectrum[index_list]*1e6)
                    print('  last interaction point:', last_interaction_point)
                    print('  distance list:', covered_distances[index_list])

            else:
                num_hitting_rays_direct += 1
                weight_sum_rays_direct += spectrum[index_list]
                if spectrum[index_list] == 0:
                    num_hitting_rays_zero_weight_direct += 1
                else:
                    travel_time_ray = covered_distances[index_list][0]/speed_of_sound_above_membrane+np.sum(covered_distances[index_list][1:])/speed_of_sound_below_membrane
                    index_travel_time_ray = int(travel_time_ray*frequency_sampling)-delay_before_recording-num_of_cropped_samples_at_sinogram_start
                    scaling_factor_signal_ray = spectrum[index_list]/impulse_response_energy
                    signal = scaling_factor_signal_ray*impulse_response

                    if (index_travel_time_ray-impulse_response_half_length < 0):
                        index_signal_start = impulse_response_half_length-index_travel_time_ray
                        if index_signal_start < signal.shape[0]:
                            # Note the +1: the destination must match the length of signal[index_signal_start:]
                            sinogram[np.arange(start=0,stop=index_travel_time_ray+impulse_response_half_length+1),0] += signal[index_signal_start:]
                    elif (index_travel_time_ray+impulse_response_half_length >= sinogram.shape[0]):
                        index_signal_end = signal.shape[0] - (index_travel_time_ray+impulse_response_half_length-sinogram.shape[0]) - 1
                        if index_signal_end > 0:
                            sinogram[np.arange(start=index_travel_time_ray-impulse_response_half_length,stop=sinogram.shape[0]),0] += signal[:index_signal_end]
                    else:
                        sinogram[index_travel_time_ray+np.arange(start=-impulse_response_half_length,stop=impulse_response_half_length+1),0] += signal
                if num_hitting_rays_direct == 1:
                    print('direct ray:')
                    print('  weight:', spectrum[index_list])
                    print('  last interaction point:', last_interaction_point)
                    print('  distance list:', covered_distances[index_list])
                    #print('   travel_time_ray: ' + str(travel_time_ray))
                    #print('   index_travel_time_ray: ' + str(index_travel_time_ray))
                    #print('   scaling_factor_signal_ray: ' + str(scaling_factor_signal_ray))

    sampler.advance()
    i += 1

print("num_hitting_rays_direct: " + str(num_hitting_rays_direct))
print("num_hitting_rays_zero_weight_direct: " + str(num_hitting_rays_zero_weight_direct))
print("weight_sum_rays_direct: " + str(weight_sum_rays_direct))
print("num_hitting_rays_reflected: " + str(num_hitting_rays_reflected))
print("num_hitting_rays_zero_weight_reflected: " + str(num_hitting_rays_zero_weight_reflected))
print("weight_sum_rays_reflected: " + str(weight_sum_rays_reflected))

Scene:

<scene version="2.0.0">
    <!-- Define basic path tracer modeling a maximum of 3 scattering events -->
    <!--integrator type="path"-->
    <integrator type="path_ultrasound">
        <integer name="max_depth" value="3"/>
    </integrator>

    <!-- Define sensor type at same position as emitter which detects the incoming irradiance-->
    <shape type="rectangle">
        <!-- Transform sensor type to correct direction -->
        <transform name="to_world">
            <scale x="0.0000001" y="0.0000001" />
            <translate x="0.0002" y="0" z="-0.04" />
            <!-- rotate y="1" angle="-180" /-->
        </transform>

        <sensor type="irradiancemeterdirectional">
            <float name="r_min_bound" value="0.02"/>
            <float name="y_max_bound" value="0.005"/>
            <float name="phi_max_bound" value="1.1459"/> <!-- 32 -->
            <!-- Write to a portable float map containing all luminance values -->
            <film type="hdrfilm">
                <integer name="width" value="1"/>
                <integer name="height" value="1"/>
                <string name="pixel_format" value="luminance"/>
                <string name="file_format" value="pfm" />
            </film>
            <!-- Define easiest sampler of sensor for ray tracing-->
            <sampler type="independent">
                <integer name = "sample_count" value = "250"/>
            </sampler>
        </sensor>
    </shape>

    <!-- Add membrane of transducer -->
    <!-- shape type="rectangle">
        <transform name="to_world">
            <scale x="0.04" y="0.02" />
            <translate x="0" y="0" z="-0.008" />
        </transform>
        <bsdf type="roughdielectric">
            <float name="int_ior" value="0.9616"/> <! gel pad 0.9616 >
            <float name="ext_ior" value="0.9202"/> <! soft tissue 0.9202 >
        </bsdf>
    </shape-->

    <shape type="sphere">
        <point name="center" x="-0.005" y="0" z="0.005"/>
        <float name="radius" value="0.0001"/>

        <emitter type="area">
            <spectrum name="radiance" value="1"/>
        </emitter>
    </shape>

    <!-- Add reflecting interface -->
    <shape type="rectangle">
        <transform name="to_world">
            <scale x="0.001" y="0.001" />
            <translate x="0.002" y="0" z="0" />
            <rotate y='1' angle='-45'/> <!-- counter-clockwise rotation 225 -->
        </transform>
        <bsdf type="roughdielectric">
            <float name="int_ior" value="0.1923"/>  <!-- bone -->
            <float name="ext_ior" value="0.9202"/>  <!-- soft tissue -->
        </bsdf>
    </shape>
</scene>

Output:

2021-07-01 09:11:36 INFO  main  [xml.cpp:1221] Loading XML file "mitsuba_scenes/example_us_scene_3.xml" ..
2021-07-01 09:11:36 INFO  main  [xml.cpp:1222] Using variant "scalar_mono"
2021-07-01 09:11:36 INFO  main  [xml.cpp:354] "mitsuba_scenes/example_us_scene_3.xml": in-memory version upgrade (v2.0.0 -> v2.2.1) ..
2021-07-01 09:11:36 INFO  main  [PluginManager] Loading plugin "plugins/uniform.so" ..
2021-07-01 09:11:36 INFO  main  [PluginManager] Loading plugin "plugins/path_ultrasound.so" ..
2021-07-01 09:11:36 INFO  tbb00 [PluginManager] Loading plugin "plugins/area.so" ..
2021-07-01 09:11:36 INFO  tbb01 [PluginManager] Loading plugin "plugins/hdrfilm.so" ..
2021-07-01 09:11:36 INFO  main  [PluginManager] Loading plugin "plugins/independent.so" ..
2021-07-01 09:11:36 INFO  tbb02 [PluginManager] Loading plugin "plugins/roughdielectric.so" ..
2021-07-01 09:11:36 INFO  tbb00 [PluginManager] Loading plugin "plugins/sphere.so" ..
2021-07-01 09:11:36 INFO  tbb00 [PluginManager] Loading plugin "plugins/diffuse.so" ..
2021-07-01 09:11:36 INFO  tbb01 [PluginManager] Loading plugin "plugins/gaussian.so" ..
2021-07-01 09:11:36 WARN  tbb01 [HDRFilm] The PFM format only supports component_format="float32". Overriding..
2021-07-01 09:11:36 INFO  tbb02 [PluginManager] Loading plugin "plugins/rectangle.so" ..
2021-07-01 09:11:36 INFO  tbb01 [PluginManager] Loading plugin "plugins/irradiancemeterdirectional.so" ..
2021-07-01 09:11:36 WARN  tbb01 [IrradianceMeterDirectional] This sensor should only be used with a reconstruction filterof radius 0.5 or lower(e.g. default box)
2021-07-01 09:11:36 INFO  main  [ShapeKDTree] Building a SAH kd-tree (3 primitives) ..
2021-07-01 09:11:36 INFO  main  [ShapeKDTree] Finished. (20 B of storage, took 0ms)
total_sample_count: 250
impulse_response_energy: 1.22321357533649e-15
Direct emitter hit: mis=1, throughput=[1], bsdf_val=[3.09096e-10], emitter_val=[0.000592845], last_interaction_point=[0.00122284, -2.29122e-06, 0.00122284], current_covered_distances=[0.0412353, 0.00724611]
reflected ray:
  weight 1e-6: [3.66492e-07]
  last interaction point: [0.00122284, -2.29122e-06, 0.00122284]
  distance list: [0.041235316544771194, 0.007246114779263735]
num_hitting_rays_direct: 0
num_hitting_rays_zero_weight_direct: 0
weight_sum_rays_direct: 0
num_hitting_rays_reflected: 1
num_hitting_rays_zero_weight_reflected: 0
weight_sum_rays_reflected: [3.66492e-13]
Speierers commented 3 years ago

Hi @MaximilianBader,

receive any direct ray hits from the emitter to the sensor

I'm not quite sure I understand what you mean by this. I see that you are using a custom integrator (with a custom API). In this integrator, do you trace the rays from the sensor or from the emitters? Do you perhaps expect a call to scene.ray_intersect() to intersect with the camera? This is not supported by the current sensor plugins (e.g. perspective), as those are infinitely small, similar to a point light.
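To make the geometry concrete: the probability that a ray traced from the emitter happens to pass through a tiny sensor scales with the solid angle the sensor subtends, which vanishes as the sensor shrinks toward a point. A quick renderer-independent Monte Carlo sketch (plain NumPy, not the Mitsuba API) illustrates that scaling for a disk-shaped detector:

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_fraction(sensor_radius, distance=1.0, n=200_000):
    """Fraction of uniformly sampled directions from a point emitter
    that hit an on-axis disk of the given radius at the given distance."""
    # Uniformly distributed directions on the unit sphere
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    # Keep rays heading toward the sensor plane z = distance
    fwd = v[v[:, 2] > 0]
    t = distance / fwd[:, 2]
    hit_xy = fwd[:, :2] * t[:, None]
    hits = np.count_nonzero(np.sum(hit_xy**2, axis=1) < sensor_radius**2)
    return hits / n

for r in (0.5, 0.05, 0.005):
    frac = hit_fraction(r)
    # Solid angle of the disk divided by 4*pi (valid for distance=1)
    analytic = 0.5 * (1.0 - 1.0 / np.sqrt(1.0 + r**2))
    print(f"radius {r:7.3f}: hit fraction {frac:.2e} (analytic {analytic:.2e})")
```

With a sensor scaled to 1e-7, as in the scene above, the expected number of direct hits at 250 samples is effectively zero, which matches the observed output.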

MaximilianBader commented 3 years ago

Hi @Speierers,

Thanks for your hints: indeed, I'm using a custom integrator and tracing the rays from the emitter to the sensor. I was hoping to register direct-ray intersections with the very small sensor. However, as you said, such an intersection cannot be expected without importance sampling the sensor in a realistic manner. After discussing this with a colleague, I will now take a completely different approach and define and render the scene differently.
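For readers landing on the same problem: the usual fix is to connect path vertices to the sensor explicitly (next-event estimation toward a finite detector area) rather than waiting for a random hit. A renderer-agnostic sketch of such a connection step, assuming a disk-shaped sensor with normal +z (all names illustrative, not an existing Mitsuba API):

```python
import numpy as np

rng = np.random.default_rng(0)

def connect_to_sensor(vertex, sensor_center, sensor_radius):
    """Sample a point on a disk-shaped sensor (normal assumed +z) and
    return the connection direction, distance and solid-angle pdf."""
    # Uniform point on the disk (sqrt for uniform area density)
    r = sensor_radius * np.sqrt(rng.random())
    phi = 2.0 * np.pi * rng.random()
    p = sensor_center + np.array([r * np.cos(phi), r * np.sin(phi), 0.0])
    d = p - vertex
    dist = np.linalg.norm(d)
    d /= dist
    # Area pdf of uniform disk sampling, converted to solid angle
    pdf_area = 1.0 / (np.pi * sensor_radius**2)
    cos_theta = abs(d[2])  # cosine against the sensor normal (+z)
    pdf_solid_angle = pdf_area * dist**2 / max(cos_theta, 1e-12)
    return d, dist, pdf_solid_angle
```

The vertex's contribution is then its throughput divided by pdf_solid_angle, provided a shadow ray confirms the connecting segment is unoccluded.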

Thanks again for your assistance and your enduring patience in replying to all my issues!

MaximilianBader commented 3 years ago

Hi @Speierers,

now that I've been trying to model the effects in a different form, I'm wondering about the implementation of a volume emitter. Instead of creating a large number of small emitters, one in each voxel of a volume, I'd like to define a general volume and assign a light-emitting density function to it. Is it possible to implement this with Mitsuba 2?

Speierers commented 3 years ago

@MaximilianBader if this is unrelated to this thread, please open a separate issue 😉

Yes, you should be able to implement volumetric emitters in Mitsuba, although this will require a bit of work. I would recommend reading the volpath integrator and maybe taking a look at the Mitsuba 0.6/PBRT implementations of volumetric emitters to get started.
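As a starting point, one renderer-agnostic building block is importance-sampling emission positions from a voxel grid of emission densities: treat the densities as an unnormalized discrete distribution over voxels, pick a voxel, then jitter uniformly inside it. A small NumPy sketch (all names illustrative, not an existing Mitsuba API):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_emission_points(density, bbox_min, bbox_max, n):
    """Sample n points from a 3-D voxel grid, proportionally to each
    voxel's emission density; return points and per-point pdf (1/volume)."""
    density = np.asarray(density, dtype=np.float64)
    bbox_min = np.asarray(bbox_min, dtype=np.float64)
    bbox_max = np.asarray(bbox_max, dtype=np.float64)
    cell_prob = density.ravel() / density.sum()
    # Choose voxels with probability proportional to their density
    idx = rng.choice(density.size, size=n, p=cell_prob)
    ijk = np.stack(np.unravel_index(idx, density.shape), axis=1)
    # Jitter uniformly inside each chosen voxel
    extent = (bbox_max - bbox_min) / np.array(density.shape)
    points = bbox_min + (ijk + rng.random((n, 3))) * extent
    pdf = cell_prob[idx] / np.prod(extent)
    return points, pdf

# Example: a 4x4x4 grid in the unit cube where one voxel emits 10x the others
grid = np.ones((4, 4, 4))
grid[2, 2, 2] = 10.0
pts, pdf = sample_emission_points(grid, [0, 0, 0], [1, 1, 1], 10_000)
```

The returned pdf is what a volumetric-emitter sampling routine would report back to the integrator so the Monte Carlo estimate stays unbiased.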

In the future we might support this natively, but this is a low priority for us.