mitsuba-renderer / mitsuba2

Mitsuba 2: A Retargetable Forward and Inverse Renderer

[❔ other question] - Share emitter within a scene #616

Closed jaroslavknotek closed 2 years ago

jaroslavknotek commented 2 years ago

Summary

I want to use differentiable rendering for a scene with several objects and two point lights. The lights are two identical bulbs whose intensity I don't know -> the intensity is one of the parameters I want to find.

If you create two separate point lights, you will get the following values in the parameter map.

ParameterMap[
    PointLight_1.intensity.value,
    PointLight_2.intensity.value,
...
]

This is not a desirable state for differentiation: you have to differentiate two parameters and, even worse, you have no way to ensure the intensities stay equal.

To solve this issue, you would want to share the light definition. I tried two different approaches; both failed. The details of each attempt follow.

1 - Instantiate a shapegroup

Scene:

{
        "cam_light_group":
        { 
            "type":"shapegroup", 
            "id":"cam_light",
            "cam":{ 
                 "type":"sphere", # simulating point 
                 "emmiter":{
                    "type":"area",
                    "radiance":{             
                        "type":"rgb",
                        "value":light_intensity            
                    },
                 },
                "to_world":ScalarTransform4f.translate(Vector3f(x+light_offset,y,z))
            },
        },
     "cam_instance_1":{
           "type": "instance",
           "my_ref": {
                "type": "ref",
                "id": "cam_light",
           }
     },
    "cam_instance_2": ... same as cam_instance_1
}

Running this, I get the error

RuntimeError: ​[ShapeGroup] Instancing of emitters is not supported

2 - Reference an emitter

I tried to create an emitter template and then reference it:

        "emmiter_ref":{
            "id":"my_emmiter",
            "type":"area",
            "radiance":{             
                "type":"rgb",
                "value":light_intensity            
            },
       },
        "cam_instance_1":{  
                "type": "ref",
                "id": "emmiter_ref",
     },

This way, I get the error

RuntimeError: ​[xml_v.cpp:208] Reference found at the scene level: cam_instance_1

Question

How do I achieve my goal - two lights sharing one light intensity that I can differentiate? I expect the parameter map to look like this:

ParameterMap[
    PointLight_template.intensity.value,
    PointLight_instance_1. ...
    PointLight_instance_2. ...
...
]

Thank you.

Speierers commented 2 years ago

In order for the two emitters to have the same intensity throughout the optimisation process, and for the gradients to flow properly to that variable, you will need to do this directly in Python.

For instance, you can create a Float variable for the intensity, enable gradient tracking on it and then use it to overwrite the emitter intensity parameters in params. Make sure that the optimiser operates on that variable and also that the params values are properly updated at every iteration step.
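A rough sketch of this pattern (untested; the `PointLight` parameter names and the `scene` variable are assumptions based on the code shown elsewhere in this thread):

```python
import enoki as ek
from enoki.cuda_autodiff import Float32 as FloatD
from mitsuba.python.util import traverse

params = traverse(scene)  # `scene` loaded elsewhere

# One latent variable, shared by both emitters
shared_intensity = FloatD(100.0)
ek.set_requires_gradient(shared_intensity)

# Overwrite both emitter parameters with the *same* variable so that
# gradients from either light accumulate in `shared_intensity`
params['PointLight.intensity.value'] = shared_intensity
params['PointLight_1.intensity.value'] = shared_intensity
params.update()  # propagate the changes into the scene

# ... render, compute a scalar loss, then:
# ek.backward(loss)
# grad = ek.gradient(shared_intensity)
```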

jaroslavknotek commented 2 years ago

Hello, I will take a look at the proposed solution and let you know. (I will close the issue unless I encounter any problems with referencing the variable.) I didn't find any tutorial, so it will be trial and error; expect additional questions.

jaroslavknotek commented 2 years ago

Hello, I tried to implement your suggestion but I keep running into problems. I searched for what you suggested and found this source, which contains an example of how to create a differentiable parameter.

Therefore I created the following scene:

import mitsuba
mitsuba.set_variant('gpu_autodiff_spectral')
from mitsuba.core import ScalarTransform4f, Transform3f,ScalarTransform3f
from enoki.scalar import Vector3f
from mitsuba.core.xml import load_file, load_dict
from mitsuba.python.util import traverse

# these imports come from the tutorial
import enoki as ek
from enoki.cuda_autodiff import Float32 as FloatD

# This is the differentiable variable
cam_light_intensity = FloatD(100)
ek.set_requires_gradient(cam_light_intensity)

light_offset = 10
x,y,z = 0,0,0

scene = {
    "type" : "scene",
    "myintegrator" : {
        "type":"volpath",
        "samples_per_pass":4,
        "max_depth": 8,
    },
    "light_1":{
        "type":"point",
        "intensity":{
            "type":"rgb",
            "value":cam_light_intensity},
        "to_world":ScalarTransform4f.translate(Vector3f(x+light_offset,y,z))
    },
    "light_2":{
        "type":"point",
        "intensity":{
            "type":"rgb",
            "value":cam_light_intensity},
        "to_world":ScalarTransform4f.translate(Vector3f(x-light_offset,y,z))
    },
    "sphere":{
        "type":"sphere",
        "to_world":ScalarTransform4f.translate(Vector3f(x,y +1 ,z))
    },

}
scene["mysensor"]=  { } # skipped for clarity

scene= load_dict(scene)
params = traverse(scene)    
params

I didn't even get to the differentiation, as I get the following error:

RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_190/1602723431.py in <module>
     69 
     70 
---> 71 scene= load_dict(scene)
     72 params = traverse(scene)
     73 params

RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details)

Can you help me resolve it? Or can you provide me with a working example? I really struggle to wrap my head around the relationship between mitsuba2 and enoki. On the web, there are mentions of all three versions of Mitsuba (0.6, 2, and even Mitsuba 3 -> see the mitsuba-tutorials repo).

Thank you very much.

Speierers commented 2 years ago

Mitsuba 3 hasn't been released yet.

The scene description dictionary expects simple scalar types like float. It doesn't support Enoki types.

Here cam_light_intensity should be a float.

You can then use traverse to make it differentiable.
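Concretely, something like this (a sketch only; the `PointLight` parameter name is an assumption based on the output shown later in this thread):

```python
from mitsuba.core.xml import load_dict
from mitsuba.python.util import traverse

scene = load_dict({
    "type": "scene",
    "light_1": {
        "type": "point",
        # plain Python float here, not a FloatD:
        "intensity": {"type": "rgb", "value": 100.0},
    },
    # ... rest of the scene omitted ...
})
params = traverse(scene)
# The value is now exposed as a scene parameter that can be made
# differentiable, e.g.:
# ek.set_requires_gradient(params['PointLight.intensity.value'])
```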

jaroslavknotek commented 2 years ago

Thank you, I managed to share the light, but how do I add it to the parameter list? I tried the following:

# ...
# the code was omitted for clarity

scene= load_dict(scene)
params = traverse(scene)    
diff_cam_light = FloatD(100)

# 1 - sharing the light between intensities
params['PointLight.intensity.value']= diff_cam_light
params['PointLight_1.intensity.value']= diff_cam_light

# 2 - marking it for differentiation
ek.set_requires_gradient(diff_cam_light)

# 3 - add it to parameters
# none of the following updates works
params.set_property('diff_light', diff_cam_light)
params.properties.update('diff_light', diff_cam_light)
params['diff_light'] = diff_cam_light

# 4 - update the parameters
params.update()

# 5 - mark the parameter for optimization
params.keep('diff_light')

# 6 - optimization itself

How do I perform step 3 so that I can then perform step 5 without an error?

Speierers commented 2 years ago

You don't need to add this parameter to params. Instead, you use it to overwrite values in params (as you did in step 1 above).

jaroslavknotek commented 2 years ago

Thank you for the suggestion. However, I can't confirm that this works.

Below is a complete example you can run to see for yourself. The scene contains one ball with two lights at its poles. First, I create a reference image with the light intensity set to 1000. Then I turn the lights down to 100 using the suggested approach.

Then I marked one light for differentiation (step 2), thinking that it would also influence the other light, as the value should be shared. It didn't work: the optimization changed only the light kept for optimization; the other one kept the value it had before differentiation.

NOTE: I also tried to mark both lights for differentiation (e.g. params.keep(['PointLight.intensity.value', 'PointLight_1.intensity.value'])), which didn't work. Each light was differentiated on its own -> in the end I printed out the differentiated values and they differed.

How do I achieve having one intensity value optimized for both of the lights?

The code:

import mitsuba
mitsuba.set_variant('gpu_autodiff_rgb')

from mitsuba.core import ScalarTransform4f, Transform3f,ScalarTransform3f 
from enoki.scalar import Vector3f
from mitsuba.core.xml import load_file, load_dict
from mitsuba.python.util import traverse

from mitsuba.core import Bitmap, Struct

import enoki as ek
from enoki.cuda_autodiff import Float32 as FloatD

import numpy as np
import matplotlib.pyplot as plt

from mitsuba.python.autodiff import render, write_bitmap
from mitsuba.python.autodiff import Adam

is_replaced = True
light_offset = 10
x,y,z = 0,0,0

w = h = 100
spp = 64

cam_light_intensity = 1000

scene = {
    "type" : "scene",
    "myintegrator" : {
        "type":"path",
        "samples_per_pass":4,
        "max_depth": 8,
    },
    "light_1":{
        "type":"point",
        "intensity":{
            "type":"rgb",
            "value":cam_light_intensity},
        "to_world":ScalarTransform4f.translate(Vector3f(x+light_offset,y,z))
    },
    "light_2":{
        "type":"point",
        "intensity":{
            "type":"rgb",
            "value":cam_light_intensity},
        "to_world":ScalarTransform4f.translate(Vector3f(x-light_offset,y,z))
    },
    "sphere":{
        "type":"sphere",
        "to_world":ScalarTransform4f.translate(Vector3f(x,y +1 ,z))
    },

}
scene["mysensor"]= {
    "type" : "perspective",
    "fov":45,
    "fov_axis":"smaller",
    "near_clip": 0.001,
    "far_clip": 10000.0,
    "focus_distance": 1000,
    "to_world": ScalarTransform4f.look_at(Vector3f(x,y-2,z),Vector3f(x,y+1,z), Vector3f(0,0,1)),
    "myfilm" : {
        "type" : "hdrfilm",
        "rfilter" : { "type" : "box"},
        "width" : w,
        "height" : h,
        "pixel_format": "rgb",
        "component_format":"float32"
    },
    "mysampler" : {
        "type" : "independent",
        "sample_count" : 64,
    }
}

scene= load_dict(scene)
#image_ref = develop_img(scene)
image_ref = render(scene, spp=spp)

params = traverse(scene)  
print(params)
print('initial light', params['PointLight.intensity.value'])

changed_intensity = 100
# 1 - sharing the light between intensities
diff_cam_light = FloatD(changed_intensity)
ek.set_requires_gradient(diff_cam_light)
params['PointLight.intensity.value']= diff_cam_light
params['PointLight_1.intensity.value']= diff_cam_light

params.update()

# 2 - mark the parameter for optimization
params.keep(['PointLight.intensity.value'])

# 3 - optimization itself
print("updated cam 1",params['PointLight.intensity.value'])

opt = Adam(params, lr=100)
before_ref = render(scene, optimizer=opt, unbiased=True, spp=spp)
image = None
for it in range(100):
    image = render(scene, optimizer=opt, unbiased=True, spp=spp)
    error = ek.hsum(ek.sqr(image - image_ref)) / len(image)
    ek.backward(error)

    opt.step()
    if it %20 == 0:
        print('Iteration', it, "error", error)

print("after cam 1",traverse(scene)['PointLight.intensity.value'])

fig,axs = plt.subplots(1,3, figsize = (16,12))
axs[0].imshow(np.array(image_ref).reshape(h,w,3))
axs[0].set_title("original")
axs[1].imshow(np.array(before_ref).reshape(h,w,3))
axs[1].set_title("before optimization")
axs[2].imshow(np.array(image).reshape(h,w,3))
axs[2].set_title("after optimization")

Result: image

Speierers commented 2 years ago

It is important that the Adam optimizer optimizes the "latent variable" diff_cam_light and not the scene parameters in your case. IIRC the Optimizer constructor takes any arbitrary dict so you can build it like this:

opt = Adam({'var': diff_cam_light}, lr=100)

Edit: actually this is not really supported in Mitsuba 2, so you will need to edit the Adam.step code so that it doesn't call self.params.update(), and call it manually in your script instead. Note that this whole optimizer API was greatly improved in the upcoming Mitsuba 3 version.

Then, in your optimization loop, make sure to update the scene parameters with the new optimized value and propagate those changes through the scene:

params['PointLight.intensity.value'] = opt.params['PointLight.intensity.value']
params['PointLight_1.intensity.value'] = opt.params['PointLight_1.intensity.value']
params.update()

jaroslavknotek commented 2 years ago

I changed the code above in the following way:

# 2 - mark the parameter for optimization
params.keep(['PointLight.intensity.value','PointLight_1.intensity.value'])

# 3 - optimization itself
print("updated cam 1",params['PointLight.intensity.value'])

opt = Adam(params, lr=100)
before_ref = render(scene, optimizer=opt, unbiased=True, spp=spp)
image = None
for it in range(100):
    image = render(scene, optimizer=opt, unbiased=True, spp=spp)
    params['PointLight.intensity.value'] = opt.params['PointLight.intensity.value']
    params['PointLight_1.intensity.value'] = opt.params['PointLight.intensity.value']
    params.update()

    error = ek.hsum(ek.sqr(image - image_ref)) / len(image)
    ek.backward(error)

    opt.step()

    if it %20 == 0:
        print('Iteration', it, "error", error)

print("after cam 1",traverse(scene)['PointLight.intensity.value'])

And now it works. The values after the optimization are correct and equal:

after cam 1 [[997.956, 997.956, 997.956]]
after cam 2 [[997.956, 997.956, 997.956]]

Thank you.