BachiLi / redner

Differentiable rendering without approximation.
https://people.csail.mit.edu/tzumao/diffrt/
MIT License

Using Redner with Mitsuba scene files and differentiating w.r.t object transformations #120

Closed abhinavvs closed 4 years ago

abhinavvs commented 4 years ago

I am trying to load a simple Mitsuba scene file and render it using Redner, and I have a couple of questions about this:

Q1) I am able to do this (by modifying the XML scene file a bit), but the images rendered using Mitsuba and Redner look very different.

(Left) Mitsuba w/ 128 spp; (Right) Redner w/ 256 spp. Both renderings take approximately the same time, though.

The image rendered using Redner seems darker and also has a lot of pixel noise on the side walls compared to the one from Mitsuba. What could be the reason for this? Is there an easier way to get cleaner rendered images using Redner, e.g., different settings I could use to make things better? I am currently using the following code with Redner:

scene = pyredner.load_mitsuba('hallway/hallway_redner.xml')
scene_args = pyredner.RenderFunction.serialize_scene(\
    scene = scene,
    num_samples = 256,
    max_bounces = 32) # Set max_bounces = 5 for global illumination
render = pyredner.RenderFunction.apply
img = render(0, *scene_args)
pyredner.imwrite(img.cpu(), 'hallway/redner/result.png')

Q2) Next, I would like to perform some differentiable rendering with the scene shown above. In particular, I have added an object (a teapot) to the scene above, and I want to compute derivatives w.r.t different transformations of the object (translation, scaling, and rotations). The object description in the XML file is as follows:

<shape type="obj" id="teapot">
    <string name="filename" value="fragments/teapot.obj"/>
    <!-- Need to diff wrt some/all of the following Transformation variables -->
    <transform name="toWorld">          
        <scale value="0.002"/>
        <translate x="-1.5" y="0.2" z="0"/>             
    </transform>
    <ref id="red-paint"/>
</shape>

I am not able to find these transformation parameters in the scene loaded using pyredner.load_mitsuba(). How can something like this be done? I essentially want to compute gradients similar to the ones in Figs. 5 and 8 of your paper (the movement of the Stanford bunny).

Also, can I load a scene file from Mitsuba as shown in Q1 above and then add new objects to the scene separately using the Python interface, e.g., the teapot object in Q2? Any guidance on how this can be done would be very useful for me. Thanks in advance!

BachiLi commented 4 years ago

Looks like a bug. Can you attach the XML file? The transformation is baked in, but I can potentially make the parser output a dictionary of the transformation parameters.

abhinavvs commented 4 years ago

Thanks for the super quick response, @BachiLi.

I am attaching the XML file as a ZIP: hallway_redner.zip. Please let me know if you need any other details.

Re: transformation parameters - that would be really useful for me. If there is an easier way to do this as opposed to loading an XML file, I can do that too (my scene is relatively simple, so I can hopefully define the whole scene in Python as well).


Edit: I am including the XML scene file with all the required auxiliary (.obj) files here so that you will be able to render the scene: hallway_redner_full.zip

abhinavvs commented 4 years ago

I also noticed another small bug in the load_mitsuba function: it swaps the width and height values while parsing the film attributes. For example, I had to modify the XML as follows to get a 384x512 (width x height) image:

<film type="hdrfilm">
    <!-- Width and height values swapped -->
    <integer name="width" value="512"/>
    <integer name="height" value="384"/>
    <string name="pixelFormat" value="rgb"/>
    <rfilter type="box"/>
</film>

This is a very minor bug, but I just wanted to bring this to your attention.
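For context, a swap like this usually comes from mixing up the image buffer's axis order with the XML's width/height order: in NumPy/PyTorch convention the buffer is (height, width, channels). A minimal sketch of the correct mapping (illustrative only, not redner's actual parser code):

```python
import numpy as np

# The <film> element specifies width and height separately; the image
# buffer must be allocated as (rows, cols, channels) = (height, width, 3).
width, height = 512, 384  # values from the XML film attributes
img = np.zeros((height, width, 3))

print(img.shape)  # (384, 512, 3)
```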

BachiLi commented 4 years ago

I don't have the .obj files so I couldn't check, but I think the bug is caused by the following line: <srgb name="diffuseReflectance" value="#f7f8f4" /> (we ignored srgb attributes before). It should be fixed now in 0.4.9. It might not fully match Mitsuba's rendering, since I haven't implemented GGX or the internal scattering between the diffuse/specular layers. Adding GGX is relatively easy, so if you think this is important I can add it.

abhinavvs commented 4 years ago

Thanks for the updates and the fixes to the bugs reported above. Re: GGX - it would be great if you could add that functionality (especially if it's easy to do)!

I installed the latest version of Redner and ran the same code. It parses the srgb attributes now (and the film attributes work correctly, thanks!), but the results are unchanged - the render still looks a lot like the image in my first comment. Here is the full scene file with the .obj files for you to check: hallway_redner_full.zip

Also, I get this error when I try to write the output as an EXR image (.png format still works):

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-3-33e3b151ac9e> in <module>
     24 render = pyredner.RenderFunction.apply
     25 img = render(0, *scene_args)
---> 26 pyredner.imwrite(img.cpu(), 'hallway/redner/result2.exr')
     27 pyredner.imwrite(img.cpu(), 'hallway/redner/result2.png')
     28 # target = pyredner.imread('hallway/redner/result.exr')

~/anaconda3/lib/python3.7/site-packages/pyredner/image.py in imwrite(img, filename, gamma, normalize)
     46         pixels_g = img_g.astype(np.float16).tostring()
     47         pixels_b = img_b.astype(np.float16).tostring()
---> 48         HEADER = OpenEXR.Header(img.shape[1], img.shape[0])
     49         half_chan = Imath.Channel(Imath.PixelType(Imath.PixelType.HALF))
     50         HEADER['channels'] = dict([(c, half_chan) for c in "RGB"])

NameError: name 'OpenEXR' is not defined

Any chance the latest commit could have caused this?
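For what it's worth, a NameError like this typically appears when an optional import is wrapped in try/except but no fallback name is bound. An illustrative sketch of the pattern (not redner's actual code):

```python
# If the except branch forgets to bind the name, any later reference to
# OpenEXR raises NameError instead of a friendly "not installed" message.
try:
    import OpenEXR  # optional dependency for .exr output
    HAS_OPENEXR = True
except ImportError:
    OpenEXR = None  # binding a fallback avoids a NameError later
    HAS_OPENEXR = False

def can_write_exr():
    return HAS_OPENEXR and OpenEXR is not None
```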

BachiLi commented 4 years ago

The OpenEXR issue is fixed in 0.4.10 (accidentally uploaded my debug code). Looking into the Mitsuba parser issue.

BachiLi commented 4 years ago

This is what I get with Mitsuba 0.6: hallway_redner, and it looks very different from what you posted. Any idea why? The internal scattering (nonlinear=true in white-paint) might be one major source of difference. Also, I think Mitsuba and redner handle Fresnel differently (I use Schlick's approximation), so that could be another reason.

In general I guess we want to add more options to the material models, and ultimately we want a programmable BSDF.

abhinavvs commented 4 years ago

OpenEXR issue is fixed and works fine now, thanks!

I am actually using Mitsuba 2.0; not sure if that makes a difference. I re-ran everything, and here is how the rendered images look: image. The left is Redner and the right is Mitsuba 2.0; both are rendered with 128 spp.

This is the white-paint definition I used:

    <!-- White paint definition -->
    <bsdf type="twosided" id="white-paint">
        <bsdf type="roughplastic">
            <!-- The BEHR Premium paint is documented as having an RGB color
                 of (247, 248, 244) = #f7f8f4. This is used, although the
semi-gloss/eggshell sheen affects this too.
            <srgb name="diffuseReflectance" value="#f7f8f4" /> -->
            <rgb name="diffuse_reflectance" value="0.964, 0.968, 0.953" />
            <string name="distribution" value="ggx" />
            <float name="alpha" value="0.08" />
            <!-- BEHR paint has an acrylic finish, this is an approximate match.
            <string name="intIOR" value="acrylic glass" />
            <boolean name="nonlinear" value="true" /> -->

        </bsdf>
    </bsdf>

I removed the nonlinear attribute (internal scattering) and also changed the srgb value to an rgb value for diffuse_reflectance to make a fair comparison, but the results are still quite different.

Also, we can see that the teapot's lid seems to be missing in the Redner image while it is present in Mitsuba's. Any idea why that's happening?

BachiLi commented 4 years ago

The teapot lid issue is fixed in 0.4.12 (will be uploaded in ~30 mins). I could get similar brightness in renderings between redner and Mitsuba. Make sure you set the number of bounces and gamma correction correctly.

Redner still produces more fireflies compared to Mitsuba. I'm looking into it.
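As a generic stopgap (not a redner feature), fireflies can be suppressed in post by clamping extreme radiance outliers before tone mapping, at the cost of a small bias:

```python
import numpy as np

# Clamp radiance outliers: a single firefly pixel can dominate the frame,
# so cap values at a chosen threshold before tone mapping.
img = np.array([0.2, 0.3, 55.0, 0.25])  # one firefly at 55.0
clamped = np.minimum(img, 4.0)

print(clamped.tolist())  # [0.2, 0.3, 4.0, 0.25]
```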

abhinavvs commented 4 years ago

Thanks! How do I set the gamma correction for Mitsuba? I am using the default code the authors provided in their Mitsuba 2 tutorials, which uses Bitmap.

I will stay tuned for the other fixes.

BachiLi commented 4 years ago

I don't know ; ) Comparing .exr files is the safe bet. I'm using Mitsuba 0.6.
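To expand on the .exr suggestion: .exr stores linear radiance, while .png viewers expect gamma-encoded values, so a mismatched (or doubly applied) gamma makes one render look darker. A rough sketch of the usual encoding, assuming a simple 1/2.2 power curve rather than the exact sRGB transfer function:

```python
import numpy as np

# Linear radiance -> display-ready values via an approximate 1/2.2 gamma.
# Comparing the linear .exr buffers directly side-steps this entirely.
linear = np.array([0.0, 0.25, 0.5, 1.0])
encoded = np.clip(linear, 0.0, 1.0) ** (1.0 / 2.2)
```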

abhinavvs commented 4 years ago

Thanks, @BachiLi - I compared the .exr files. They look exactly like the ones I posted above too.

I also compared the raw rendered output, and the per-pixel values are much smaller for Redner than for Mitsuba. Not sure why that's the case!
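A quick way to quantify such a brightness gap is to compare the mean pixel values of the two linear buffers; sketched here with synthetic stand-ins for the two .exr images:

```python
import numpy as np

# Stand-in buffers: in practice these would be the two .exr renders
# loaded as float arrays (e.g., via pyredner.imread).
img_redner = np.full((4, 4, 3), 0.5)
img_mitsuba = np.full((4, 4, 3), 0.8)

# A ratio well below 1 confirms one render is uniformly darker.
ratio = img_redner.mean() / img_mitsuba.mean()
print(ratio)  # ≈ 0.625
```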

BachiLi commented 4 years ago

So, there are three differences between redner's and Mitsuba's material models:

1. The default values for roughplastic are different. (This is fixed in 0.4.13.)
2. The diffuse reflection in Mitsuba's roughplastic is multiplied by an internal scattering term, obtained by integrating over the hemisphere in the diffuse layer (even if nonlinear is set to false).
3. Redner uses an approximated Fresnel reflection model and parameterizes it differently; in particular, the specular reflectance represents the index of refraction and the tint of metal materials simultaneously.

I've verified that if I manually remove the Fresnel term (3) and the internal scattering term (2) in both redner and Mitsuba, the renderings match after setting the specular reflectance to (1, 1, 1):

Redner: redner

Mitsuba: mitsuba

(There are more fireflies in Mitsuba because redner disables certain caustic light paths.)

I'm not sure if we want to move to Mitsuba's model. Redner's model is actually more commonly used (it's the Cook-Torrance model with the Fresnel term replaced by Schlick's approximation). Game engines also use a very similar parameterization (e.g., https://learnopengl.com/PBR/Theory or https://google.github.io/filament/Filament.md.html#materialsystem).
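For reference, the Schlick approximation mentioned here replaces the full Fresnel equations with a simple polynomial in the cosine of the incidence angle (this is the standard textbook formula, not code from redner):

```python
def schlick_fresnel(f0, cos_theta):
    # F(theta) = F0 + (1 - F0) * (1 - cos(theta))^5,
    # where F0 is the reflectance at normal incidence.
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(0.04, 1.0))  # 0.04 at normal incidence
# At grazing angles (cos_theta -> 0) the reflectance approaches 1.0.
```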

Since matching Mitsuba's rendering is not the ultimate goal of redner, can you explain why you want the particular material model implemented in Mitsuba? I can certainly implement a more complex material model, but I want to know about the use cases first.

BachiLi commented 4 years ago

By the way, since Schlick's approximation computes R0 as ((n1-n2)/(n1+n2))^2, setting the specular reflectance to that value (≈0.04 for Mitsuba's default setting) can bring redner's result closer to Mitsuba's.
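Plugging in numbers (assuming n1 = 1.0 for air and n2 = 1.49, Mitsuba's default roughplastic interior IOR) reproduces the ~0.04 figure:

```python
# R0 = ((n1 - n2) / (n1 + n2))^2, the reflectance at normal incidence.
n1, n2 = 1.0, 1.49  # air / Mitsuba's default interior IOR (assumed here)
r0 = ((n1 - n2) / (n1 + n2)) ** 2

print(round(r0, 4))  # 0.0387, i.e. roughly the 0.04 mentioned above
```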

Mitsuba's rendering with only the internal scattering term removed (Fresnel is present, specular reflectance is (1, 1, 1)): mitsuba

Redner with the specular reflectance set to (0.04, 0.04, 0.04): redner

Mitsuba's rendering with everything included: mitsuba_scatter

The full model used by Mitsuba takes the energy absorption of the specular layer into account, so the walls' diffuse reflection is darker.

abhinavvs commented 4 years ago

Thanks for working with me on this. I am trying to emulate the behavior of a particular type of paint and render images that are as physically accurate as possible.

The white-paint definition used in the scene file was given to me by a collaborator who said it was a close (and hence ideal) approximation of the paint of interest. All I need is a BSDF that closely matches the parameters I am currently using, and a rendering engine that outputs a physically accurate image (like what Mitsuba promises).

BachiLi commented 4 years ago

I would say both redner's and Mitsuba's models are capable of describing this BRDF. You may set the diffuse reflectance lower (or even set it as an unknown and optimize) to match Mitsuba's rendering better. Even currently, I would say redner matches the result reasonably well.

Ultimately, all models are wrong, and both Mitsuba's and redner's models approximate heavily. I would suggest trying redner's current model first. If that's too inaccurate and you find Mitsuba's internal scattering model fits the data better, let me know and I'll implement Mitsuba's model. I can implement GGX in the coming days, since I expect people will like heavy-tailed highlights.

abhinavvs commented 4 years ago

Sounds good to me. I will use your pointers for now and get back to you if I need a better BRDF model. Thanks for all your help with this.

Finally, re: Q2 of my original post - is it possible to obtain the transformation attributes directly when parsing the XML file? Currently, I am using the technique from your pose estimation tutorial to extract the translation attribute, and then manually adding rotations and scaling. But if you do make the parser output a dictionary of the transformations, please let me know and I will modify my code accordingly.
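Until the parser change lands, the pose-estimation-tutorial approach generalizes: keep the scale, rotation, and translation as torch tensors with requires_grad=True and rebuild the shape's vertices from them each iteration, so gradients flow back to the transformation parameters. A minimal self-contained sketch (pure PyTorch; the vertex tensor here is a stand-in for a shape loaded by pyredner):

```python
import torch

def transform_vertices(vertices, scale, rotation, translation):
    # vertices: (N, 3); scale: scalar; rotation: (3, 3); translation: (3,)
    return (scale * vertices) @ rotation.t() + translation

# Stand-in for shape.vertices from a loaded scene.
base_vertices = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])

# Differentiable transformation parameters, mirroring the XML toWorld block.
scale = torch.tensor(0.002, requires_grad=True)
translation = torch.tensor([-1.5, 0.2, 0.0], requires_grad=True)
rotation = torch.eye(3)  # identity; a learned rotation matrix would go here

v = transform_vertices(base_vertices, scale, rotation, translation)
# In practice v would be assigned back to the shape and rendered; here we
# backpropagate through a dummy scalar just to show gradients reach the params.
v.sum().backward()

print(translation.grad)  # tensor([2., 2., 2.]) -- one per coordinate, summed over 2 vertices
```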

BachiLi commented 4 years ago

Ah yes. Almost forgot. I'll do that in my free time as well.

abhinavvs commented 4 years ago

Great, thanks!

As for the "more fireflies in Redner" issue, what's the solution for that? Is that fixed in one of the recent commits too?

BachiLi commented 4 years ago

Yes that is fixed. That was simply caused by different default parameter values.

abhinavvs commented 4 years ago

Quick question: I re-installed the latest version of Redner by running pip install --upgrade redner-gpu. However, the changes discussed in this thread don't seem to be installed - the results are the same as before.

Any idea what's wrong? Should I do something with the cloned git repository as well to make things work (I cloned the redner repo a long time ago, back when installation had to be done from source)?

BachiLi commented 4 years ago

Maybe uninstall redner/redner-gpu first, and make sure the version is 0.4.13.

BachiLi commented 4 years ago

Wait, the GPU version was not uploaded. One moment.

BachiLi commented 4 years ago

Uploaded. You should be able to update with pip install --upgrade.