cmu-ci-lab / inverseTransportNetworks

Towards Learning-based Inverse Subsurface Scattering
http://imaging.cs.cmu.edu/inverse_transport_networks/
GNU General Public License v3.0

Inverse Transport Networks

This repository is an implementation of the method described in the following paper:

"Towards Learning-based Inverse Subsurface Scattering" [project website]
Chengqian Che, Fujun Luan, Shuang Zhao, Kavita Bala, and Ioannis Gkioulekas
IEEE International Conference on Computational Photography (ICCP), 2020

Getting Started

These instructions contain three parts: the rendering scripts and dataset, the differentiable renderer, and the learning code.

Rendering Scripts and Dataset

We used Mitsuba to generate our dataset. Image file names follow the convention: [shape]_e[sunlight_direction]_d[sigmaT]_a[albedo]_g[g]_q[sampleCount].exr

For example, one can render the following scenes:

mitsuba scenes/cube_sunsky.xml -Dmeshmodel=cube -DsigmaT=100 -Dalbedo=0.8 -Dg=0.2 -DnumSamples=4096 -Dx=0.433 -Dy=0.866 -Dz=0.25 -o cube_e30_d100_a0.8_g0.2_q4096.exr
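The naming convention above can be parsed back into its rendering parameters. The following is a small sketch; the `parse_image_name` helper is hypothetical and not part of this repository:

```python
import re

# Hypothetical helper: recover rendering parameters from a file name
# following [shape]_e[dir]_d[sigmaT]_a[albedo]_g[g]_q[samples].exr
NAME_RE = re.compile(
    r"(?P<shape>[^_]+)"
    r"_e(?P<sunlight_direction>[^_]+)"
    r"_d(?P<sigmaT>[^_]+)"
    r"_a(?P<albedo>[^_]+)"
    r"_g(?P<g>[^_]+)"
    r"_q(?P<sampleCount>[^.]+)\.exr$"
)

def parse_image_name(name):
    """Return a dict of the parameters encoded in the file name, or None."""
    m = NAME_RE.match(name)
    return m.groupdict() if m else None
```

For the example above, `parse_image_name("cube_e30_d100_a0.8_g0.2_q4096.exr")` recovers the shape `cube`, sigmaT `100`, albedo `0.8`, g `0.2`, and sample count `4096`.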

One can also render a class of images using the following bash scripts with Sun Grid Engine:

./create_jobs_sge.sh
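The SGE script sweeps over scene parameters and submits one render job per combination. A minimal sketch of that kind of sweep is below; the parameter values are illustrative placeholders, not the dataset's actual grid:

```python
import itertools

# Hypothetical sketch of the parameter sweep behind create_jobs_sge.sh:
# build one mitsuba command line per (sigmaT, albedo, g) combination.
# The value grids here are illustrative, not the dataset's actual ranges.
sigmaTs = [50, 100, 200]
albedos = [0.6, 0.8]
gs = [0.0, 0.2]

commands = []
for sigmaT, albedo, g in itertools.product(sigmaTs, albedos, gs):
    out = f"cube_e30_d{sigmaT}_a{albedo}_g{g}_q4096.exr"
    commands.append(
        f"mitsuba scenes/cube_sunsky.xml -Dmeshmodel=cube "
        f"-DsigmaT={sigmaT} -Dalbedo={albedo} -Dg={g} -DnumSamples=4096 "
        f"-Dx=0.433 -Dy=0.866 -Dz=0.25 -o {out}"
    )
```

Each generated command line could then be wrapped in a job file and submitted to the cluster scheduler.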

Differentiable Renderer

We developed our differentiable renderer based on Mitsuba 0.5.0; it compiles the same way as Mitsuba. To compile the renderer and render a single image with derivatives:

cd renderer
scons
mitsuba scenesAD/cube_sunsky.xml -Dmeshmodel=cube -DsigmaT=100 -Dalbedo=0.8 -Dg=0.2 -DnumSamples=4096 -Dx=0.433 -Dy=0.866 -Dz=0.25 -o cube_e30_d100_a0.8_g0.2_q4096.exr

The current renderer supports computing derivatives with respect to the parameters listed below, which can be defined in the scenesAD files. The output image contains multiple channels with the corresponding channel names:

| channel name | description |
| --- | --- |
| forward | forward rendering |
| sigmaT | derivatives with respect to the extinction coefficient |
| albedo | derivatives with respect to the volumetric albedo |
| g | derivatives with respect to the average cosine of the Henyey-Greenstein phase function |
| reflectance | derivatives with respect to the surface albedo |
| alpha | derivatives with respect to the surface roughness |
| weight | derivatives with respect to the weights in a mixture model |
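These derivative channels can drive gradient-based inverse rendering. As a minimal sketch (with hypothetical, synthetic pixel values standing in for real EXR channels), the gradient of an L2 image loss with respect to sigmaT combines the forward channel with the sigmaT channel:

```python
# Minimal sketch: one gradient step on an L2 image loss using the forward
# and sigmaT channels. The pixel values below are synthetic placeholders;
# in practice they would be read from the rendered multi-channel EXR.
forward = [0.50, 0.62, 0.40]    # forward-rendered pixels I(sigmaT)
d_sigmaT = [0.10, -0.05, 0.02]  # dI/dsigmaT from the sigmaT channel
target = [0.55, 0.60, 0.45]     # target photograph pixels

# L2 loss L = sum_i (I_i - T_i)^2, so dL/dsigmaT = sum_i 2 (I_i - T_i) dI_i/dsigmaT
grad = sum(2.0 * (i - t) * d for i, t, d in zip(forward, target, d_sigmaT))

sigmaT = 100.0
step = 1.0
sigmaT_new = sigmaT - step * grad  # one gradient-descent update
```

The same chain-rule pattern applies to the albedo, g, reflectance, alpha, and weight channels.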

Learning code

The code used to train and evaluate our approach is in the learning/ folder. Pre-trained models for 5 different networks can be downloaded here.

Built With

The networks were trained on Amazon EC2 clusters. All image names are in ITNSceneFiles/imgNames/. To evaluate our model, run:

python eval.py

To use our models to initialize analysis by synthesis, run:

python eval_calibrated.py