BachiLi / redner

Differentiable rendering without approximation.
https://people.csail.mit.edu/tzumao/diffrt/
MIT License
1.39k stars 139 forks

Optimization of diffuse reflection texture #64

Closed dkasuga closed 5 years ago

dkasuga commented 5 years ago

Hello, I have a similar question to issue #52. Can we optimize a diffuse reflection texture? I tried to implement code for this, but it raised "ValueError: can't optimize a non-leaf Tensor." Is diffuse reflection texture optimization not supported yet?

Thanks.
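
For reference, here is a minimal PyTorch-only snippet, independent of redner, that reproduces the same ValueError. My guess is that multiplying the initialization by 0.5 in my script below produces a non-leaf tensor in the same way:

import torch

# Arithmetic on a requires_grad tensor produces a *non-leaf* result,
# and torch.optim refuses to optimize non-leaf tensors.
x = torch.ones(3, requires_grad=True) * 0.5
try:
    torch.optim.Adam([x], lr=1e-2)
except ValueError as e:
    print(e)  # can't optimize a non-leaf Tensor

My full script: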

import pyredner
import torch

# Use GPU if available
pyredner.set_use_gpu(torch.cuda.is_available())

material_map, mesh_list, light_map = pyredner.load_obj('teapot.obj')

for _, mesh in mesh_list:
    mesh.normals = pyredner.compute_vertex_normal(mesh.vertices, mesh.indices)

# Setup camera
cam = pyredner.Camera(
    position=torch.tensor([0.0, 30.0, 200.0]),
    look_at=torch.tensor([0.0, 30.0, 0.0]),
    up=torch.tensor([0.0, 1.0, 0.0]),
    fov=torch.tensor([45.0]),  # in degrees
    clip_near=1e-2,  # needs to be > 0
    resolution=(256, 256),
    fisheye=False)

material_id_map = {}
materials = []
count = 0
for key, value in material_map.items():
    material_id_map[key] = count
    count += 1
    materials.append(value)

shapes = []
for mtl_name, mesh in mesh_list:
    shapes.append(pyredner.Shape(\
        vertices = mesh.vertices,
        indices = mesh.indices,
        uvs = mesh.uvs,
        normals = mesh.normals,
        material_id = material_id_map[mtl_name]))

envmap = pyredner.imread('sunsky.exr')
if pyredner.get_use_gpu():
    envmap = envmap.cuda()
envmap = pyredner.EnvironmentMap(envmap)

scene = pyredner.Scene(cam, shapes, materials, area_lights=[], envmap=envmap)
# Like the previous tutorial, we serialize and render the scene,
# save it as our target
scene_args = pyredner.RenderFunction.serialize_scene(\
    scene = scene,
    num_samples = 512,
    max_bounces = 1)
render = pyredner.RenderFunction.apply
img = render(0, *scene_args)
pyredner.imwrite(img.cpu(), 'results/texture_estimation/target.exr')
pyredner.imwrite(img.cpu(), 'results/texture_estimation/target.png')
target = pyredner.imread('results/texture_estimation/target.exr')
if pyredner.get_use_gpu():
    target = target.cuda()

# Init diffuse texture image
diffuse_reflectance = torch.ones([128, 128, 3],
                                 dtype=torch.float32,
                                 device=pyredner.get_device(),
                                 requires_grad=True) * 0.5
scene.materials[-1].diffuse_reflectance = pyredner.Texture(diffuse_reflectance)
# We need to serialize the scene again to get the new diffuse_reflectance
scene_args = pyredner.RenderFunction.serialize_scene(\
    scene = scene,
    num_samples = 512,
    max_bounces = 1)
# Render the initial guess.
img = render(1, *scene_args)
# Save the images.
pyredner.imwrite(img.cpu(), 'results/texture_estimation/init.png')
# Compute the difference and save the images.
diff = torch.abs(target - img)
pyredner.imwrite(diff.cpu(), 'results/texture_estimation/init_diff.png')

# Optimize for the diffuse texture.
optimizer = torch.optim.Adam([diffuse_reflectance], lr=1e-2)
# Run 200 Adam iterations.
for t in range(200):
    print('iteration:', t)
    optimizer.zero_grad()
    # Rebuild the texture from the current parameters so the mipmap
    # construction is part of the differentiated computation.
    scene.materials[-1].diffuse_reflectance = pyredner.Texture(
        diffuse_reflectance)
    # Forward pass: serialize the scene and render the image.
    scene_args = pyredner.RenderFunction.serialize_scene(\
        scene = scene,
        num_samples = 4, # We use less samples in the Adam loop.
        max_bounces = 1)
    # Important to use a different seed every iteration, otherwise the result
    # would be biased.
    img = render(t + 1, *scene_args)
    # Save the intermediate render.
    pyredner.imwrite(img.cpu(),
                     'results/texture_estimation/iter_{}.png'.format(t))
    # Compute the loss function. Here it is L2.
    loss = (img - target).pow(2).sum()
    print('loss:', loss.item())

    # Backpropagate the gradients.
    loss.backward()
    # Print the gradients
    print('texture.grad:', diffuse_reflectance.grad)

    # Take a gradient descent step.
    optimizer.step()

# Render the final result.
scene_args = pyredner.RenderFunction.serialize_scene(\
    scene = scene,
    num_samples = 512,
    max_bounces = 1)
img = render(202, *scene_args)
# Save the images and differences.
pyredner.imwrite(img.cpu(), 'results/texture_estimation/final.exr')
pyredner.imwrite(img.cpu(), 'results/texture_estimation/final.png')
pyredner.imwrite(
    torch.abs(target - img).cpu(), 'results/texture_estimation/final_diff.png')

BachiLi commented 5 years ago

I don't have time to look at the code yet, but take a look at tests/test_svbrdf.py or tests/test_teapot_normal_map.py for examples of optimizing textures. Both diffuse reflectance and normal maps are supported.
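
The core pattern in those tests looks roughly like the sketch below. This is illustrative rather than the exact test code: it assumes a scene and target image set up as in your script, and the variable names are mine. The key points are that the texture parameter is a leaf tensor (created directly with requires_grad=True, with no arithmetic applied to it afterwards) and that the material's Texture is rebuilt from it every iteration:

import torch
import pyredner

# Leaf tensor: create it directly with requires_grad=True and do not
# apply any arithmetic to it afterwards, otherwise it becomes non-leaf.
diffuse_tex = torch.full([128, 128, 3], 0.5,
                         dtype=torch.float32,
                         device=pyredner.get_device(),
                         requires_grad=True)
optimizer = torch.optim.Adam([diffuse_tex], lr=1e-2)

for t in range(200):
    optimizer.zero_grad()
    # Rebuild the Texture (and hence its mipmap) from the current parameters.
    scene.materials[-1].diffuse_reflectance = pyredner.Texture(diffuse_tex)
    scene_args = pyredner.RenderFunction.serialize_scene(
        scene=scene, num_samples=4, max_bounces=1)
    img = pyredner.RenderFunction.apply(t + 1, *scene_args)
    loss = (img - target).pow(2).sum()
    loss.backward()
    optimizer.step()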

dkasuga commented 5 years ago

Oh, I had accidentally overlooked tests/test_svbrdf.py. Thank you very much! However, another problem came up. I tried to run tests/test_svbrdf.py after pulling the latest repository, but the following AssertionError appeared.

File "test_svbrdf.py", line 187, in <module>
    max_bounces = 1)
  File "/opt/conda/lib/python3.7/site-packages/pyredner/render_pytorch.py", line 109, in serialize_scene
    assert (torch.isfinite (material.diffuse_reflectance.mipmap) .all ())
AssertionError

In detail, line 187 is:

args = pyredner.RenderFunction.serialize_scene(\
        scene = scene,
        num_samples = 4,
        max_bounces = 1)

Since this AssertionError occurred at around iteration 100, I tried changing the number of iterations from 200 to 100, and then it worked.

Why does torch.isfinite(material.diffuse_reflectance.mipmap) fail after more than 100 iterations?

BachiLi commented 5 years ago

Looks like a bug on my side. I think it is caused by a numerical issue with small roughness values. Will fix it asap.
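
To illustrate the kind of failure I mean (a standalone toy example, not redner's actual BRDF code): a GGX-style microfacet distribution term degenerates to 0/0 when the roughness reaches zero, and once a NaN appears in the forward or backward pass it propagates through the gradients into the other textures, which would explain the diffuse mipmap failing the isfinite check.

import math
import torch

# Toy GGX-style normal distribution term, only to show the failure mode.
def ggx_d(roughness, cos_h):
    alpha2 = (roughness * roughness) ** 2
    denom = math.pi * (cos_h * cos_h * (alpha2 - 1.0) + 1.0) ** 2
    return alpha2 / denom

print(ggx_d(torch.tensor(0.0), torch.tensor(1.0)))  # tensor(nan): 0/0 at zero roughness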

BachiLi commented 5 years ago

Should be fixed now. The problem was indeed due to small/negative roughness causing NaNs. I added some clamping code in the optimization script.
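
For anyone who hits this on an older checkout, the idea of the fix is to clamp the optimized textures back into a valid range after each optimizer step. A rough sketch, assuming the diffuse, specular, and roughness textures are being optimized as leaf tensors with these names (the exact values and names in the repository's script may differ):

    optimizer.step()
    # Keep the parameters in a physically valid range so the BRDF never
    # sees zero or negative roughness.
    with torch.no_grad():
        diffuse_reflectance.clamp_(0.0, 1.0)
        specular_reflectance.clamp_(0.0, 1.0)
        roughness.clamp_(min=1e-5)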

dkasuga commented 5 years ago

Thank you so much for your prompt response and fix!!