mitsuba-renderer / mitsuba3

Mitsuba 3: A Retargetable Forward and Inverse Renderer
https://www.mitsuba-renderer.org/

Wrong shading with normal map and pbr material. #1158

Closed saedrna closed 6 months ago

saedrna commented 6 months ago

Summary

System configuration

System information:

OS: Windows-10
CPU: AMD64 Family 23 Model 96 Stepping 1, AuthenticAMD
GPU: NVIDIA GeForce RTX 2060 with Max-Q Design
Python: 3.8.19 | packaged by conda-forge | (default, Mar 20 2024, 12:38:07) [MSC v.1929 64 bit (AMD64)]
NVidia driver: 531.68
CUDA: 11.8.89
LLVM: -1.-1.-1

Dr.Jit: 0.4.4
Mitsuba: 3.5.0
Is custom build? False
Compiled with: MSVC 19.38.33133.0
Variants: scalar_rgb scalar_spectral cuda_ad_rgb llvm_ad_rgb

Description

I added a normal map that is fixed to [0, 0, 1], which I expected to make no difference. The nested BSDF is a Principled BSDF. I load an OBJ file, use a texture for the base color (kd), and use a fixed 0.3 for roughness and 0.0 for metallic. When the normal map is applied, I get a black border and a clear seam-line effect in the rendering; when the normal map is removed, the result looks fine. Two renderings, with and without the normal map, are attached below.
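
In short, the BSDF nesting described above boils down to something like the following condensed sketch (a tiny constant normal map and the texture filename stand in for the full-resolution bitmaps built in the script under "Steps to reproduce"):

# Condensed BSDF setup: a normalmap BSDF fixed to [0, 0, 1] wrapping a
# principled BSDF with a textured base color, roughness 0.3 and metallic 0.0.
flat_normal = mi.Bitmap(
    np.full([4, 4, 3], np.array([0, 0, 1]), dtype=np.float32),
    mi.Bitmap.PixelFormat.RGB,
)
bsdf = mi.load_dict({
    "type": "normalmap",
    "normalmap": {"type": "bitmap", "bitmap": flat_normal, "raw": True},
    "bsdf": {
        "type": "principled",
        "base_color": {"type": "bitmap", "filename": "Huaping4Quan.jpg"},
        "roughness": 0.3,
        "metallic": 0.0,
    },
})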

Steps to reproduce

I attached two data files for the test: Huaping4Quan.zip contains the OBJ file and data.zip contains a data batch.

import os

import matplotlib.pyplot as plt
import mitsuba as mi
import numpy as np
import torch

mi.set_variant('cuda_ad_rgb')

if __name__ == '__main__':
    hide_emitter = True
    path_mesh = "path\\to\\Huaping4Quan.obj"
    render_crop = True
    camera_space_light = True

    # replace extension with .jpg
    path_texture = os.path.splitext(path_mesh)[0] + '.jpg'
    tex_size = 2048

    # load mesh and texture using Mitsuba
    kd = mi.Bitmap(path_texture)
    kd = kd.convert(mi.Bitmap.PixelFormat.RGB, mi.Struct.Type.Float32)
    kd = kd.resample([2048, 2048])
    kd = mi.load_dict({"type": "bitmap", "bitmap": kd})

    # roughness
    kr = np.full([tex_size, tex_size], 0.3, dtype=np.float32)
    kr = mi.load_dict(
        {"type": "bitmap", "bitmap": mi.Bitmap(kr, mi.Bitmap.PixelFormat.Y)}
    )

    # metallic
    km = np.full([tex_size, tex_size], 0.0, dtype=np.float32)
    km = mi.load_dict(
        {"type": "bitmap", "bitmap": mi.Bitmap(km, mi.Bitmap.PixelFormat.Y)}
    )

    # principled bsdf
    pbr = mi.load_dict(
        {
            "type": "principled",
            "base_color": kd,
            "roughness": kr,
            "metallic": km,
        }
    )

    # normal
    kn = np.full(
        [tex_size, tex_size, 3], np.array([0, 0, 1]), dtype=np.float32
    )
    kn = mi.load_dict(
        {
            "type": "bitmap",
            "bitmap": mi.Bitmap(kn, mi.Bitmap.PixelFormat.RGB),
            "raw": True,
        }
    )
    pbr = mi.load_dict({"type": "normalmap", "normalmap": kn, "bsdf": pbr})

    mesh = mi.load_dict({
        "type": "obj",
        "filename": path_mesh,
        "bsdf": pbr
    })

    light = mi.load_dict({
        "type": "envmap",
        "filename": "envmap.exr"
    })
    integrator = mi.load_dict({
        "type": "aov",
        "aovs": "depth:depth",
        "integrator": {
            "type": "direct",
            "hide_emitters": hide_emitter,
        }
    })
    scene = mi.load_dict({
        "type": "scene",
        "integrator": integrator,
        "shape": mesh,
        "light": light
    })

    params = mi.traverse(scene)
    print(params)

    data = torch.load('data.pth')
    image = data['image'][0].numpy()
    to_world = data['to_world'][0].numpy()
    fov_x = data['fov_x'][0].item()
    film_width = data['film_width'][0].item()
    film_height = data['film_height'][0].item()
    crop_offset_x = data['crop_offset_x'][0].item()
    crop_offset_y = data['crop_offset_y'][0].item()
    crop_width = data['crop_width'][0].item()
    crop_height = data['crop_height'][0].item()
    image_width = data['image_width'][0].item()
    image_height = data['image_height'][0].item()

    name = data['name'][0]

    # set camera
    if render_crop:
        film = mi.load_dict({
            "type": "hdrfilm",
            "width": film_width,
            "height": film_height,
            "crop_offset_x": crop_offset_x,
            "crop_offset_y": crop_offset_y,
            "crop_width": crop_width,
            "crop_height": crop_height,
            "pixel_format": "rgba",
        })
    else:
        film = mi.load_dict({
            "type": "hdrfilm",
            "width": image_width,
            "height": image_height,
            "pixel_format": "rgba",
        })
    sensor: mi.Sensor = mi.load_dict({
        "type": "perspective",
        "fov": fov_x,
        "to_world": mi.ScalarTransform4f(to_world),
        "film": film,
        "near_clip": 0.1,
        "far_clip": 1000
    })

    if camera_space_light:
        params['light.to_world'] = sensor.world_transform()
        params.update()

    mi.render(scene, params=params, spp=1, sensor=sensor)
    channels = sensor.film().bitmap()

    image = dict(channels.split())['<root>']
    image = image.convert(mi.Bitmap.PixelFormat.RGBA, mi.Struct.Type.Float32, srgb_gamma=True)

    plt.imshow(mi.util.convert_to_bitmap(image))
    plt.show()
saedrna commented 6 months ago

I found that the bitmap stores the normal as an image color, so its values are expected to lie in [0, 1]. During rendering they are remapped to [-1, 1] as 2*x - 1, e.g. 0 -> -1, 0.5 -> 0 and 1 -> 1. Therefore, if I use a bitmap for kn, it should be fixed to [0.5, 0.5, 1] to make no difference.
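
So the fix is just to bake the flat normal as the color [0.5, 0.5, 1]. A minimal sketch, reusing tex_size and the nested pbr from my script above:

# Flat tangent-space normal stored as the color [0.5, 0.5, 1]; the normalmap
# plugin decodes it as 2*x - 1, so this yields [0, 0, 1] at render time.
kn = np.full([tex_size, tex_size, 3], np.array([0.5, 0.5, 1.0]), dtype=np.float32)
kn = mi.load_dict({
    "type": "bitmap",
    "bitmap": mi.Bitmap(kn, mi.Bitmap.PixelFormat.RGB),
    "raw": True,  # keep the values linear, no sRGB conversion
})
pbr = mi.load_dict({"type": "normalmap", "normalmap": kn, "bsdf": pbr})

With this constant map, the renderings with and without the normalmap wrapper match.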

merlinND commented 6 months ago

Hi @saedrna,

Yes, exactly. Thanks for taking the time to report back after fixing your issue.