Open Lizb6626 opened 5 months ago
Hi, Lizb, thanks for the question! This might be caused by an inconsistency in the color space. Try applying gamma correction to your output and see if that fixes the problem. Also, note that our albedo map is generated from NVDiffRec (hence the name "pseudo" albedo); it is meant to provide a reference for what the albedo may look like.
Thank you for your prompt response. I still have some confusion regarding the color space. Are the provided diffuse texture maps and pseudo albedo in the sRGB color space? Additionally, in the Blender rendering settings, both the input base color and the output rendering results are in sRGB space, so no gamma correction should be needed.
Furthermore, I attempted to apply the `rgb_to_srgb` and `srgb_to_rgb` functions to my diffuse image, but neither result aligned with the pseudo ground truth albedo.
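For reference, the conversions usually go by these names and use the standard sRGB transfer function. The sketch below uses the common piecewise definitions; the actual `rgb_to_srgb` / `srgb_to_rgb` implementations in the codebase may differ slightly (e.g. a pure 2.2-gamma approximation):

```python
import numpy as np

def rgb_to_srgb(x):
    """Linear RGB in [0, 1] -> sRGB (standard piecewise transfer function)."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

def srgb_to_rgb(x):
    """sRGB in [0, 1] -> linear RGB (inverse of rgb_to_srgb)."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.04045,
                    x / 12.92,
                    np.power((x + 0.055) / 1.055, 2.4))

# The two are inverses: srgb_to_rgb(rgb_to_srgb(x)) ~ x.
```

Note that `rgb_to_srgb` brightens mid-tones (linear 0.5 maps to roughly 0.735), so applying it in the wrong direction makes an image look noticeably too dark or too bright.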
In this case, use the albedo maps (which are in sRGB space) as the reference, which is what we did in the supplementary material. Here is the rendering script we used to generate the albedo maps (note that the camera conventions might differ from yours):
```python
import os

import imageio
import numpy as np
import torch
import tqdm
from pytorch3d.renderer import (
    AmbientLights,
    FoVPerspectiveCameras,
    MeshRasterizer,
    MeshRenderer,
    RasterizationSettings,
    SoftPhongShader,
)

# `mesh`, `camera_dict`, `blend_params`, `FLAGS`, `device`, `all_img_list`,
# `test_img_list`, and `albedo_output_dir` are defined elsewhere in the script.
for it, c2w in tqdm.tqdm(enumerate(camera_dict['cam_c2w'])):
    img_name = all_img_list[it]
    if img_name not in test_img_list:
        continue
    original_c2w = c2w.clone().cpu().detach().numpy()
    # Flip the y and z axes of the camera-to-world matrix.
    c2w[:, 1:2] *= -1
    c2w[:, 2:3] *= -1
    w2c = torch.linalg.inv(c2w)
    R = w2c[None, :3, :3].to(device)
    T = w2c[None, :3, 3].to(device)
    # Convert R and T to the PyTorch3D camera convention.
    R_pytorch3d = R.clone().permute(0, 2, 1)
    T_pytorch3d = T.clone()
    R_pytorch3d[:, :, :2] *= -1
    T_pytorch3d[:, :2] *= -1
    fov = camera_dict['cam_focal'][it]  # * 180 / np.pi
    # Adjust the field of view to account for resizing and padding.
    focal_ratio = 1 / np.tan(fov / 2)
    focal_ratio = focal_ratio / (FLAGS.resize / (FLAGS.resize - 2 * FLAGS.pad))
    fov = 2 * np.arctan(1 / focal_ratio)
    fov = fov * 180 / np.pi
    cameras = FoVPerspectiveCameras(device=device, R=R_pytorch3d, T=T_pytorch3d, fov=fov)
    raster_settings = RasterizationSettings(
        image_size=FLAGS.resize,
        blur_radius=0.0,
        faces_per_pixel=1,
    )
    # Ambient-only lighting, so the shader outputs the texture color directly.
    lights = AmbientLights(device=device)
    # Create a rasterizer using the settings
    rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)
    renderer = MeshRenderer(
        rasterizer=rasterizer,
        shader=SoftPhongShader(
            device=device,
            cameras=cameras,
            lights=lights,
            blend_params=blend_params,
        ),
    )
    albedo_map = renderer(mesh.extend(len(cameras)))
    albedo_map = albedo_map.squeeze().cpu().numpy()  # HxWx4
    # Composite onto a black background using the alpha channel.
    albedo_map = albedo_map[..., :3] * albedo_map[..., 3:4]
    albedo_map = (albedo_map.clip(0, 1) * 255).astype(np.uint8)
    # Rasterize the mesh to get the fragments
    fragments = rasterizer(mesh)
    # np.save(os.path.join(albedo_output_dir, img_name.replace(".png", ".npy")), albedo_map)
    imageio.imsave(os.path.join(albedo_output_dir, img_name), albedo_map)
```
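Since the script above writes the pseudo GT albedo in sRGB space onto a black background, one way to check for a color-space mismatch is to convert a linear render to sRGB and compare only the foreground pixels. A minimal sketch; the helper names here are mine, not from the repo:

```python
import numpy as np

def linear_to_srgb(x):
    """Standard sRGB transfer function for linear values in [0, 1]."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

def masked_mse(rendered_linear, pseudo_gt_srgb):
    """Compare a linear render against an sRGB pseudo-GT albedo map.

    Both inputs are float HxWx3 arrays in [0, 1]. Background pixels in
    the pseudo GT are assumed to be black, as produced by the alpha
    compositing in the rendering script above.
    """
    rendered_srgb = linear_to_srgb(rendered_linear)
    mask = pseudo_gt_srgb.sum(-1) > 0  # foreground pixels only
    return float(np.mean((rendered_srgb[mask] - pseudo_gt_srgb[mask]) ** 2))
```

If the error drops substantially after the `linear_to_srgb` conversion, the original discrepancy was most likely a color-space mismatch rather than a geometry or texture problem.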
Thank you. But the albedo maps don't seem to be aligned with the texture_kd map you provided. The white part of the baking can appears brighter in the texture_kd map. Is there anything wrong with the color space?
Thanks for the excellent work!
I was rendering the diffuse color of the scene "baking_scene001/test/0000" using the provided ground truth mesh (`ground_truth/baking_scene001/mesh_blender`), and I found a misalignment with the corresponding pseudo GT albedo (`ground_truth/baking_scene001/pseudo_gt_albedo`). The code I used for rendering:
The pseudo ground truth albedo appears darker than my rendered results. Additionally, the pseudo ground truth albedo seems to be darker than the corresponding texture_kd map. I am unsure of the cause behind this discrepancy and would appreciate any insights you can provide.
[Side-by-side images: pseudo_gt_albedo vs. my rendered albedo]