Closed BingCS closed 3 years ago
The depth images were rendered from Blender. We never used them in the project, so I can't vouch for their correctness (in fact they were included in the folder by mistake, but we decided to keep them in case someone found them useful). We have no plans to render depth images for the training and validation datasets.
Thanks for your kind reply. Which version of Blender did you use? @tancik
I believe I was using 2.82 at the time.
Great thanks!
FYI: you can check the bpy script inside the Blender file to find the depth mapping. Note that the far value for the lego scene is not 6.0 but 8.0, as set in the Blender file; using 8.0 aligns the 8-bit depth correctly.
Hi @ray8828, your visualization looks great! Could you please take a look and check whether my parsing of the depth map for "Lego" is correct? Thanks!
import cv2
import numpy as np

depth_file = ....  # path to an 8-bit depth PNG from the test set
# Read as a single-channel 8-bit image.
depth = cv2.imread(depth_file, cv2.IMREAD_GRAYSCALE)
near = 2.0
far = 8.0
# Pixel value 255 maps to the near plane, 0 to the far plane.
depths = near + (far - near) * (1.0 - depth.astype(np.float32) / 255.0)
@ray8828 Could you please provide some details about how to generate a point cloud from the depth map you mention? I have tried many methods but still fail to reconstruct the point cloud.
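Not the original poster, but here is a minimal sketch of one way to back-project a metric depth map into a world-space point cloud. It assumes the NeRF synthetic camera model: `camera_angle_x` (horizontal field of view) and the per-frame 4x4 camera-to-world matrix come from `transforms.json`, and the camera uses the Blender/OpenGL convention (x right, y up, camera looking down -z). The function name and the assumption that depth stores distance along the ray (rather than planar z-depth) are mine; check against your data before relying on it.

```python
import numpy as np

def depth_to_point_cloud(depth, camera_angle_x, c2w):
    """Back-project an (H, W) depth map into an (H*W, 3) world-space point cloud.

    depth          : metric distance along each camera ray (assumption; if your
                     depth stores planar z-depth, drop the normalization below)
    camera_angle_x : horizontal FoV in radians, as in transforms.json
    c2w            : 4x4 camera-to-world matrix for this frame
    """
    H, W = depth.shape
    focal = 0.5 * W / np.tan(0.5 * camera_angle_x)
    i, j = np.meshgrid(np.arange(W), np.arange(H), indexing="xy")
    # Ray directions in camera space (Blender/OpenGL: -z is forward, y is up).
    dirs = np.stack([(i - 0.5 * W) / focal,
                     -(j - 0.5 * H) / focal,
                     -np.ones((H, W))], axis=-1)
    # Normalize so that scaling by depth gives distance along the ray.
    dirs = dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)
    pts_cam = dirs * depth[..., None]
    # Rotate into world space and translate by the camera position.
    pts_world = pts_cam @ c2w[:3, :3].T + c2w[:3, 3]
    return pts_world.reshape(-1, 3)
```

With a unit depth map and an identity pose, the center pixel lands roughly at (0, 0, -1), one unit in front of the camera, which is a quick sanity check before feeding real data in.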
Hi, I just want to confirm whether the depth images in the test set of nerf_synthetic are ground truth or not.
If they are ground truth, could you please also release the depth images for the training and validation datasets?
Many Thanks!