Closed sunshineywz123 closed 1 year ago
How is the depth map generated?
Hi! What dataset are you using? If it's in blender format, could you please share the transforms.json file? Thanks!
blender
Could you try to pull the latest code and replace the following lines https://github.com/bennyguo/instant-nsr-pl/blob/4f70db328827dee6596f1553df0177255f32a1c2/datasets/blender.py#L39-L43 with

self.directions = \
    get_ray_directions(self.w, self.h, meta['fl_x'], meta['fl_y'], meta['cx'], meta['cy'], self.config.use_pixel_centers).to(self.rank)

and adapt img_wh in the config file to your setting (for example 960x720)?
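For reference, what a helper like that computes can be approximated with the following minimal NumPy sketch. This is a hypothetical stand-in for the repo's actual get_ray_directions, assuming the OpenGL-style camera convention (x right, y up, camera looking down -z) that NeRF-style Blender loaders typically use; the intrinsic values below are made up for illustration.

```python
import numpy as np

def get_ray_directions_sketch(w, h, fx, fy, cx, cy, use_pixel_centers=True):
    """Per-pixel ray directions for a pinhole camera (illustrative sketch)."""
    # Optionally sample at pixel centers (index + 0.5) instead of corners.
    offset = 0.5 if use_pixel_centers else 0.0
    i, j = np.meshgrid(np.arange(w) + offset, np.arange(h) + offset, indexing="xy")
    # OpenGL-style frame assumed: x right, y up, camera looks down -z.
    return np.stack([(i - cx) / fx, -(j - cy) / fy, -np.ones_like(i)], axis=-1)

# Example intrinsics (made up) for a 960x720 image.
dirs = get_ray_directions_sketch(960, 720, fx=800.0, fy=800.0, cx=480.0, cy=360.0)
# dirs has shape (720, 960, 3): one unnormalized direction per pixel.
```

The point of passing fl_x/fl_y/cx/cy from transforms.json rather than deriving them from img_wh is that the rays then follow the calibrated intrinsics exactly.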
Ok, I'll try. What is causing my current problem, and how should I interpret the depth map?
I'm not quite sure. If you set img_wh to 800x800 in your experiment, there could be a mismatch between the camera intrinsics and the image resolution, leading to wrong rays_o and rays_d.
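To make that mismatch concrete, here is a small sketch with made-up numbers: if transforms.json stores intrinsics for a 960x720 capture but img_wh is set to 800x800, then fl_x/fl_y/cx/cy no longer describe the pixel grid the rays are cast over. Rescaling each intrinsic by the per-axis resolution ratio restores consistency.

```python
# Made-up intrinsics for a hypothetical 960x720 capture.
fl_x, fl_y, cx, cy = 1111.0, 1111.0, 480.0, 360.0
native_w, native_h = 960, 720

# Resolution actually requested via img_wh in the config.
new_w, new_h = 800, 800

# Scale each intrinsic by the per-axis resolution ratio.
sx, sy = new_w / native_w, new_h / native_h
fl_x2, fl_y2 = fl_x * sx, fl_y * sy
cx2, cy2 = cx * sx, cy * sy
# The principal point lands back at the new image center (about 400, 400).
```

Without this rescaling (or without matching img_wh to the native resolution), rays are cast through the wrong pixel coordinates, which is exactly the kind of corruption that shows up as a garbled depth map.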
In the depth map, blue means near and red means far.
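As a rough sketch of that color convention, the mapping can be imitated with a simple blue-to-red ramp (illustrative only; not necessarily the exact colormap the repo uses):

```python
import numpy as np

def colorize_depth(depth):
    """Map a depth array to RGB: near -> blue, far -> red (illustrative ramp)."""
    # Normalize depth to [0, 1]; guard against a constant depth map.
    span = max(float(depth.max() - depth.min()), 1e-8)
    d = (depth - depth.min()) / span
    rgb = np.zeros(depth.shape + (3,))
    rgb[..., 0] = d        # red grows with distance
    rgb[..., 2] = 1.0 - d  # blue fades with distance
    return rgb

demo = colorize_depth(np.array([[1.0, 5.0]]))  # one near pixel, one far pixel
```

So a mostly-blue region is close to the camera and a mostly-red region is far away.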