Open pierremerriaux-leddartech opened 11 months ago
I checked the original paper for the `urban_radiance_field_depth_loss`, and the loss is defined there as an expectation over the lidar depth, so I think your method is more appropriate. In any case, the loss is scaled by a constant, so the training process is probably not greatly affected.
Thanks @KevinXu02. About the scale, sure, I needed to adjust it to get proper convergence. Do you think I should open a PR for that?
Hi, I use lidar scans to generate depth images. As the lidar is less dense than the image resolution, a large part of each depth image is empty and filled with 0.
In the `urban_radiance_field_depth_loss` and `ds_nerf_depth_loss` functions, a mask is computed to ignore depth estimates where the depth equals 0:

```python
depth_mask = termination_depth > 0
...
loss = (expected_depth_loss + line_of_sight_loss) * depth_mask
```

But at the end we return `torch.mean(loss)`.
So the loss value depends on the density of the depth image. In my case, with 1024 pixel rays sampled, only around twenty depths are available. Do you think computing the loss as below would be more appropriate?

```python
return torch.mean(loss[depth_mask])
```
Thanks!
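To illustrate the scaling issue described above, here is a minimal sketch with hypothetical toy values (8 rays, 3 of which have a lidar return). It shows that averaging over all rays shrinks the loss by the valid-ray fraction compared to averaging only over the masked rays:

```python
import torch

# Hypothetical toy values: 8 sampled rays, only 3 with a valid lidar depth.
termination_depth = torch.tensor([0.0, 2.0, 0.0, 0.0, 5.0, 0.0, 1.0, 0.0])
per_ray_loss = torch.tensor([0.0, 0.3, 0.0, 0.0, 0.6, 0.0, 0.9, 0.0])

depth_mask = termination_depth > 0
masked_loss = per_ray_loss * depth_mask  # zeros wherever there is no lidar return

# Current behaviour: averages over ALL rays, so the value shrinks
# as the depth image gets sparser.
mean_all = torch.mean(masked_loss)  # 1.8 / 8 = 0.225

# Proposed behaviour: averages only over rays with a lidar return.
mean_valid = torch.mean(masked_loss[depth_mask])  # 1.8 / 3 = 0.6

# The two differ exactly by the fraction of rays carrying a lidar depth.
assert torch.isclose(mean_all, mean_valid * depth_mask.float().mean())
```

With the sparsity reported above (~20 valid depths out of 1024 rays), the current `torch.mean(loss)` would down-weight the depth supervision by roughly a factor of 50 relative to the masked mean, which is consistent with the scale adjustment mentioned earlier in the thread.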