dunbar12138 / DSNeRF

Code release for DS-NeRF (Depth-supervised Neural Radiance Fields)
https://www.cs.cmu.edu/~dsnerf/
MIT License

Scanned depth #14

Open dnlwbr opened 2 years ago

dnlwbr commented 2 years ago

In your video you said that you tested DS-NeRF with scanned depth data. How can I train the model with my own depth data? Which format is needed? Thanks in advance.

bigdimboom commented 2 years ago

The input in the paper is sparse 3D/depth. I think they are extending the concept to dense depth in the video? I guess the challenge is how you would acquire the weights for the dense depth: you would still need to compute reprojection errors for them. I think even the SOTA depth sensors have a certain amount of error.
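
For concreteness, here is a minimal sketch of one way to turn per-point depth errors (reprojection or sensor) into loss weights. The Gaussian falloff is a hypothetical choice, not necessarily the weighting used in the paper or in this repo:

```python
import numpy as np

def error_to_weight(err, eps=1e-8):
    """Map per-point depth errors to loss weights in (0, 1].

    Hypothetical scheme: a Gaussian falloff around the mean error.
    Low-error points get weights near 1; points with above-average
    error are suppressed smoothly.
    """
    err = np.asarray(err, dtype=np.float64)
    return np.exp(-(err / (err.mean() + eps)) ** 2)
```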

dnlwbr commented 2 years ago

Wouldn't it be possible to just use a constant weight for all depth points? I think the Redwood dataset used in the paper does not provide confidence/error values either.

Unfortunately, the code in this repo seems to be incomplete and to differ from the code used in the paper, since the function load_sensor_depth() in load_llff.py is almost identical to load_colmap_depth().
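
A minimal sketch of the constant-weight idea above, producing per-image dicts loosely shaped like load_colmap_depth()'s output. The key names ("depth", "coord", "weight") are assumptions; check load_llff.py and rename them to whatever the training loop actually reads:

```python
import numpy as np

def make_constant_weight_depth(coords_per_image, depths_per_image, weight=1.0):
    """Build a per-image sparse-depth list with a constant confidence
    for every point (dict keys are assumed, not the repo's actual API)."""
    data_list = []
    for coords, depths in zip(coords_per_image, depths_per_image):
        depths = np.asarray(depths, dtype=np.float32)
        data_list.append({
            "depth": depths,                                # metric depth per point
            "coord": np.asarray(coords, dtype=np.float32),  # (x, y) pixel positions
            "weight": np.full_like(depths, weight),         # same confidence everywhere
        })
    return data_list
```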

mertkiray commented 2 years ago

Any update on how to use sensor depth?

Also, I am using NDC space and I have depth values collected from a sensor. Should I normalize these values between 0 and 1, or should I convert them into NDC space?
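
Assuming the standard NeRF NDC parameterization (ndc_rays() with the far plane at infinity), metric depth maps to the NDC ray parameter as t = 1 - near/z rather than by plain min-max normalization. A small sketch:

```python
def metric_to_ndc_depth(z, near):
    """Convert metric depth z (distance along the camera's forward axis,
    in the same rescaled units as the poses) to the NDC ray parameter
    used by NeRF's ndc_rays().

    With the far plane at infinity, a point at depth z has NDC
    z-coordinate 1 - 2*near/z, and the NDC ray sweeps [-1, 1] linearly
    as t goes from 0 (near plane) to 1 (infinity), so t = 1 - near/z.
    Normalizing raw sensor values to [0, 1] would give a different,
    incorrect parameterization.
    """
    return 1.0 - near / z
```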

endlesswho commented 2 years ago

I just want to test whether the Blender lego depth files work with this framework, using only 2 views for better novel view generation!

endlesswho commented 2 years ago

@dunbar12138


MaxChanger commented 1 year ago

Hi guys, is there any progress on using raw sensor depth as input? An immature idea is to convert the poses and depth images into the COLMAP bin file format with a script, but I think this is superfluous; rewriting load_sensor_depth() may be better, though the changes involved would be more extensive.
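
A rough sketch of the loader-rewrite direction: subsample a dense sensor depth map into the sparse (coord, depth) arrays a rewritten load_sensor_depth() would need to return. The function name, return shape, and sampling strategy here are assumptions for illustration, not the repo's actual API:

```python
import numpy as np

def sample_sparse_depth(depth_map, n_points=1000, min_depth=1e-3, seed=0):
    """Subsample a dense (H, W) sensor depth map into sparse points.

    Invalid pixels (zero or NaN depth) are skipped. Weights could come
    from the sensor's confidence map if one exists, else a constant.
    """
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(np.nan_to_num(depth_map) > min_depth)  # valid pixels only
    idx = rng.choice(len(xs), size=min(n_points, len(xs)), replace=False)
    coord = np.stack([xs[idx], ys[idx]], axis=-1).astype(np.float32)  # (x, y) pixels
    depth = depth_map[ys[idx], xs[idx]].astype(np.float32)
    return coord, depth
```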