Instant-ngp can add depth supervision like this:
https://github.com/NVlabs/instant-ngp/discussions/647
But when I tried to add depth supervision following the link above, I found that the generated mesh is the same as the one without depth supervision (i.e. using only RGB images for supervision), so I'm wondering whether something needs to be modified in the NeuS2 code.
I tried to enable depth supervision by modifying the json like this:
{
    "w": 1280.0,
    "h": 720.0,
    "aabb_scale": 1.0,
    "scale": 0.5,
    "offset": [
        0.5,
        0.5,
        0.5
    ],
    "from_na": true,
    "enable_depth_loading": true,
    "integer_depth_scale": 1,
    "frames": [
        {
            "file_path": "./images/089.png",
            "depth_path": "./depth_images/89_depth.png",
            ……
and add "testbed.nerf.training.depth_supervision_lambda = 1" "testbed.render_mode = ngp.RenderMode.Depth" in run.py.