liuxiaozhu01 opened this issue 11 months ago
Hi, `bakedangelo` uses NeRF, not SDF, to model the background region, so the background is not reconstructed. If you also want to reconstruct the background with SDF, you can try `bakedsdf`.
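For reference, a minimal sketch of what switching methods could look like. The data and logging arguments are borrowed from the bicycle command later in this thread, and whether they all apply to `bakedsdf` is an assumption, so check `ns-train bakedsdf --help` first:

```bash
# Hedged sketch, not a verified recipe: bakedsdf models the background with the
# SDF as well, so no separate NeRF background model is needed. The data path
# follows the bicycle setup used later in this thread.
ns-train bakedsdf \
    --vis wandb --experiment-name bakedsdf-bicycle \
    mipnerf360-data --data data/nerfstudio-data-mipnerf360/bicycle
```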
Hi! Thanks for your timely reply! But I'm still a little confused. If I make the bbox controlled by scene_scale bigger, shouldn't part of the background also fall inside the bbox and thus be modeled by the SDF? Not sure whether my understanding is right.
Hi, could you post your rendered images from tensorboard or wandb?
Sure. Here are the rendered RGB, depth, and normal images at 185k iterations:
As you can see from the rendered normal map, the background is not reconstructed, so I think the mesh you extracted is correct.
So that means no matter how big the bbox is, the background still cannot be reconstructed, right?
You should scale it before training, not after.
But I set scene_scale in the mipnerf360 dataparser to make the bbox bigger (scene_scale=4), and the background still cannot be reconstructed. All the results above were produced with that setting.
Oh, scene_scale is not used. You could set scale_factor instead: https://github.com/autonomousvision/sdfstudio/blob/370902a10dbef08cb3fe4391bd3ed1e227b5c165/nerfstudio/data/dataparsers/mipnerf360_dataparser.py#L50
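For what it's worth, a hedged sketch of how that could be set from the command line. The flag name is assumed from the usual tyro/nerfstudio convention for dataparser config fields, so verify it with `ns-train bakedangelo mipnerf360-data --help`:

```bash
# Assumption: scale_factor is exposed as --scale-factor on the dataparser
# subcommand. If scale_factor multiplies the camera positions, a smaller value
# shrinks the scene so more of the background falls inside the SDF bounding box.
ns-train bakedangelo \
    --pipeline.model.background-model grid \
    mipnerf360-data \
    --data data/nerfstudio-data-mipnerf360/bicycle \
    --scale-factor 0.5
```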
From the code behind it, I assumed it is used to scale the poses, so I didn't dare modify it. I will try it later. Thanks for your help!
I tried `bakedangelo` on the mipnerf360 bicycle scene, and I set the scene_scale in the mipnerf360 dataparser to make it learn the background mesh. It only trained for 180k iterations because of the instability of my machine.

```bash
ns-train bakedangelo --pipeline.model.sdf-field.use-grid-feature True \
    --pipeline.model.sdf-field.hidden-dim 256 \
    --pipeline.model.sdf-field.num-layers 2 \
    --pipeline.model.sdf-field.num-layers-color 2 \
    --pipeline.model.sdf-field.use-appearance-embedding True \
    --pipeline.model.sdf-field.geometric-init True \
    --pipeline.model.sdf-field.inside-outside False \
    --pipeline.model.sdf-field.bias 0.5 \
    --pipeline.model.sdf-field.beta-init 0.1 \
    --pipeline.model.sdf-field.hash-features-per-level 2 \
    --pipeline.model.sdf-field.log2-hashmap-size 19 \
    --pipeline.model.level-init 4 \
    --trainer.steps-per-eval-image 1000 \
    --pipeline.datamanager.train-num-rays-per-batch 2048 \
    --trainer.steps_per_save 10000 --trainer.max_num_iterations 500001 \
    --pipeline.model.background-model grid \
    --vis wandb --experiment-name bakedangelo-bicycle \
    --machine.num-gpus 2 mipnerf360-data \
    --data data/nerfstudio-data-mipnerf360/bicycle
```
For mesh extraction, I set:

```bash
--bounding-box-min -2.0 -2.0 -2.0 --bounding-box-max 2.0 2.0 2.0 --resolution 4096 --simplify-mesh True
```
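For context, a sketch of the full extraction call those flags plug into. The config and output paths below are placeholders, and this assumes sdfstudio's standard `ns-extract-mesh` script:

```bash
# Paths are placeholders for illustration; point --load-config at the
# config.yml written by your bakedangelo run.
ns-extract-mesh --load-config outputs/bakedangelo-bicycle/bakedangelo/config.yml \
    --output-path meshes/bicycle.ply \
    --bounding-box-min -2.0 -2.0 -2.0 --bounding-box-max 2.0 2.0 2.0 \
    --resolution 4096 --simplify-mesh True
```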
I notice that the foreground is much better. As shown in the figure below, the wire spokes of the bicycle wheels can be reconstructed, which is difficult for other methods. However, the background mesh is still missing, even though I set `--bounding-box-min -4.0 -4.0 -4.0 --bounding-box-max 4.0 4.0 4.0`, while the background trees and bushes appear in the rendered depth map. I am wondering why, and how I can get the foreground and background mesh at the same time. Does anyone know?
Hi @liuxiaozhu01, can I ask a question? How did you decide the dimensions of the bounding box? And for how long did you train?