autonomousvision / sdfstudio

A Unified Framework for Surface Reconstruction
Apache License 2.0

Can anyone share a train option that produces a good mesh? #299

Closed hanjoonwon closed 3 months ago

hanjoonwon commented 3 months ago


https://drive.google.com/drive/folders/1lHsiW8MGQcVTrQT9LtJjH8Z6ChMiiVlf?usp=drive_link https://drive.google.com/drive/folders/1Q9zSM8sCsQ4n-6QKl4GEujuznx-abxvh?usp=drive_link

These are the images I've tried so far, and these are my training steps:

1. Process the raw images with nerfstudio:

   ```bash
   ns-process-data images --data
   ```

2. Convert to the sdfstudio format with monocular priors:

   ```bash
   python path/process_nerfstudio_to_sdfstudio.py --data-type colmap --mono-prior \
     --scene-type object --data path --output-dir path/ \
     --omnidata-path sdfstudio/omnidata/omnidata_tools/torch/ \
     --pretrained-models sdfstudio/omnidata/omnidata_tools/torch/pretrained_models/
   ```

3. Train neus-facto:

   ```bash
   ns-train neus-facto \
     --pipeline.datamanager.train-num-rays-per-batch 2048 \
     --pipeline.model.sdf-field.use-grid-feature False \
     --pipeline.model.sdf-field.inside-outside False \
     --pipeline.model.background-model mlp \
     --pipeline.model.mono-depth-loss-mult 0.0 \
     --pipeline.model.mono-normal-loss-mult 0.1 \
     --vis wandb \
     --trainer.steps-per-save 5000 \
     --trainer.steps-per-eval-image 5000 \
     --trainer.max-num-iterations 300000
   ```
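For reference, training alone does not write a mesh file; in sdfstudio the mesh is extracted from a finished run with `ns-extract-mesh`. A minimal sketch, with placeholder run and output paths:

```bash
# Placeholder paths: point --load-config at the config.yml of your actual run.
ns-extract-mesh --load-config outputs/owl2/neus-facto/XXX/config.yml \
  --output-path meshes/neus-facto.ply
```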

https://github.com/autonomousvision/sdfstudio/blob/370902a10dbef08cb3fe4391bd3ed1e227b5c165/docs/sdfstudio-examples.md?plain=1#L24

I've tried the parameters from this link as well as options other people have shared, and I've also tried monosdf, volsdf, and bakedangelo. What could be the problem?

niujinshuchong commented 3 months ago

Hi, I can get reasonable results with your data.


I used the following command:

```bash
ns-train neus \
  --pipeline.model.sdf-field.inside-outside False \
  --pipeline.model.sdf-field.bias 0.3 \
  --pipeline.model.sdf-field.beta-init 0.3 \
  --pipeline.model.sdf-field.use-appearance-embedding True \
  --trainer.steps-per-eval-image 500 \
  --pipeline.model.near-plane 0.05 \
  --pipeline.model.far-plane 2. \
  --pipeline.model.overwrite-near-far-plane True \
  --vis wandb \
  nerfstudio-data --data data/owl2-colmap --downscale-factor 2
```

I also tested with monocular prior.

```bash
OMP_NUM_THREADS=4 CUDA_VISIBLE_DEVICES=6 ns-train neus \
  --pipeline.model.sdf-field.inside-outside False \
  --pipeline.model.sdf-field.bias 0.3 \
  --pipeline.model.sdf-field.beta-init 0.3 \
  --pipeline.model.sdf-field.use-appearance-embedding True \
  --trainer.steps-per-eval-image 500 \
  --pipeline.model.near-plane 0.05 \
  --pipeline.model.far-plane 2. \
  --pipeline.model.overwrite-near-far-plane True \
  --pipeline.model.mono-normal-loss-mult 0.01 \
  --vis wandb \
  sdfstudio-data --data data/owl2-sdfstudio --include-mono-prior True
```

The table is reconstructed better with the prior.
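For reference, the `data/owl2-sdfstudio` dataset with monocular priors used above would be produced from the colmap output with the same conversion script as step 2 of the original post; a minimal sketch, where the script location and omnidata paths are assumptions to adapt to your checkout:

```bash
# Paths are assumptions; adjust to your sdfstudio checkout and data layout.
python scripts/datasets/process_nerfstudio_to_sdfstudio.py \
  --data-type colmap --mono-prior --scene-type object \
  --data data/owl2-colmap \
  --output-dir data/owl2-sdfstudio \
  --omnidata-path omnidata/omnidata_tools/torch/ \
  --pretrained-models omnidata/omnidata_tools/torch/pretrained_models/
```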

hanjoonwon commented 3 months ago

> Hi, I can get reasonable results with your data. [...]

@niujinshuchong Thank you so much :) I tried applying this training command to my battery images as-is, but the results were poor quality. How can I find the right parameters to get good results on other image sets?
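One practical way to search for parameters on a new scene is a small sweep over the scene-dependent flags tuned above (e.g. the SDF bias, with the near/far planes adjusted to the scene scale); a minimal sketch, where the sweep values and the battery-colmap data path are illustrative, not recommendations:

```bash
# Illustrative sweep only; bias values and the data path are hypothetical.
for bias in 0.1 0.3 0.5; do
  ns-train neus \
    --experiment-name battery-bias-${bias} \
    --pipeline.model.sdf-field.inside-outside False \
    --pipeline.model.sdf-field.bias ${bias} \
    --pipeline.model.sdf-field.beta-init 0.3 \
    --pipeline.model.sdf-field.use-appearance-embedding True \
    --pipeline.model.near-plane 0.05 \
    --pipeline.model.far-plane 2. \
    --pipeline.model.overwrite-near-far-plane True \
    --vis wandb \
    nerfstudio-data --data data/battery-colmap --downscale-factor 2
done
```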