autonomousvision / sdfstudio

A Unified Framework for Surface Reconstruction

How to get a good mesh? #293

Closed hanjoonwon closed 3 months ago

hanjoonwon commented 4 months ago

[mesh output image]

My input images look like this: [image]

I went through the following process:

1. ns-process-data images --data data/owl --output-dir process/owl

2. python scripts/datasets/process_nerfstudio_to_sdfstudio.py --mono-prior --data-type colmap --scene-type object --data process/owl --output-dir sdfdata/owl --omnidata-path omnidata/omnidata_tools/torch --pretrained-models omnidata/omnidata_tools/torch/pretrained_models

3. ns-train neus-facto --pipeline.datamanager.train-num-rays-per-batch 2048 --pipeline.model.sdf-field.geometric-init True --pipeline.model.sdf-field.use-grid-feature True --pipeline.model.sdf-field.inside-outside False --pipeline.model.background-model mlp --pipeline.model.mono-depth-loss-mult 0.01 --pipeline.model.mono-normal-loss-mult 0.01 --pipeline.model.sdf-field.bias 0.3 --vis wandb --trainer.steps_per_save 5000 --trainer.steps-per-eval-image 5000 --trainer.max-num-iterations 300000 --experiment-name neusfactoowl sdfstudio-data --data sdfdata/owl

I've run neus-facto training several times with different parameters, but I'm still not getting a good mesh. It works fine in nerfstudio and SuGaR, so what could be the reason?

niujinshuchong commented 3 months ago

Hi, could you try disabling grid features with --pipeline.model.sdf-field.use-grid-feature False? The hash grid is very sensitive.

hanjoonwon commented 3 months ago

> Hi, could you try disabling grid features with --pipeline.model.sdf-field.use-grid-feature False? The hash grid is very sensitive.

@niujinshuchong I also got a poor result. Are my parameters wrong?

niujinshuchong commented 3 months ago

Oh, could you also try disabling the mono-depth loss? The mono-depth loss is a scale-invariant loss, so it needs to compute an alignment between the rendered depth map and the monocular depth map, and for that the training rays should be sampled from the same image. The default setting of neus-facto samples rays across all training images, which will likely introduce issues during optimisation.
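
Roughly, the alignment works like this. Below is a minimal sketch of a MonoSDF-style scale-invariant depth loss, not the exact sdfstudio implementation; the function and variable names are illustrative only:

```python
import torch

def scale_invariant_depth_loss(rendered_depth, mono_depth, mask=None):
    """Sketch of a MonoSDF-style scale-invariant depth loss.

    A per-image scale w and shift q are solved in closed form so that
    w * rendered_depth + q best matches the monocular prior depth, and
    the aligned residual is then penalised. Because a single (w, q)
    pair is fit over all pixels in the batch, every ray must come from
    the SAME image for the alignment to be meaningful.
    """
    if mask is None:
        mask = torch.ones_like(rendered_depth, dtype=torch.bool)
    d = rendered_depth[mask].reshape(-1)  # rendered depths, shape (N,)
    t = mono_depth[mask].reshape(-1)      # monocular prior depths, shape (N,)

    # Closed-form least squares for [w, q]: minimise ||[d 1] @ [w q]^T - t||^2
    A = torch.stack([d, torch.ones_like(d)], dim=-1)       # (N, 2)
    sol = torch.linalg.lstsq(A, t.unsqueeze(-1)).solution  # (2, 1)
    w, q = sol[0, 0], sol[1, 0]

    # Penalise the residual after alignment
    return torch.mean((w * d + q - t) ** 2)
```

If the batch mixes rays from many images (as the neus-facto default ray sampling does), no single (w, q) can align all of them, which is why disabling the mono-depth loss (or sampling rays per image) helps.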

hanjoonwon commented 3 months ago

> Oh, could you also try disabling the mono-depth loss? The mono-depth loss is a scale-invariant loss, so it needs to compute an alignment between the rendered depth map and the monocular depth map, and for that the training rays should be sampled from the same image. The default setting of neus-facto samples rays across all training images, which will likely introduce issues during optimisation.

@niujinshuchong
[result image]

I trained with:

ns-train neus-facto --pipeline.datamanager.train-num-rays-per-batch 2048 \
  --pipeline.model.sdf-field.use-grid-feature False \
  --pipeline.model.sdf-field.inside-outside False \
  --pipeline.model.background-model mlp \
  --pipeline.model.mono-depth-loss-mult 0.0 \
  --pipeline.model.mono-normal-loss-mult 0.01 \
  --vis wandb \
  --trainer.steps_per_save 5000 \
  --trainer.steps-per-eval-image 5000 \
  --trainer.max-num-iterations 300000 \
  --experiment-name omnineusowl sdfstudio-data

but I got a weird result. My images are here: https://drive.google.com/drive/folders/1lHsiW8MGQcVTrQT9LtJjH8Z6ChMiiVlf?usp=drive_link and the wandb log is here: https://wandb.ai/ju805604/sdfstudio/runs/sqw2b32u?workspace=user-ju805604