autonomousvision / sdfstudio

A Unified Framework for Surface Reconstruction
Apache License 2.0

Please help me: poor quality on custom dataset, wandb looks fine #277

Closed hanjoonwon closed 6 months ago

hanjoonwon commented 9 months ago
1. Process the data with nerfstudio: ns-process-data images --data /battery --output-dir /workspace/battery

2. python3 /sdfstudio/scripts/datasets/process_nerfstudio_to_sdfstudio.py --mono-prior --data-type colmap --scene-type object --data /battery --output-dir /workspace/batterydata --omnidata-path /omnidata/omnidata_tools/torch --pretrained-models /omnidata/omnidata_tools/torch/pretrained_models (to get normal, rgb and depth; a sanity-check sketch for the output follows this list)

3. I trained with ns-train neus-facto --pipeline.datamanager.train-num-rays-per-batch 1024 --pipeline.model.mono-depth-loss-mult 0.0 --pipeline.model.mono-normal-loss-mult 0.0 --vis wandb --pipeline.model.sdf-field.inside-outside False --pipeline.model.background-model mlp --trainer.steps_per_save 5000 --trainer.max-num-iterations 30000 --experiment-name facbattery sdfstudio-data --data /workspace/batterydata
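
As a quick sanity check on the output of step 2, here is a minimal sketch (not part of the commands above; it assumes the meta_data.json layout written by process_nerfstudio_to_sdfstudio.py with frames, camtoworld and scene_box.aabb fields, which may differ between sdfstudio versions) for inspecting the camera poses and scene box:

```python
# Sketch: sanity-check the processed sdfstudio data.
# Assumes meta_data.json has "frames" with 4x4 "camtoworld" matrices and a
# "scene_box" with a 2x3 "aabb"; adjust the keys if your version differs.
import json
import numpy as np

with open("/workspace/batterydata/meta_data.json") as f:
    meta = json.load(f)

# Camera centers are the translation column of each camera-to-world matrix.
centers = np.array([np.array(fr["camtoworld"])[:3, 3] for fr in meta["frames"]])
aabb = np.array(meta["scene_box"]["aabb"])

print("num frames:", len(meta["frames"]))
print("camera center min/max:", centers.min(axis=0), centers.max(axis=0))
print("scene_box aabb:", aabb.tolist())
# For an object-centric scene trained with inside-outside False, the object
# should sit near the origin with the cameras roughly on a sphere around it.
print("camera distances from origin:", np.linalg.norm(centers, axis=1).round(3))
```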

My environment runs in Docker on WSL2 (Ubuntu 22.04) on Windows 10, and the graphics card is an RTX 2060. Here is my meta_data.json:

(screenshots of meta_data.json)

https://wandb.ai/ju805604/sdfstudio/runs/r7wtqije?workspace=user-ju805604

davidceka commented 8 months ago

You're running the model on an RTX 2060 and it ran for only 20k iterations. I don't know about neus-facto, but training with bakedangelo needs at least ~200k iterations to get good results. Have you tried training for longer?

davidceka commented 8 months ago

Also, you can tell that the model didn't quite learn the shape of the object: the evaluations are really fuzzy (left is the original, right is the prediction), and this is also confirmed by the depth image directly on top, which is all purple. So I think the model needs more training.

(evaluation screenshot)

hanjoonwon commented 8 months ago

> Also, you can tell that the model didn't quite learn the shape of the object since the evaluations are really fuzzy [...] I think the model needs more training.

Thank you so much. Unlike models like instant-ngp, surface models require more iterations and time. On the DTU dataset I got good results starting at 20k iterations, so I only went up to 50k.

hanjoonwon commented 8 months ago

> Also, you can tell that the model didn't quite learn the shape of the object since the evaluations are really fuzzy [...] I think the model needs more training.

@davidceka https://wandb.ai/ju805604/sdfstudio/runs/hfb0p7g6?workspace=user-ju805604 but I got a bad result... Is this also due to a lack of iterations?

(screenshot of the result)

2. ns-train neus-facto --trainer.load-dir outputs/-workspace-processdata-modifyvidbat/neus-facto/2024-01-22_022432/sdfstudio_models --pipeline.datamanager.train-num-rays-per-batch 1024 --pipeline.model.sdf-field.bias 0.3 --pipeline.model.sdf-field.use-grid-feature False --pipeline.model.mono-depth-loss-mult 0.5 --pipeline.model.mono-normal-loss-mult 0.05 --vis wandb --pipeline.model.sdf-field.inside-outside False --pipeline.model.background-model grid --trainer.steps_per_save 5000 --trainer.steps-per-eval-image 5000 --trainer.max-num-iterations 300000 --experiment-name nomaskmodifyvidbat sdfstudio-data --data /workspace/processdata/modifyvidbat --include_mono_prior True

(Because of the large number of iterations, I resumed by loading the previously saved checkpoint.)

3. ns-extract-mesh --resolution 512 --bounding-box-min -0.6 -0.6 -0.6 --bounding-box-max 0.6 0.6 0.6 --load-config outputs/nomaskmodifyvidbat/neus-facto/2024-01-22_234123/config.yml --output-path meshoutput/nomask200kbat.ply
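
In case it helps with step 3, a minimal sketch (again assuming a scene_box.aabb entry in meta_data.json; keys may differ between sdfstudio versions) for deriving --bounding-box-min/--bounding-box-max from the processed data instead of guessing them:

```python
# Sketch: derive an ns-extract-mesh bounding box from the processed data.
# Assumes meta_data.json stores "scene_box": {"aabb": [[xmin, ymin, zmin],
# [xmax, ymax, zmax]]}; adjust if your layout differs.
import json
import numpy as np

with open("/workspace/processdata/modifyvidbat/meta_data.json") as f:
    meta = json.load(f)

aabb = np.array(meta["scene_box"]["aabb"], dtype=float)

# Pad the box slightly so the marching-cubes grid does not clip the surface.
pad = 0.05 * (aabb[1] - aabb[0])
bb_min, bb_max = aabb[0] - pad, aabb[1] + pad

print("--bounding-box-min", *np.round(bb_min, 3))
print("--bounding-box-max", *np.round(bb_max, 3))
```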

Here is my data (the processed data): https://drive.google.com/file/d/1WV9wOjWhwsOkBC1g9cNBKaaKMMaYgp5G/view?usp=drive_link. I added a foreground mask and the same thing happened.
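
For the foreground-mask run, a minimal sketch of how the masks could be checked against the RGB frames; the per-frame foreground_mask key is an assumption about how the masks are referenced in meta_data.json, so adjust it to however they are actually stored:

```python
# Sketch: verify a foreground mask matches its RGB frame.
# The "rgb_path" and "foreground_mask" keys (relative to the data directory)
# are assumptions; adjust to your meta_data.json layout.
import json
from pathlib import Path

import numpy as np
from PIL import Image

data_dir = Path("/workspace/processdata/modifyvidbat")
meta = json.loads((data_dir / "meta_data.json").read_text())

fr = meta["frames"][0]
rgb = np.array(Image.open(data_dir / fr["rgb_path"]))
mask = np.array(Image.open(data_dir / fr["foreground_mask"]).convert("L"))

print("rgb shape:", rgb.shape, "mask shape:", mask.shape)  # H and W should match
print("mask values:", np.unique(mask))                     # expect binary, e.g. 0/255
print("foreground fraction:", round(float((mask > 127).mean()), 3))
```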