yuxuJava789 opened 1 year ago
Hi @yuxuJava789
The 8 GB default parameters will lead to performance degradation. However, you can try different parameters to see what works better than the suggested values. My intuition is that increasing `dim` to 8 and lowering `dict_size` to something like 19 or 18 may work better while still requiring only 8 GB of VRAM.
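For a rough sense of that trade-off: the hash-grid parameter count scales with `2**dict_size * dim` per level, so lowering `dict_size` by one halves the table size. A back-of-envelope sketch (the level count and fp16 feature size here are assumptions for illustration, not Neuralangelo's exact allocator; optimizer state such as Adam moments roughly triples the real footprint):

```python
# Back-of-envelope VRAM estimate for a multi-resolution hash grid.
# num_levels=16 and 2-byte (fp16) features are assumed values.
def hash_grid_bytes(dict_size, dim, num_levels=16, bytes_per_feature=2):
    entries_per_level = 2 ** dict_size  # dict_size is log2 of the table size
    return num_levels * entries_per_level * dim * bytes_per_feature

for dict_size in (22, 19, 18):
    gib = hash_grid_bytes(dict_size, dim=8) / 2**30
    print(f"dict_size={dict_size}, dim=8: {gib:.2f} GiB of parameters")
```

This only counts the grid parameters themselves; activations and rendering buffers come on top.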
Let us know how it goes!
@yuxuJava789 if you are training with the default config, this is expected at 20k iterations. You would need to run to 500k iterations to get the final results. If you want some faster experiment turnarounds, please also consider checking out the new Colab notebook.
@mli0603 @chenhsuanlin Hi, I followed your advice and ran to 500k iterations (120 hours of training). The result is:
My config is:

```yaml
checkpoint:
  save_epoch: 9999999999
  save_iter: 20000
  save_latest_iter: 9999999999
  save_period: 9999999999
  strict_resume: true
cudnn:
  benchmark: true
  deterministic: false
data:
  name: dummy
  num_images: null
  num_workers: 4
  preload: true
  readjust:
    center:
```
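As a sanity check on the checkpoint cadence above: with `save_iter: 20000`, a full 500k-iteration run should write one checkpoint every 20k iterations, so 25 files in total.

```python
# Checkpoints written over a full run at the save_iter from the config above.
max_iter = 500_000
save_iter = 20_000
print(max_iter // save_iter)  # 25 checkpoint files
```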
How do I get a higher-quality mesh with color?
With parameters `dict_size=21`, `dim=4` (RTX 3060 12GB), after 500k iterations (10k epochs) I got 25 .pt files. When I extract a mesh with the last one (`epoch_10000_iteration_000500000_checkpoint.pt`), the final mesh is not what I want, like the following:
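If you want to be sure extraction uses the newest checkpoint, a small helper can pick the file with the highest iteration count. This is a sketch assuming the `epoch_*_iteration_*_checkpoint.pt` naming seen above; adjust the regex if your files differ.

```python
import re

def latest_checkpoint(names):
    """Return the checkpoint filename with the largest iteration number."""
    pat = re.compile(r"epoch_(\d+)_iteration_(\d+)_checkpoint\.pt")
    best = None
    for name in names:
        m = pat.fullmatch(name)
        if m:
            it = int(m.group(2))
            if best is None or it > best[0]:
                best = (it, name)
    return best[1] if best else None

files = [
    "epoch_09600_iteration_000480000_checkpoint.pt",
    "epoch_10000_iteration_000500000_checkpoint.pt",
]
print(latest_checkpoint(files))  # epoch_10000_iteration_000500000_checkpoint.pt
```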
Did I do something wrong? What can I do to improve the quality?
I run the command as follows:

```shell
torchrun --nproc_per_node=1 projects/neuralangelo/scripts/extract_mesh.py \
    --config=logs/video2/config.yaml \
    --checkpoint=logs/video2/epoch_00400_iteration_000020000_checkpoint.pt \
    --output_file=video3.ply \
    --resolution=2014 \
    --block_res=128 \
    --textured
```
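One quick way to confirm that a `--textured` export actually wrote vertex colors is to inspect the PLY header. Here is a stdlib-only sketch; the demo file below is synthetic, so point the function at your own output (e.g. video3.ply) instead.

```python
import os
import tempfile

def ply_has_vertex_color(path):
    """Scan a PLY header for per-vertex color properties."""
    with open(path, "rb") as f:
        header = b""
        while b"end_header" not in header:
            line = f.readline()
            if not line:  # hit EOF before the header ended
                return False
            header += line
    return b"property uchar red" in header

# Minimal ASCII PLY with one colored vertex, just to exercise the check.
demo = b"""ply
format ascii 1.0
element vertex 1
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header
0 0 0 255 0 0
"""
with tempfile.NamedTemporaryFile(suffix=".ply", delete=False) as f:
    f.write(demo)
    tmp = f.name
print(ply_has_vertex_color(tmp))  # True
os.unlink(tmp)
```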
val/vis/normal is:

val/vis/inv_depth is:
I use this notebook and the result is:

```
Root Directory Path: /home
==============
1/home/video3d/code/neuralangelo/datasets/lego_ds2/sparse
images: 100
```
How can I get better results?