ClayOfMan opened 7 months ago
```
( ● ) NerfAcc: Setting up CUDA (This may take a few minutes the first time)
run_neuralangelo-colmap_sparse.sh: line 15:  4103 Killed    python launch.py --config configs/neuralangelo-colmap_sparse.yaml --gpu 0 --train dataset.root_dir=$INPUT_DIR
```
Sorry for the trouble. This seems to be the same issue as https://github.com/hugoycj/Instant-angelo/issues/41. I will try to fix it this week.
I encountered the same issue with a 90-image dataset on an Ubuntu 20.04 laptop with an RTX 3080 Mobile (16 GB) and 32 GB of system RAM. When I reduce the dataset to 23 images the code runs, but it already consumes 13 GB of VRAM. Only about 10 GB of CPU RAM is used, so that is less of a concern. If there is any way the GPU memory consumption can be reduced so that larger datasets are supported, that would be fantastic. I will also try downscaling the images: they are currently 1920x1080, and I haven't checked whether any downscaling happens during the execution of Instant-angelo. If it doesn't, downscaling beforehand could be a solution as well (see the sketch below).
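For reference, here is a minimal downscaling sketch I would try before training. The input/output paths and the halving factor are placeholders, not anything Instant-angelo prescribes:

```python
from pathlib import Path
from PIL import Image

SRC = Path("data/scene/images")        # hypothetical input directory
DST = Path("data/scene/images_half")   # hypothetical output directory
DST.mkdir(parents=True, exist_ok=True)

for img_path in sorted(SRC.glob("*.jpg")):
    img = Image.open(img_path)
    # Halve 1920x1080 to 960x540; LANCZOS resampling preserves detail well.
    img = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
    img.save(DST / img_path.name)
```

Halving the resolution cuts the number of rays per image by 4x, which should noticeably reduce per-batch memory if the pipeline samples rays at native resolution.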
Update: with the 23-image subset the training stage completes almost entirely, but it hits an OOM in the validation stage. The mesh extraction script OOMs as well, no matter how I change the isosurface parameters, so I suspect the grid resolution itself is not the cause. For example, when I set the --res parameter to 512 on the command line, the reported grid size drops from 1536×1536×1536 (with the default value of 1024) to 768×768×768, but the OOM remains.
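For what it's worth, the usual memory spike during isosurface extraction comes from evaluating the SDF over the entire dense grid in one pass; evaluating it slice by slice keeps the GPU footprint bounded. This is a generic sketch of that idea, not Instant-angelo's actual extraction code, and `sdf_fn` is a stand-in for the trained SDF network:

```python
import torch

@torch.no_grad()
def eval_sdf_grid(sdf_fn, res=768, device="cuda"):
    # Evaluate the SDF one z-slice at a time: only res*res points are
    # resident on the GPU per step instead of the full res**3 grid.
    xs = torch.linspace(-1.0, 1.0, res)
    yy, xx = torch.meshgrid(xs, xs, indexing="ij")
    slice_xy = torch.stack([xx, yy], dim=-1).reshape(-1, 2)  # (res*res, 2)
    grid = torch.empty(res, res, res)  # stays on the CPU (~1.8 GB at 768^3)
    for k in range(res):
        z = torch.full((slice_xy.shape[0], 1), float(xs[k]))
        pts = torch.cat([slice_xy, z], dim=-1).to(device)
        grid[:, :, k] = sdf_fn(pts).reshape(res, res).cpu()
    return grid
```

If extraction still OOMs after lowering --res, the large allocation is probably happening elsewhere (e.g., when querying colors/features for the extracted vertices), which would match the observation that the grid size doesn't matter.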
- OS: Ubuntu 22.04
- GPU: RTX 3060 Ti 12GB
- CUDA: 11.3
- gcc/g++: gcc-9/g++-9 (selected via update-alternatives)
Upon completion of training, the expected .obj and associated files are not within the resulting
/exp/neuralangelo-colmap_sparse-gerrard-hall/@xxx-xxx
folder. Following https://github.com/hugoycj/Instant-angelo/issues/36, I tried exporting, which resulted in this error:
The error shows that the ckpt files are not present either.
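For reference, a quick way to confirm whether any checkpoints were written at all (the run directory is taken from the path above; the .ckpt glob pattern is a guess at the default layout):

```python
from pathlib import Path

exp_dir = Path("exp/neuralangelo-colmap_sparse-gerrard-hall")
# Recursively list any checkpoint files under the run directory.
for ckpt in sorted(exp_dir.rglob("*.ckpt")):
    print(ckpt)
```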
Full Log:
How can I get a result from this training?