Open wen-yuan-zhang opened 10 months ago

Thanks for your excellent work! I met an OOM problem when running the mesh extraction. The training process is normal, but the mesh cannot be extracted because of the OOM error. I am using a 24GB 3090 Ti GPU, so I don't think my GPU is the problem. I tried setting image_resolution=4 in gs_model.py, but it doesn't help. Could you please give some advice on this problem? Thank you!
Easy fix; the problem is simply that 24GB is not enough when everything is kept in CUDA memory. I've got an RTX 4090 myself. Just set it to use the CPU for the source image data. It barely affects speed and offloads almost everything into normal RAM; you lose maybe 5% performance on the optimizer part. With that, you can also skip lowering the resolution. I use full-resolution 4K source images, no resizing, with the CPU instead, and it works like a charm. Just add --data_device cpu
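Roughly what this does under the hood, as a minimal sketch modelled on the original 3DGS camera-loading code (SuGaR's actual code may differ): the source images are kept on data_device, so with cpu they sit in normal RAM and only the view currently being rendered is moved to the GPU.

```python
import torch

# Sketch of how data_device is typically used when the training cameras are
# built (modelled on the original 3DGS scene/cameras.py; SuGaR may differ).
class Camera:
    def __init__(self, image: torch.Tensor, data_device: str = "cuda"):
        self.data_device = torch.device(data_device)
        # With data_device="cpu" the full-resolution image stays in host RAM
        # and is only moved to the GPU for the view being rendered.
        self.original_image = image.clamp(0.0, 1.0).to(self.data_device)
```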
I'll try it. Thank you!
I used a Titan Xp with only 12GB of GPU RAM. For a simple scene/object (a resin toy), it works fine.
Simple object first?
@AtlasRedux Where should this --data_device cpu be added? It's not an argument for train.py.
As with all SuGaR arguments, they do the same as the original 3DGS arguments, but they go into gs_model.py. Change self.data_device = "cuda" to self.data_device = "cpu".
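For reference, the relevant block looks roughly like this. This is an illustrative sketch following the original 3DGS ModelParams; the exact class layout in sugar_scene/gs_model.py may differ.

```python
# sugar_scene/gs_model.py -- illustrative sketch, not the verbatim SuGaR code.
class ModelParams:
    def __init__(self):
        self.sh_degree = 3
        self.source_path = ""
        self.model_path = ""
        self.images = "images"
        self.resolution = -1
        self.white_background = False
        # Keep source images in host RAM instead of VRAM; frees several GB
        # on the GPU at the cost of a small slowdown.
        self.data_device = "cpu"  # was "cuda"
        self.eval = False
```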
Thanks!
@AtlasRedux I found gs_model.py in the sugar_scene directory and changed "cuda" to "cpu", but I experience exactly the same behaviour, so I might be missing something. Or are there other things that need to be adjusted?
I get the same problem as @Iliceth, even after adopting the aforementioned suggestions. I also tried reducing the number of refinement iterations and the initial iterations to load, but still to no avail. Any more suggestions would be greatly appreciated.
Same here. I have a 16GB GPU and I usually get a CUDA out-of-memory error; changing data_device from 'cuda' to 'cpu' in gs_model.py does not solve the issue.
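In case it helps narrow things down, here is a minimal way to check how much VRAM the failing step actually needs (plain PyTorch, purely illustrative; wrap whichever extraction call fails in your setup):

```python
import torch

# Purely illustrative: measure peak VRAM around the step that raises the OOM.
torch.cuda.empty_cache()
torch.cuda.reset_peak_memory_stats()

# ... run the mesh-extraction call that fails here ...

peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak allocated: {peak_gib:.2f} GiB")
print(torch.cuda.memory_summary(abbreviated=True))
```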
Update: I didn't find a useful solution for this, but it seems the problem is not stably reproducible; I didn't hit it again later. Possibly it is affected by the Python environment, the state of the Linux machine, unexpected code bugs, or other factors.