Closed ttsesm closed 2 years ago
This program needs a lot of memory. What will probably be sufficient to fit into 11 GiB is loading just the training images (lego/transforms_train.json) rather than all images/transforms in the folder.
> This program needs a lot of memory. What will probably be sufficient to fit into 11 GiB is loading just the training images (lego/transforms_train.json) rather than all images/transforms in the folder.

But I was able to load the same setup on another computer with even less memory, though a newer GPU (2080) :thinking:
Can I somehow take advantage of the multi-gpu setup?
The reason you could load it on the newer GPU is that it supports efficient half-precision arithmetic on TensorCores. Older GPUs need to run at full precision to be efficient, which unfortunately increases memory usage quite a bit.
As for multi-GPU support: there is none at this point in time, sorry.
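To make the half- vs. full-precision memory difference concrete, here is a minimal NumPy sketch (the buffer size is an arbitrary illustration, not taken from instant-ngp itself): the same number of elements stored as float16 occupies exactly half the bytes of float32.

```python
import numpy as np

n = 10_000_000  # hypothetical number of parameters/samples

fp32 = np.zeros(n, dtype=np.float32)  # 4 bytes per element
fp16 = np.zeros(n, dtype=np.float16)  # 2 bytes per element

print(fp32.nbytes // 2**20, "MiB vs", fp16.nbytes // 2**20, "MiB")
# float16 uses exactly half the memory of float32 for the same element count
```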
Ok, I see. I've managed to have it running only with the training images as you suggested. Thanks.
For extracting a mesh at a resolution of 1024x1024x1024, do you know approximately how much memory would be sufficient? In general, if I want the best results, which GPU would you suggest?
Would a Tesla V100/32 GB be a good start?
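As a rough back-of-envelope (my own estimate, not a figure from the maintainers): a dense 1024³ grid holding a single float32 value per voxel already takes 4 GiB, before counting any additional channels or working buffers.

```python
res = 1024
voxels = res ** 3          # 1024^3 = 2^30 voxels
bytes_fp32 = voxels * 4    # one float32 channel (e.g. density)

print(bytes_fp32 / 2**30, "GiB")  # → 4.0 GiB
```

Multiple channels (density plus features) and intermediate buffers scale this up accordingly, which is why a 32 GB card is a comfortable choice for high-resolution extraction.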
@Tom94 in a multi-GPU system, can I somehow set which GPU is used for running the application?
@ttsesm use `CUDA_VISIBLE_DEVICES=0 mycommand` to let mycommand see only GPU 0. Note that the GPU numbering sometimes doesn't match the numbering in nvidia-smi — usually it does, but not for me.
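If you launch from Python rather than a shell, the same restriction can be applied by setting the variable before any CUDA context is created (a minimal sketch; `"0"` here is just an example device index):

```python
import os

# Must be set before the first CUDA call (e.g. before importing a CUDA-using
# library or launching the binary), otherwise it has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Any process or library initialized after this point sees only GPU 0,
# which it will enumerate as device 0.
print(os.environ["CUDA_VISIBLE_DEVICES"])
```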
Closing due to the (now) much lower memory usage & the multi-GPU info in the FAQ.
For the record, I encountered the same issue when I re-plugged the laptop charger. The only solution for this is to restart the laptop. For instance, this error appears when I run instant-ngp on a scene, remove the charger, and plug it back in. It seems the OS stopped/killed some services used to save battery.
Hi guys,
Thanks for sharing your work. I've tried to run the code in a headless server session with multiple GPUs, but for some reason I am getting an out-of-memory error:
Any idea what could be wrong? I've compiled the project with the gui flag off
-DNGP_BUILD_WITH_GUI=OFF
and, as you can see from the screenshot above, all my GPUs are fully available. Thanks.