threestudio-project / threestudio

A unified framework for 3D content generation.

NEW ERROR #402

Open pavankay opened 8 months ago

pavankay commented 8 months ago

I just got a new GPU with 40 GB of VRAM, but got this error when I ran python launch.py --config configs/stable-zero123.yaml --train --gpu 0 data.image_path=./load/images/hamburger_rgba.png

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 310.00 MiB. GPU 0 has a total capacty of 39.56 GiB of which 158.81 MiB is free. Process 286123 has 39.40 GiB memory in use. Of the allocated memory 33.31 GiB is allocated by PyTorch, and 1.15 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Epoch 0: | | 20/? [00:42<00:00, 0.47it/s, train/loss=66.70]
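For reference, the allocator hint at the end of the message refers to the PYTORCH_CUDA_ALLOC_CONF environment variable. A minimal way to try it when launching (the 128 MiB split size is illustrative, not a confirmed fix for this issue):

PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python launch.py --config configs/stable-zero123.yaml --train --gpu 0 data.image_path=./load/images/hamburger_rgba.png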

I have 40 GB of VRAM; here is my nvidia-smi output:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-SXM4-40GB          Off | 00000000:00:04.0 Off |                    0 |
| N/A   32C    P0              46W / 400W |      5MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
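Note that the nvidia-smi output above appears to have been captured while nothing was running (5 MiB used, no processes), whereas the error reports 39.40 GiB in use by process 286123 during training. A sketch for watching live usage while the run is active, using standard nvidia-smi query options:

nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1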