The OOM error happens when I run the command `python demo/run.py configs/maicity/maicity_00.yaml`. Since I ran the code on an RTX 4090 GPU with 24 GB of memory, the OOM should not happen according to your README. What confuses me most is that there still seems to be memory available when the OOM occurs. The output is as follows:

```
RuntimeError: CUDA out of memory. Tried to allocate 5.54 GiB (GPU 0; 23.65 GiB total capacity; 1.91 GiB already allocated; 5.50 GiB free; 1.92 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

It is also worth noting that the OOM tends to happen around frame 246/699 of sequence 00 of the MaiCity dataset.
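For reference, here is how I would try the allocator hint from the error message itself; the 128 MiB split size is just a guess on my part, not a value from the repo or README:

```bash
# Cap the CUDA caching allocator's split size to reduce fragmentation,
# as suggested by the error message. Must be set before the script starts.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
python demo/run.py configs/maicity/maicity_00.yaml
```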