It worked fine on my computer the first time, but the second time I got this error:
RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 2.95 GiB total capacity; 1.86 GiB already allocated; 112.31 MiB free; 1.88 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
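The message suggests setting max_split_size_mb through PYTORCH_CUDA_ALLOC_CONF. If I understand it correctly, that would be something like the lines below (the 128 value is just a guess on my part), but I have not confirmed whether this actually helps in my case:

    import os
    # Must be set before the first CUDA allocation, so it would go at the very top of main.py.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
    import torch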
I tried the solutions below, but I am still facing the same issue.
Solution 1: add torch.cuda.empty_cache() at the beginning of main.py
Solution 2: reduce batch_size to 1 in /models/face_parsing/modules/functions.py
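For reference, this is roughly what the two attempts look like (a simplified sketch; the real functions.py is more involved):

    # Attempt 1: at the top of main.py, release unoccupied cached blocks held by the allocator.
    import torch
    torch.cuda.empty_cache()

    # Attempt 2: in /models/face_parsing/modules/functions.py, process one image at a time.
    batch_size = 1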
Does anyone have an idea?