I have been trying to train your awesome work on my custom dataset; however, I get the following error on a K80 regardless of batch size (I even tried with a batch size of 1):

RuntimeError: CUDA out of memory. Tried to allocate 124.00 MiB (GPU 0; 11.17 GiB total capacity; 10.65 GiB already allocated; 94.31 MiB free; 10.68 GiB reserved in total by PyTorch)

I would appreciate any help so that I can get the model training as soon as possible.
I've just submitted a change that should substantially reduce memory use during training and inference. Let me know if that works out for you -- if not, there may be some other factor at play.
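Independently of that change, two common workarounds for CUDA OOM on smaller GPUs like the K80 are gradient accumulation (to get a larger effective batch size without holding it in memory at once) and running inference under `torch.no_grad()` (so autograd does not retain activations). A minimal sketch of both, assuming a generic model, loader, and loss (the names here are placeholders, not part of this repo):

```python
import torch
import torch.nn as nn

def train_epoch(model, loader, optimizer, accum_steps=4):
    """One epoch with gradient accumulation: each micro-batch contributes
    1/accum_steps of the loss, and the optimizer steps every accum_steps
    micro-batches, simulating a larger batch in constant memory."""
    model.train()
    optimizer.zero_grad()
    for step, (x, y) in enumerate(loader):
        # Placeholder loss; substitute the project's actual criterion.
        loss = nn.functional.mse_loss(model(x), y) / accum_steps
        loss.backward()  # gradients accumulate across micro-batches
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()

@torch.no_grad()  # disables autograd, so no activation buffers are kept
def evaluate(model, x):
    model.eval()
    return model(x)
```

This doesn't reduce the memory needed for the model weights themselves, but it often resolves OOM errors that come from activations and optimizer state when the per-step batch is the bottleneck.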