Under the default training image size, batching 2 images on one GPU requires about 17 GB. You can try reducing the image size for training and testing. Note that performance will be affected by using a batch size of 2.
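For context, activation memory grows roughly with input resolution, so shrinking the image size is usually the quickest fix. Below is a minimal PyTorch sketch (not taken from this repo; the resolutions are illustrative) that measures peak GPU memory for a ResNet-50 forward/backward pass at different input sizes:

```python
# Minimal sketch, not from this repo: shows how peak GPU memory scales with
# input resolution for a ResNet-50 backbone at batch size 2.
import torch
import torchvision

def peak_memory_mb(image_size, batch_size=2):
    """Peak GPU memory (MB) for one forward+backward pass of ResNet-50."""
    torch.cuda.reset_peak_memory_stats()
    model = torchvision.models.resnet50().cuda()
    x = torch.randn(batch_size, 3, image_size, image_size, device="cuda")
    model(x).sum().backward()          # include backward pass: activations dominate
    peak = torch.cuda.max_memory_allocated() / 1024 ** 2
    del model, x
    torch.cuda.empty_cache()
    return peak

if __name__ == "__main__":
    for size in (1024, 800, 640, 512):  # illustrative sizes, not the repo defaults
        print(f"{size}x{size}, batch 2: ~{peak_memory_mb(size):.0f} MB peak")
```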
thanks, reducing image size works
Hello, I'm trying to train the model on a custom toy dataset using Google Colab. The free version of Colab usually offers a T4 GPU (16 GB memory). Using a batch size of 8 images produces a CUDA out of memory error, and even after changing it to only 2 images per batch I still run into that error after ~200 iterations. What can I do to reduce the memory usage? 2 images per batch will make the loss fluctuate too much (I guess).

Edit: this happens even when using resnet50.