Closed khanhnd0408 closed 3 years ago
I found out that whenever I feed an image of a different size into the network, cuDNN recalculates its algorithms and allocates some extra VRAM. If the dataset has many images of different sizes, this eventually causes CUDA out of memory. My solution: resize all images to the same size before feeding them to the network.
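Something like this worked for me (just a rough sketch, not the exact code in detect.py; the 640x640 target size and the helper names are my own placeholders):

```python
import torch
from PIL import Image
from torchvision import transforms

# Rough sketch: resize every image to one fixed size so cuDNN does not
# re-tune its algorithms for every new input shape. The 640x640 target
# is arbitrary; use whatever resolution your detector expects.
preprocess = transforms.Compose([
    transforms.Resize((640, 640)),
    transforms.ToTensor(),
])

def load_batch(paths, device="cuda"):
    # Every tensor now has the same H x W, so the network input shape never changes.
    imgs = [preprocess(Image.open(p).convert("RGB")) for p in paths]
    return torch.stack(imgs).to(device)
```

If detect.py enables `torch.backends.cudnn.benchmark`, turning it off might also reduce the per-shape algorithm search at the cost of some speed, but I have not verified that with this repo.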
Hi guys, I'm trying to generate new bounding boxes for the CelebA dataset because the available ones are not tight enough for my use case.
This project looks like exactly what I need, but I ran into issues with detect.py after some small changes: building a file list, running face detection on each image, and writing out the results.
After more than 20,000 images, the program crashed with: CUDA out of memory.
I googled around and found some suggestions, such as wrapping inference in `with torch.no_grad()` or freeing memory when it's no longer needed, but they still didn't work as I expected.
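For reference, this is roughly what I tried (a sketch only; `model` here stands in for the detector call inside detect.py, not the repo's actual API):

```python
import torch

def run_detection(model, image_tensor):
    # Run inference without building the autograd graph, so activations
    # are not kept around for a backward pass.
    with torch.no_grad():
        boxes = model(image_tensor)
    # Move the result to the CPU and drop references to the large GPU tensors.
    boxes = boxes.cpu()
    del image_tensor
    # Ask PyTorch's caching allocator to release unused blocks back to the driver.
    torch.cuda.empty_cache()
    return boxes
```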
Could you guys help me solve this problem?
Thanks in advance.