Closed anjugopinath closed 3 years ago
Follow the message's suggestion to reduce GPU memory usage.
In config.py under the 'main' folder, I tried values of 12 and 6 for num_thread (it was 40 before). I don't get the warning anymore, but I still get a "CUDA out of memory" error.
Could you give some suggestions please?
Reduce train_batch_size.
I reduced train_batch_size to 8 and then to 4, and now it works. Thank you so much!!
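For reference, a minimal sketch of the config.py change (num_thread and train_batch_size are the option names mentioned in this thread; the exact file layout around them is an assumption):

```python
# config.py (excerpt) -- sketch of the settings discussed in this thread
num_thread = 12        # DataLoader worker count; affects CPU RAM, not VRAM
train_batch_size = 4   # per-step batch size; the main knob for GPU memory
```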
I think the problem was that I tried to load all the input images at once (python train.py --gpu 0 --annot_subset all). With only one subset, it worked with 40 threads and a train batch size of 16 (config.py): python train.py --gpu 0 --annot_subset human_annot
Data loading consumes main memory (RAM), not GPU memory (VRAM). Anyway, good that you found the solution!
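To make the distinction concrete: activation memory on the GPU grows roughly linearly with batch size, which is why shrinking train_batch_size is the usual first fix for "CUDA out of memory". A rough back-of-envelope sketch (the per-sample figure is made up for illustration):

```python
# Rough illustration: VRAM used by activations scales ~linearly with
# batch size, while DataLoader workers consume CPU RAM instead.
def activation_mem_gb(batch_size, per_sample_gb=0.6):
    # per_sample_gb is a hypothetical number, not measured for this model
    return batch_size * per_sample_gb

for bs in (16, 8, 4):
    print(f"batch_size={bs}: ~{activation_mem_gb(bs):.1f} GB of activations")
```

Halving the batch size roughly halves that footprint, at the cost of more optimizer steps per epoch.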
Please see attached image