Maybe you can decrease your batch size.
Thanks, decreasing the batch size fixed the problem. Also, can you tell me what computer resources you used for the training?
```python
parser.add_argument('--model', type=str, default='VGG19', help='CNN architecture')
parser.add_argument('--dataset', type=str, default='FER2013', help='dataset')
parser.add_argument('--bs', default=128, type=int, help='batch size')
parser.add_argument('--lr', default=0.01, type=float, help='learning rate')
```
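For anyone reading later, here is a minimal sketch of how the `--bs` flag ends up controlling per-step GPU memory. The dataset below is random placeholder data, not the repo's actual FER2013 pipeline; the tensor shapes and class count are assumptions for illustration only.

```python
import argparse
import torch
from torch.utils.data import DataLoader, TensorDataset

parser = argparse.ArgumentParser()
parser.add_argument('--bs', default=128, type=int, help='batch size')
args = parser.parse_args()

# Placeholder data; the real project loads FER2013 here.
dataset = TensorDataset(torch.randn(1024, 3, 44, 44),
                        torch.randint(0, 7, (1024,)))

# Passing e.g. --bs 32 on the command line lowers per-step memory,
# since activation memory grows roughly linearly with batch size.
loader = DataLoader(dataset, batch_size=args.bs, shuffle=True)
```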
I mean the GPU resources needed to support training with a batch size of 128.
NVIDIA GTX 1080 Ti
It's kind of weird: I can use batch size 64 for training, but can't use 64 for testing. I'm pretty sure my GPU has enough memory.
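One common cause of training fitting at a given batch size while testing runs out of memory is evaluating without `torch.no_grad()`, so PyTorch keeps the whole activation graph around for a backward pass that never happens. A minimal sketch of a memory-safe evaluation loop; the `model`, `loader`, and `device` names here are generic placeholders, not this repo's code:

```python
import torch

def evaluate(model, loader, device):
    model.eval()                      # switch off dropout / batchnorm updates
    correct, total = 0, 0
    with torch.no_grad():             # don't build the autograd graph during testing
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            outputs = model(inputs)
            correct += (outputs.argmax(dim=1) == targets).sum().item()
            total += targets.size(0)
    return correct / total
```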
I met this problem too. Did you figure it out?
When I run the training process, it reports a RuntimeError. The GPU in my Linux system is a Tesla K40c. Why does it say "out of memory"? Can you help me?
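One way to narrow this down is to print how much memory PyTorch itself has allocated just before the failure. The `torch.cuda` calls below are standard API; where you place them in your training loop is up to you, this is only a sketch:

```python
import torch

device = torch.device('cuda')
print(torch.cuda.get_device_name(device))                  # e.g. "Tesla K40c"
print(torch.cuda.memory_allocated(device) / 2**20,         # MiB currently held by tensors
      torch.cuda.max_memory_allocated(device) / 2**20)     # MiB peak since process start
```

If the peak is close to the card's capacity, lowering `--bs` (as suggested above) is the usual fix.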