Closed LuoPeng-CV closed 5 years ago
Try using a smaller batch size for testing.
Thanks for your reply. I've tried setting the batch size to 128, 64, 32, and 24, but all of them give the same error: `RuntimeError: CUDA error: out of memory`. And if I set the batch size to 20 (or less), I get another problem, as follows:
Try setting the number of worker threads (`num_workers`) to 0 in the testing data loader.
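For reference, a minimal sketch of that suggestion. The dataset here is a hypothetical stand-in for the project's real test set; the point is only the `num_workers=0` argument, which loads batches in the main process instead of spawning worker subprocesses that each hold their own memory:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical dataset standing in for the real test set.
test_set = TensorDataset(torch.randn(100, 3))

# num_workers=0 disables worker subprocesses; all batches are
# loaded in the main process, which avoids per-worker memory overhead.
test_loader = DataLoader(test_set, batch_size=20, num_workers=0, shuffle=False)

n_batches = sum(1 for _ in test_loader)  # 100 samples / 20 per batch
```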
I've tried your suggestion, but I still hit the same problem: it fails every time it reaches 5500/16483. I will try other ways to solve this. Anyway, sincere thanks for your help!
Hello, I encounter the same problem. Have you solved it?
I remember solving it in the end, but I don't remember the details. It probably involved changing
`inputs = Variable(inputs, volatile=True).cuda()`
to
`with torch.no_grad(): inputs = Variable(inputs).cuda()`.
Hope it helps.
Can you explain it in more detail? `with torch.no_grad(): inputs = Variable(inputs).cuda()` does not work. Thanks.
Excuse me, I ran into a problem when testing best_net_E.pth in stage III. The first-stage evaluation works, but when it tries to test the second stage it always reports
`RuntimeError: CUDA error: out of memory`,
and trying several GPUs (via CUDA_VISIBLE_DEVICES) didn't help either. I would deeply appreciate any suggestions. Thank you!