Usernamezhx closed this issue 6 years ago
input_var = data.cuda()
Does this matter? I see that in your code the input data runs on the CPU, but when I change it to input_var = data
it shows me this:
I think I found the problem. I am not familiar with PyTorch. I only have one graphics card, so I replaced the line model = torch.nn.DataParallel(model, device_ids=args.gpus).cuda()
with model.cuda(). When I restored it, everything worked normally.
Hi, how do I test the online recognition? Is it real-time video streaming?
Sorry to bother you again :p I want to test the online recognition, but inference errors out like this:
**RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58**
It only works when I set the batch size to 2, but I find that eval succeeds with a batch size of 15. Below is the code. I tested it on UCF101. Thanks in advance.