Closed — GowthamGottimukkala closed this issue 3 years ago
Could you try `return features.cpu()`, if you're storing these features in a list somewhere?
Yes, I'm storing the returned features in a list. I tried `return features.cpu()`, but I still got the error: it ran the same number of times as before and then failed in the same way. Is there anything else I can try?
Hey, since these are just forward passes, I followed this SO answer and the error is gone. Either of these two snippets solved it:

```python
with torch.no_grad():
    features = i3d(inp)
return features.cpu()
```

or

```python
features = i3d(inp)
return features.detach().cpu()
```

The former took less time. Which do you think is the right way to do this?
Yes, `torch.no_grad()` is the right way to do this. I thought `volatile=True` would work, but it has been deprecated.
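For context, the difference between the two fixes can be sketched as below (a minimal CPU example with a stand-in linear layer, not the actual I3D network): under `torch.no_grad()` no autograd graph is built at all, while `.detach()` lets the graph be built during the forward pass and only drops it afterward, which is why the first version is faster and lighter on memory.

```python
import torch

model = torch.nn.Linear(8, 4)   # hypothetical stand-in for the i3d model
inp = torch.randn(2, 8)

# Fix 1: inside no_grad(), no graph or intermediate activations are kept at all
with torch.no_grad():
    out1 = model(inp)

# Fix 2: the graph IS built during the forward pass, then cut off by detach()
out2 = model(inp).detach()

print(out1.requires_grad)        # False
print(out2.requires_grad)        # False
print(model(inp).requires_grad)  # True: without either fix, the graph stays alive
```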
First of all, I want to thank you for this work.

For extracting features, I returned `x` after the avg_pool layer in the `forward_single` function, as mentioned in #12. My code has a for loop in which each iteration passes a tensor of shape `[1, 3, 16, 224, 224]` to the network. So basically I'm dividing my n-frame video into 16-frame clips and getting a `(2048,)`-dimensional output for each clip. But after a few iterations (i.e., after a few calls) I got the error below:

```
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 3.95 GiB total capacity; 2.83 GiB already allocated; 55.69 MiB free; 2.88 GiB reserved in total by PyTorch)
```

For reference, here is my code, where `myfunc()` is called multiple times from a loop. What do I need to do to solve it? Is there anything like freeing the GPU memory after each iteration, since it seems to be accumulating? Any help is appreciated.
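The accumulation described here can be sketched as follows (names like `myfunc` come from the post; the model is a hypothetical stand-in for I3D, and the example falls back to CPU when no GPU is present): each returned feature tensor normally carries its autograd graph, so appending the results to a list keeps every graph and its activations alive on the GPU. Wrapping the forward pass in `torch.no_grad()` and moving the result to the CPU before storing it avoids that.

```python
import torch

model = torch.nn.Linear(8, 4)   # hypothetical stand-in for the I3D network
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

def myfunc(inp):
    # Without no_grad(), each call would keep its autograd graph alive for as
    # long as the returned tensor lives, so a growing list pins GPU memory.
    with torch.no_grad():
        features = model(inp)
    return features.cpu()       # move the result off the GPU right away

features_list = [myfunc(torch.randn(1, 8, device=device)) for _ in range(5)]
print(len(features_list), features_list[0].shape)
```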