Closed: Jee-King closed this issue 7 years ago
If your GPU has less than 3 GB of memory, you may run into this kind of out-of-memory problem.
The example testing code evaluates one image at a single scale in each forward pass, so there is no room to reduce memory further. On my own GPU (Titan Black, 6 GB), it consumes about 2400 MiB of memory during testing.
We may provide a smaller model later for running on GPUs with limited memory.
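For illustration, here is a minimal sketch of what a single-scale, single-image forward pass looks like in Torch. The file name and the 256x256 input size are assumptions, not the repo's actual preprocessing:

```lua
-- Hypothetical single-image, single-scale inference: the batch dimension is
-- already 1, so GPU memory is dominated by the network itself and cannot be
-- reduced by shrinking the batch.
require 'nn'
require 'cunn'
require 'image'

local model = torch.load('model.t7')          -- pretrained model from this repo
model:cuda()
model:evaluate()                              -- inference mode (no dropout, fixed BN)

local img = image.load('images/example.jpg', 3, 'float')  -- assumed file name
local inp = image.scale(img, 256, 256)        -- assumed input resolution
local out = model:forward(inp:view(1, 3, 256, 256):cuda())
```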
That is quite expensive! I have another question: can this architecture run in real time? What is its speed?
The running time is about 1.3 seconds per image.
I successfully ran your test code on a GPU with more memory. I am wondering how you generated 'test.h5'. I only tested the 7 images you provide; is that related to test.h5? Excuse me, I am a newcomer.
As described in the README, you need to first download the MPII dataset and then replace the images folder with the images from the MPII dataset.
Since your original question is solved, I am closing this issue.
@bearpaw Hi, I only have one 12 GB GPU. How can I train this model on MPII? The training README says to use 4 GPUs, so I do not know how to work around this. Thanks :)
Hello, I downloaded your pretrained model (model.t7), but when I run 'qlua main.lua demo', the terminal gives me this error:

```
THCudaCheck FAIL file=/home/zhangjiqing/torch/extra/cutorch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
qlua: ...zhangjiqing/torch/install/share/lua/5.1/nn/Container.lua:67:
In 1 module of nn.Sequential:
In 1 module of nn.ConcatTable:
In 1 module of nn.Sequential:
/home/zhangjiqing/torch/install/share/lua/5.1/nn/THNN.lua:110: cuda runtime error (2) : out of memory at /home/zhangjiqing/torch/extra/cutorch/lib/THC/generic/THCStorage.cu:66
```
Is this just the GPU running out of memory? Can I set the batch size to a small value while still using your model.t7? Or could you suggest a smaller batch size for me? Thanks!
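For what it's worth, here is a minimal sketch of evaluating images one at a time so that only a single sample sits on the GPU. The file names and input size are assumptions and this is not the repo's actual demo code:

```lua
-- Hypothetical per-image loop: effective batch size of 1, with temporaries
-- released after each image so GPU memory stays at the single-sample level.
require 'nn'
require 'cunn'
require 'image'

local model = torch.load('model.t7')
model:cuda()
model:evaluate()

for _, f in ipairs({'images/a.jpg', 'images/b.jpg'}) do   -- assumed file list
  local inp = image.scale(image.load(f, 3, 'float'), 256, 256)
  local out = model:forward(inp:view(1, 3, 256, 256):cuda())
  -- ... post-process / save the predictions for this image here ...
  collectgarbage()   -- drop Lua references so cutorch can reuse the memory
end
```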