eldar / deepcut

Multi Person Pose Estimation

GPU out of memory error #3


mrcharlie90 commented 8 years ago

Hello everyone, I'm testing your code for a project and I'm getting the following errors when I run demo_multiperson.m:

In the console:

```
F0923 15:47:45.735599 4029 syncedmem.cpp:56] Check failed: error == cudaSuccess (2 vs. 0)  out of memory
*** Check failure stack trace: ***
Killed
```

In MATLAB:

```
Cleared 0 solvers and 0 stand-alone nets
save dir .../git/deepcut/data/mpii-multiperson/scoremaps/test
testing from net file /home/marco/Desktop/mauro-skeletal-tracker/git/deepcut/data/caffe-models/ResNet-101-mpii-multiperson.caffemodel
deepcut: test (MPII multiperson test) 2/1758
```

after which MATLAB crashes.

My video card is an NVidia GeForce GTX 760 Ti 2GB.

I'm new to deep learning and Caffe, but I've read on the web that it is sometimes possible to run tests on graphics cards with less memory, as in my case, by changing some parameters (such as the batch size). Is that possible in deepcut? Where would I change those parameters in your code? Thank you in advance!
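For context, in Caffe the test-time batch size is usually the first `dim` of the input blob shape in the network's `.prototxt`. A minimal sketch (the layer and dimensions below are illustrative, not deepcut's actual net definition):

```
# Hypothetical deploy prototxt fragment -- not deepcut's actual file.
# The first "dim" is the batch size; lowering it (or the input
# resolution) reduces GPU memory use at test time.
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
```

Note that for single-image demos the batch size is often already 1, in which case memory is dominated by the network itself (ResNet-101 is large), and reducing the input resolution or falling back to CPU mode are the remaining options.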

minhtriet commented 8 years ago

My NVidia card has 4 GB and Matlab still crashes when running this code. I had to run the code on a 12 GB Titan card.

mrcharlie90 commented 8 years ago

Thank you for your answer. I've also tried on a computer with set_cpu_mode enabled, and the demo works.
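For reference, switching Caffe's MATLAB interface to CPU mode looks roughly like this (a sketch assuming the standard matcaffe API; the file names are illustrative, and deepcut's own scripts may wrap this differently):

```matlab
% Sketch: force CPU mode in matcaffe before loading the net.
% caffe.set_mode_cpu() is the standard matcaffe call; the prototxt
% and caffemodel paths below are placeholders, not deepcut's files.
caffe.set_mode_cpu();
net = caffe.Net('deploy.prototxt', 'model.caffemodel', 'test');
```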

minhtriet commented 8 years ago

Building with CPU_ONLY gives me this error:

```
src/caffe/layers/softmax_loss_vec_layer.cpp:254:10: error: redefinition of 'void caffe::SoftmaxWithLossVecLayer<Dtype>::Forward_gpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&)'
 STUB_GPU(SoftmaxWithLossVecLayer);
```

I have opened a new issue here.