mitmul / deeppose

DeepPose implementation in Chainer
http://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/42237.pdf
GNU General Public License v2.0

cupy.cuda.runtime.CUDARuntimeError: cudaErrorMemoryAllocation: out of memory #17

Open karunaahuja opened 8 years ago

karunaahuja commented 8 years ago

I want to run the training on the GPU with ID 1, so I added the argument 1 to the function call [ model.to_gpu(1) ] in train.py. Although I have ~4 GB available on the GPU, when I run the network with a batch size of 32, I get the following error:

cupy.cuda.runtime.CUDARuntimeError: cudaErrorMemoryAllocation: out of memory

Here are the training parameters:

```
--model models/AlexNet_flic.py \
--gpu 0 \
--epoch 1000 \
--batchsize 32 \
--snapshot 10 \
--datadir data/FLIC-full \
--channel 3 \
--flip 1 \
--size 220 \
--crop_pad_inf 1.5 \
--crop_pad_sup 2.0 \
--shift 5 \
--lcn 1 \
--joint_num 7 \
```
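For clarity, here is a minimal sketch of how I understand device pinning is supposed to work in Chainer, i.e. the current device, the model parameters, and the input batches all pointing at the same GPU. This is not the repo's actual train.py code; the `L.Linear` stand-in model, the dummy batch, and `gpu_id` are placeholders for illustration only:

```python
import numpy as np
import chainer.links as L
from chainer import cuda

gpu_id = 1                               # assumption: the second GPU (ID 1)
model = L.Linear(10, 2)                  # stand-in model, for illustration only

cuda.get_device(gpu_id).use()            # make GPU 1 the current CUDA device
model.to_gpu(gpu_id)                     # copy the model parameters to GPU 1

# every input batch must be copied to the same device before the forward pass
x = cuda.to_gpu(np.zeros((32, 10), dtype=np.float32), device=gpu_id)
y = model(x)                             # forward pass runs on GPU 1
```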

Am I doing something wrong in the way I am changing the GPU ID, or is there some other problem?
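To double-check that the device I target really has the memory I expect before training starts, this is the kind of check I can run (again just an illustrative sketch, using CuPy's runtime API):

```python
import cupy

# assumption: the target device is GPU 1
with cupy.cuda.Device(1):
    # free / total memory on the current device, in bytes
    free, total = cupy.cuda.runtime.memGetInfo()
    print('GPU 1: %.2f GB free of %.2f GB' % (free / 1024**3, total / 1024**3))
```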