Closed kylemcdonald closed 9 years ago
That depends on the batchsize. If you run train.py with the sample parameters written in the README, the batchsize is 128, so the required GPU memory is over 2870MB. If you run train.py with batchsize=64, like:
python scripts/train.py \
--model models/AlexNet_flic.py \
--gpu 0 \
--epoch 1000 \
--batchsize 64 \
--prefix AlexNet_LCN_AdaGrad_lr-0.0005 \
--snapshot 10 \
--datadir data/FLIC-full \
--channel 3 \
--flip True \
--size 220 \
--crop_pad_inf 1.5 \
--crop_pad_sup 2.0 \
--shift 5 \
--lcn True \
--joint_num 7
the required GPU memory is about 1890MB. If it still causes a memory error, please decrease the batchsize further (e.g., --batchsize 32).
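Since the two figures above (batchsize 128 → ~2870MB, batchsize 64 → ~1890MB) suggest memory grows roughly linearly with batchsize, a quick back-of-the-envelope sketch can estimate whether a smaller batch will fit. This is only a rough linear fit to the numbers quoted in this thread, not an exact accounting of the training script's allocations:

```python
# Rough linear model of GPU memory vs. batchsize, fitted to the two
# figures quoted above: (128, 2870MB) and (64, 1890MB).

def fit_linear(b1, m1, b2, m2):
    """Return (fixed_mb, per_sample_mb) from two (batchsize, MB) points."""
    per_sample = (m1 - m2) / (b1 - b2)
    fixed = m1 - b1 * per_sample
    return fixed, per_sample

def estimate_mb(batchsize, fixed, per_sample):
    """Estimated GPU memory in MB for a given batchsize."""
    return fixed + batchsize * per_sample

fixed, per_sample = fit_linear(128, 2870, 64, 1890)
print(estimate_mb(32, fixed, per_sample))  # -> 1400.0
```

By this estimate, batchsize 32 needs around 1400MB, which is consistent with it fitting on a 2GB card while batchsize 64 does not.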
I should add the above notes to the README.
On my MacBook Pro with an NVIDIA 750M, a batch size of 64 was still too big, but 32 was small enough. Thanks a bunch for the clarification!
I tried running this on a basic card with 2GB of GPU RAM, but I get an out-of-memory error as the training script starts up. What is the minimum requirement to run this on the FLIC data?