All 8 GB of my system memory is used up when I train train_test_BCBN_C10plusWithBias.prototxt, even with a batch size of only 1.
GPU memory consumption is indeed reduced, but this version seems to consume host memory indefinitely: once physical RAM is exhausted it starts eating into swap, and the system becomes unresponsive.
May I ask why this could happen?
This is my train.sh:
```shell
cd /home/c/caffe
./build/tools/caffe train \
--solver="/home/cll/DN_CaffeScript-master/solver/prototxt" \
--gpu 0
```
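To confirm that it is host RAM (not GPU memory) growing without bound, a small sketch like the following can log the training process's resident memory over time. The `rss_mb` helper name and the `caffe train` process match are my own assumptions, not part of the original script:

```shell
#!/bin/sh
# rss_mb: print the resident set size (in MB) of a process given its PID.
rss_mb() {
    ps -o rss= -p "$1" | awk '{printf "%.1f\n", $1/1024}'
}

# Assumed usage: sample the training process every 10 seconds while it runs.
# while pgrep -f "caffe train" >/dev/null; do
#     rss_mb "$(pgrep -f 'caffe train' | head -n1)"
#     sleep 10
# done

# Demonstration on the current shell's own PID:
rss_mb $$
```

If the logged RSS climbs steadily across iterations, that points to a host-side memory leak rather than ordinary GPU allocation.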