
SSH: Single Stage Headless Face Detector

What's the exact training batch size with the default settings? #45

Closed: pyupcgithub closed this issue 5 years ago

pyupcgithub commented 5 years ago

When I read the paper and the code, I got confused about the batch size. In the default_config.yml file, the training BATCH_SIZE equals 128, and the default iter_size in solver_ssh.prototxt is 2. But in the paper, the 4 GPUs use a mini-batch of 4 images?

So, what is the actual mini-batch size?

pyupcgithub commented 5 years ago

@po0ya @mahyarnajibi

dechunwang commented 5 years ago

I believe the batch size is one. But it runs 4 processes for 4 GPUs, so each GPU has its own data loader and model, and each of them uses a batch size of one. They are using modified Faster R-CNN anchor_target, proposal_layer, imdb, and roidb components, which only support one image per batch.
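To spell out the arithmetic (the variable names below are my own illustration, not from the SSH code base), assuming one training process per GPU with one image per forward pass, this matches the paper's mini-batch of 4:

```cpp
#include <iostream>

// Illustrative sketch only; these names are not from the SSH code.
int main() {
  const int num_gpus = 4;     // one training process per GPU
  const int ims_per_gpu = 1;  // each data loader feeds one image per pass

  // 4 processes * 1 image = 4 images per iteration across the machine,
  // which is the "mini-batch of 4 images" the paper describes.
  std::cout << "images per iteration: " << num_gpus * ims_per_gpu << std::endl;
  return 0;
}
```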

pyupcgithub commented 5 years ago

@dechunwang What about iter_size=2 in the solver prototxt? Will it change the batch_size?

dechunwang commented 5 years ago

That's a different thing; it is a Caffe prototxt setting. If you look at the Caffe solver source code:

```cpp
Dtype loss = 0;
for (int i = 0; i < param_.iter_size(); ++i) {
  loss += net_->ForwardBackward();
}
loss /= param_.iter_size();
// ...
ApplyUpdate();
```

It means that after processing iter_size * batch_size images, one update is applied (SGD in this case). The batch_size is still one, but the weights are only updated once after two forward passes.
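If it helps, here is a toy demonstration (my own sketch, not SSH or Caffe code) of why accumulating over iter_size passes behaves like a larger batch: for a linear model with squared loss, averaging two single-sample gradients before one update gives exactly the same step as one update on a batch of two.

```cpp
#include <cstdio>

// Toy demonstration (not SSH/Caffe code): accumulating gradients over
// iter_size single-sample passes and applying one averaged update equals
// one update on a batch of iter_size samples. This is why iter_size scales
// the effective batch size without changing the per-pass batch size.
int main() {
  const float x[2] = {1.0f, 2.0f}, y[2] = {2.0f, 3.0f};
  const float lr = 0.1f;

  // (a) iter_size = 2: accumulate two single-sample gradients, one update.
  float w_a = 0.0f, grad = 0.0f;
  for (int i = 0; i < 2; ++i)
    grad += 2.0f * (w_a * x[i] - y[i]) * x[i];  // d/dw of (w*x - y)^2
  w_a -= lr * grad / 2.0f;                      // average, then one SGD step

  // (b) batch of 2: average the gradient over both samples, one update.
  float w_b = 0.0f, batch_grad = 0.0f;
  for (int i = 0; i < 2; ++i)
    batch_grad += 2.0f * (w_b * x[i] - y[i]) * x[i];
  w_b -= lr * batch_grad / 2.0f;

  printf("accumulated: %f  batched: %f\n", w_a, w_b);  // identical results
  return 0;
}
```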

pyupcgithub commented 5 years ago

Thanks for your reply, @dechunwang.