shamangary / FSA-Net

[CVPR19] FSA-Net: Learning Fine-Grained Structure Aggregation for Head Pose Estimation from a Single Image
Apache License 2.0

Question about the batch size - 16 / 8 #46

Open MingxiLi opened 4 years ago

MingxiLi commented 4 years ago

Hi, thanks for your great work.

I have a question about the batch sizes you used in the experiments: batch size 16 for protocol 1 and 8 for protocol 2. It seems that researchers in the area of head pose estimation prefer small batch sizes, but as far as I know, training tends to be more stable with a larger batch size.

Did you do any experiments on how batch size affects the final performance of the model?
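
For reference, a batch-size ablation could be set up roughly like the sketch below. This is not the FSA-Net training code; the tiny model and random data are hypothetical stand-ins, and only the loop over `batch_size` is the point. It assumes TensorFlow 2.x with Keras:

```python
import numpy as np
from tensorflow import keras

def build_model():
    # Tiny stand-in network; the real FSA-Net model is far more elaborate.
    return keras.Sequential([
        keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(3),  # yaw, pitch, roll
    ])

# Synthetic placeholder data instead of the protocol-1/2 training sets.
x = np.random.rand(512, 64, 64, 3).astype("float32")
y = np.random.uniform(-90, 90, size=(512, 3)).astype("float32")

for batch_size in (8, 16, 32, 64):
    keras.utils.set_random_seed(0)  # identical init for a fair comparison
    model = build_model()
    model.compile(optimizer="adam", loss="mae")
    hist = model.fit(x, y, batch_size=batch_size, epochs=5,
                     validation_split=0.2, verbose=0)
    print(f"batch_size={batch_size}: val MAE={hist.history['val_loss'][-1]:.2f}")
```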

shamangary commented 4 years ago

Hello, I don't have an ablation over the batch size, but we do observe that a small batch size works much better for head pose learning. Compared with the usual understanding of batch size in, for example, image classification, head pose is a concept shared across all training samples, whereas general classification involves high-level semantic categories. It is possible that the different natures of the two tasks cause these results.
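
One way to make the stability intuition from the question concrete is to measure how much the mini-batch gradient estimate fluctuates at different batch sizes. A purely illustrative TensorFlow sketch with a toy model and random data (nothing here is from the FSA-Net codebase):

```python
import tensorflow as tf

tf.random.set_seed(0)
model = tf.keras.Sequential([tf.keras.layers.Dense(3, input_shape=(10,))])
loss_fn = tf.keras.losses.MeanAbsoluteError()
x = tf.random.normal((1024, 10))
y = tf.random.normal((1024, 3))

def grad_vector(xb, yb):
    # Flatten the gradient of one mini-batch into a single vector.
    with tf.GradientTape() as tape:
        loss = loss_fn(yb, model(xb, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    return tf.concat([tf.reshape(g, [-1]) for g in grads], axis=0)

for bs in (8, 16, 64, 256):
    grads = tf.stack([grad_vector(x[i:i + bs], y[i:i + bs])
                      for i in range(0, 1024 - bs + 1, bs)])
    # Per-coordinate std across mini-batches, averaged: larger batches
    # give lower-variance (more "stable") gradient estimates.
    spread = tf.reduce_mean(tf.math.reduce_std(grads, axis=0))
    print(bs, float(spread))
```

Lower gradient variance at larger batch sizes is exactly the "more stable" effect; the open question in this thread is why the extra noise from small batches seems to help head pose specifically.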