To keep the number of parameters similar to the original models for fair comparison, the channel numbers of the backbones (CSRNet, MCNN, SANet, and BL) in our framework are set to 70%, 60%, 60%, and 60% of their original values, respectively. The kernel parameters are initialized from a Gaussian distribution with zero mean and a standard deviation of 1e-2.
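The channel reduction and Gaussian initialization described above can be sketched as follows. This is a minimal NumPy illustration: the scaling ratios and the standard deviation come from the text, while the function names and the example layer shape are hypothetical.

```python
import numpy as np

# Channel-reduction ratios used to keep parameter counts comparable
# (values from the text; mapping to layers is illustrative).
RATIOS = {"CSRNet": 0.70, "MCNN": 0.60, "SANet": 0.60, "BL": 0.60}

def reduced_channels(name, original_channels):
    """Scale a backbone's channel count by its ratio, rounding to an int."""
    return max(1, int(round(original_channels * RATIOS[name])))

def init_kernel(shape, std=1e-2, rng=None):
    """Draw a conv kernel from N(0, std^2), as described in the text."""
    rng = rng or np.random.default_rng(0)
    return rng.normal(loc=0.0, scale=std, size=shape)

# Hypothetical example: a 3x3 conv layer with 64 output channels in CSRNet
c = reduced_channels("CSRNet", 64)   # 70% of 64 -> 45
w = init_kernel((c, 3, 3, 3))        # (out_ch, in_ch, kH, kW)
```

In practice a framework such as PyTorch would apply the same initialization via its built-in normal initializer; the sketch only makes the mean, standard deviation, and channel scaling explicit.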
Our BL+IADM does not adopt the original VGG as the backbone, so the pre-trained VGG model is not used and BL+IADM is trained from scratch. Alternatively, a lightweight VGG could be pre-trained on ImageNet and used to initialize BL+IADM, which may lead to better performance.