MarcoForte / FBA_Matting

Official repository for the paper F, B, Alpha Matting
MIT License

about the input channels #11

Closed Liupengshuaige closed 4 years ago

Liupengshuaige commented 4 years ago

Thanks for your great work. It seems that you normalize the conv weights. Why, and how does it affect performance?

MarcoForte commented 4 years ago

Hi, glad you liked the paper. We normalize the conv weights in layers preceding group normalisation, following the advice in this paper: https://arxiv.org/abs/1903.10520 https://github.com/joe-siyuan-qiao/WeightStandardization

Note that I changed their implementation slightly to avoid NaNs during training: https://github.com/joe-siyuan-qiao/WeightStandardization/issues/1#issuecomment-528050344
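As a rough sketch of what this means in practice (a NumPy illustration of weight standardization as described in the linked paper, not the repo's actual code), the weight is standardized per output channel, and placing `eps` inside the square root keeps the gradient finite when a filter has zero variance, which is the NaN issue referenced above:

```python
import numpy as np

def standardize_weight(weight, eps=1e-5):
    """Standardize a conv weight of shape (out_ch, in_ch, kH, kW)
    so each output-channel filter has zero mean and unit variance.

    Using sqrt(var + eps) rather than std + eps avoids NaN gradients
    when a filter is constant (var == 0).
    """
    mean = weight.mean(axis=(1, 2, 3), keepdims=True)
    var = weight.var(axis=(1, 2, 3), keepdims=True)
    return (weight - mean) / np.sqrt(var + eps)
```

In a PyTorch model this would typically be done inside a custom `Conv2d.forward`, standardizing `self.weight` right before calling the convolution.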

The effect of this normalization is significant: it reduces the average number of clicks needed to reach 90% accuracy by around 0.25-0.5 clicks.