wangyujie413 opened 6 years ago
When I crop face thumbnails to 182x182 pixels, the system performs better than when cropping to 160x160 directly. Why is that? Where does the size 182 come from?

I guess the feature vectors turn out better because of the increased amount of information. 182 is the default parameter; you can change it as you wish. See: https://github.com/davidsandberg/facenet/blob/master/src/align/align_dataset_mtcnn.py#L147

Edit: The 182x182 images are later cropped down to 160x160 during training. This enables random cropping of the image, which helps prevent over-fitting.
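To illustrate the mechanism described above, here is a minimal sketch (not the facenet code itself) of random cropping: the aligned thumbnail is stored slightly larger than the training size, so each epoch can sample a different 160x160 patch from the 182x182 image. The function name and the NumPy-based approach are my own illustration.

```python
import numpy as np

def random_crop(image: np.ndarray, crop_size: int = 160) -> np.ndarray:
    """Randomly crop a square crop_size patch from an (H, W, C) image.

    Requires the image to be at least crop_size in both dimensions.
    """
    h, w = image.shape[:2]
    assert h >= crop_size and w >= crop_size, "image smaller than crop size"
    # Pick a random top-left corner so the patch fits inside the image.
    top = np.random.randint(0, h - crop_size + 1)
    left = np.random.randint(0, w - crop_size + 1)
    return image[top:top + crop_size, left:left + crop_size]

# Example: a 182x182 aligned face thumbnail cropped to the 160x160 training size.
thumb = np.zeros((182, 182, 3), dtype=np.uint8)
patch = random_crop(thumb, 160)
print(patch.shape)  # (160, 160, 3)
```

Because the crop offset changes on every draw, the network sees slightly shifted versions of the same face across epochs, which acts as data augmentation; cropping directly to 160x160 at alignment time removes that source of variation.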