Open tyrink opened 1 year ago
Hi, the results of your method seem good, but I wonder: did you choose a subset of the original VGGFace2 for training, and what is the size of your selected training set?

According to their paper: "To improve the quality of our training set, we remove images with size smaller than 250 × 250. We align and crop the images to a standard position with size 224 × 224." So yes, they chose a subset of VGGFace2 and use 224×224 for all images in the training set. You can find a download link for this data set in the readme of this repo.
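The size filter and crop described in the paper can be sketched as below. This is a minimal sketch, not the authors' code: the paper additionally aligns faces to a standard position before cropping, which is approximated here by a plain center crop.

```python
MIN_SIZE = 250   # paper: drop images smaller than 250 x 250
CROP_SIZE = 224  # paper: final training resolution

def keep_image(width, height, min_size=MIN_SIZE):
    """Return True if the image is large enough for the training set."""
    return width >= min_size and height >= min_size

def center_crop_box(width, height, crop=CROP_SIZE):
    """Return a (left, top, right, bottom) box for a center crop.

    NOTE: the paper aligns faces to a standard position before
    cropping; this plain center crop is only a stand-in for that
    alignment step.
    """
    left = (width - crop) // 2
    top = (height - crop) // 2
    return (left, top, left + crop, top + crop)
```

For example, a 250×250 image passes the filter and would be cropped with the box `(13, 13, 237, 237)`, while a 200×300 image would be discarded.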