neuralchen / SimSwap

An arbitrary face-swapping framework on images and videos with one single trained model!

dataset size for training #387

Open tyrink opened 1 year ago

tyrink commented 1 year ago

Hi, the results of your method seem good, but I wonder whether you chose a subset of the original VGGFace2 for training, and what is the size of your selected training set?

sjokic commented 1 year ago

According to their paper: "To improve the quality of our training set, we remove images with size smaller than 250 × 250. We align and crop the images to a standard position with size 224 × 224." So yes, they chose a subset of VGGFace2 and use 224×224 for all images in the training set. You can find a download link for this dataset in the readme of this repo.
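
A minimal sketch of that filtering/resizing step, assuming you want to reproduce it yourself (this is not the repo's own preprocessing script, and the directory paths are hypothetical; SimSwap's tooling additionally aligns the faces with a detector before cropping):

```python
import os
from PIL import Image

SRC_DIR = "vggface2/train"      # hypothetical path to raw VGGFace2 images
DST_DIR = "vggface2_224/train"  # hypothetical output directory

for root, _, files in os.walk(SRC_DIR):
    for name in files:
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        src_path = os.path.join(root, name)
        with Image.open(src_path) as img:
            w, h = img.size
            # Paper: remove images with size smaller than 250 x 250
            if w < 250 or h < 250:
                continue
            # Resize the (already face-aligned) crop to the 224 x 224 training size
            out = img.convert("RGB").resize((224, 224))
            dst_path = os.path.join(DST_DIR, os.path.relpath(src_path, SRC_DIR))
            os.makedirs(os.path.dirname(dst_path), exist_ok=True)
            out.save(dst_path)
```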