oiwio opened 6 years ago
I thought about XGAN and CycleGAN, but didn't come up with a good way to combine a consistency (cyclic) loss with the current model. Deepfakes' model is based on a denoising autoencoder, and introducing a consistency loss seems a little weird to me: feed warped faceA into model_A2B to get a fake faceB, then feed this fake faceB into model_B2A to get a fake faceA for the consistency loss (as in CycleGAN).
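The cycle described above can be sketched in a few lines. This is a hypothetical illustration only: `model_A2B` and `model_B2A` are toy stand-ins for the two generators (here, exact linear inverses, so the loss comes out near zero), not the actual networks.

```python
def model_A2B(face):
    # Placeholder generator A -> B (assumption: the real one is a network).
    return [2.0 * x + 1.0 for x in face]

def model_B2A(face):
    # Placeholder generator B -> A, the exact inverse of model_A2B.
    return [(x - 1.0) / 2.0 for x in face]

def cycle_consistency_loss(face_a):
    """L1 cycle loss: mean |faceA - B2A(A2B(faceA))| over pixels."""
    fake_b = model_A2B(face_a)           # warped faceA -> fake faceB
    reconstructed_a = model_B2A(fake_b)  # fake faceB -> reconstructed faceA
    return sum(abs(a, ) if False else abs(a - r)
               for a, r in zip(face_a, reconstructed_a)) / len(face_a)

face_a = [0.1, 0.5, 0.9]  # a tiny "flattened image" for illustration
print(cycle_consistency_loss(face_a))  # ~0.0: the toy generators invert exactly
```

With real generators the reconstruction is imperfect, and this loss term is what pushes A2B and B2A toward being mutual inverses.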
But CycleGAN itself is absolutely capable of face swapping, e.g., this well-known YouTube video tweeted by Ian Goodfellow.
Regarding training data, I have not tried any faces except the two shown in the README since, you know, collecting/cleaning data is a painful task. However, I have a feeling the target faces are better off with variety, i.e., different source videos, lighting conditions, and so on. My target face (Emi Takei, 武井咲) dataset contains 1k faces from Pinterest and another 4k faces from ~5 different videos.
Besides, it's promising to pre-train the model on a dataset like CelebA so it grasps the concept of a human face (I did not do this because of limited computing power).
Hello, after using more training data, I look more and more like Daniel Wu, except for the chin. So I want to know: can I modify the face_recognition API call to capture the whole face, including the chin, instead of only the middle part? Thanks for your help, bro.
Since the face_recognition API returns the x/y coordinates of detected faces, you can simply expand those values so that the bounding boxes cover a larger area.
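A minimal sketch of that expansion, assuming the `(top, right, bottom, left)` tuple order that `face_recognition.face_locations()` returns; the `margin` parameter and helper name are my own for illustration:

```python
def expand_box(box, img_h, img_w, margin=0.3):
    """Grow a (top, right, bottom, left) face box by `margin` of its
    height/width, clamped to the image bounds, so the crop also
    covers the chin and forehead."""
    top, right, bottom, left = box
    h, w = bottom - top, right - left
    top    = max(0, int(top - margin * h))
    bottom = min(img_h, int(bottom + margin * h))
    left   = max(0, int(left - margin * w))
    right  = min(img_w, int(right + margin * w))
    return top, right, bottom, left

# A 100x100 detection in a 640x480 frame grows by 30% on each side:
print(expand_box((100, 200, 200, 100), img_h=480, img_w=640))
# -> (70, 230, 230, 70)
```

Then crop with the expanded coordinates instead of the raw detection, e.g. `frame[top:bottom, left:right]`.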
for (δ½ιγ²γͺγ) which we want to changed the face, how much the amount of the data set? does its amount will affect the performance?
Hi oiwio, do you mean you increased your face data, trained again with Daniel Wu, and got more acceptable output?
Have you tried these two methods? "XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings" and "Improved Training of Wasserstein GANs". I tried to use your code, but it's hard to achieve the effect you showed in the README. I used a video of myself and a video of Daniel Wu; maybe I am too ugly for the network to transform??? It really hurts me.