Closed: JeyesHan closed this issue 4 years ago
Unfortunately, no. I was just testing the concept. First, I downscaled the network to use 64x64 images, and I trained for only 50 epochs (in the paper they did 200K). But conceptually it works, and I'll use this technique for face swap.
You can also refer to the original repository for the 256x256 input network. It will take much longer to train, but the results will be significantly better: https://github.com/taotaonice/FaceShifter
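For reference, downscaling the training images from 256x256 to 64x64 can be sketched as a simple block-averaging step. This is only an illustrative example (the actual repository has its own data pipeline, likely using interpolation-based resizing), assuming square images whose side is divisible by the scale factor:

```python
import numpy as np

def downscale(img: np.ndarray, factor: int = 4) -> np.ndarray:
    """Average-pool an HxWxC image by `factor` along both spatial axes.

    Hypothetical helper: shrinks a 256x256 input to 64x64 for quick
    concept testing, as described above.
    """
    h, w = img.shape[:2]
    assert h % factor == 0 and w % factor == 0
    # Group pixels into factor x factor blocks and average each block.
    return img.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

img = np.random.rand(256, 256, 3).astype(np.float32)
small = downscale(img)
print(small.shape)  # (64, 64, 3)
```

Shrinking the input this way cuts training cost drastically, which is why a 50-epoch run can already show whether the approach works conceptually.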
Closed due to inactivity
Thank you so much for your brilliant work. I am wondering about the current performance of your implementation. Have you gotten similar results?