shaoanlu / fewshot-face-translation-GAN

Generative adversarial networks integrating modules from FUNIT and SPADE for face-swapping.
793 stars · 132 forks

weird results #2

Open ak9250 opened 5 years ago

ak9250 commented 5 years ago

I tried these two images, but the result was like this (screenshot attached).

shaoanlu commented 5 years ago

The iris detector used in this project does not seem to perform well, likely because its pre-processing deviates from the official implementation. Disabling the draw_iris() function in utils.py might help.
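A minimal sketch of that workaround, assuming the simplest approach of replacing the function body with a pass-through (the real signature of draw_iris() in utils.py may differ from the one shown here):

```python
# Sketch of the workaround: make draw_iris() a no-op so the parsing map
# is passed through unchanged. The actual signature in utils.py may differ.
def draw_iris(parsing_map, landmarks=None):
    # The original version would paint iris regions onto the map; skip that here.
    return parsing_map
```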

Also, the face alignment differs slightly from my dev environment, where MTCNN was used instead of S3FD+FAN. As far as I can tell, this has a relatively small impact on the translation results.

ak9250 commented 5 years ago

@shaoanlu OK, can you please add MTCNN? I think it also reduces flickering/jittering, as shown in your other face-swap GAN repo. I will try again with draw_iris() disabled. Thanks!

ak9250 commented 5 years ago

Tried it again (screenshot attached). I changed the return statement so that it returns colored_parsing_map instead of parsing_map_with_iris, i.e. return aligned_face, colored_parsing_map, aligned_im, (x0, y0, x1, y1), landmarks.
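For reference, a sketch of that change; only the variable names come from this thread, and the wrapper function around the return statement is hypothetical:

```python
# Hypothetical wrapper illustrating the change: return colored_parsing_map
# instead of parsing_map_with_iris as the second element of the tuple.
def build_outputs(aligned_face, colored_parsing_map, parsing_map_with_iris,
                  aligned_im, bbox, landmarks):
    # Before: return aligned_face, parsing_map_with_iris, aligned_im, bbox, landmarks
    return aligned_face, colored_parsing_map, aligned_im, bbox, landmarks
```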

ak9250 commented 5 years ago

@shaoanlu Also noticed that occlusions cause jitter, and that different skin tones between the source and target don't always result in a good swap (example attached).

gstark0 commented 5 years ago

@ak9250 How did you change the resolution and manage to swap faces on the GIF you posted?

ak9250 commented 5 years ago

@gstark0 I used the Google Colab notebook; see PR https://github.com/shaoanlu/fewshot-face-translation-GAN/pull/4#issue-313354486, and here is the link to the notebook: https://github.com/ak9250/fewshot-face-translation-GAN/blob/master/colab_demo.ipynb. I have been getting some good results but am still trying to see how to improve them (GIF attached).

gstark0 commented 5 years ago

@ak9250 Thanks! That's what I was actually looking for. Great job!

gstark0 commented 5 years ago

@ak9250 BTW, any idea why #5 happens to me?

ak9250 commented 5 years ago

@gstark0 Is that happening in the Google Colab notebook? Are you able to get a result?

gstark0 commented 5 years ago

@ak9250 I'm using a locally installed Jupyter Notebook to run it, but I guess that shouldn't make a difference. I'm not able to get a result; in fact, I'm not even able to load images.

ak9250 commented 5 years ago

@gstark0 First, you'll need to enable GPUs for the notebook: navigate to Edit → Notebook Settings and select GPU from the Hardware Accelerator drop-down. I have not tested it in a local environment; I am using the GPU in Google Colab.
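A quick stdlib-only sanity check (works in Colab or a local Jupyter kernel) to confirm a CUDA GPU is actually visible before running the notebook; it simply probes for the nvidia-smi tool rather than using any framework API:

```python
# Check whether an NVIDIA GPU is visible by probing for nvidia-smi.
# Standard library only, so it runs in Colab or a local environment alike.
import shutil
import subprocess

def gpu_available():
    if shutil.which("nvidia-smi") is None:
        return False  # driver tooling not on PATH -> no usable GPU
    return subprocess.run(["nvidia-smi"],
                          capture_output=True).returncode == 0

print("GPU visible:", gpu_available())
```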

gstark0 commented 5 years ago

@ak9250 Unfortunately, I already have the accelerator set to GPU. It seems more like something with the image dimensions, but AFAIK it should run on any size, right?

gstark0 commented 5 years ago

@ak9250 Could you upload the sample source and the single raw image you used for the swap? I'm wondering if that will make any difference.

ak9250 commented 5 years ago

@gstark0 The images I used were of various sizes, taken from YouTube videos, GIFs, and Google Images. Can you share the images you are using so I can test what the problem is? Also, are the images JPG or PNG, and did you update the code accordingly?
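One quick way to catch a JPG-vs-PNG mismatch is to count which extensions are actually in the image folder, then make sure the loading code matches. This helper is an illustration (the folder layout is an assumption, not taken from the repo):

```python
# Count image extensions in a folder so the loading code can be matched
# to what is actually on disk (.jpg vs .png mix-ups are a common culprit).
from collections import Counter
from pathlib import Path

def image_extensions(folder):
    return dict(Counter(
        p.suffix.lower()
        for p in Path(folder).iterdir()
        if p.suffix.lower() in {".jpg", ".jpeg", ".png"}
    ))
```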

gstark0 commented 5 years ago

@ak9250 I've used these files --> https://filebin.net/l36dca5kb00hh93e <-- They're pretty much the first results from Google, just to test things and see if it will actually run; I don't care about accuracy for now. Nico as the source and Trump as the target.

ak9250 commented 5 years ago

@gstark0 Although the result isn't good, I was able to run a test with Nicolas Cage and Donald Trump. Maybe try another set of images? (output attached)

gstark0 commented 5 years ago

Already tried, still the same ;/

ak9250 commented 5 years ago

@gstark0 Are you using youtube-dl to get the video and ffmpeg to split it into frames, or some other method? Where does your test image come from? Also, is it still the same error?
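For anyone following along, that pipeline is: download the clip with youtube-dl, then dump frames with ffmpeg. A small helper that just builds the ffmpeg command without running it (paths and fps here are placeholder assumptions, not values from the repo):

```python
# Build the ffmpeg command used to split a downloaded clip into numbered
# PNG frames. Paths and fps are placeholders, not values from the repo.
def ffmpeg_split_cmd(video_path, out_dir, fps=25):
    return ["ffmpeg", "-i", video_path,
            "-vf", f"fps={fps}",
            f"{out_dir}/frame_%05d.png"]

cmd = ffmpeg_split_cmd("input.mp4", "frames")
```

Pass the result to subprocess.run once the input file exists and the output directory has been created.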

gstark0 commented 5 years ago

@ak9250 No, I'm not, for now. I just wanted to run it first and make sure everything works correctly. The images come from the internet, found on Google. The funny thing is, I checked the exact same images on Google Colab and it works fine. I guess it may be a different TensorFlow version or some other library; I'll check tomorrow.

shaoanlu commented 5 years ago

The unnatural contrast and highlights in the output faces are intrinsic characteristics of the released model; these artifacts can also be observed in the README figures. I did not spend much time on the training process or on tuning the objective functions, so this is the status quo of the current model's performance.

ak9250 commented 5 years ago

@shaoanlu Is there any way the model can be improved? I am looking into training the model myself if the training code is released, and would release the trained model. Also, is the training process similar to FUNIT? They swap pets instead of faces with good results, but their model is trained for two weeks on 8 V100 GPUs with a dataset of about 100k animals.

gstark0 commented 5 years ago

@shaoanlu So when can we expect the training code to be released? We’d like to experiment more and improve the model :)

shaoanlu commented 5 years ago

I might not have time to update the code until mid or late Oct. 😔

ak9250 commented 5 years ago

@shaoanlu ok, any way I can help?

decajcd commented 5 years ago

> @shaoanlu Is there any way the model can be improved? I am looking into training the model myself if the training code is released, and would release the trained model. Also, is the training process similar to FUNIT? They swap pets instead of faces with good results, but their model is trained for two weeks on 8 V100 GPUs with a dataset of about 100k animals.

Have you got better results?

ak9250 commented 5 years ago

> @shaoanlu Is there any way the model can be improved? I am looking into training the model myself if the training code is released, and would release the trained model. Also, is the training process similar to FUNIT? They swap pets instead of faces with good results, but their model is trained for two weeks on 8 V100 GPUs with a dataset of about 100k animals.
>
> Have you got better results?

I got some decent results (output attached).

decajcd commented 5 years ago

> @shaoanlu Is there any way the model can be improved? I am looking into training the model myself if the training code is released, and would release the trained model. Also, is the training process similar to FUNIT? They swap pets instead of faces with good results, but their model is trained for two weeks on 8 V100 GPUs with a dataset of about 100k animals.
>
> Have you got better results?
>
> I got some decent results (output attached).

It looks good except for the eyes. What changes did you make in the code?

shaoanlu commented 5 years ago

The author of FUNIT actually showed results on face identity translation in this video.

I've also trained a FUNIT model on the VGGFace2 dataset using the official implementation. The figure below shows the result after ~90k iterations of training; the 1st and 4th rows are real images.

(result figure attached)

ak9250 commented 5 years ago

> @shaoanlu Is there any way the model can be improved? I am looking into training the model myself if the training code is released, and would release the trained model. Also, is the training process similar to FUNIT? They swap pets instead of faces with good results, but their model is trained for two weeks on 8 V100 GPUs with a dataset of about 100k animals.
>
> Have you got better results?
>
> I got some decent results (output attached).
>
> It looks good except for the eyes. What changes did you make in the code?

I have not made any changes other than https://github.com/shaoanlu/fewshot-face-translation-GAN/pull/4#issue-313354486

ak9250 commented 5 years ago

@shaoanlu Great, are you planning to release the training code for this repo or update the pretrained models?

gstark0 commented 4 years ago

@shaoanlu When will the code be released?