ESanchezLozano / GANnotation

GANnotation (PyTorch): Landmark-guided face to face synthesis using GANs (And a triple consistency loss!)

reproduce #3

Closed voa18105 closed 5 years ago

voa18105 commented 5 years ago

I've launched the code with your weights and test data, and it produced the attached video. I'm not sure it's a face... Could you please comment on this?

test_1.zip

ESanchezLozano commented 5 years ago

The whole demo is built to return a video for all the points contained in test_1.txt. If you dig a bit into the demo.py code, you will find that the specific instruction is

images, cropped_pts = myGAN.reenactment(image,points)

You can define points manually as a matrix of size 66 x 2 x num_of_images, where num_of_images can even be 1. The reenactment function will then return the corresponding images, which you can display however is most convenient.
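For example, defining points manually could look roughly like this (a sketch, not the exact demo code; the landmark file name and its row layout are assumptions, and myGAN and image are assumed to be set up as in demo.py):

```python
import numpy as np

# Rough sketch (not the exact demo code): assumes a landmark file with one row
# of 132 values per frame, ordered x1, y1, ..., x66, y66; the file name is
# made up for illustration.
pts = np.loadtxt('my_points.txt')
pts = np.atleast_2d(pts)                             # handle a single-frame file
points = pts.reshape(-1, 66, 2).transpose(1, 2, 0)   # -> 66 x 2 x num_of_images

# num_of_images can even be 1, e.g. keep only the first target frame
points = points[:, :, :1]

# myGAN and image set up as in demo.py
images, cropped_pts = myGAN.reenactment(image, points)
```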

Hope this helps

voa18105 commented 5 years ago

@ESanchezLozano Thank you for the prompt reply. I understand this; I checked the code today... but did you see the video I uploaded? It is your image and your points that produced this result for me, with no changes on my side. I was expecting something more facial :-)

voa18105 commented 5 years ago

@ESanchezLozano By the way, I've tried your triple consistency loss with StarGAN and it works noticeably better than the original StarGAN. That is why it seemed strange to me not to get any face with GANnotation.

ESanchezLozano commented 5 years ago

I will have a look at it. Now that I have seen it, it is indeed really weird and completely unexpected. May I ask whether any error occurred that might have prevented the network from being initialised properly? Even when black images are sent to the network with different target locations, it yields different results, so the fact that your output is static suggests that the points may not be loading correctly. Could you please check that this is not the case? Thanks!
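For instance, a quick check along these lines (a rough sketch; I'm assuming the points come straight from test_1.txt) should show whether the landmarks are read correctly and actually vary across frames:

```python
import numpy as np

# Quick sanity check: the coordinates should vary from frame to frame and
# stay within the image bounds.
pts = np.loadtxt('test_1.txt')
print('shape:', pts.shape)
print('min / max:', pts.min(), pts.max())
if pts.ndim > 1:
    print('mean variation across frames:', pts.std(axis=0).mean())
```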

ESanchezLozano commented 5 years ago

> @ESanchezLozano By the way, I've tried your triple consistency loss with StarGAN and it works noticeably better than the original StarGAN. That is why it seemed strange to me not to get any face with GANnotation.

Many thanks, it is good to hear this!

voa18105 commented 5 years ago

> I will have a look at it. Now that I have seen it, it is indeed really weird and completely unexpected. May I ask whether any error occurred that might have prevented the network from being initialised properly? Even when black images are sent to the network with different target locations, it yields different results, so the fact that your output is static suggests that the points may not be loading correctly. Could you please check that this is not the case? Thanks!

I will check it out

voa18105 commented 5 years ago

@ESanchezLozano I found the reason, and it was my fault: I was using Python 2.7. Now it works! Thanks anyway for your attention.