oawiles / X2Face

Pytorch code for ECCV 2018 paper
MIT License

Weird results except for examples #17

Closed. csh589 closed this issue 5 years ago.

csh589 commented 5 years ago

Thank you for your great work! Following the guidance, I correctly installed pytorch and the other dependencies. Then I ran the notebook and got a reasonable result for the examples, as shown originally.

However, when I tried to use other images as input, I got a total failure, as in the attached image (Unknown1).

I also suspected that wrong cropping was causing the weird results, so I did some additional experiments. First, referring to issue #4, I corrected the cropping size and tried again, but got similar results. Second, I applied the same cropping method to the example images, and the results for the cropped examples looked good. In short, I am completely confused by these results. Do you have any suggestions?

ghost commented 5 years ago

@csh589 were you able to resolve this and what method did you use?

oawiles commented 5 years ago

Sorry I didn't respond earlier. @csh589, did you check that simply running the notebook unchanged reproduces the results that are already shown in it? If not, you need to check your pytorch version; otherwise I'm not sure what the cause would be.

melih-unsal commented 5 years ago

Hi, when I give the same input you gave, it produces a very bad result. When I give my own input, the output is even worse. Do you know what the problem is?

(attached image: new_image)

csh589 commented 5 years ago

@melih1996 That looks like the same result as #15; maybe you need to check your pytorch version.

melih-unsal commented 5 years ago

@csh589 Thank you, it works for the provided input. However, is a weird output like the one below normal? (Screenshot from 2019-07-18 15-07-44)

(Screenshot from 2019-07-18 15-07-32) I also get very slow runs since downgrading my pytorch version. Previously I could run the notebook in 4-5 seconds, but after downgrading it takes around 10-15 minutes.

oawiles commented 5 years ago

I am not sure why it is slower -- it wasn't that slow for me. Maybe your pytorch build was not compiled with CUDA support? As for the results: you need to make sure the faces are cropped similarly to the examples given. The model won't work properly if the crops are too different.
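A minimal sanity check along these lines (a generic pytorch snippet, not part of the X2Face code) would be:

```python
import torch

# Installed pytorch version (the demo notebooks were written against 0.4.x).
print(torch.__version__)

# If this prints False, pytorch is running on CPU only,
# which would explain minute-long notebook runs instead of seconds.
print(torch.cuda.is_available())

# CUDA version pytorch was built against (None on CPU-only builds).
print(torch.version.cuda)
```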

vade commented 5 years ago

I've not seen any documentation of the code fixes for Python 3, but I've been able to get what appear to be better results on Ubuntu with CUDA on an NVIDIA 1070 Ti.

Ubuntu 18.04 LTS, NVIDIA-SMI 430.14, Driver Version 430.14, CUDA Version 10.2, Python 3.7.3, Pytorch 0.4.1

I installed pytorch via Conda:

conda install pytorch=0.4.1 torchvision cuda92 -c pytorch

I then checked out a branch which has some preliminary python 3 fixes (there is an unmerged PR for these fixes)

See: https://github.com/oawiles/X2Face/pull/16

The last remaining step is to update the alignment for the upscale call, which requires one to update a few calls to nn.Upsample()

Update NoSkipNet_X2Face.py and NoSkipNet_X2Face_pose.py

And change calls to nn.Upsample to read:

upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
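For illustration, a hypothetical before/after of such a call (the surrounding code in NoSkipNet_X2Face.py may differ; only the changed line is from this thread) might look like:

```python
import torch.nn as nn

# Before: in pytorch < 0.4.0, bilinear upsampling behaved as if align_corners=True;
# from 0.4.0 the default flipped to False, which shifts the sampling grid and can
# misalign the warped output.
# upsample = nn.Upsample(scale_factor=2, mode='bilinear')

# After: make the corner alignment explicit, as suggested above.
upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
```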

I now have output like:

(attached output image: After)

I am curious whether the authors consider this an appropriate fix. Thank you for sharing this code!

ghost commented 5 years ago

@oawiles what are you using to crop the faces?

akoepke commented 5 years ago

> [quote of @vade's comment above]

You can now find code in the branch 'py37_pytorch_0.4.1' to use the demo notebooks with python 3.7 and pytorch 0.4.1. This allows you to reproduce the results without the artefacts that you seem to be getting in the example you posted.

ghost commented 5 years ago

@akoepke what is being used to crop the faces? I just tried the new branch with torch==0.4.1 and got this result; I also noticed that the mouth interior is not generated. (Screen Shot 2019-09-03 at 3 49 52 PM)

akoepke commented 5 years ago

See here. We wouldn't expect the model to be able to generate the mouth interior if the source image does not contain it.

ghost commented 5 years ago

@akoepke ok, I cropped the face to 256x256 using https://github.com/leblancfg/autocrop (autocrop -i ./sourceframe/ -w 256 -H 256) and got this result. (crop256result)
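For reference, a minimal sketch of one way to produce loose, centred square face crops (using OpenCV's Haar cascade; this is an assumed, generic preprocessing step, not the cropping the authors used, and the margin value is a guess):

```python
import cv2

def crop_face(image_path, out_path, size=256, margin=0.6):
    """Detect the largest face and save a loose square crop around it.

    margin controls how much context is kept around the detection; the
    right amount to match the X2Face example crops is an assumption here.
    """
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError('no face found in %s' % image_path)

    # Take the largest detection and expand it by the margin.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    cx, cy = x + w // 2, y + h // 2
    half = int(max(w, h) * (1 + margin) / 2)
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    x1, y1 = min(cx + half, img.shape[1]), min(cy + half, img.shape[0])

    crop = cv2.resize(img[y0:y1, x0:x1], (size, size))
    cv2.imwrite(out_path, crop)
```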