RenYurui / Global-Flow-Local-Attention

The source code for the paper "Deep Image Spatial Transformation for Person Image Generation"
https://renyurui.github.io/GFLA-web

Pre-trained model doesn't perform well on pose-guided image generation #81

Open Aurora-Xuan opened 3 years ago

Aurora-Xuan commented 3 years ago

The picture is generated successfully, but the texture is poor. What's weird is that 6 months ago I tested the pre-trained model and the generated pictures were OK. Recently I tried it again, but the results are not good. I only ran "setup.bash" again; I haven't changed anything else. I also tried deleting the cuda_extension files and rebuilding them, but nothing changed. Any idea?

before.jpg

after.jpg

hanchaoyuan commented 3 years ago

Hello, I found that Script.PerceptualSimilarity.models does not exist in the repo. Have you run into this problem?

Aurora-Xuan commented 3 years ago

Hello, I found that Script.PerceptualSimilarity.models does not exist in the repo. Have you run into this problem?

I haven't tested the metrics yet, sorry.

shilongshen commented 3 years ago

Hello, I found that Script.PerceptualSimilarity.models does not exist in the repo. Have you run into this problem?

Hi! Did you solve this problem?

hanchaoyuan commented 3 years ago

Hello, I found that Script.PerceptualSimilarity.models does not exist in the repo. Have you run into this problem?

Hi! Did you solve this problem?

No, the code still doesn't work.

KHao123 commented 3 years ago

The URL of the pretrained model seems to be broken. Could you please provide a URL to download the model? Thank you!

RenYurui commented 3 years ago

The URL of the pretrained model seems to be broken. Could you please provide a URL to download the model? Thank you!

Hi! The Google Drive URL is available. Can you use these links?

RenYurui commented 3 years ago

Hello, I found that Script.PerceptualSimilarity.models does not exist in the repo. Have you run into this problem?

Hi! Did you solve this problem?

No, the code still doesn't work.

I will check and upload the file tomorrow.

RenYurui commented 3 years ago

The picture is generated successfully, but the texture is poor. What's weird is that 6 months ago I tested the pre-trained model and the generated pictures were OK. Recently I tried it again, but the results are not good. I only ran "setup.bash" again; I haven't changed anything else. I also tried deleting the cuda_extension files and rebuilding them, but nothing changed. Any idea?

before.jpg

after.jpg

Have you updated your CUDA or NVIDIA driver? Can you check the output of the local attention? Are they all zeros?
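
One way to check is to print simple statistics for the tensors inside the local-attention forward pass and see whether the CUDA extension is silently returning zeros. The snippet below is only a minimal sketch; the variable names (attn_param, block_source, block_target, result) are the ones mentioned in base_function.py, and where exactly to call the helper is up to you.

```python
# Minimal sketch: summarize tensors produced by the local-attention module
# to check whether the compiled CUDA extension is silently returning zeros.
import torch

def report(name, t):
    # A correctly built extension should give non-zero counts here.
    print(f"{name}: shape={tuple(t.shape)}, "
          f"nonzero={(t != 0).sum().item()}, "
          f"abs max={t.abs().max().item():.4e}")

# Example usage inside the local-attention forward pass in base_function.py:
# report("attn_param", attn_param)
# report("block_source", block_source)
# report("result", result)
```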

RenYurui commented 3 years ago

Hello, I found that Script.PerceptualSimilarity.models does not exist in the repo. Have you run into this problem?

Hi! Did you solve this problem?

No, the code still doesn't work.

I will check and upload the file tomorrow.

Hi, the official LPIPS repository has been updated, so the evaluation script no longer works.
Please refer to their repository for the evaluation.

Aurora-Xuan commented 3 years ago

The picture is generated successfully, but the texture is poor. What's weird is that 6 months ago I tested the pre-trained model and the generated pictures were OK. Recently I tried it again, but the results are not good. I only ran "setup.bash" again; I haven't changed anything else. I also tried deleting the cuda_extension files and rebuilding them, but nothing changed. Any idea? before.jpg after.jpg

Have you updated your CUDA or NVIDIA driver? Can you check the output of the local attention? Are they all zeros?

Hi, thanks for replying!

  1. We deleted the NVIDIA driver by accident 2 months ago, but we reinstalled the same version.
  2. We checked the local attention (attn_param in base_function.py); attn_param, block_source, block_target, and result are all zeros. Is that OK?

RenYurui commented 3 years ago

The picture is generated successfully, but the texture is poor. What's weird is that 6 months ago I tested the pre-trained model and the generated pictures were OK. Recently I tried it again, but the results are not good. I only ran "setup.bash" again; I haven't changed anything else. I also tried deleting the cuda_extension files and rebuilding them, but nothing changed. Any idea? before.jpg after.jpg

Have you updated your CUDA or NVIDIA driver? Can you check the output of the local attention? Are they all zeros?

Hi, thanks for replying!

  1. We deleted the NVIDIA driver by accident 2 months ago, but we reinstalled the same version.
  2. We checked the local attention (attn_param in base_function.py); attn_param, block_source, block_target, and result are all zeros. Is that OK?

That is not correct. Try removing all of the build artifacts with "rm -rf build dist *info" and reinstalling.
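
After reinstalling, a quick sanity check is to confirm that PyTorch still sees the GPU and that the compiled extension imports at all. This is only a rough sketch; the module name block_extractor below is a placeholder, so substitute whatever packages setup.bash actually builds in your environment.

```python
# Rough sanity check after "rm -rf build dist *info" and reinstalling.
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

try:
    # NOTE: "block_extractor" is a placeholder name -- replace it with the
    # package(s) that setup.bash installs on your machine.
    import block_extractor
    print("CUDA extension imported OK")
except ImportError as exc:
    print("CUDA extension missing or broken:", exc)
```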

Aurora-Xuan commented 3 years ago

The picture is generated successfully, but the texture is poor. What's weird is that 6 months ago I tested the pre-trained model and the generated pictures were OK. Recently I tried it again, but the results are not good. I only ran "setup.bash" again; I haven't changed anything else. I also tried deleting the cuda_extension files and rebuilding them, but nothing changed. Any idea? before.jpg after.jpg

Have you updated your CUDA or NVIDIA driver? Can you check the output of the local attention? Are they all zeros?

Hi, thanks for replying!

  1. We deleted the NVIDIA driver by accident 2 months ago, but we reinstalled the same version.
  2. We checked the local attention (attn_param in base_function.py); attn_param, block_source, block_target, and result are all zeros. Is that OK?

That is not correct. Try removing all of the build artifacts with "rm -rf build dist *info" and reinstalling.

Hey, it works! Thank you so much. By the way, I wonder why the input was changed from 176x256 to 256x256. Is there a specific reason related to training, or something else?

imbinwang commented 3 years ago

@Aurora-Xuan Do you know the reason for changing the input size? When I traced the code in base_dataset.py, I found that the input image is resized from 750x1101 to 256x256. Because the aspect ratio is not preserved, this produces a 'fat' input image for the network. The generated image also looks 'fatter' than the ground truth, as the following figure shows. f1
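
For what it's worth, one way to avoid the distortion is to pad the image to a square before resizing, instead of stretching it directly to 256x256. The sketch below is only an illustration and is not the logic used in base_dataset.py; note that the pose keypoints would have to be scaled and shifted with exactly the same parameters.

```python
# Illustration only (not the repo's base_dataset.py behaviour): resize to
# 256x256 while preserving the aspect ratio by padding to a square first.
from PIL import Image

def resize_keep_ratio(img, size=256, fill=(255, 255, 255)):
    w, h = img.size
    scale = size / float(max(w, h))
    new_w, new_h = round(w * scale), round(h * scale)
    resized = img.resize((new_w, new_h), Image.BICUBIC)
    canvas = Image.new("RGB", (size, size), fill)
    # Center the resized image on the square canvas.
    canvas.paste(resized, ((size - new_w) // 2, (size - new_h) // 2))
    return canvas

# img = Image.open("source.jpg")        # e.g. a 750x1101 DeepFashion image
# model_input = resize_keep_ratio(img)  # 256x256 without the "fat" look
```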