WisconsinAIVision / few-shot-gan-adaptation

[CVPR '21] Official repository for Few-shot Image Generation via Cross-domain Correspondence
https://utkarshojha.github.io/few-shot-gan-adaptation/

Questions about the paper. #18

Open Hsintien-Ng opened 3 years ago

Hsintien-Ng commented 3 years ago

This work is interesting, and the results reported in the paper surprised me! I have the following questions about this work, and I would appreciate it if you could clarify my confusion.

  1. The qualitative results suggest that this work achieves something like image-to-image translation, yet the proposed GAN architecture maps random noise to images. I am not sure how the image-to-image translation effect is achieved. My guess is that the proposed anchor space restricts sampling in a way that preserves the identity correspondence between the source and target domains. Is that right? Could you explain it in detail?
  2. I am not sure about the motivation for using two discriminators. Why is D_img used for samples drawn from the anchor space, while D_patch is used for samples from the entire latent space? (A small sketch of my current understanding follows this list.)
  3. Only qualitative results are reported for the ablation study. Quantitative results would be more convincing, since it is difficult to judge the contribution of each component from Fig. 5 alone.
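To make question 2 concrete, here is a minimal sketch of how I currently read the two sampling regimes (this is only my interpretation, not your code; all names such as `sample_anchor_space` and the sigma value are my own assumptions):

```python
# Minimal sketch of my reading of the two sampling regimes.
# Not the authors' implementation; names and hyperparameters are hypothetical.
import torch

latent_dim = 512
num_anchors = 10  # a few fixed latent points define the "anchor space"

# Fixed anchor latents, sampled once and reused for the whole adaptation run.
anchor_z = torch.randn(num_anchors, latent_dim)

def sample_anchor_space(batch_size, sigma=0.05):
    """Sample near the fixed anchors: pick an anchor, add a small perturbation."""
    idx = torch.randint(0, num_anchors, (batch_size,))
    return anchor_z[idx] + sigma * torch.randn(batch_size, latent_dim)

def sample_entire_space(batch_size):
    """Sample from the full Gaussian prior, as in ordinary GAN training."""
    return torch.randn(batch_size, latent_dim)

# My understanding of the discriminator assignment during adaptation:
#   z ~ anchor space  -> G(z) -> D_img   (full-image real/fake decision)
#   z ~ entire space  -> G(z) -> D_patch (patch-level real/fake decisions)
z_for_d_img   = sample_anchor_space(8)
z_for_d_patch = sample_entire_space(8)
```

If this reading is wrong, a short correction would already answer most of my question.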

Best regards!