Ha0Tang / AttentionGAN

AttentionGAN for Unpaired Image-to-Image Translation & Multi-Domain Image-to-Image Translation

Confusion about Attention Mask, Foreground Attention Mask, and Background Attention Mask #12

Closed chenwwayne closed 4 years ago

chenwwayne commented 4 years ago

Hi HaoTang,

Thanks for your great work! I have a question about your paper.

In "AttentionGAN: Unpaired Image-to-Image Translation using Attention-Guided Generative Adversarial Networks", does the attention mask shown in Fig. 23 correspond to the foreground attention mask or the background attention mask in Fig. 3?

Ha0Tang commented 4 years ago

Thank you for your interest in our work.

Those are background attention masks. We observed that background attention masks are more visually distinct than foreground attention masks, so to visualize the attention better we showed the background attention masks. However, the background and foreground attention masks sum to 1 at every pixel, so the foreground attention can be recovered as 1 - background attention mask. Also note that we generate only one background attention mask but multiple foreground attention masks, so there is no need to choose which foreground attention mask to display.
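The relationship above can be sketched numerically. This is a minimal illustration (not code from the repository, and the names `n_masks`, `logits`, etc. are made up): a channel-wise softmax produces masks that sum to 1 at every pixel, so summing the foreground masks gives exactly 1 minus the background mask.

```python
import numpy as np

# Hypothetical setup: n_masks attention channels over an H x W image,
# e.g. 9 foreground masks plus 1 background mask (last channel).
rng = np.random.default_rng(0)
n_masks, H, W = 10, 4, 4
logits = rng.normal(size=(n_masks, H, W))

# Channel-wise softmax: at each spatial location, the n masks sum to 1.
exp = np.exp(logits - logits.max(axis=0, keepdims=True))
masks = exp / exp.sum(axis=0, keepdims=True)

background = masks[-1]     # the single background attention mask
foregrounds = masks[:-1]   # the multiple foreground attention masks

# Because all masks sum to 1, the total foreground attention at each
# pixel equals 1 - background, as described in the reply above.
assert np.allclose(foregrounds.sum(axis=0), 1.0 - background)
```

This is why displaying the background mask alone is sufficient: the combined foreground attention is fully determined by it.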

chenwwayne commented 4 years ago


Thank you very much! Your answer perfectly resolved my confusion!