Closed chenwwayne closed 4 years ago
Thank you for your interest in our work.
Those are background attention masks. We observed that background attention masks are more visually obvious than foreground attention masks, so to visualize the attention better we used the background attention masks. However, since the background and foreground attention masks sum to 1 at every pixel, the foreground attention can be obtained as 1 - background attention mask. Also, note that we generate only one background attention mask but multiple foreground attention masks, so there is no need to choose which foreground attention mask to display.
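The relationship above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual code: the shapes and the softmax normalization are assumptions, chosen so that the n foreground masks plus one background mask sum to 1 at every pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n foreground masks + 1 background mask per pixel,
# normalized with a channel-wise softmax so they sum to 1 everywhere.
n_fg, h, w = 9, 4, 4
logits = rng.standard_normal((n_fg + 1, h, w))
masks = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

background = masks[-1]    # the single background attention mask
foregrounds = masks[:-1]  # the n foreground attention masks

# Because all masks sum to 1 per pixel, the combined foreground
# attention equals 1 - background at every pixel.
combined_fg = foregrounds.sum(axis=0)
assert np.allclose(combined_fg, 1.0 - background)
```

This is why visualizing only the background mask loses no information: the total foreground attention is exactly its complement.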
Thank you very much! Your answer perfectly resolved my confusion!
Hi, HaoTang:
Thanks for your great work! I have a question about your paper.
In "AttentionGAN: Unpaired Image-to-Image Translation using Attention-Guided Generative Adversarial Networks", does the attention mask shown in Fig. 23 correspond to the foreground attention mask or the background attention mask in Fig. 3?