AlamiMejjati / Unsupervised-Attention-guided-Image-to-Image-Translation

Unsupervised Attention-Guided Image to Image Translation
MIT License

If I want to make attention in background. #27

Open deep0learning opened 5 years ago

deep0learning commented 5 years ago

Hi, thank you for this work. I want to apply attention to the background rather than to the foreground object. For example, suppose both domain A and domain B contain horse images: when translating from A to B, I would like to keep the same horse but have the background translated into domain B's style. How can I do that? Thank you in advance.
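One way to sketch this, assuming the paper's composition rule s' = A(s) ⊙ F(s) + (1 − A(s)) ⊙ s, is to invert the attention map at composition time so the attended foreground is copied through unchanged and the background receives the translated pixels. This is a minimal NumPy illustration of the blending arithmetic only, not code from this repo; the `compose` helper and the toy arrays are hypothetical:

```python
import numpy as np

def compose(source, translated, attention, invert=False):
    """Blend translated and source pixels with an attention map in [0, 1].

    By default the attended (foreground) region takes the translated
    pixels. With invert=True the roles swap: the foreground is kept
    from the source and only the background is translated.
    """
    a = 1.0 - attention if invert else attention
    return a * translated + (1.0 - a) * source

# Toy 2x2 single-channel example: source is all 0, translation is all 1,
# and the attention map marks the left column as foreground.
source = np.zeros((2, 2))
translated = np.ones((2, 2))
attention = np.array([[1.0, 0.0],
                      [1.0, 0.0]])

fg_translated = compose(source, translated, attention)               # left column translated
bg_translated = compose(source, translated, attention, invert=True)  # right column translated
```

Whether this gives good results without retraining depends on how sharp the learned attention map is; if it is soft or leaks onto the background, you would likely need to retrain with the roles of the two regions swapped in the loss as well.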

jian3xiao commented 4 years ago

I have the same question. In other words, how can the Attention Network output a mask (attention map) that focuses on the foreground object in an unsupervised setup? According to the paper, the architectures of the Generators and the Attention Networks are almost the same except for the final activation function: when the final activation is a sigmoid with a single output channel, the network's output is the attention map. I don't understand how that works.
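As a concrete reading of that description: the attention network can share the generator's encoder/decoder body and differ only in its last layer, which maps the decoder features to one channel and applies a sigmoid, giving a per-pixel value in [0, 1]. Here is a hedged NumPy sketch of just that final step (the weights are random placeholders, not the repo's trained parameters, and a real implementation would use a learned convolution):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_head(features, weights, bias):
    """Final layer of an attention network: project decoder features of
    shape (H, W, C) to a single channel with a 1x1 convolution, then
    squash with a sigmoid so every pixel lands in [0, 1]."""
    logits = features @ weights + bias  # (H, W, C) @ (C,) -> (H, W)
    return sigmoid(logits)

rng = np.random.default_rng(0)
features = rng.standard_normal((4, 4, 8))  # stand-in for decoder output
weights = rng.standard_normal(8)
mask = attention_head(features, weights, bias=0.0)
```

The sigmoid by itself only guarantees a soft per-pixel mask; it is the training signal (adversarial plus cycle-consistency losses acting through the blended output) that pushes the mask toward the regions that must change between domains.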

Moreover, Figure 7 in the paper shows that the Attention Network already focuses on the foreground object early in training, which is surprising: at that stage the only losses are the adversarial loss and the cycle-consistency loss, so there is no label information guiding the Attention Network toward the foreground object.

I am looking forward to discussing this with you and the author. @deep0learning @AlamiMejjati