PengtaoJiang / OAA-PyTorch

The PyTorch Code for our ICCV 2019 paper "Integral Object Mining via Online Attention Accumulation"

Why don't you use the saliency map in this repository? #3

Closed UdonDa closed 2 years ago

UdonDa commented 4 years ago

Hi, Jiang Pengtao.

Thank you for sharing your implementation. I have a question.

According to your paper, you use the saliency map to calculate the CAM loss, but I guess you do not use it in this repository.

Thanks.

PengtaoJiang commented 4 years ago

Hello, in this repository we only re-implement the attention generation process, including the online attention accumulation process and the integral attention learning process. For the CAM loss, we use the normalized accumulated attention maps as labels to supervise the output of the last convolutional layer. To generate the proxy segmentation labels, we combine the saliency maps with the attention maps.
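For readers trying to picture the two steps above, here is a minimal sketch (not the repository's actual code): the function names, tensor shapes, thresholds, and the choice of an MSE-style distance are illustrative assumptions only; the exact loss and label-generation rules follow the paper.

```python
import torch
import torch.nn.functional as F

def cam_loss(last_conv_out, accumulated_attention, eps=1e-5):
    """Sketch: supervise the last conv layer output (N, C, H, W) with
    normalized accumulated attention maps of the same shape."""
    # Normalize each accumulated attention map to [0, 1] per image and class.
    att_min = accumulated_attention.amin(dim=(2, 3), keepdim=True)
    att_max = accumulated_attention.amax(dim=(2, 3), keepdim=True)
    target = (accumulated_attention - att_min) / (att_max - att_min + eps)
    # Treat the normalized maps as soft targets for the class activation maps.
    # (MSE is only a placeholder for whatever distance the paper uses.)
    return F.mse_loss(torch.sigmoid(last_conv_out), target)

def proxy_segmentation_label(attention, saliency, fg_thresh=0.3):
    """Sketch: combine class attention maps (N, C, H, W) with a saliency map
    (N, 1, H, W) into per-pixel proxy labels:
    0 = background, c + 1 = foreground class c, 255 = ignore."""
    score, cls = attention.max(dim=1)                       # strongest class per pixel
    label = torch.where(score > fg_thresh, cls + 1, torch.zeros_like(cls))
    sal = saliency.squeeze(1) >= 0.5                        # salient-region mask
    label[(~sal) & (label > 0)] = 255                       # attention without saliency -> ignore
    label[sal & (label == 0)] = 255                         # salient but unassigned -> ignore
    return label
```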
