Closed hhhhh0220 closed 3 years ago
Hello, thanks for your interest. May I know for which setting you are getting 65.7%?
I trained the classification network according to the README, then used gen_gt.py to generate the proxy labels. Then I trained the segmentation network with these proxy labels. I'm not sure what you mean by settings. The classification network includes co-attention and contrastive co-attention. The training parameters are all the ones you provided.
This result is obtained on the val set of PASCAL VOC 2012.
@hhhhh0220, thanks for your interest. The segmentation network is trained once with the proxy labels. In our paper, we report 66.2% (the trained model is also shared in this repo). The difference between your result (65.7%) and ours (66.2%) is small; the gap may be due to experimental randomness. As you may know, our WSSS approach adopts the pipeline used by many previous works and consists of the following steps: train the classification network, generate pseudo ground truth, and train the fully supervised segmentation model. Each of these steps can introduce randomness. Also, the benchmark (PASCAL VOC) is not big (~10K training and ~1.5K val/test images). For this reason, we suggest running the experiment multiple times. You can also do model selection on the val set.
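Since the reply above suggests repeating the experiment to account for randomness, here is a minimal sketch (NumPy only, hypothetical numbers) of how one might compute mIoU from a confusion matrix and summarize repeated runs. The `runs` values are illustrative, not results from this repo:

```python
import numpy as np

def confusion(gt, pred, n_classes):
    """Accumulate a confusion matrix (rows: ground truth, cols: prediction)
    over flattened label maps, ignoring out-of-range (e.g. void) labels."""
    mask = (gt >= 0) & (gt < n_classes)
    return np.bincount(n_classes * gt[mask] + pred[mask],
                       minlength=n_classes ** 2).reshape(n_classes, n_classes)

def miou(conf):
    """Mean intersection-over-union from a confusion matrix."""
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    return np.mean(inter / np.maximum(union, 1))

# Hypothetical per-run val mIoU scores from repeated trainings with
# different seeds; report mean and spread rather than a single number.
runs = [0.657, 0.662, 0.659]
print(f"mIoU: {np.mean(runs):.3f} +/- {np.std(runs):.3f}")
```

With a spread on this order, a 0.5-point difference between two single runs is within normal run-to-run variation.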
Is the training of the segmentation network one-stage or multi-stage? I mean, do you train the segmentation network only with the proxy labels, or do you then use the predicted masks to train the segmentation network again? I used the proxy masks to train the segmentation network and only got 65.7% mIoU, and I want to know why.