Closed: JingweiQu closed this issue 4 years ago
Hi, @bestwei
Very sorry for my late reply.
Thanks for your interest in this paper.
The proposed method can be regarded as an unsupervised or self-supervised method; therefore, the training data and the test data are the same. I also found that our setting is similar to that of this CVPR 2019 paper: Hung et al., SCOPS: Self-Supervised Co-Part Segmentation, https://arxiv.org/pdf/1905.01298.pdf.
Yes, you are right; you can refer to my first comment. In the paper submission, we avoided terms such as unsupervised or weakly supervised because, in a previous submission, the reviewers had differing opinions about these terms. In fact, compared with the traditional unsupervised object localization or object segmentation methods cited in our paper, our method follows the same steps: first, given any dataset, the losses are optimized on that dataset; then, with the optimized model, we produce the results for the same dataset.
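To make that protocol concrete, here is a minimal PyTorch-style sketch (the framework choice, the toy model, the dummy data, and the placeholder loss below are my illustrative assumptions, not the actual co-peak module or its three losses):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy images standing in for one of the benchmark datasets (e.g. VOC12).
dataset = TensorDataset(torch.randn(64, 3, 32, 32))
loader = DataLoader(dataset, batch_size=8, shuffle=True)

# Toy network standing in for the co-peak module.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Step 1: optimize the self-supervised losses on the given dataset.
for epoch in range(40):  # training simply stops after a fixed number of epochs
    for (images,) in loader:
        loss = model(images).abs().mean()  # placeholder for the three losses
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Step 2: with the optimized model, produce results for the SAME dataset.
model.eval()
with torch.no_grad():
    results = [model(images) for (images,) in DataLoader(dataset, batch_size=8)]
```

The point is only the flow: both the optimization loop and the inference loop read from the same dataset, and there is no held-out test split.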
Yes, dataset shuffling is used. However, I have not tried our method without shuffling.
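In case it helps, shuffling here just means a fresh random batch order every epoch; in a PyTorch-style pipeline (again an assumption about the framework) it is a single flag:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10))
# shuffle=True draws a new random order each epoch;
# with shuffle=False the batches would come in a fixed order.
loader = DataLoader(dataset, batch_size=4, shuffle=True)
for epoch in range(2):
    print([batch.tolist() for (batch,) in loader])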
If you have any further questions, feel free to ask in this issue, and I will respond as soon as possible.
@KuangJuiHsu Thank you for the useful reply.
Hi, great work! It looks very promising.
I have some questions about the training procedure and datasets:

1. Do the training of the co-peak module and the testing of the final instance co-segmentation share the same datasets, i.e. COCO-VOC, COCO-NONVOC, VOC12, and SOC? I am confused about this.
2. The training of the co-peak module is unsupervised or weakly supervised (images share semantically related objects), so is validation conducted on the three losses? I ask because you indicate that the optimization procedure stops after 40 epochs.
3. Are the training and validation data of the co-peak module the same, but with shuffling in each batch?
Thanks.