Hi @Na-Z, thanks for open-sourcing the code. I have two questions after reading the paper.
1. You say "Note that our SESS is initialized by the VoteNet weights pre-trained on the corresponding labeled data". Why do you use pretrained weights? Is training from scratch worse, or is using pretrained weights common practice in semi-supervised training? Also, I would suggest taking weights pretrained on SUN and training SESS on ScanNetV2 (or vice versa). That seems like the right way to use pretrained weights, since in reality we won't have pretrained weights for a newly collected dataset without labels.
2. Does more unlabeled data lead to better performance? How does the model perform if you train with all of the train, val, and test data?
a) Yes, the result of training from scratch is not good. b) Pre-training the backbone with labeled data makes the subsequent training easier, which is also suggested by the original mean-teacher work. c) Our SESS is designed for the semi-supervised setting, where a set of labeled samples is always available. Of course, it would be interesting to use pretrained weights from other datasets, although other problems such as domain adaptation might then need to be solved. See the sketch below for what the initialization looks like in a mean-teacher setup.
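For context, here is a minimal sketch of the initialization described above; it is not the repo's actual code, and the import path, constructor arguments, and checkpoint key are assumptions. Both the student and the EMA teacher start from the same VoteNet checkpoint pretrained on the labeled subset, and the teacher is thereafter updated as an exponential moving average of the student, as in the mean-teacher paper:

```python
import torch
from models.votenet import VoteNet  # hypothetical import path

def build_sess_models(ckpt_path, **votenet_kwargs):
    student = VoteNet(**votenet_kwargs)
    teacher = VoteNet(**votenet_kwargs)

    # Initialize both networks from the VoteNet weights pre-trained
    # on the labeled data ('model_state_dict' key is an assumption).
    ckpt = torch.load(ckpt_path, map_location='cpu')
    student.load_state_dict(ckpt['model_state_dict'])
    teacher.load_state_dict(ckpt['model_state_dict'])

    # The teacher receives no gradient updates; it only tracks the
    # student via an exponential moving average (EMA).
    for p in teacher.parameters():
        p.requires_grad_(False)
    return student, teacher

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    # teacher = alpha * teacher + (1 - alpha) * student
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)
```

Starting both copies from the same pretrained weights means the teacher produces sensible consistency targets from the very first iteration, which is part of why pretraining makes the subsequent training easier.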
a) Theoretically, if the additional unlabeled data covers a wider distribution, we should be able to achieve better performance. b) We did not conduct such an experiment; you are encouraged to explore it if interested (please let me know the result if you do :P).