machengcheng2016 / CrossRectify-SSOD

Official code of "CrossRectify: Leveraging Disagreement for Semi-supervised Object Detection" (PR'2023)
https://arxiv.org/abs/2201.10734

Missing ssd.py / train_pseudo137.py / train_pseudo151.py? #1

Closed vadimkantorov closed 2 years ago

vadimkantorov commented 2 years ago

Hi! Various train_* files include from ssd import build_ssd, but ssd.py is missing in this repo.

Is it a typo? Should it be instead from csd import build_ssd_con? Or are some files from ssd.pytorch missing in this repo?

What is the difference between csd.py and isd.py?

Also, train_pseudo137.py and train_pseudo151.py mentioned at https://github.com/machengcheng2016/CrossTeaching-SSOD#33-reproduce-table3 are missing from the repo...

Thanks!

machengcheng2016 commented 2 years ago

Hello, thanks for your attention. I've just uploaded ssd.py, which is taken directly from the original SSD repo, ssd.pytorch. Both csd.py and isd.py are taken from the original ISD repo, ISD-SSD. Both scripts build an SSD detector with the same architecture as ssd.py, so in that respect there is no difference between them. The function build_ssd_con additionally lets the SSD detector output intermediate feature maps. I've also just uploaded train_pseudo137.py, please check it out.
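For anyone else confused by the two builders: a toy sketch (plain Python, not the repo's actual code) of the one difference described above, namely that a build_ssd_con-style model also returns intermediate feature maps, while a build_ssd-style model returns only the detection outputs:

```python
# Stand-in "backbone" and "head" (hypothetical; the real ones are
# PyTorch modules in ssd.py / csd.py / isd.py).
def forward_ssd(x):
    feat = [v * 2 for v in x]    # stand-in for a backbone feature map
    out = [v + 1 for v in feat]  # stand-in for the detection head
    return out                   # build_ssd-style: detections only

def forward_ssd_con(x):
    feat = [v * 2 for v in x]
    out = [v + 1 for v in feat]
    return out, feat             # build_ssd_con-style: also expose features

print(forward_ssd([1, 2]))      # -> [3, 5]
print(forward_ssd_con([1, 2]))  # -> ([3, 5], [2, 4])
```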

vadimkantorov commented 2 years ago

From what I understand, this repo contains at least two implementations of CrossTeaching. Can I refer just to the detectron2 impl? Is it complete?

machengcheng2016 commented 2 years ago

Yes, you can. The proposed cross-teaching is actually a training paradigm, so whichever platform you implement it on, the core idea stays the same.

vadimkantorov commented 2 years ago

This is good news. Thanks!

Yeah, I understand about the paradigm; I was just wondering whether the detectron2 impl is complete and fully represents the description in the paper.

machengcheng2016 commented 2 years ago

No worries. The core idea of cross-teaching is to rectify possibly incorrect pseudo labels through the "confidence comparison" operation given in Eq. (8) of the manuscript. Since a single detector can never rectify its own misclassified pseudo labels (at best it can discard some of them), it is necessary to involve a second detector in the training paradigm.
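To make the idea concrete, here is a toy sketch of the "confidence comparison" operation (illustrating Eq. (8), not the repo's actual implementation): for each matched pseudo-labeled box, when the two detectors disagree on the class, keep the prediction of whichever detector is more confident.

```python
def cross_rectify(pred_a, pred_b):
    """pred_a / pred_b: lists of (class_id, confidence) for matched boxes
    from the two detectors (a hypothetical simplified interface)."""
    rectified = []
    for (cls_a, conf_a), (cls_b, conf_b) in zip(pred_a, pred_b):
        if cls_a == cls_b:
            # Detectors agree: keep the shared pseudo label.
            rectified.append(cls_a)
        else:
            # Detectors disagree: trust the more confident prediction.
            rectified.append(cls_a if conf_a >= conf_b else cls_b)
    return rectified

det_a = [(1, 0.9), (2, 0.4), (3, 0.8)]
det_b = [(1, 0.7), (5, 0.6), (3, 0.9)]
print(cross_rectify(det_a, det_b))  # -> [1, 5, 3]
```

The middle box shows why a second detector helps: detector A alone could only have discarded its low-confidence label 2, never replaced it with 5.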

vadimkantorov commented 2 years ago

Do you sample half of the batch from the supervised subset, as the unbiasedteacher codebase does?

machengcheng2016 commented 2 years ago

As far as I know, hyper-parameters (such as batch size, learning rate, and augmentations) are usually set differently across recent semi-supervised object detection papers. In fact, these settings are vital to model performance. In my experiments, I chose to follow the hyper-parameters provided by the official detectron2 platform for a fair comparison.

vadimkantorov commented 2 years ago

Sampling in unbiasedteacher codebase is done at https://github.com/machengcheng2016/CrossTeaching-SSOD/blob/534b7f993e58d0c19f26871a073647267f70e311/detectron2/VOC07-sup-VOC12-unsup-self-teaching-0.7/ubteacher/data/common.py#L125

It samples half of the batch from the supervised subset and half from the unsupervised subset. Then both halves are subject to weak and strong augs...
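For readers who don't want to dig into common.py, the half-and-half sampling can be sketched roughly like this (illustrative only; the actual code wraps this logic in a detectron2-style sampler):

```python
import random

def sample_batch(labeled, unlabeled, batch_size, rng=random):
    """Draw batch_size//2 indices from the labeled pool and
    batch_size//2 from the unlabeled pool (hypothetical helper)."""
    half = batch_size // 2
    return rng.sample(labeled, half), rng.sample(unlabeled, half)

labeled_idx = list(range(10))      # stand-in supervised subset
unlabeled_idx = list(range(10, 30))  # stand-in unsupervised subset
sup, unsup = sample_batch(labeled_idx, unlabeled_idx, 8)
print(len(sup), len(unsup))  # -> 4 4
```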

vadimkantorov commented 2 years ago

The detectron2 directory has five different subfolders / impls. What are the differences? Just the configs?

vadimkantorov commented 2 years ago

> Yeah I know that, so what's your question?

My question was whether cross-teaching does the sampling the same way as the original ubteacher. Now I see that it does, so no more open questions about sampling.

machengcheng2016 commented 2 years ago

Oh, I see where the problem is. Please check the script ubteacher/engine/trainer.py. I only use the strongly augmented data for training, since I want to avoid effects caused by different augs between the labeled and unlabeled batch data. In fact, I've tested both strong and weak augs for supervised training, and I found that strong aug improves the supervised baseline mAP.

machengcheng2016 commented 2 years ago

> detectron2 directory has five different subfolders / impls. what are the difference? just configs?

These are only used in the COCO experiments. The differences between the 5 configs lie only in the random seed. You can json.load the COCO_supervision.txt in the dataseed folder, and you will see what changes.
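A minimal sketch of that inspection step, using a stand-in string instead of the real file; the schema shown here (supervision percentage -> seed -> image ids) is an assumption, so check the actual COCO_supervision.txt in the dataseed folder:

```python
import json

# Stand-in for open("dataseed/COCO_supervision.txt").read()
text = '{"1.0": {"seed0": [101, 102], "seed1": [103, 104]}}'

data = json.loads(text)
for percent, seeds in data.items():
    for seed, image_ids in seeds.items():
        # Each seed selects a different labeled subset of COCO images.
        print(percent, seed, len(image_ids))
```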

vadimkantorov commented 2 years ago

I see! It would be great to have some recipes for the COCO experiments too...