Closed: yuchen2580 closed this issue 5 years ago.
Please download all the data following the instructions under 'datasets/README.md'; you can then see the ground truth annotations. pseudo_label.py is for my proposed method.
Thanks for the quick reply. I followed the instructions, but there are no bndbox annotations for the image data.
For example, in watercolor (.xml):
<bndbox>
<xmin>1</xmin>
<ymin>1</ymin>
<xmax>1</xmax>
<ymax>1</ymax>
</bndbox>
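The dummy boxes above can be spotted programmatically. Below is a minimal sketch (not part of the repo) that parses a VOC-style annotation file and reports whether it contains any real boxes, assuming the `<bndbox>` layout shown above; the file path handling is hypothetical.

```python
import xml.etree.ElementTree as ET

def is_dummy_box(bndbox):
    """A box whose xmin/ymin/xmax/ymax are all 1 carries no real annotation."""
    coords = [int(bndbox.find(tag).text) for tag in ("xmin", "ymin", "xmax", "ymax")]
    return coords == [1, 1, 1, 1]

def has_real_annotations(xml_path):
    """Return True if at least one bounding box in the file is not a dummy."""
    root = ET.parse(xml_path).getroot()
    return any(not is_dummy_box(b) for b in root.iter("bndbox"))
```

Running this over the watercolor train split should report only dummy boxes, while the test split should contain real ones.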
I see; it seems only the test set in watercolor is annotated, while the train set has dummy annotations.
Please refer to https://github.com/naoto0804/cross-domain-detection/tree/master/datasets#dummy-annotations. We only annotate 1000/1000 images for training/test; the rest of the images in the watercolor dataset serve as an extra source of image-level annotations. If you want to test unsupervised object detection on this test set, the --subset train
flag will ignore such extra data.
Thanks for the clarification!
@yuchen2580 Hi, how did you set a different number of classes with your own class labels?
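For the question above, one common way to plug a custom class list into a VOC-style detection pipeline is a name-to-index mapping. This is only a sketch, not the repo's actual code; the class names are the six categories used by the watercolor dataset, and the real list must match the `<name>` tags in your annotation files.

```python
# Example class list; replace with the names from your own annotations.
CUSTOM_CLASSES = ("bicycle", "bird", "car", "cat", "dog", "person")

# Map each class name to an integer label, as most detection loaders expect.
NAME_TO_LABEL = {name: idx for idx, name in enumerate(CUSTOM_CLASSES)}

def label_of(name):
    """Look up the integer label for a class name read from an annotation."""
    return NAME_TO_LABEL[name]
```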
Hi,
I'm actually coming from another repo: https://github.com/VisionLearningGroup/DA_Detection
That repo directed me here to prepare data for clipart and watercolor. However, after scanning through your code, it seems that the labels for clipart and watercolor are generated by applying SSD detection to the original images (pseudo_label.py)?
Could you please give a hint on how the labels for clipart and watercolor are generated?
Are the annotation files stored in Annotations/ after I follow 'Training using virtually created instance-level annotations' the ground truth labels?
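As I understand the thread, the pseudo-labeling idea is to run a detector trained on the source domain over each unlabeled target image and write the confident detections back out as VOC-style XML. The sketch below illustrates only the XML-writing step; it is not the repo's actual `pseudo_label.py`, and the detection tuples and score threshold are assumptions for illustration.

```python
import xml.etree.ElementTree as ET

def detections_to_voc_xml(filename, detections, score_thresh=0.5):
    """Convert (name, score, xmin, ymin, xmax, ymax) tuples into a VOC XML tree,
    keeping only detections at or above the confidence threshold."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    for name, score, xmin, ymin, xmax, ymax in detections:
        if score < score_thresh:  # drop low-confidence pseudo labels
            continue
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        box = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                            (xmin, ymin, xmax, ymax)):
            ET.SubElement(box, tag).text = str(int(val))
    return ET.ElementTree(root)
```

The resulting tree can be written with `tree.write("Annotations/img.xml")`, which would explain why the files in Annotations/ look like detector output rather than hand-drawn ground truth.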