zhengye1995 / Tianchi-2019-Guangdong-Intelligent-identification-of-cloth-defects-rank5

Tianchi 2019 Guangdong Industrial Smart Manufacturing Innovation Competition: fabric defect detection, third-place solution (Tianchi's waters run deep)

In the training results, recall is very high and precision is very low #25

Closed ingbeeedd closed 3 years ago

ingbeeedd commented 3 years ago

Thanks to you, I have succeeded in training and running the model.

I obtained the 17,000 training-data annotations through Baidu, split them 9:1, and ran training and testing.

I haven't changed anything in the model, but I'm not sure whether the test results below are normal.

Could you give me some advice?

[image: per-class evaluation results]

I am training with only one GPU, so I set the optimizer as below.

```
optimizer = dict(type='SGD', lr=0.00125, momentum=0.9, weight_decay=0.0001)
```
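For reference, this value follows mmdetection's linear learning-rate scaling rule: the default lr=0.02 assumes 8 GPUs with 2 images each (an effective batch size of 16). A minimal sketch of the arithmetic, assuming a single-GPU, 1-image-per-GPU setup:

```python
# mmdetection's linear-scaling rule: lr scales with the effective batch size.
base_lr, base_batch_size = 0.02, 16          # defaults assume 8 GPUs x 2 imgs/GPU
gpus, imgs_per_gpu = 1, 1                    # assumed single-GPU setup
lr = base_lr * (gpus * imgs_per_gpu) / base_batch_size   # -> 0.00125

optimizer = dict(type='SGD', lr=lr, momentum=0.9, weight_decay=0.0001)
```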
ingbeeedd commented 3 years ago

Some detections are good, but many test images show recall and precision problems like the ones below. How can I solve this?

[image: example detection result 1]

[image: example detection result 2]

zhengye1995 commented 3 years ago
  1. The number of ground-truth boxes for class 1 in your test set is 0, so you may need to move some class 1 data into the test set to evaluate performance on class 1.
  2. The IoU thresholds in this competition are [0.1, 0.3, 0.5], but in the mmdetection code the default thresholds are the same as COCO's (0.5:0.95:0.05). I don't know whether you have modified the thresholds in the eval code; under the COCO mAP metric the score will be very low. You can change the IoU thresholds to [0.1, 0.3, 0.5] and evaluate your model again (see the sketch after this list).
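A minimal sketch, not the author's code, of one way to change the thresholds with pycocotools directly; the file paths are placeholders:

```python
import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/test.json')          # placeholder: COCO-format ground truth
coco_dt = coco_gt.loadRes('results.bbox.json')   # placeholder: detection results

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
# Swap COCO's default 0.50:0.95:0.05 thresholds for the competition's [0.1, 0.3, 0.5].
coco_eval.params.iouThrs = np.array([0.1, 0.3, 0.5])
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
```

Note that summarize() still prints COCO-style headings (e.g. "IoU=0.50:0.95"), but the averages are now taken over [0.1, 0.3, 0.5]; the IoU=0.75 row will show -1 since 0.75 is no longer among the thresholds.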
ingbeeedd commented 3 years ago

Oh, class 1 corresponds to row 2 in that table; the rows in the results are offset by one. I don't yet know where the code needs to be modified ((0.5:0.95:0.05) -> [0.1, 0.3, 0.5]), but I'll try to change it.

ingbeeedd commented 3 years ago

First, I didn't have separate test data, so I split off part of the training images and used them as test images. Comparing the model's results against the images' annotations, the annotations failed to cover what the model detected; in other words, defects actually present in the image were missing from the annotations. So when I tested, the mAP was bound to be low. There were also many patterned fabrics, which I think skewed the results. Thank you for all your help. Is there any paper or data that you referred to for the model part?

zhengye1995 commented 3 years ago

> Is there any paper or data that you referred to for the model part?

Since this competition only allows COCO and ImageNet as additional data, I only used models pre-trained on COCO (from the official model zoo of mmdetection). I think there may be similar data in the field of change detection in remote-sensing image analysis (the image before the change serves as the template image, and the image after the change is the one to be detected).
In addition, there is a recently concluded tile defect detection competition on the Aliyun Tianchi platform, which also requires making reasonable use of template images: https://tianchi.aliyun.com/competition/entrance/531846/introduction. I hope these are helpful to you.

ingbeeedd commented 3 years ago

Thank you