chunguangqu opened 9 months ago
@chunguangqu How did you evaluate performance? If evaluation works but single-image inference does not, it suggests an inconsistency between the prompts used at inference and those used during evaluation.
```
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 100 ] = 0.777
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.985
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.903
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.696
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.831
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.880
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 100 ] = 0.854
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 300 ] = 0.854
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.854
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.789
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.896
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.917
```
This is my test result on my own validation set. My data has 10 categories, labeled 0 to 9. However, when I use image_demo.py to run inference on a single image, only category 0 is detected; the other 9 categories are not recognized.
I also used the trained model to compute the confusion matrix over the 10 classes, and every class performed well there too.
The information you provided is not enough. I fine-tuned the model on multiple classes, and my val and test results are consistent.
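One common cause of the symptom described above is that the text prompt passed at inference only names one category, while evaluation enumerates all of them. Grounding DINO-style pipelines typically take a single text prompt in which class names are separated by `" . "`, so all 10 category names have to appear in the prompt given to image_demo.py. The helper and class names below are hypothetical placeholders, just to illustrate the prompt format:

```python
def build_text_prompt(class_names):
    """Join class names with ' . ' in the style commonly used for
    Grounding DINO text prompts, so inference covers every class."""
    return " . ".join(class_names) + " ."

# Placeholder names standing in for the 10 custom categories (0-9).
classes = [f"class{i}" for i in range(10)]
prompt = build_text_prompt(classes)
print(prompt)  # class0 . class1 . ... . class9 .
```

If only `"class0 ."` were passed as the prompt, the model would have no way to ground the other nine categories, which would match the behavior reported above.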
Regarding training Grounding DINO: the images I used for object detection are smaller than 500*500. The training accuracy I obtained is fine, but no targets are detected during inference. What could be the reason?