Closed BigBuffa1o closed 1 year ago
Hi @BigBuffa1o, thanks for your interest! For these images, could you set a breakpoint here to check the length of the predictions, i.e., len(scores)? https://github.com/deeplearning-wisc/stud/blob/c356bbec749ef74c2d338df742d2d88f9dc776d3/src/engine/defaults.py#L689
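The check I have in mind could look like the sketch below (plain Python for illustration; in the repo, `scores` comes from a detectron2-style `Instances` object, and the exact field names may differ):

```python
# Sketch of the debugging check suggested above: at the spot where the
# model's predictions are produced, inspect len(scores). Zero means the
# detector proposed no boxes at all for this image, so there is nothing
# for the OOD scoring to work with.
def report_num_predictions(scores):
    n = len(scores)
    if n == 0:
        print("No predictions survived for this image.")
    else:
        print(f"{n} predictions; max confidence = {max(scores):.2f}")
    return n

report_num_predictions([])           # no boxes -> empty visualization
report_num_predictions([0.9, 0.4])   # 2 boxes, max confidence 0.90
```

You can also replace the print with `import pdb; pdb.set_trace()` to step through interactively.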
It is somewhat strange that there are no predictions for the image.
Thanks for your reply. Yes, I set a breakpoint and checked the prediction labels: for some images (for example, the sheep) the OOD objects are detected successfully, but for these images there are no labels at all, which means no objects are detected. Is this a problem with the backbone network or a limitation of STUD? My guess is that if the objects in these images are not in the training set, a traditional detection network may treat the whole image as background instead of detecting the OOD objects, even though they are obvious to the eye. I plan to use STUD in an industrial setting, so I need OOD to be detected reliably. On your side, are the OOD objects in these images all detected well? Any guidance would be appreciated.
@BigBuffa1o, I see your question. Yes, STUD indeed does not aim to comprehensively detect all OOD proposals. One related paper is https://arxiv.org/pdf/2108.06753.pdf.
Another small trick is to reduce the confidence threshold used for filtering at inference time.
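The effect of that trick can be sketched as below. In a detectron2-based repo like STUD, the threshold is typically controlled by `cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST` (the standard detectron2 config key; please verify against the fork's config), and this toy filter mirrors what it does:

```python
# Illustrative sketch of inference-time confidence filtering: detections
# below the threshold are dropped before visualization, so an image can
# look "empty" even though the detector did propose low-confidence boxes.
def filter_by_confidence(detections, score_thresh):
    """detections: list of (box, score) pairs; keep scores >= threshold."""
    return [(box, s) for box, s in detections if s >= score_thresh]

dets = [((10, 10, 50, 50), 0.42), ((30, 5, 80, 60), 0.18)]
len(filter_by_confidence(dets, 0.5))   # nothing survives -> empty image
len(filter_by_confidence(dets, 0.3))   # lowering the threshold recovers a box
```

Lowering the threshold trades more false positives for fewer missed OOD objects, which may be the right trade-off for an industrial use case.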
Hi, I have read your code carefully and visualized some results obtained with your algorithm and pre-trained model (VIS as ID and COCO 2017 as OOD). Some of the results contain OOD objects and look amazing! However, looking at these results, I have a question.
1. I chose some images that, in my opinion, contain OOD objects and ran them through STUD, but the result has no bounding boxes, meaning no objects were detected; for example, nothing is detected in the food image. From reading your code, I understand that OOD detections are produced when comparing energy scores, so if there is no bbox at all, is that a problem with my implementation or a limitation of the backbone algorithm?
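One thing worth noting here: the energy comparison only applies to boxes the base detector actually proposes, so if no box is produced, there is nothing to score. A minimal sketch of the energy idea itself, using the standard logsumexp formulation from Liu et al. (NeurIPS 2020) (STUD's exact implementation may differ in temperature and sign convention):

```python
import math

def energy_score(logits, T=1.0):
    """Free energy E(x) = -T * log(sum(exp(logit / T))).
    Under this convention, higher energy suggests OOD: a confident ID
    prediction has one large logit, which drives the energy down, while
    flat logits on an unfamiliar object leave the energy high."""
    return -T * math.log(sum(math.exp(l / T) for l in logits))

id_logits = [8.0, 1.0, 0.5]    # confident ID box -> low energy
ood_logits = [1.1, 0.9, 1.0]   # flat, uncertain box -> high energy
energy_score(id_logits) < energy_score(ood_logits)   # True
```

So an "empty" result is a failure of the proposal/detection stage, not of the energy-based OOD scoring that runs on top of it.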