xueeinstein / darknet-vis

Visualize YOLO feature maps during prediction to easily check your model's performance

[INQUIRY] How did you design this visualization feature? #5

Closed khaldonminaga closed 3 years ago

khaldonminaga commented 3 years ago

@xueeinstein Hey, I'm not sure where to message you so that you see it soonest, but I would like to know if you could point me to the right way of implementing the visualization feature from this repository in the newer or latest versions of YOLO. Currently, the latest versions of YOLO have no feature like the one you made for version two. I would like to understand what you did differently here to produce a visualization, so that I can apply it to the latest and upcoming versions. This would also help other people who want to understand their own trained models.

Lastly, how can I use this repo with a custom YOLOv3 model?

Edit: I also emailed you, in case you haven't seen this post.

xueeinstein commented 3 years ago

@donnx32 The idea for this visualization comes from the original YOLO paper. Specifically, YOLO, as a one-stage detector, maps each object to the neuron on the feature map whose grid cell contains the object's center, and that neuron regresses the relative bounding box. YOLOv3 and YOLOv4 both inherit this characteristic. Therefore, we can visualize a neuron via its region proposal with the maximum objectness score; ideally, there is an object whose center is very close to the location of that neuron. See `get_obj_map()` in commit https://github.com/xueeinstein/darknet-vis/commit/88f1d4daae7bcfdb3149b7f8929c150091e29860 for details.
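For readers who want the gist without reading the darknet C code: the idea above can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the repo's actual `get_obj_map()` implementation; the layout `(H, W, num_boxes * (5 + num_classes))` with objectness at index 4 of each box is an assumption matching the standard YOLO head, and the function and argument names are mine.

```python
import numpy as np

def get_obj_map(pred, num_boxes, num_classes):
    """Sketch of an objectness heat map over a YOLO feature map.

    pred: raw head output of shape (H, W, num_boxes * (5 + num_classes)),
          where each box is (x, y, w, h, objectness, class scores...).
          NOTE: this layout is an assumption; darknet's actual memory
          layout differs and the real code lives in get_obj_map() in C.
    Returns an (H, W) map: per grid cell, the objectness of the
    strongest of its num_boxes proposals.
    """
    H, W, _ = pred.shape
    boxes = pred.reshape(H, W, num_boxes, 5 + num_classes)
    objectness = boxes[..., 4]           # (H, W, num_boxes)
    return objectness.max(axis=-1)       # best proposal per neuron

# Toy usage: a 13x13 grid, 2 boxes per cell, 2 classes.
pred = np.random.rand(13, 13, 2 * (5 + 2))
obj_map = get_obj_map(pred, num_boxes=2, num_classes=2)
# Cells with high values are the neurons "responsible" for objects;
# upsampling this map to the input resolution gives the overlay
# visualization this repo produces.
```

A high value at cell (i, j) means that neuron's best proposal is confident an object's center falls there, which is exactly the correspondence the visualization exploits.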

khaldonminaga commented 3 years ago

@xueeinstein Thank you so much for your response, this is truly helpful. I will explore what you described and attempt to apply it to the later versions.