
Ultralytics YOLO11 🚀
https://docs.ultralytics.com
GNU Affero General Public License v3.0

How to determine which predict layer is responsible for detection? #14458

Open EzraKenig opened 4 months ago

EzraKenig commented 4 months ago

Search before asking

Question

Hi, I am writing this issue because I have trouble understanding how ground-truth labels are assigned in YOLOv8. From my understanding (correct me if I am wrong), in previous versions of YOLO (the ones that used anchor boxes), an object was assigned to one of the three predict layers based on the best IoU between its ground-truth bounding box and the anchor set. Now that YOLOv8 no longer uses anchor boxes, how is it decided which predict layer the prediction of a given object should be assigned to?

I hope my question is clear. Best regards, and thanks in advance for your help.

Additional

No response

github-actions[bot] commented 4 months ago

👋 Hello @EzraKenig, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

pderrenger commented 4 months ago

@EzraKenig hi there,

Great question! In YOLOv8, the prediction process has indeed evolved from the anchor-based approach used in previous versions. With the removal of anchor boxes, YOLOv8 employs a more streamlined and efficient method for object detection.

In YOLOv8, the detection head still outputs predictions at three scales (strides 8, 16, and 32, from the P3, P4, and P5 feature maps of the backbone and neck), but it predicts bounding boxes and class probabilities directly at each grid cell rather than relative to predefined anchors. Crucially, ground-truth objects are not pinned to a specific predict layer in advance: during training, the assigner dynamically selects, across all three scales, the candidate locations that best align with each ground-truth box, and the loss is computed against those assignments.

The key difference is that YOLOv8 does not rely on predefined anchor boxes to match predictions to ground truth. Assignment is instead dynamic and adapts to each image, which simplifies the architecture and can lead to improved performance.
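As a rough illustration of the anchor-free idea (a sketch only, not the actual Ultralytics implementation, and `make_anchor_points` is a hypothetical name): each detection scale contributes one candidate point per grid cell, so for a 640x640 input the three strides together yield 8400 candidates that the assigner chooses among.

```python
def make_anchor_points(feat_sizes, strides, offset=0.5):
    """Generate one anchor (cell-center) point per grid cell per scale.

    feat_sizes: list of (h, w) feature-map sizes, one per detection scale.
    strides:    corresponding downsampling strides (e.g. 8, 16, 32).
    Returns a list of (x, y, stride) points in input-image pixel coordinates.
    """
    points = []
    for (h, w), s in zip(feat_sizes, strides):
        for gy in range(h):
            for gx in range(w):
                # Cell center, mapped back to input-image pixels.
                points.append(((gx + offset) * s, (gy + offset) * s, s))
    return points

# For a 640x640 input: 80x80 (stride 8), 40x40 (stride 16), 20x20 (stride 32).
pts = make_anchor_points([(80, 80), (40, 40), (20, 20)], [8, 16, 32])
print(len(pts))  # 80*80 + 40*40 + 20*20 = 8400 candidate points
```

There is no per-layer assignment rule baked into this grid; which of the 8400 candidates become positives for a given object is decided dynamically at training time.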

If you have any further questions or need more details, feel free to ask. Best regards and happy coding!

Y-T-G commented 4 months ago

It uses TAL.

https://github.com/ultralytics/ultralytics/blob/e094f9c3718e3e43f17299b9919da696d4b96887/ultralytics/utils/tal.py#L13

glenn-jocher commented 4 months ago

And DFL (Distribution Focal Loss) for anchor-free box regression.
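For intuition on the DFL side (a minimal sketch, not the Ultralytics implementation; `dfl_decode` is a hypothetical name): instead of regressing each box-edge distance as a single scalar, the head predicts a discrete distribution over integer bins, and the decoded distance is the expectation under the softmax of those logits.

```python
import math

def dfl_decode(logits):
    """Decode one box-edge distance from DFL-style logits.

    The head predicts a distribution over integer bins 0..reg_max
    (reg_max = len(logits) - 1); the regressed distance, in stride
    units, is the expected bin index under the softmax.
    """
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return sum(i * p for i, p in enumerate(probs))

# Probability mass split between bins 3 and 4 decodes to a distance
# of roughly 3.5 stride units, i.e. sub-bin precision.
logits = [0.0] * 16
logits[3] = logits[4] = 8.0
print(dfl_decode(logits))  # close to 3.5
```

Predicting a distribution rather than a point estimate lets the model express localization uncertainty and still recover sub-cell-precision box edges.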

github-actions[bot] commented 3 months ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐