ultralytics / ultralytics

NEW - YOLOv8 πŸš€ in PyTorch > ONNX > OpenVINO > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

YOLOv8 Multiple Object Tracking Training #8928

Closed · 1WantMyMoneyBack closed this issue 18 hours ago

1WantMyMoneyBack commented 3 months ago

Search before asking

Question

Hi, I would like to train a model to perform the Multiple Object Tracking (MOT) task on the VisDrone2019-MOT dataset. Is this feasible? Or do I perhaps not need any specific training for the Multiple Object Tracking part?

Additional

No response

github-actions[bot] commented 3 months ago

πŸ‘‹ Hello @1WantMyMoneyBack, thank you for your interest in Ultralytics YOLOv8 πŸš€! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a πŸ› Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics
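
Once installed, you can sanity-check the environment with the built-in checks helper (output will vary by machine):

import ultralytics

ultralytics.checks()  # prints the installed ultralytics version plus Python, torch and CUDA/GPU info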

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 3 months ago

@1WantMyMoneyBack hi there! πŸ‘‹

Great question! For Multiple Object Tracking (MOT) with YOLOv8, you primarily need a well-trained object detection model. YOLOv8's tracking capabilities, such as with BoT-SORT or ByteTrack, can then utilize this model for tracking without requiring specific training on the MOT task.

To use YOLOv8 for MOT on the VisDrone2019-MOT dataset, you'd follow these steps:

  1. Train YOLOv8 on the VisDrone2019 detection data to get a robust detector (see the training sketch after this list).
  2. Apply a tracker like BoT-SORT or ByteTrack using the trained model for the MOT task.
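
For step 1, here's a minimal training sketch in Python; the dataset YAML name and hyperparameters are placeholders you'd adapt to your own setup:

from ultralytics import YOLO

# Start from COCO-pretrained weights and fine-tune on your VisDrone detection data
model = YOLO('yolov8n.pt')

# 'visdrone.yaml' is a placeholder dataset config pointing at your images and YOLO-format labels
model.train(data='visdrone.yaml', epochs=100, imgsz=640)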

Here's a quick example of how you might set up tracking with a pre-trained model:

from ultralytics import YOLO

# Load your trained model
model = YOLO('path/to/your/trained_model.pt')

# Perform tracking on a video
results = model.track(source='path/to/your/video.mp4', tracker='botsort.yaml')

This approach allows you to leverage YOLOv8's detection capabilities for tracking without needing a separate training phase specifically for MOT.
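
If you want to work with the per-frame track IDs programmatically, here's a minimal sketch using streaming inference (paths are the same placeholders as above):

from ultralytics import YOLO

model = YOLO('path/to/your/trained_model.pt')

# stream=True yields results frame by frame instead of accumulating them in memory
for result in model.track(source='path/to/your/video.mp4', tracker='botsort.yaml', stream=True):
    if result.boxes.id is not None:  # IDs are None until the tracker has assigned them
        print(result.boxes.id.tolist(), result.boxes.cls.tolist())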

Hope this helps! Let us know if you have any more questions. 😊

1WantMyMoneyBack commented 2 months ago

@glenn-jocher Thank you so much for your answer. There is one more question I would like to ask: is it possible to use VisDrone's MOT dataset to train YOLO's detector?

github-actions[bot] commented 1 month ago

πŸ‘‹ Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO πŸš€ and Vision AI ⭐

glenn-jocher commented 1 month ago

Hi @1WantMyMoneyBack! πŸ‘‹

Yes, you can use the VisDrone dataset to train YOLO's detector. Just make sure the annotations are converted to YOLO format (one .txt label file per image with normalized class x_center y_center width height values). Here’s a quick example of how you might start training:

yolo train data=visdrone.yaml model=yolov8n.yaml

Make sure visdrone.yaml points to your dataset paths and is set up correctly. Happy training! 😊
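
If you're starting from the VisDrone2019-MOT annotations, you'll need to convert them to per-image YOLO labels first. Here's a rough sketch (not an official converter) for one sequence, assuming the standard VisDrone annotation columns (frame_id, target_id, bbox_left, bbox_top, bbox_width, bbox_height, score, category, truncation, occlusion); the paths, filename pattern, and class filtering are placeholders to adapt to your copy of the dataset:

from collections import defaultdict
from pathlib import Path
from PIL import Image

# Placeholder paths -- adapt to your dataset layout
seq_dir = Path('VisDrone2019-MOT-train/sequences/uav0000013_00000_v')
ann_file = Path('VisDrone2019-MOT-train/annotations/uav0000013_00000_v.txt')
out_dir = Path('labels/uav0000013_00000_v')
out_dir.mkdir(parents=True, exist_ok=True)

# Group annotation rows by frame index
rows_per_frame = defaultdict(list)
for line in ann_file.read_text().splitlines():
    if not line.strip():
        continue
    frame, _tid, x, y, w, h, score, cat, *_ = (int(float(v)) for v in line.split(','))
    if score == 0 or cat in (0, 11):  # assumption: drop ignored regions and the 'others' class
        continue
    rows_per_frame[frame].append((cat - 1, x, y, w, h))  # shift classes so pedestrian=0 ... motor=9

# Write one YOLO label file per frame: class x_center y_center width height, normalized to [0, 1]
for frame, rows in rows_per_frame.items():
    img_w, img_h = Image.open(seq_dir / f'{frame:07d}.jpg').size  # adjust the frame filename pattern if needed
    lines = [f'{c} {(x + w / 2) / img_w:.6f} {(y + h / 2) / img_h:.6f} {w / img_w:.6f} {h / img_h:.6f}' for c, x, y, w, h in rows]
    (out_dir / f'{frame:07d}.txt').write_text('\n'.join(lines) + '\n')

Note that Ultralytics also provides a ready-made VisDrone.yaml for the VisDrone detection (DET) subset, which may be the simplest starting point if you don't specifically need the MOT sequences.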

github-actions[bot] commented 1 week ago

πŸ‘‹ Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO πŸš€ and Vision AI ⭐