
How to get kalman predictions as an output in tracking #9039

Closed: yusufkoca0 closed this issue 6 months ago

yusufkoca0 commented 8 months ago

Search before asking

Question

Hi, my team and I are currently working on a tracking problem in a traffic environment with a drone. Our current approach follows the vehicle with a given ID by tracking the middle point of its bounding box. This causes issues when the vehicle is occluded. BoTSORT manages to catch the vehicle and give it the same ID after it becomes visible again, but during that timeframe our drone cannot follow it. So we have been thinking about making the drone follow the Kalman predictions instead. We have tried modifying the source code so that it returns Kalman predictions in results.Boxes, but so far we have been unsuccessful. Is there a better way to get Kalman predictions, or a better way to follow occluded objects?

Additional

No response

github-actions[bot] commented 8 months ago

👋 Hello @yusufkoca0, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI: if this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 8 months ago

@yusufkoca0 hello! It's great to hear about the innovative work you and your team are doing with drone traffic monitoring and tracking. Indeed, handling occlusions can be quite challenging.

For utilizing Kalman filter predictions in Ultralytics YOLOv8, especially with trackers like BoTSORT that incorporate Kalman filters for smoothing and predicting object positions, you would typically need to dive into the tracker's implementation. However, as you've noticed, direct access to Kalman filter predictions is not readily exposed through the Ultralytics interface.

One approach could be to modify the tracking algorithm's source code to include the Kalman prediction in the tracking results directly. Here's a basic idea on how you might approach it:

# Pseudocode for customizing your tracking output to include Kalman predictions.
# `track_info` and `results` are placeholders for the tracker's per-track state
# and the Results object; in practice Boxes wraps a tensor, so you would
# concatenate a predicted row rather than append to a Python list.
if 'kalman_prediction' in track_info:
    results.boxes.append(track_info['kalman_prediction'])

This requires familiarity with the tracking code and ensuring that the Kalman prediction data is accessible and correctly formatted.
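
If it helps, here is a rough sketch of what reading the Kalman state out of the tracker could look like after a model.track() call. The attribute names (model.predictor.trackers, tracked_stracks, lost_stracks, track.mean) come from the internal ultralytics/trackers layout at the time of writing and are not a stable public API, so please verify them against your installed version; the video path is a placeholder.

# Rough sketch: inspect the Kalman state kept by the tracker's internal track
# objects after model.track(). These attribute names are internal and may change
# between versions; "traffic.mp4" is a placeholder path.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

for result in model.track("traffic.mp4", tracker="botsort.yaml", stream=True):
    tracker = model.predictor.trackers[0]  # one tracker instance per video source
    for track in tracker.tracked_stracks + tracker.lost_stracks:
        # The first entries of the Kalman mean hold the predicted box centre;
        # lost tracks are the ones currently occluded but still being predicted.
        cx, cy = float(track.mean[0]), float(track.mean[1])
        print(f"id={track.track_id} predicted centre=({cx:.1f}, {cy:.1f})")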

If modifying the source doesn't align with your project goals or if you're looking for a simpler solution, another approach could involve implementing a separate Kalman filter module in your pipeline. You can input the detected objects' coordinates to this module whenever they're visible and rely on the module's predictions during occlusions.
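
As a starting point for such a separate module, here is a small sketch of a constant-velocity Kalman filter on the box centre built on OpenCV's cv2.KalmanFilter. The class name and noise values are only illustrative and would need tuning for your frame rate and vehicle speeds.

# Illustrative stand-alone constant-velocity Kalman filter on the box centre,
# built on OpenCV's cv2.KalmanFilter. Class name and noise values are examples.
import cv2
import numpy as np

class CenterKalman:
    def __init__(self, cx, cy):
        self.kf = cv2.KalmanFilter(4, 2)  # state [cx, cy, vx, vy], measurement [cx, cy]
        self.kf.transitionMatrix = np.array(
            [[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=np.float32)
        self.kf.measurementMatrix = np.array(
            [[1, 0, 0, 0], [0, 1, 0, 0]], dtype=np.float32)
        self.kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
        self.kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
        self.kf.errorCovPost = np.eye(4, dtype=np.float32)
        self.kf.statePost = np.array([[cx], [cy], [0], [0]], dtype=np.float32)

    def predict(self):
        # Call once per frame; returns the predicted centre even with no detection.
        cx, cy = self.kf.predict()[:2].flatten()
        return float(cx), float(cy)

    def update(self, cx, cy):
        # Call only on frames where the tracked ID was actually detected.
        self.kf.correct(np.array([[cx], [cy]], dtype=np.float32))

The idea would be to call predict() every frame and steer the drone toward its output, and call update() with the detected centre whenever BoTSORT reports the ID, so the filter keeps extrapolating through occlusions.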

Unfortunately, without direct support for exporting Kalman filter predictions in the current Ultralytics toolkit, these workarounds involve additional development but could provide the flexibility you need for your drone tracking application.

Have you considered reaching out in the Ultralytics discussions for community insight or potential hidden features that might be beneficial? The community often has valuable tricks up its sleeve. 🛠️

Best of luck with your project, and don't hesitate to ask further questions!

yusufkoca0 commented 8 months ago

Hi, thanks for the quick response. As you said, it requires strong familiarity with the source code. I have been trying to format the Kalman predictions correctly and append them to the output, but so far I have been unsuccessful. We thought about adding a separate Kalman filter that works simultaneously with BoTSORT's tracking code, but we have a time constraint since it must work in real time. I assume by community you meant the Discord community; I will post my question there as well, as I had not considered that platform at first. We are also open to any other suggestions for the occlusion problem besides Kalman predictions.

glenn-jocher commented 8 months ago

@yusufkoca0, I understand your challenges, especially when working under time constraints for real-time applications. Making modifications to the deep parts of the tracking code can be quite a task without in-depth knowledge of the source code, and setting up a separate Kalman filter that syncs with the tracker's output in real-time indeed adds complexity.

Posting in the Discord community is a great idea! You might find others who have tackled similar issues or have insights on effective shortcuts. 🚀

Regarding alternative approaches to the occlusion problem, you could explore motion vectors or optical flow techniques. These can sometimes offer a simpler way to predict an occluded object's trajectory based on its previous motion patterns.

Here's a very quick conceptual snippet for incorporating optical flow with detected object coordinates:

# Pseudocode for an optical-flow fallback, evaluated once per frame
if object_visible:
    update_object_position(new_coords)  # use the detector/tracker output directly
else:
    predicted_coords = predict_next_position_using_optical_flow(last_known_coords)
    update_object_position(predicted_coords)  # fall back to the flow-based estimate

Incorporating motion predictions like these can provide a fallback for when direct object detection isn't available, potentially smoothing out tracking during occlusions without fully diving into custom Kalman predictions in the tracker.
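
To make the flow idea a bit more concrete, here is a rough sketch using OpenCV's cv2.calcOpticalFlowPyrLK: it tracks a small grid of points around the last known centre between consecutive grayscale frames and shifts the estimate by the median displacement. This is only an approximation (during a full occlusion the flow mostly reflects camera and background motion), so it is best suited to short occlusions; the helper name and window size are just examples.

# Sketch of an optical-flow fallback: estimate the displacement of points around
# the last known centre between consecutive grayscale frames and shift the
# predicted position by the median flow.
import cv2
import numpy as np

def predict_center_with_optical_flow(prev_gray, curr_gray, last_center, win=20):
    cx, cy = last_center
    # Seed a small grid of points around the last known centre
    xs, ys = np.meshgrid(np.arange(cx - win, cx + win + 1, 10),
                         np.arange(cy - win, cy + win + 1, 10))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32).reshape(-1, 1, 2)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return last_center  # no reliable flow found; keep the previous estimate
    shift = np.median((new_pts[good] - pts[good]).reshape(-1, 2), axis=0)
    return float(cx + shift[0]), float(cy + shift[1])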

Additionally, depending on your application, re-examining frame rate and resolution or using strategic waypoints for the drone based on last known object paths might help mitigate some occlusion issues without significantly increasing computational demands.

Keep us updated on your progress and findings from the community; your project sounds fascinating, and tackling these challenges will surely bring out innovative solutions!

github-actions[bot] commented 7 months ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐