Closed Rak-r closed 9 months ago
@Rak-r hello! Your endeavor to combine YOLOv8's tracking capabilities with 3D data from an Intel RealSense D435 for speed estimation is quite interesting.
With respect to your concern about object re-identification after it leaves and then reappears in the frame, it's true that the default ByteTrack tracker will assign a new ID. This can indeed affect longitudinal speed estimation, since the tracker treats it as a new object and thereby resets any accumulated speed data.
For continuous speed estimation despite ID resets, you might consider implementing a custom re-identification mechanism that matches the new ID to the last known state of the object based on additional cues (such as appearance or 3D positioning) when it reappears. This would allow you to maintain speed estimations more consistently across frame exits and entries.
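A minimal sketch of such a position-based re-identification fallback, assuming 3D positions in metres from the D435 depth data. The function name, the dictionary of lost tracks, and the 0.5 m threshold are all illustrative assumptions, not part of YOLOv8 or ByteTrack:

```python
import math

def reassociate(new_pos, lost_tracks, max_dist=0.5):
    """Return the old track ID whose last known 3D position is closest
    to new_pos (within max_dist metres), or None if nothing matches."""
    best_id, best_dist = None, max_dist
    for track_id, last_pos in lost_tracks.items():
        d = math.dist(new_pos, last_pos)  # Euclidean distance in 3D
        if d < best_dist:
            best_id, best_dist = track_id, d
    return best_id

# Example: a pedestrian lost at (1.0, 0.0, 4.0) reappears nearby,
# so the new detection is re-assigned the old ID 7.
lost = {7: (1.0, 0.0, 4.0), 9: (3.0, 0.0, 6.0)}
print(reassociate((1.1, 0.0, 4.1), lost))  # matches track 7
```

In practice you would combine this positional cue with an appearance embedding (as Deep SORT does) so that two objects lost near the same spot are not confused.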
This task is quite sophisticated and might require fine-tuning or additional algorithmic support beyond the default capabilities provided by YOLOv8's tracking. If you're interested in exploring the enhancement of tracking persistency, you might delve into some existing literature on object re-identification and multi-object tracking.
Happy to see the innovation happening with YOLOv8, and the best of luck with your speed estimation enhancements! Remember to check out our Ultralytics Docs for guidance on modes like Track that might help you refine your approach. Keep up the great work!
Thanks for the explanation. Although this scenario might be future work, my current application involves a pedestrian coming from one direction and a vehicle from the other (their paths are perpendicular), so for now there is no scenario where the pedestrian goes out of frame. What I currently want is the pedestrian's speed. I am looking at Deep SORT for re-ID tasks and to extract the trajectories of the objects, but I am getting a bit confused about how I can compute the speed or velocity.
@Rak-r, indeed, if your application involves pedestrians and vehicles crossing paths without leaving the frame, continuous tracking without re-identification may suffice for speed estimation.
To compute the speed of a pedestrian using trajectories provided by a tracker like Deep SORT, you'll need to apply

Speed = Distance / Time

to get the pedestrian's speed. Ensure you have the correct scale for converting distance from pixels to real-world units (e.g., meters), and sync it with your frame rate to measure time accurately.
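The formula above can be sketched as follows. The `metres_per_pixel` calibration value is an assumption you would need to measure for your own camera setup, and the function name is illustrative:

```python
def speed_mps(p_prev, p_curr, metres_per_pixel, fps, frames_elapsed=1):
    """Speed in m/s from two pixel-space centre points of a tracked object."""
    # convert the pixel displacement to metres using the calibration scale
    dx = (p_curr[0] - p_prev[0]) * metres_per_pixel
    dy = (p_curr[1] - p_prev[1]) * metres_per_pixel
    distance = (dx * dx + dy * dy) ** 0.5   # metres
    time_s = frames_elapsed / fps           # seconds between the two samples
    return distance / time_s

# A centre point moving 10 px between consecutive frames at 30 FPS,
# with an assumed scale of 0.005 m/px:
print(round(speed_mps((100, 200), (110, 200), 0.005, 30), 3))  # 1.5 (m/s)
```

Note that a single metres-per-pixel scale is only valid when the object moves in a plane roughly parallel to the image; with the D435 you can instead use the depth data directly, which avoids this calibration entirely.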
This process does not inherently require re-identification, assuming uninterrupted tracking. If you rely on the trackers built into YOLOv8, remember to review our documentation on the Track mode for insights that could inform your approach to speed calculations. Keep moving forward!
I read that we can get a velocity estimate of detected and tracked objects by reading the Kalman filter state inside the tracker. I have searched for it but couldn't find much. I am new to Kalman filters, so I have very little idea about them.
Another thing: as advised above, I have extracted the trajectories along with the 3D bounding box centers. Using these two, I am finding the Euclidean distance between the current bbox position and the previous position, then dividing by the timestamp difference. I get some data.
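The approach described above could look something like this sketch, assuming a trajectory of `(timestamp_s, (x, y, z))` samples in seconds and metres (the names are illustrative, not from any particular tracker API):

```python
import math

def speeds_from_trajectory(trajectory):
    """Per-step speeds (m/s) from a list of (timestamp_s, (x, y, z)) samples,
    e.g. 3D bounding-box centres derived from the D435 depth stream."""
    speeds = []
    for (t0, p0), (t1, p1) in zip(trajectory, trajectory[1:]):
        dt = t1 - t0
        if dt > 0:  # skip duplicate or out-of-order timestamps
            speeds.append(math.dist(p0, p1) / dt)
    return speeds

# A pedestrian moving roughly 0.6 m every half second (~1.2 m/s):
traj = [(0.0, (0.0, 0.0, 4.0)),
        (0.5, (0.6, 0.0, 4.0)),
        (1.0, (1.2, 0.0, 4.1))]
print(speeds_from_trajectory(traj))
```

Averaging or median-filtering these per-step speeds over a short window helps suppress the jitter that raw depth measurements introduce.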
Now I'm getting a bit confused about which approach is correct, and how I can extract the velocity from the tracker's Kalman filter. I have also looked at Deep SORT and searched for the same, but found very little information.
Any help would be appreciated @glenn-jocher
@Rak-r, extracting velocity from a Kalman filter involves interpreting the state vector maintained by the filter, which typically includes position and velocity information. If you're new to Kalman filters, it might take some time to familiarize yourself with their operation and how they're implemented in the tracker you're using.
Your current method of calculating speed by measuring the Euclidean distance between consecutive 3D bounding box positions and dividing by the time interval is a straightforward and valid approach. It gives you the average speed between two points in time.
To extract velocity from the Kalman filter within a tracker like Deep SORT, you would typically access the state vector after each update step, which should give you an estimate of the current velocity. However, this requires a deeper understanding of the tracker's implementation and the Kalman filter's state representation.
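To illustrate the idea without depending on any particular tracker, here is a minimal 1D constant-velocity Kalman filter whose state is `[position, velocity]`; after each update, the velocity estimate is simply the second state component. Deep SORT's filter follows the same principle with a higher-dimensional state (bounding-box parameters and their rates), but verify the exact state layout in the implementation you use before indexing into it:

```python
def kf_step(x, P, z, q=1e-4, r=1e-2):
    """One predict+update cycle of a 1D constant-velocity Kalman filter.
    x = [position, velocity], P = 2x2 covariance, z = position measurement.
    Assumes dt = 1 between steps; q and r are process/measurement noise."""
    # predict: position advances by velocity, velocity stays constant
    x = [x[0] + x[1], x[1]]
    P = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + q, P[0][1] + P[1][1]],
         [P[1][0] + P[1][1], P[1][1] + q]]
    # update with a position-only measurement
    s = P[0][0] + r                      # innovation covariance
    k = [P[0][0] / s, P[1][0] / s]       # Kalman gain
    innov = z - x[0]
    x = [x[0] + k[0] * innov, x[1] + k[1] * innov]
    P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
         [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
    return x, P

x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for z in [1.0, 2.0, 3.0, 4.0]:           # object moving 1 unit per step
    x, P = kf_step(x, P, z)
print(x[1])                              # velocity estimate, close to 1.0
```

The key takeaway: the filter estimates velocity even though only positions are measured, which is exactly what a tracker's state vector gives you for free.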
For now, if you're achieving reasonable results with your Euclidean distance method, you might continue refining this approach. If you decide to delve into Kalman filters, consider studying the filter's theory and its application within the specific tracker's codebase.
Keep experimenting, and you'll find the method that best suits your needs. Good luck!
Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO and Vision AI!
Search before asking
Question
I have tried to set up the detection and default tracker (ByteTrack) stack with an Intel RealSense D435.
What I want now, after having the 3D data about the objects, is to do speed estimation of the detected and tracked objects (say, a person's speed). The default YOLO trackers, ByteTrack/BoT-SORT, do not re-identify an object once it leaves the frame. I would like some suggestions.
Additional
No response