Closed: esraaelelimy closed this issue 5 years ago
Exactly. You are welcome to use any format you'd like to input the dynamic-obstacle information; we just happen to store it in the custom Clusters msg. Some information on the pedestrian tracking software is described in this thesis: https://dspace.mit.edu/handle/1721.1/106011
It is a few years old, though, and many new strategies have come out since. The general idea we used was to find clusters in a laserscan and label each cluster as pedestrian/not-pedestrian using camera images, with known static transforms relating the two sensor frames. If your camera also provides depth, you might be able to just run YOLO and grab the relevant geometry from the depth image instead of requiring a laserscan.
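The clustering step described above can be sketched roughly as follows. This is a minimal, hypothetical illustration (not the repo's actual tracking code): it converts laserscan ranges to 2D points and groups consecutive points whose spacing falls below a gap threshold; the function names and the `max_gap` parameter are assumptions for the sketch.

```python
import numpy as np

def laserscan_to_points(ranges, angle_min, angle_increment, range_max):
    """Convert raw laserscan ranges to 2D Cartesian points, dropping invalid returns."""
    ranges = np.asarray(ranges, dtype=float)
    angles = angle_min + angle_increment * np.arange(len(ranges))
    valid = np.isfinite(ranges) & (ranges > 0.0) & (ranges < range_max)
    xs = ranges[valid] * np.cos(angles[valid])
    ys = ranges[valid] * np.sin(angles[valid])
    return np.stack([xs, ys], axis=1)

def euclidean_clusters(points, max_gap=0.3):
    """Group consecutive scan points whose spacing is below max_gap (meters).

    Each cluster is a candidate obstacle; its centroid (and, with tracking
    across frames, its velocity) is the kind of data a clusters msg carries.
    """
    clusters = []
    current = [points[0]]
    for p in points[1:]:
        if np.linalg.norm(p - current[-1]) <= max_gap:
            current.append(p)
        else:
            clusters.append(np.array(current))
            current = [p]
    clusters.append(np.array(current))
    return clusters
```

In a full pipeline, each cluster centroid would then be projected into the camera frame (via the known static transform) to decide whether the cluster is a pedestrian, and tracked over time to estimate velocity.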
On Mon, Mar 18, 2019 at 1:26 PM Esraa Magdy notifications@github.com wrote:
Hello Michael, regarding Clusters.msg: if I understand it right, the clusters msg contains information about the dynamic obstacles in the environment, such as their locations and velocities. If that is correct, how do you get this information? Do you use an object detection system like YOLO or something?
Thanks in advance.
My camera does provide depth information, so I will try to use YOLO and get the corresponding depth from the point cloud. I will also have a look at the thesis. Thanks a lot, and nice work :)
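The YOLO-plus-depth idea can be sketched like this: given a detector bounding box and an aligned depth image, take a robust depth estimate inside the box and back-project the box center through the pinhole model. This is a hedged illustration, not code from cadrl_ros; the function name and the use of a median over the box are assumptions.

```python
import numpy as np

def bbox_to_3d(depth_image, bbox, fx, fy, cx, cy):
    """Estimate an obstacle's 3D position in the camera frame from a depth image.

    depth_image: HxW array of depths in meters, aligned with the detector's image.
    bbox: (u_min, v_min, u_max, v_max) pixel bounds from the detector (e.g. YOLO).
    fx, fy, cx, cy: pinhole camera intrinsics.
    """
    u_min, v_min, u_max, v_max = bbox
    patch = depth_image[v_min:v_max, u_min:u_max]
    valid = patch[np.isfinite(patch) & (patch > 0.0)]
    # Median is more robust than mean to background pixels inside the box.
    z = np.median(valid)
    u = (u_min + u_max) / 2.0
    v = (v_min + v_max) / 2.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

Tracking the resulting 3D positions across frames (e.g. with a simple constant-velocity filter) would supply the velocity component of the clusters msg.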