Closed AssassinCrow closed 3 years ago
Hi! We do detection in sensor frame, but tracking in a fixed world frame. The default config uses "odom" frame, which is based upon the robot's odometry (so there might be some drift over long time periods, but usually for short-term tracking this is negligible). Alternatively, you could also configure the tracking frame to "map" within the nnt.launch file I believe.
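If it helps, such an override might look roughly like the following. This is only a sketch: the exact argument name (`world_frame` here) and the package/launch file layout in your checkout may differ, so please verify them against your copy of nnt.launch:

```xml
<!-- Hypothetical override; check the actual arg name inside nnt.launch -->
<include file="$(find srl_nearest_neighbor_tracker)/launch/nnt.launch">
  <arg name="world_frame" value="map"/>
</include>
```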
The transformation from sensor to world frame happens either in the detection-to-detection fusion pipeline, or at the latest in the tracker, depending on your setup.
Note that regardless of what the detector outputs, we currently only track 2D x,y coordinates over the groundplane. This might change in the future (it would basically involve some minor modifications to the motion models used for state estimation).
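To illustrate what "tracking 2D x,y over the groundplane" means in practice, here is a minimal constant-velocity prediction sketch. This is an illustration only, not the package's actual motion model code; the `[x, y, vx, vy]` state layout is an assumption:

```python
# Minimal constant-velocity prediction over the 2D groundplane.
# Illustrative sketch only -- not SPENCER's actual motion model.

def predict(state, dt):
    """Propagate a [x, y, vx, vy] state forward by dt seconds."""
    x, y, vx, vy = state
    return [x + vx * dt, y + vy * dt, vx, vy]

# A person at (1.0, 2.0) moving at (0.5, -0.25) m/s, predicted 2 s ahead:
print(predict([1.0, 2.0, 0.5, -0.25], 2.0))  # -> [2.0, 1.5, 0.5, -0.25]
```

Extending the tracker to a 3D state would mainly mean enlarging this state vector and the corresponding matrices in the state estimator.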
Thanks for your quick reply. I realized my question might have been ambiguous, so I'd like to confirm my understanding. I hope this follow-up doesn't take up too much of your time.
According to your comment, it seems to me that:
the detected information topic is a relative value from the sensor; meanwhile, the TrackedPeople topic's values have nothing to do with the sensor.
Do the pose and velocity from TrackedPerson, which is an output of this SPENCER package, represent the distance from the sensor, or the original pose (= <0 0 0 0 0 0>) of the world-ground coordinates?
Thanks in advance :)
the detected information topic is a relative value from the sensor; meanwhile, the TrackedPeople topic's values have nothing to do with the sensor.
Correct!
Do the pose and velocity from TrackedPerson, which is an output of this SPENCER package, represent the distance from the sensor, or the original pose (= <0 0 0 0 0 0>) of the world-ground coordinates?
The pose of a TrackedPerson and its linear velocity are relative to the world origin (or, more precisely, the origin of the odom frame). In practice, the origin of the odom frame is usually the location at which you switched on your robot.
You can easily transform these poses into any (sensor) frame of your choice using the TF library, e.g. via tf::TransformListener::transformPose().
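Conceptually, that transform just expresses the person's world-frame pose relative to the sensor's current pose. Here is a self-contained 2D sketch of the underlying math (the sensor pose values are made up; in a real system you would query them from TF instead of hard-coding them):

```python
import math

def world_to_sensor(person_xy, sensor_xy, sensor_yaw):
    """Express a 2D point given in the odom/world frame in the sensor frame.

    person_xy  -- (x, y) of the tracked person in the world frame
    sensor_xy  -- (x, y) of the sensor origin in the world frame
    sensor_yaw -- heading of the sensor in the world frame, in radians
    """
    # Offset from the sensor to the person, still in world coordinates.
    dx = person_xy[0] - sensor_xy[0]
    dy = person_xy[1] - sensor_xy[1]
    # Rotate that offset by -yaw to land in the sensor frame.
    c, s = math.cos(-sensor_yaw), math.sin(-sensor_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

# Sensor at (1, 0) facing +y (yaw = 90 deg); a person at (1, 2) in the
# world frame comes out roughly 2 m straight ahead of the sensor:
print(world_to_sensor((1.0, 2.0), (1.0, 0.0), math.pi / 2))
```

TF performs the same computation for you, chaining all intermediate frames (sensor, base, odom) and interpolating over time, which is why using the library is preferable to doing this by hand.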
Hello, your project has helped me a lot, and I'm currently planning to adopt this package. I hope my request doesn't disturb your work, but I have some questions.
What is the frame of the output (like TrackedPerson) of the SPENCER project? I mean, are they global coordinate values, or coordinates relative to the sensor? Just so you know, I've already read this issue, and I have a feeling that person was asking which frame the RViz visualization tools use. Unfortunately, I couldn't understand that issue perfectly, so I'd appreciate some helpful advice. (In my personal opinion, it seems the output follows coordinates relative to the sensor.) Thanks in advance :)