Zhefan-Xu / CERLAB-UAV-Autonomy

[CMU] A Versatile and Modular Framework Designed for Autonomous Unmanned Aerial Vehicles [UAVs] (C++/ROS/PX4)
MIT License

A question about the paper #1

Closed yzysmile closed 9 months ago

yzysmile commented 9 months ago

Hello! Thanks for this great work! I have read the paper "Onboard dynamic-object detection and tracking for autonomous robot navigation with RGB-D camera".

I am confused by a passage in Section D, "Data Association and Tracking". The paper states: "Instead of directly using the previous obstacle’s feature, we apply the linear propagation to get the predicted obstacle’s position and replace the previous obstacle’s position with the predicted position in the feature vector."

We want to match the obstacles in the previous frame with those in the current frame. Why not directly use the features from the previous frame? Could you please explain?

Best yzy

Zhefan-Xu commented 9 months ago

Hi @yzysmile, thanks for the great question. Please note that we only replace the "position" feature of the previous frame by linear propagation; all the other features remain the same. The reason we use linear propagation is that a detected dynamic object may be moving fast, so its position in the previous frame and in the current frame can differ significantly (e.g., at 5 m/s with a 0.1 s sample time, the difference is 0.5 m). Such a large difference can lead to mismatches when there are multiple objects. Please let me know if that answers your question.
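The idea above can be sketched in a few lines. This is a hypothetical illustration, not the repository's actual code: the `Obstacle` struct, `propagate`, and `positionDistance` are made-up names. It shows how propagating the previous position with the estimated velocity brings a fast-moving obstacle's feature close to its current detection before matching.

```cpp
#include <cmath>

// Hypothetical sketch (not the repo's actual data structure): an obstacle
// with a position feature and an estimated velocity.
struct Obstacle {
    double x, y;    // position feature
    double vx, vy;  // estimated velocity
};

// Linear propagation: replace the previous position with the predicted
// position after dt seconds. All other features would stay unchanged.
Obstacle propagate(const Obstacle& prev, double dt) {
    Obstacle pred = prev;
    pred.x = prev.x + prev.vx * dt;
    pred.y = prev.y + prev.vy * dt;
    return pred;
}

// Position distance used for matching previous obstacles to current ones.
double positionDistance(const Obstacle& a, const Obstacle& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}
```

With an obstacle moving at 5 m/s along x and a 0.1 s sample time, the raw previous position is 0.5 m away from the current detection, while the propagated position coincides with it, so the match is unambiguous even with nearby objects.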

yzysmile commented 9 months ago

Thanks for your quick response. I understand what you mean from your example.

Just as you said, "we only replace the 'position' feature by linear propagation for previous frame." So does "linear propagation" mean the predicted position is obtained by the Kalman filtering described in the next section? Is that the purpose of using the Kalman filter?

Zhefan-Xu commented 9 months ago

A Kalman filter has two parts: (1) propagation and (2) correction. The linear propagation uses only the first part. The entire Kalman filter is for state estimation (i.e., you propagate the state with your motion model and then correct it with the new detection for tracking).

yzysmile commented 9 months ago

Thank you for your answer. : )