lithiumhydride2 closed this issue 10 months ago
Hi! You are correct, the simulation demo uses the ground-truth states from Gazebo. The purpose of this environment is only to test the core algorithm before deploying the drones in the field. We could have used a simulated GPS, but IIRC there were some particularities on the PX4 side at the time. On the real drones, ego-localization is done using a combination of IMU and optic flow, whereas relative localization is done with visual detection & tracking. RTK GPS information is only used for ground-truth localization (i.e., for the plots in the paper). Hope this answers your question and that you still find the code useful.
Thank you so much for answering my question! Your code is still very useful to me.
New issue: I would like to ask how you set the waypoints for the UAVs in your experiments, i.e., how you obtain the transformation from the waypoint to the UAV's own coordinate frame, which is needed to compute the migration term. Did you use VIO in the real experiments? Looking forward to your answer, thanks a lot!
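For reference, my current understanding of the migration term is a simple attraction toward the waypoint, along these lines (a minimal sketch; the function name and the `v_mig` gain are my own placeholders, not from your code, and it assumes the waypoint has already been transformed into the drone's frame):

```python
import numpy as np

def migration_term(position, waypoint, v_mig=1.0):
    """Velocity contribution steering the drone toward the waypoint.

    `position` and `waypoint` must be expressed in the same frame,
    which is why the waypoint-to-drone-frame transform matters.
    """
    direction = np.asarray(waypoint) - np.asarray(position)
    distance = np.linalg.norm(direction)
    if distance < 1e-6:  # already at the waypoint, no migration needed
        return np.zeros(3)
    return v_mig * direction / distance  # unit direction scaled by gain
```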
I read the code and found that the topic `/drone_x/mavros/vision_pose/pose` is published by the node `/gazebo_mocap_node`, but `/gazebo_mocap_node` just subscribes to every drone's ground-truth pose from the Gazebo environment and republishes it unchanged on `/drone_x/mavros/vision_pose/pose`, which is then used to compute the control commands in `flocking_node`. My question is: the default demo in the `readme.md` is therefore NOT a demo of vision-based drones flocking in a GPS-DENIED environment, is it? And how can I run such a demo, i.e., one where the drones do NOT get their accurate pose from the Gazebo environment but from vision-based estimates?
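As I understand it, the node does roughly the following (a minimal sketch of the republishing pattern; the `/gazebo/model_states` topic, the `drone_%d` model naming, the 1-based numbering, and the `map` frame are my assumptions, not taken from the repo):

```python
import rospy
from gazebo_msgs.msg import ModelStates
from geometry_msgs.msg import PoseStamped

def model_states_callback(msg):
    # Gazebo publishes all model names and poses in parallel lists.
    for name, pose in zip(msg.name, msg.pose):
        if name not in publishers:  # skip non-drone models
            continue
        out = PoseStamped()
        out.header.stamp = rospy.Time.now()
        out.header.frame_id = 'map'  # assumed fixed frame
        out.pose = pose  # ground truth forwarded unchanged
        publishers[name].publish(out)

rospy.init_node('gazebo_mocap_node')
num_drones = rospy.get_param('~num_drones', 3)
# One publisher per drone on the topic mavros reads as an
# external vision-pose estimate.
publishers = {
    'drone_%d' % i: rospy.Publisher(
        '/drone_%d/mavros/vision_pose/pose' % i, PoseStamped, queue_size=1)
    for i in range(1, num_drones + 1)
}
rospy.Subscriber('/gazebo/model_states', ModelStates, model_states_callback)
rospy.spin()
```

So replacing this pass-through with an actual vision-based estimator seems to be what a GPS-denied demo would require, unless I am missing an existing launch option.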