alejodosr / drl-landing

MIT License

Running learned network #2

Open Kluchozaur opened 5 years ago

Kluchozaur commented 5 years ago

Hi! I am new to machine learning and my question may be stupid, but how can I use your learned network in my own simulation with a Bebop 2? I am using Parrot Sphinx and bebop_autonomy, and I have a world with a moving platform carrying ArUco markers. The drone has a package that recognizes these markers — what's next?

alejodosr commented 5 years ago

Hi @Kluchozaur, it is possible to use it, but it requires a bit of work. Regarding the input of the network, note that this code is meant for a predefined set of markers in certain positions. Also, remember that your Bebop 2 needs a bottom camera with an image topic published in ROS.

Here you can find the moving platform you can incorporate into the Gazebo simulator (with the ArUco markers I have used): https://drive.google.com/file/d/1I02xUfXBjEY4SE51r9ctm5SxRzu2r5mi/view?usp=sharing

For other combinations of ArUco markers, you have to modify the frame-of-reference transformations included in the code.

Regarding the output of the network (the actions): in the file rl_environment_landing_with_RPdYdAPE_marker.cpp you can find the publishers for the actions published in ROS (roll, pitch, dYaw, dAltitude), which you have to modify to adapt to your simulation.

    roll_pitch_ref_pub_.publish(roll_pitch_msg);
    dYaw_ref_pub_.publish(dyaw_msg);
    dAltitude_ref_pub_.publish(daltitude_msg);

Soon I'll finish writing up how to use this repo for a real Bebop 2 flight. That information also applies to a simulated Bebop 2.

Hope this information is useful for you.