Research Group Webpage: [https://ps.is.tue.mpg.de/research_fields/robot-perception-group]
NEWS: Nodes and packages specific to our submission to RA-L + IROS 2019 have been added. Please scroll down for details.
All code in this repository, unless otherwise stated in a local license file or code header, is
Copyright 2018 Max Planck Institute for Intelligent Systems
and licensed under the terms of the GNU General Public License (GPL) v3 or later. See: https://www.gnu.org/licenses/gpl-3.0.en.html
ROS Packages:
Link or copy all required flight packages, plus any optional packages you need, into the src folder of your catkin workspace.
Build packages with catkin_make
Flying vehicles are connected via WiFi. Each flying vehicle needs a main computer running Linux and ROS (tested with ROS Kinetic). Either the main computer or a separate dedicated GPU board needs to run Caffe with Wei Liu's SSD Multibox detector and our ssd_server [repository](https://github.com/AIRCAP/caffe).
Alternatively, a separate GPU board can be connected via LAN (this is our setup: we run SSD Multibox on an NVIDIA Jetson TX1).
A flight controller running the LibrePilot firmware (suggested: OpenPilot Revolution) needs to be connected to the main computer via USB [repository](https://github.com/AIRCAP/LibrePilot).
A camera must be connected to the main computer and supply accurately timestamped frames. Any camera with sufficiently high resolution and ROS support can be used. We used Basler and FLIR cameras with our own ROS nodes for interfacing.
The librepilot_node node from the librepilot package [separate repository](https://github.com/AIRCAP/LibrePilot) provides IMU and robot self pose data and forwards waypoints to the flight controller's autopilot for aerial navigation.
All poses are in the NED (north-east-down) coordinate frame.
This node provides static transformations between various frames, such as the camera pose relative to the vehicle pose, or the NED world frame relative to the ROS ENU reference frame.
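For reference, the relation between the ROS-standard ENU frame and the NED frame used by the flight code is a fixed 180 degree rotation about the (1,1,0) axis. Below is a minimal sketch of how such a static transform could be broadcast with tf2_ros, assuming the frame names world_ENU (parent) and world (child) mentioned in the rviz notes further down; the actual node may publish additional frames and use different conventions.

```python
#!/usr/bin/env python
# Minimal sketch: broadcast a static ENU -> NED world transform.
# Frame names are assumptions based on the rviz notes below (world_ENU, world).
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node("world_ned_static_tf_sketch")
br = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = "world_ENU"   # ROS-standard East-North-Up frame
t.child_frame_id = "world"        # NED frame used by the flight code
# A 180 degree rotation about (1,1,0)/sqrt(2) swaps x/y and flips z,
# i.e. it maps (east, north, up) onto (north, east, down).
t.transform.rotation.x = 0.7071067811865476
t.transform.rotation.y = 0.7071067811865476
t.transform.rotation.z = 0.0
t.transform.rotation.w = 0.0

br.sendTransform(t)
rospy.spin()
```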
The model_distance_from_height node from the projection_models package provides the transformations other nodes need to translate between the camera frame and the world frame. It is responsible for converting 2D detections from the neural network into 3D uncertainty ellipses (PoseWithCovarianceStamped) in the world frame, as well as for converting 3D estimates into 2D regions of interest for the next detection. You will have to modify the launch files with correct information regarding the placement and vehicle-relative pose of the camera on your vehicle.
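To illustrate the idea (this is not the node's actual implementation): with a pinhole camera model, the distance to a person of roughly known height can be estimated from the pixel height of the 2D detection box, and the detection centre can then be back-projected into the world frame. All names and parameters in the sketch below (intrinsics, assumed person height, the camera-to-world transform) are illustrative assumptions.

```python
# Minimal sketch of a distance-from-height back-projection (illustration only,
# not the model_distance_from_height implementation). All parameters are assumed.
import numpy as np

def detection_to_world(u, v, box_height_px, fx, fy, cx, cy,
                       R_world_cam, t_world_cam, person_height_m=1.7):
    """Back-project the centre (u, v) of a 2D detection into the world frame.

    Depth is estimated from the apparent height of the bounding box: a person
    of height person_height_m spanning box_height_px pixels lies at roughly
    depth = fy * person_height_m / box_height_px.
    """
    depth = fy * person_height_m / float(box_height_px)
    # Pinhole back-projection into the camera frame (z pointing forward).
    p_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0]) * depth
    # Transform into the world frame using the camera pose.
    return R_world_cam @ p_cam + t_world_cam
```

The real node additionally attaches the covariance that forms the 3D uncertainty ellipse; that part is omitted here.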
The node provided by this package is required by model_distance_from_height for correct projections from the camera frame into the world frame and back. You will have to provide a config matching your camera. The camera should publish under the topic <machine_namespace>/video.
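As a rough sketch of the camera-side contract under the assumptions above: images are published under <machine_namespace>/video with the true capture time in the header, together with a calibration that matches your camera. The resolution, intrinsics, camera_info topic name and capture API below are placeholders, not values from this repository.

```python
#!/usr/bin/env python
# Sketch of the camera-side contract (placeholder values throughout): publish
# frames under <machine_namespace>/video with the real capture timestamp and a
# CameraInfo that matches your calibration.
import rospy
from sensor_msgs.msg import Image, CameraInfo

rospy.init_node("camera_driver_sketch")
ns = rospy.get_namespace()                       # e.g. /machine_1/
img_pub = rospy.Publisher(ns + "video", Image, queue_size=1)
info_pub = rospy.Publisher(ns + "video/camera_info", CameraInfo, queue_size=1)

info = CameraInfo()
info.width, info.height = 1280, 1024             # placeholder resolution
info.K = [1000.0, 0.0, 640.0,                    # placeholder intrinsics
          0.0, 1000.0, 512.0,
          0.0, 0.0, 1.0]

def publish_frame(raw_frame, capture_time):
    """raw_frame and capture_time come from your camera SDK (hypothetical)."""
    msg = Image()
    msg.header.stamp = capture_time              # hardware capture time, not now()
    msg.header.frame_id = "camera"               # must match your tf setup
    # ... fill encoding, step and data from raw_frame ...
    info.header = msg.header
    img_pub.publish(msg)
    info_pub.publish(info)
```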
The neural network detector node does NOT run a neural network itself. It listens to images, forwards them to a running ssd_server instance, and publishes the returned detections back into ROS as timestamped messages.
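Conceptually the node is a thin bridge: images arrive on a ROS topic, are handed to the external ssd_server over the network, and the returned boxes are republished with the original image timestamp. The sketch below shows only the ROS side; query_ssd_server(), the topic names and the output message type are placeholders for the repository-specific wire protocol and message definitions.

```python
#!/usr/bin/env python
# Sketch of the detector bridge (ROS side only). query_ssd_server() and the
# output message type are placeholders for the repository-specific protocol.
import rospy
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray   # stand-in for the actual message type

def query_ssd_server(image_msg):
    # Placeholder: send the image to the external ssd_server over the network
    # and parse the returned list of boxes and scores.
    return []

def on_image(image_msg):
    boxes = query_ssd_server(image_msg)
    out = Detection2DArray()
    out.header = image_msg.header              # keep the original capture timestamp
    # ... fill out.detections from the returned boxes ...
    det_pub.publish(out)

rospy.init_node("detector_bridge_sketch")
det_pub = rospy.Publisher("object_detections", Detection2DArray, queue_size=1)
rospy.Subscriber("video", Image, on_image, queue_size=1)
rospy.spin()
```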
The Kalman filter runs on each vehicle and fuses the detections of all vehicles (subscribed to using fkie_multimaster and robot-specific namespaces). The fused estimate is published every time the robot self pose is updated from the flight controller. The filter also tracks the offset between the flight controller's GPS-derived pose estimate and the actual pose as estimated from the neural network detections.
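For intuition, the core fusion step is a standard Kalman measurement update: each incoming 3D detection (a PoseWithCovarianceStamped from any of the robots) corrects the current target estimate according to its covariance. The numpy sketch below shows only that position update; the actual filter additionally handles prediction, multi-robot synchronisation and the GPS offset state.

```python
# Minimal sketch of the measurement update used when fusing one 3D detection
# (position only); the real filter's state layout and the GPS offset handling
# are more involved.
import numpy as np

def kalman_position_update(x, P, z, R):
    """x: (3,) position estimate, P: (3,3) covariance,
    z: (3,) detected position,   R: (3,3) detection covariance."""
    H = np.eye(3)                        # the position is observed directly
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ (z - H @ x)          # corrected estimate
    P_new = (np.eye(3) - K @ H) @ P      # corrected covariance
    return x_new, P_new
```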
This node corresponds to the work published in RA-L + IROS 2018. For our latest work on active perception see the next section and node.
An MPC-based planner that allows the robots to follow the detected person. WARNING! This code is in a very early state of development and not considered stable. Like everything else, use it at your own risk. You should always have a means of manual override!!! What the planner does is:
Active-perception-driven convex MPC planner with integrated collision avoidance, as submitted in our manuscript to RA-L + IROS 2019.
This package corresponds to the work submitted for review to RA-L + IROS 2019. Packages other than nmpc_planner (see above) remain mostly the same.
A DQMPC-based planner with integrated collision avoidance, corresponding to the work to appear at SSRR 2018.
This trajectory planner was used as a comparison in the work submitted for review to RA-L + IROS 2019.
Some of these packages are specific to camera hardware, simulation environment, etc.
The code in this repository, together with the requirements listed above, can be used to control a group of aerial robots in the real world or in simulation. This can help you reproduce the results presented in our publications and serve as a base for your own robotics research.
With the exception of the gcs_visualization package, which publishes a number of tfs and ROS debug topics for visualization in rviz, all the code is meant to run on board the flying vehicles. For neural network detection, each aerial vehicle needs a CUDA-capable GPU running the ssd_server.bin executable built from our SSD Multibox repository. In our setup this runs on a separate computer connected to the robot's main computer with a short ethernet cross-link cable; it is of course possible to run everything on a single computer if the hardware is capable enough. A camera should also be connected to the main computer, with appropriate ROS packages providing image data as sensor_msgs/Image along with the appropriate camera calibration.
We use OpenPilot Revolution flight controllers with our own custom firmware based on LibrePilot for flight controller ROS integration. They are connected to the main computer via USB, accept flight control waypoints from ROS, and in turn provide GPS+IMU state estimates derived from the integrated EKF-based sensor fusion.
Switching to a different low-level flight controller would require a wrapper package that provides the same data and accepts waypoints, similar to the one we use in simulation.
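Such a wrapper would essentially mirror the librepilot_node interface: publish the vehicle self pose and IMU data into ROS and forward commanded waypoints to the autopilot. The skeleton below is only an illustration; the topic names, message types and send_to_autopilot() call are placeholders, and the real interface must match what librepilot_node provides in this repository.

```python
#!/usr/bin/env python
# Skeleton of a flight controller wrapper node (illustration only). Topic names,
# message types and send_to_autopilot() are placeholders; match them to what
# librepilot_node actually provides before use.
import rospy
from geometry_msgs.msg import PoseStamped, PoseWithCovarianceStamped
from sensor_msgs.msg import Imu

def send_to_autopilot(wp_msg):
    # Placeholder: translate the NED waypoint and transmit it via your
    # autopilot's own API or serial protocol.
    pass

def on_waypoint(wp_msg):
    send_to_autopilot(wp_msg)

rospy.init_node("my_fc_wrapper_sketch")
pose_pub = rospy.Publisher("pose", PoseWithCovarianceStamped, queue_size=1)
imu_pub = rospy.Publisher("imu", Imu, queue_size=1)
rospy.Subscriber("waypoint", PoseStamped, on_waypoint, queue_size=1)

rate = rospy.Rate(100)                 # publish at the autopilot's update rate
while not rospy.is_shutdown():
    # ... read the autopilot state, fill and publish pose/imu messages in NED ...
    rate.sleep()
```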
We use fkie_multimaster to run a roscore on each of the robots as well as on a base station computer. scripts/globalstart.sh is used to connect to all the robots and run a startup script, which in turn starts all required ROS nodes in separate screen background terminals. The globalstart script then repeatedly polls each robot for online status and possible errors. It also allows publishing a global topic to trigger video recording on all robots if they run our own camera interface and video recording packages.
Expect to read and modify at least the startup scripts and paths to adapt our code to your own robots and cameras. You will likely also need to change network addresses in various launch files and start scripts.
In our most recent work, we compare different formation control algorithms. These can be selected by changing the nmpc_planner launch file in scripts/navigation.sh. By default we use the Active Target Perception solution. To set up known static obstacles to be avoided, the launch files must be edited; see the section about simulation for details.
The directory scripts/simulation contains helper scripts to run our code for any number of robots on a single machine using the Gazebo simulator. The easiest way to test this is the setup_mavocap_gazebo.sh script, which starts everything needed, provided all requirements have been installed correctly. The components started are:
You can debug and visualize the simulation, for example using rviz. The fixed_frame should be set to world_ENU, as rviz by default uses East-North-Up coordinate axes. LibrePilot and all our code use North-East-Down, represented by the static tf providing the world frame.
The following ROS topics are probably of interest for visualization:
Results achieved in real-world experiments always depend on the hardware in question as well as environmental factors on the day of the experiment. However, our simulation results were averaged over a large number of identical runs and should be reproducible by third parties.
RA-L + IROS 2018 -- Eric Price et al. Deep Neural Network-based Cooperative Visual Tracking through Multiple Micro Aerial Vehicles.
The simulation experiments were conducted using a flat plane world and no static obstacles, running the baseline nmpc planner. To run this, the following changes are necessary.
RA-L + IROS 2019 -- Work submitted for review by Rahul Tallamraju et al.
The simulation experiments were conducted in a 3D world, with and without static obstacles, using multiple trajectory planners.
Static obstacle avoidance can be disabled by setting param name="POINT_OBSTACLES" value="false" in any of the above-mentioned nmpc_planner launch files. Keep in mind that the simulated robots can collide with trees and get stuck in them, so you should also switch to a tree-less world by changing $WORLD to "arena_ICRA_notrees" in scripts/simulation/setup_mavocap_gazebo.sh.
The launch file for the formation controller used in real-robot experiments can be set in scripts/navigation.sh. To enable or disable virtual obstacles, set the POINT_OBSTACLES parameter accordingly. The obstacle coordinates are defined in yml files in packages/flight/nmpcplanner/cfg.