cd catkin_ws/src
git clone https://github.com/AutonomyLab/bebop_autonomy.git
git clone https://github.com/ShiSanChuan/darknet_ros.git
cd ..
catkin build bebop_autonomy
catkin build darknet_ros -DCMAKE_BUILD_TYPE=Release
source devel/setup.bash
wget http://pjreddie.com/media/files/yolov3-voc.weights
wget http://pjreddie.com/media/files/yolov3.weights
roslaunch bebop_driver bebop_node.launch
roslaunch darknet_ros yolo_v3.launch
rostopic pub --once /bebop/land std_msgs/Empty
cd doc
g++ -std=c++11 Inrang.cpp -o Inrang `pkg-config --cflags --libs opencv`
./Inrang "YOLO V3_screenshot_17.10.2018.png"
Recognizing objects with a deep-learning framework such as YOLO works, but it is limited by image resolution: nearby targets (roughly 3 m away) are detected, while distant targets (about 5 pixels in the 428x240 image streamed from the Parrot) cannot be distinguished at all. An OpenCV color-detection thread was therefore added on top of the original code to assist YOLO: from far away the drone first flies toward a color candidate, and if it turns out not to be the target, it shifts its viewpoint and moves on to the next suspected target (a sketch of such a color helper is shown below).
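As a rough illustration of that auxiliary thread, the helper below thresholds a BGR frame in HSV and returns the centroid of the largest blob of the target color. The HSV bounds and the function name are placeholders for illustration, not the actual values or interface used in Inrang.cpp.

// Illustrative color helper: threshold the Bebop frame in HSV and return
// the centroid of the largest blob of the target color.
#include <opencv2/opencv.hpp>
#include <vector>

// NOTE: the HSV range below (a red-ish band) is a placeholder, not the tuned values from Inrang.cpp.
bool findColorTarget(const cv::Mat& bgr, cv::Point2f& center)
{
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 120, 70), cv::Scalar(10, 255, 255), mask);
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return false;
    // Keep the largest blob; even a ~5 px distant target yields a usable centroid.
    size_t best = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[best])) best = i;
    cv::Moments m = cv::moments(contours[best]);
    if (m.m00 <= 0) return false;
    center = cv::Point2f(m.m10 / m.m00, m.m01 / m.m00);
    return true;
}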
Because the target may be moving or stationary, a PID controller adjusts the flight commands so that the center of the detected target coincides with the center of the image.
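A minimal sketch of that centering loop is given below, assuming the standard bebop_autonomy piloting interface (geometry_msgs/Twist published on /bebop/cmd_vel). The gains are placeholders; the real node tunes them for the 428x240 stream.

// Minimal PID sketch: steer so the detected target center approaches the image center.
#include <geometry_msgs/Twist.h>

struct Pid {
    double kp, ki, kd, integral, prev;
    Pid(double p, double i, double d) : kp(p), ki(i), kd(d), integral(0), prev(0) {}
    double step(double err, double dt) {
        integral += err * dt;
        double deriv = (err - prev) / dt;
        prev = err;
        return kp * err + ki * integral + kd * deriv;
    }
};

// Called at a fixed rate with the latest detection in pixel coordinates.
geometry_msgs::Twist centerOnTarget(double cx, double cy, double img_w, double img_h, double dt)
{
    static Pid yaw_pid(0.002, 0.0, 0.0005);   // placeholder gains
    static Pid alt_pid(0.002, 0.0, 0.0005);
    geometry_msgs::Twist cmd;
    cmd.angular.z = yaw_pid.step(img_w / 2.0 - cx, dt);  // yaw toward the target horizontally
    cmd.linear.z  = alt_pid.step(img_h / 2.0 - cy, dt);  // climb/descend to center it vertically
    return cmd;  // publish on /bebop/cmd_vel
}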
Put the downloaded .weights files in catkin_ws/src/darknet_ros/darknet_ros/yolo_network_config/weights/
Put the matching network .cfg files in catkin_ws/src/darknet_ros/darknet_ros/yolo_network_config/cfg/
Then edit catkin_ws/src/darknet_ros/darknet_ros/launch/yolo_v3.launch so that it loads the custom detection config and points at the Bebop camera namespace:
<rosparam command="load" ns="darknet_ros" file="$(find darknet_ros)/config/yolov3-MBZIRC.yaml"/>
<param name="bebop_topic_head" value="/bebop" />
M. Bjelonic, "YOLO ROS: Real-Time Object Detection for ROS", https://github.com/leggedrobotics/darknet_ros, 2018.

@misc{bjelonicYolo2018,
  author       = {Marko Bjelonic},
  title        = {{YOLO ROS}: Real-Time Object Detection for {ROS}},
  howpublished = {\url{https://github.com/leggedrobotics/darknet_ros}},
  year         = {2016--2018},
}