jsk-ros-pkg / jsk_recognition

JSK perception ROS packages
https://github.com/jsk-ros-pkg/jsk_recognition

add bag files for color space detection #2069

Open · k-okada opened 7 years ago

k-okada commented 7 years ago

@tongtybj, @chibi314 please upload the bag files for color space detection that you used in https://github.com/ros-perception/opencv_apps/pull/61#issuecomment-296366004. If you have several bag files with different lighting conditions, that's fine. If you can provide a bag file taken by a real drone, with IMU data, that's even better. If not, a bag file with only images is fine for now.

@wkentaro will show you how to upload the bag data.

wkentaro commented 7 years ago

Please refer to
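
The usual jsk_data upload flow looks roughly like this (a sketch based on the command in the log below; the filename is illustrative, and the bag is compressed to .tgz first, as some of the uploads here are):

$ tar czf my_recording.bag.tgz my_recording.bag   # compress the bag before uploading
$ jsk_data put --public my_recording.bag.tgz      # upload and make the file publicly downloadable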

tongtybj commented 7 years ago

@wkentaro

I got the following error when trying to upload:

$ jsk_data put --public 2017-04-14_uav_with_marked_tree_kashiwa_camp_strong_light_industrial_cam.bag.tgz
Uploading to Google Drive...
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?client_id=367116221053-7n0vf5akeru7on6o2fjinrecpdoe99eg.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=state

Enter verification code: An error occurred creating Drive client: OAuthError: updateToken: Unexpected HTTP status 400 Bad Request

An error occurred creating Drive client: OAuthError: updateToken: Unexpected HTTP status 400 Bad Request
Traceback (most recent call last):
  File "/opt/ros/kinetic/bin/jsk_data", line 8, in <module>
    jsk_data.cli.cli(obj={})
  File "/usr/lib/python2.7/dist-packages/click/core.py", line 716, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/click/core.py", line 696, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python2.7/dist-packages/click/core.py", line 1060, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python2.7/dist-packages/click/core.py", line 889, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python2.7/dist-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/jsk_data/cli.py", line 142, in cmd_put
    stdout = upload_gdrive(filename)
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/jsk_data/gdrive.py", line 72, in upload_gdrive
    return run_gdrive(args=args)
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/jsk_data/gdrive.py", line 30, in run_gdrive
    return subprocess.check_output(cmd, shell=True)
  File "/usr/lib/python2.7/subprocess.py", line 574, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command 'rosrun jsk_data drive-linux-x64 --config /home/chou/.ros/jsk_data/.gdrive upload --file 2017-04-14_uav_with_marked_tree_kashiwa_camp_strong_light_industrial_cam.bag.tgz --parent 0B9P1L--7Wd2vUGplQkVLTFBWcFE' returned non-zero exit status 1
wkentaro commented 7 years ago

@tongtybj Have you done this?

Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?client_id=367116221053-7n0vf5akeru7on6o2fjinrecpdoe99eg.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=state
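
That step is interactive: open the link in a browser, grant access, and paste the verification code back at the prompt. The failure above is consistent with the code never being entered (the error appears right at the "Enter verification code:" prompt). Roughly, with placeholder values:

$ jsk_data put --public <file>.bag.tgz
Uploading to Google Drive...
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?...
Enter verification code: <code copied from the browser after granting access>
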
tongtybj commented 7 years ago

@wkentaro Thanks! It works for me now.

tongtybj commented 7 years ago

@k-okada I uploaded 5 rosbags with different scenes and image sensors.

Image sensors: PointGrey Chameleon3 and a web camera.

Scene 1: Hongo Sanshiro forest with the PointGrey Chameleon3.
Rosbag: 2017-04-16_uav_with_marked_tree_hongo_sanshiro_forest_light_industrial_cam.bag (image: honge_sanshiro)

Scene 2: Kashiwa camp (ground) with the PointGrey Chameleon3.
Rosbag: 2017-04-14_uav_with_marked_tree_kashiwa_camp_strong_light_industrial_cam.bag.tgz (image: kashiwa_ground)

Scene 3: Kashiwa camp (ground) with the web camera.
Rosbag: 2017-04-19_uav_with_marked_tree_kashiwa_camp_strong_light_web_cam.bag (image: kashiwa_ground2)

Scene 4: Kashiwa camp (forest) with the web camera.
Rosbags: 2017-04-21_uav_with_marked_tree_kashiwa_camp_forest_light_web_cam1.bag, 2017-04-21_uav_with_marked_tree_kashiwa_camp_forest_light_web_cam2.bag (image: kashiwa_forest)

Scene 5: Indoor (Eng.8 R.330) with the web camera.
Rosbag: 2017-04-21_uav_with_marked_tree_inside_web_cam.bag (image: indoor)
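
To fetch any of these for testing (a sketch; assuming the jsk_data CLI's get subcommand, which downloads by filename):

$ jsk_data get 2017-04-16_uav_with_marked_tree_hongo_sanshiro_forest_light_industrial_cam.bag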

tongtybj commented 7 years ago

@k-okada All rosbag files contain the IMU and odometry data of the UAV, along with the (ultrasonic) range to the ground.

$ rosbag info 2017-04-21_uav_with_marked_tree_kashiwa_camp_forest_light_web_cam2.bag
path:        2017-04-21_uav_with_marked_tree_kashiwa_camp_forest_light_web_cam2.bag
version:     2.0
duration:    2:02s (122s)
start:       Apr 21 2017 14:28:15.17 (1492752495.17)
end:         Apr 21 2017 14:30:17.39 (1492752617.39)
size:        2.7 GB
messages:    99284
compression: none [3105/3105 chunks]
types:       dji_sdk/Acceleration         [5de5cfee671950d30a03b944f0d1555c]
             dji_sdk/AttitudeQuaternion   [999d24c7cb273aa4f2c6044f85cac84c]
             dji_sdk/GlobalPosition       [10784f0f63ab6f41e201fee714fabb2a]
             dji_sdk/LocalPosition        [933ce5db06b6bff36785c58a964ad3c7]
             geometry_msgs/PointStamped   [c63aecb41bfdfd6b7e1fac37c7cbe7bf]
             geometry_msgs/Twist          [9f195f881246fdfa2798d1d3eebca84a]
             geometry_msgs/Vector3Stamped [7b324c7325e683bf02a9b14b01090ec7]
             nav_msgs/Odometry            [cd5e73d190d741a2f92e81eda573aca7]
             sensor_msgs/CameraInfo       [c9a58c1b0b154e0e6da7578cb991d214]
             sensor_msgs/Image            [060021388200f6f0f447d0fcd9c64743]
             sensor_msgs/LaserScan        [90c7ef2dc6895d81024acba2ac42f369]
topics:      /camera/camera_info             3104 msgs    : sensor_msgs/CameraInfo      
             /camera/image_rect              3104 msgs    : sensor_msgs/Image           
             /cmd_vel                        2348 msgs    : geometry_msgs/Twist         
             /dji_sdk/acceleration          12219 msgs    : dji_sdk/Acceleration        
             /dji_sdk/attitude_quaternion   12219 msgs    : dji_sdk/AttitudeQuaternion  
             /dji_sdk/global_position       12219 msgs    : dji_sdk/GlobalPosition      
             /dji_sdk/local_position        12219 msgs    : dji_sdk/LocalPosition       
             /dji_sdk/odometry              12219 msgs    : nav_msgs/Odometry           
             /guidance/ultrasonic            2442 msgs    : sensor_msgs/LaserScan       
             /guidance/velocity              1221 msgs    : geometry_msgs/Vector3Stamped
             /modified_odom                 12210 msgs    : nav_msgs/Odometry           
             /object_image_center            3104 msgs    : geometry_msgs/PointStamped  
             /scan                           4874 msgs    : sensor_msgs/LaserScan       
             /tree_location                  3434 msgs    : geometry_msgs/PointStamped  
             /uav_target_pos                 2348 msgs    : nav_msgs/Odometry
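
To replay a bag and view the images (standard rosbag / image_view usage; --clock republishes the recorded time so downstream nodes see the original timestamps):

$ rosbag play --clock 2017-04-21_uav_with_marked_tree_kashiwa_camp_forest_light_web_cam2.bag
$ rosrun image_view image_view image:=/camera/image_rect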

Also note that the odometry is obtained from IMU + downward optical flow + GPS, and recently the GPS has been really poor, because the US Army has added undecryptable noise to the GPS signal (NEWS). So the position accuracy is very bad. What we did for the real experiments was simply to integrate the linear velocity from the optical flow.

I am trying 2D laser SLAM, and looking forward to testing IMU + monocular mapping like LSD-SLAM.

k-okada commented 7 years ago

Nice, could you create a smaller version? Maybe only the image and camera info topics, or those plus the dji_sdk and guidance topics.
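
Something like this should do it (a sketch with rosbag filter; the output filename and topic expression are just examples, adjust as needed):

$ rosbag filter 2017-04-16_uav_with_marked_tree_hongo_sanshiro_forest_light_industrial_cam.bag smaller.bag \
    "topic.startswith('/camera') or topic.startswith('/dji_sdk') or topic.startswith('/guidance')"

For reference, the full bag currently looks like this: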

$ rosbag info 2017-04-16_uav_with_marked_tree_hongo_sanshiro_forest_light_industrial_cam.bag
path:        2017-04-16_uav_with_marked_tree_hongo_sanshiro_forest_light_industrial_cam.bag
version:     2.0
duration:    48.0s
start:       Apr 16 2017 14:24:57.59 (1492320297.59)
end:         Apr 16 2017 14:25:45.54 (1492320345.54)
size:        5.3 GB
messages:    33569
compression: none [1441/1441 chunks]
types:       dji_sdk/Acceleration         [5de5cfee671950d30a03b944f0d1555c]
             dji_sdk/AttitudeQuaternion   [999d24c7cb273aa4f2c6044f85cac84c]
             dji_sdk/GlobalPosition       [10784f0f63ab6f41e201fee714fabb2a]
             dji_sdk/LocalPosition        [933ce5db06b6bff36785c58a964ad3c7]
             geometry_msgs/PointStamped   [c63aecb41bfdfd6b7e1fac37c7cbe7bf]
             geometry_msgs/Twist          [9f195f881246fdfa2798d1d3eebca84a]
             geometry_msgs/Vector3Stamped [7b324c7325e683bf02a9b14b01090ec7]
             nav_msgs/Odometry            [cd5e73d190d741a2f92e81eda573aca7]
             sensor_msgs/CameraInfo       [c9a58c1b0b154e0e6da7578cb991d214]
             sensor_msgs/Image            [060021388200f6f0f447d0fcd9c64743]
             sensor_msgs/LaserScan        [90c7ef2dc6895d81024acba2ac42f369]
topics:      /camera/camera_info            1439 msgs    : sensor_msgs/CameraInfo      
             /camera/image_rect             1440 msgs    : sensor_msgs/Image           
             /cmd_vel                        959 msgs    : geometry_msgs/Twist         
             /dji_sdk/acceleration          4796 msgs    : dji_sdk/Acceleration        
             /dji_sdk/attitude_quaternion   4796 msgs    : dji_sdk/AttitudeQuaternion  
             /dji_sdk/global_position       4796 msgs    : dji_sdk/GlobalPosition      
             /dji_sdk/local_position        4796 msgs    : dji_sdk/LocalPosition       
             /dji_sdk/odometry              4795 msgs    : nav_msgs/Odometry           
             /guidance/ultrasonic            959 msgs    : sensor_msgs/LaserScan       
             /guidance/velocity              479 msgs    : geometry_msgs/Vector3Stamped
             /object_image_center           1440 msgs    : geometry_msgs/PointStamped  
             /scan                          1915 msgs    : sensor_msgs/LaserScan       
             /uav_target_pos                 959 msgs    : nav_msgs/Odometry

Regarding "odometry is obtained from IMU + downwards optical flow + GPS" and "IMU + monocular mapping like LSD-SLAM":

When you integrate data from multiple sensors, you need to check the accuracy and characteristics in a step-by-step manner. And before you merge the data, you have to picture how the sources compensate for each other. For example, optical flow is usually not so accurate, at least in indoor environments; it may work well in outdoor scenes, but that may depend on the camera motion speed versus the search area of the flow. So first you should check with only the IMU, as in http://wiki.ros.org/robot_pose_ekf#How_Robot_Pose_EKF_works, and then consider what kind of information is missing and choose an appropriate sensor that covers it.
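
For example, an IMU-only sanity check could look like this (a sketch using robot_pose_ekf's documented parameters; /your/imu/topic is a placeholder, and the dji_sdk attitude messages in these bags would first need conversion to sensor_msgs/Imu):

$ rosrun robot_pose_ekf robot_pose_ekf _imu_used:=true _odom_used:=false _vo_used:=false imu_data:=/your/imu/topic

Then compare the resulting /robot_pose_ekf/odom_combined against what you expect before enabling the other sources one by one.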

tongtybj commented 7 years ago

@k-okada Thank you for your advice!

I will check step by step. As for the optical flow sensor, it is a mature product from DJI, called Guidance, which has its own fusion system inside the black box using IMU + stereo optical flow + ultrasonic. (image: dji-guidance)
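
To see what the Guidance unit reports in the bags above (standard rostopic usage while a bag is playing):

$ rosbag play 2017-04-21_uav_with_marked_tree_kashiwa_camp_forest_light_web_cam2.bag &
$ rostopic echo /guidance/velocity      # fused velocity from the Guidance black box
$ rostopic echo /guidance/ultrasonic    # range to the ground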