dji-sdk / Guidance-SDK-ROS

The official ROS package of Guidance SDK for 32/64 bit Ubuntu and XU3

guidance node and stereo_image_proc package problem #17

Open wkyoun opened 8 years ago

wkyoun commented 8 years ago

My goal is to produce point cloud data using the guidance node together with the stereo_image_proc package (http://wiki.ros.org/stereo_image_proc).

I have already produced point cloud data the same way with two USB webcams.

I successfully calibrated the DJI Guidance and ran the following commands:

roslaunch guidance load_calib_file.launch  

rosrun guidance guidanceNodeCalibration  

This shows several images, including the raw left and right images and the depth image, along with the following messages:

frame index: 15507, stamp: 775667
ultrasonic distance: 0.096000, reliability: 1
ultrasonic distance: 1.691000, reliability: 1
ultrasonic distance: 0.470000, reliability: 1
ultrasonic distance: -0.001000, reliability: 0
ultrasonic distance: 1.502000, reliability: 1
frame index: 15508, stamp: 775716
obstacle distance: 20.000000  0.360000  0.400000  0.400000  1.500000 
frame index: 15508, stamp: 775716
vx:0.000000 vy:0.000000 vz:0.000000

The resulting topics are as follows:

wkyoun@wkyoun-15Z960-GA70K:~$ rostopic list -v

Published topics:
 * /guidance/imu [geometry_msgs/TransformStamped] 1 publisher
 * /guidance/left/camera_info [sensor_msgs/CameraInfo] 1 publisher
 * /guidance/left/image_raw [sensor_msgs/Image] 1 publisher
 * /guidance/depth_image [sensor_msgs/Image] 1 publisher
 * /guidance/obstacle_distance [sensor_msgs/LaserScan] 1 publisher
 * /guidance/velocity [geometry_msgs/Vector3Stamped] 1 publisher
 * /guidance/ultrasonic [sensor_msgs/LaserScan] 1 publisher
 * /guidance/right/image_raw [sensor_msgs/Image] 1 publisher
 * /rosout [rosgraph_msgs/Log] 2 publishers
 * /rosout_agg [rosgraph_msgs/Log] 1 publisher
 * /guidance/right/camera_info [sensor_msgs/CameraInfo] 1 publisher

Subscribed topics:
 * /rosout [rosgraph_msgs/Log] 1 subscriber
 * /statistics [rosgraph_msgs/TopicStatistics] 1 subscriber

Afterwards, in order to produce the point cloud data, I ran the stereo_image_proc package, which is supposed to subscribe to the topics published by the guidance node.

I ran the following command:

wkyoun@wkyoun-15Z960-GA70K:~$ ROS_NAMESPACE=guidance rosrun stereo_image_proc stereo_image_proc _approximate_sync:=True

The resulting topics are as follows:

wkyoun@wkyoun-15Z960-GA70K:~$ rostopic list -v

Published topics:
 * /guidance/disparity [stereo_msgs/DisparityImage] 1 publisher
 * /guidance/imu [geometry_msgs/TransformStamped] 1 publisher
 * /guidance/right/image_rect_color/compressed/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_color/compressed [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/right/image_rect [sensor_msgs/Image] 1 publisher
 * /guidance/left/camera_info [sensor_msgs/CameraInfo] 1 publisher
 * /guidance/left/image_mono/theora/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/stereo_image_proc_debayer_left/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/left/image_raw [sensor_msgs/Image] 1 publisher
 * /guidance/right/image_color [sensor_msgs/Image] 1 publisher
 * /guidance/right/image_mono [sensor_msgs/Image] 1 publisher
 * /guidance/left/image_rect/theora [theora_image_transport/Packet] 1 publisher
 * /guidance/right/image_rect_color/compressedDepth/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/left/image_mono/compressed [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/right/image_rect_color/compressed/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/right/image_mono/compressed/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_rect/compressed/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/right/image_rect/theora [theora_image_transport/Packet] 1 publisher
 * /guidance/right/image_rect_color/compressedDepth/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_rect_color [sensor_msgs/Image] 1 publisher
 * /guidance/left/image_rect/theora/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/right/image_mono/compressedDepth [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/right/image_color/compressedDepth/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/left/image_rect_color/compressedDepth [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/left/image_color/theora/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_mono [sensor_msgs/Image] 1 publisher
 * /guidance/stereo_image_proc_rectify_mono_right/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/stereo_image_proc_rectify_color_right/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/depth_image [sensor_msgs/Image] 1 publisher
 * /guidance/stereo_image_proc_debayer_right/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/right/image_color/compressed [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/right/image_rect/compressedDepth/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_rect_color/compressedDepth/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/points2 [sensor_msgs/PointCloud2] 1 publisher
 * /guidance/obstacle_distance [sensor_msgs/LaserScan] 1 publisher
 * /guidance/stereo_image_proc_rectify_color_left/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/stereo_image_proc/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/left/image_color/theora [theora_image_transport/Packet] 1 publisher
 * /guidance/left/image_mono/theora/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_color/theora/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/right/image_rect_color/compressedDepth [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/left/image_color/compressed/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/stereo_image_proc_rectify_mono_left/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/left/image_color/compressedDepth/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/right/image_color/theora/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/right/image_color/theora [theora_image_transport/Packet] 1 publisher
 * /guidance/right/image_mono/compressed [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/left/image_rect/compressedDepth/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/right/image_rect/theora/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/velocity [geometry_msgs/Vector3Stamped] 1 publisher
 * /guidance/right/image_rect_color/theora/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/ultrasonic [sensor_msgs/LaserScan] 1 publisher
 * /guidance/right/image_rect_color [sensor_msgs/Image] 1 publisher
 * /guidance/stereo_image_proc_rectify_color_left/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_color [sensor_msgs/Image] 1 publisher
 * /guidance/left/image_rect_color/theora/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_rect_color/compressed [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/right/image_rect/compressedDepth/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/right/image_color/compressedDepth [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/right/image_rect/compressed/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/right/image_mono/compressedDepth/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/stereo_image_proc_debayer_right/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/right/image_raw [sensor_msgs/Image] 1 publisher
 * /guidance/left/image_rect_color/compressedDepth/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/left/image_mono/compressedDepth/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /rosout [rosgraph_msgs/Log] 3 publishers
 * /guidance/right/image_mono/theora/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/right/image_rect_color/theora/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_mono/theora [theora_image_transport/Packet] 1 publisher
 * /guidance/right/image_rect_color/theora [theora_image_transport/Packet] 1 publisher
 * /guidance/left/image_rect_color/compressed/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_color/compressed/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/stereo_image_proc_rectify_mono_left/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_rect/compressed [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/left/image_mono/compressed/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/stereo_image_proc/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/stereo_image_proc_rectify_mono_right/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/right/image_color/theora/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/right/image_rect_color/compressed [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/right/image_rect/theora/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /rosout_agg [rosgraph_msgs/Log] 1 publisher
 * /guidance/right/image_rect/compressedDepth [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/right/image_mono/compressed/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/right/image_mono/compressedDepth/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/right/image_mono/theora [theora_image_transport/Packet] 1 publisher
 * /guidance/left/image_mono/compressed/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_rect/compressed/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/right/image_mono/theora/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_rect/compressedDepth/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_color/compressedDepth/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/left/image_rect_color/compressed/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/right/image_color/compressedDepth/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/right/image_color/compressed/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/right/camera_info [sensor_msgs/CameraInfo] 1 publisher
 * /guidance/right/image_rect/compressed [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/left/image_mono/compressedDepth/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
 * /guidance/left/image_mono/compressedDepth [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/left/image_rect/theora/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/right/image_color/compressed/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/left/image_rect [sensor_msgs/Image] 1 publisher
 * /guidance/left/image_color/compressedDepth [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/left/image_rect_color/theora/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/left/image_rect_color/theora [theora_image_transport/Packet] 1 publisher
 * /guidance/right/image_rect/compressed/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/left/image_rect/compressedDepth [sensor_msgs/CompressedImage] 1 publisher
 * /guidance/stereo_image_proc_rectify_color_right/parameter_updates [dynamic_reconfigure/Config] 1 publisher
 * /guidance/stereo_image_proc_debayer_left/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher

Subscribed topics:
 * /rosout [rosgraph_msgs/Log] 1 subscriber
 * /statistics [rosgraph_msgs/TopicStatistics] 1 subscriber

The problem is that, according to http://wiki.ros.org/stereo_image_proc, the stereo_image_proc package should subscribe to the following topics published by the guidance node (but, as shown above, it subscribes only to /rosout and /statistics):

Published topics:

 * /guidance/left/camera_info [sensor_msgs/CameraInfo] 1 publisher
 * /guidance/left/image_raw [sensor_msgs/Image] 1 publisher
 * /guidance/right/image_raw [sensor_msgs/Image] 1 publisher
 * /guidance/right/camera_info [sensor_msgs/CameraInfo] 1 publisher

When I look at the node graph with rqt_graph, the stereo_image_proc package does not subscribe to these topics.

Would any of you explain why this happens?

Thank you in advance.

yebo92 commented 8 years ago

Hello, when I run the following command:

roslaunch guidance load_calib_file.launch

the terminal shows: terminate called after throwing an instance of 'YAML::TypedBadConversion' what(): yaml-cpp: error at line 0, column 0: bad conversion

Do you have any ideas about this error?

wkyoun commented 8 years ago

When you run the following command:

roslaunch guidance load_calib_file.launch

it reads a .yaml file containing the intrinsic and extrinsic calibration data.

You can check in load_calib_file.launch which .yaml file it reads.

There must be some error in the .yaml file referenced by load_calib_file.launch.

uwleahcim commented 8 years ago

Is your camera_info formatted correctly?

It looks like stereo_image_proc is working (i.e. it generated the /guidance/disparity topic). In order to go from disparity --> depth --> point cloud, you'll need the cu, cv, focal, and baseline information, which should be encoded in the camera_info of the right camera (it is part of the projection matrix).
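For reference, a minimal sketch of the relation stereo_image_proc relies on (the variable names here are hypothetical):

// Tx = -f * B is encoded as P[3] in the right camera's camera_info
double Tx = right_info.P[3];      // right camera projection matrix, entry [0][3]
double depth_m = -Tx / d;         // depth from disparity d (in pixels), i.e. f * B / d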

Anyways, I have stereo_image_proc generating /points2, which is a PointCloud2 topic. Note, I'm running ROS Kinetic Kame on Ubuntu 16.04. I also modified my guidanceNode to broadcast tf topics and have synced time stamps.

wkyoun commented 8 years ago

@uwleahcim

Thank you for the reply. You said above: "I also modified my guidanceNode to broadcast tf topics and have synced time stamps."

If you don't mind, would you please share how you modified guidanceNode to broadcast tf topics and have synced time stamps?

tngan commented 8 years ago

@ipatient-zju I can run it properly, so check your .yaml file under the /calibration_files directory.

@uwleahcim How's the CPU usage when you run stereo_image_proc on the Manifold?

uwleahcim commented 8 years ago

@won13y Well, the best practice is not to modify the SDK directly, but for the time stamp you do have to modify the SDK source (and you'll need to modify the frame id for the tf setup).

In GuidanceNode.cpp and GuidanceNodeCalibration.cpp do the following:

int my_callback(int data_type, int data_len, char *content)
{
    g_lock.enter();
    ros::Time current_time = ros::Time::now(); // take a single stamp right after g_lock.enter()

    // replace every xxx.header.stamp = ros::Time::now(); with xxx.header.stamp = current_time;

}

Taking the stamp once, right after g_lock.enter();, makes more sense, since everything in the buffer should carry the exact same time stamp. Using xxx.header.stamp = ros::Time::now() for each publisher's time stamp inside the callback would result in small time stamp deltas between the topics (and they get even larger if you do some processing in the same function before publishing, such as a speckle filter).
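Applied to the publishers in the example node, the pattern looks roughly like this (message variables as they appear later in this thread):

// every message assembled in this callback shares the single stamp
left_8.header.stamp   = current_time;
right_8.header.stamp  = current_time;
depth_16.header.stamp = current_time;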

For the tf I recommend using static transforms in the launch file to generate the tf tree for the Guidance. A static tf is always treated as the latest transform (limited only by its parent tf), and it is more efficient since it doesn't require constant broadcasting. Static tf works here because the Guidance sensors are fixed to the airframe.

Below is a template I would use to generate tf for each of your guidance sensors.

<launch>
<node pkg="tf2_ros" type="static_transform_publisher" name="link1_broadcaster" args="1 0 0 0 0 0 1 link1_parent link1" />
</launch>

Note, the args are x, y, z, qx, qy, qz, qw or x, y, z, yaw, pitch, roll (based on the number of args, it automatically chooses quaternion or Euler angles). You can find the transform for each sensor set in the Guidance assistant software, but here are the default values (assuming the unit is mounted on the bottom). Also note that the transforms are from the center of the M100, which in my example is frame_id = "base_link".

<launch>
<!-- Note, you're going from NED body-frame coordinates to optical coordinates using the following transforms -->

<!-- Transform from base_link to the guidance core -->
<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_broadcaster" args="0.0 0.0 0.09 0.0 0.0 0.0 base_link guidance" />

<!-- Transforms from base_link to the various sensors (each node needs a unique name) -->
<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_down_broadcaster" args="0.0 0.0 0.105 0.0 0.0 1.57079632679 base_link guidance_down" />
<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_front_broadcaster" args="0.10 0.0 0.09 1.57079632679 0 1.57079632679 base_link guidance_front" />
<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_right_broadcaster" args="0.0 0.10 0.09 1.57079632679 0 3.14159265359 base_link guidance_right" />
<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_back_broadcaster" args="0.10 0.0 0.09 1.57079632679 0 -1.57079632679 base_link guidance_back" />
<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_left_broadcaster" args="0.0 -0.10 0.09 1.57079632679 0 0 base_link guidance_left" />

<!-- Here we generate the camera offsets for more accuracy; you have to do this for each camera set -->
<!-- You could save a little overhead by directly transforming from camera to base_link and skipping the above transforms -->
<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_down_left_broadcaster" args="-0.075 0 0 0 0 0 guidance_down guidance_down_left" />
<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_down_right_broadcaster" args="0.075 0 0 0 0 0 guidance_down guidance_down_right" />

<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_front_left_broadcaster" args="-0.075 0 0 0 0 0 guidance_front guidance_front_left" />
<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_front_right_broadcaster" args="0.075 0 0 0 0 0 guidance_front guidance_front_right" />

<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_right_left_broadcaster" args="-0.075 0 0 0 0 0 guidance_right guidance_right_left" />
<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_right_right_broadcaster" args="0.075 0 0 0 0 0 guidance_right guidance_right_right" />

<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_back_left_broadcaster" args="-0.075 0 0 0 0 0 guidance_back guidance_back_left" />
<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_back_right_broadcaster" args="0.075 0 0 0 0 0 guidance_back guidance_back_right" />

<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_left_left_broadcaster" args="-0.075 0 0 0 0 0 guidance_left guidance_left_left" />
<node pkg="tf2_ros" type="static_transform_publisher" name="guidance_left_right_broadcaster" args="0.075 0 0 0 0 0 guidance_left guidance_left_right" />
</launch>

(Note, a launch file in the DJI Guidance ROS SDK will launch all the nodes in the guidance project, so just be aware.)

You'll also need to modify the header information in GuidanceNode.cpp and GuidanceNodeCalibration.cpp for each camera set so that the messages carry the correct frame_id:

left_8.header.frame_id  = "guidance_" + direction[CAMERA_ID] + "_left";
g_cam_info_left.header.frame_id = "guidance_" + direction[CAMERA_ID] + "_left";

right_8.header.frame_id  = "guidance_" + direction[CAMERA_ID] + "_right";
g_cam_info_right.header.frame_id = "guidance_" + direction[CAMERA_ID] + "_right";

// the depth map is generated with the left camera as reference; if you're using stereo_image_proc, it does this automatically for you
depth_16.header.frame_id  = "guidance_" + direction[CAMERA_ID] + "_left";

Here direction is an array of strings containing the direction names; you'll want to declare it as a global variable in the source files.

std::string direction[] = {"down", "front", "right", "back", "left"};

Just to recap: at minimum you'll have to modify the guidance nodes in GuidanceNode.cpp and GuidanceNodeCalibration.cpp to assign unique frame_ids and to set up synced time stamps.

@tngan I'm not running stereo_image_proc on my Manifold. On a mobile i5 processor, each stereo_image_proc instance uses around 15% CPU according to top; thus for 4 sets I'm getting around 40-45% CPU usage. I only used stereo_image_proc to quickly prototype my setup (i.e. setting up my speckle filter parameters).

Also note that using stereo_image_proc probably isn't the most efficient approach in terms of CPU usage. The image_raw published by the guidance is already rectified, and the calibration parameters cu, cv, focal, and baseline are provided. Plus, the guidance internally generates its own disparity map for any 2 camera pairs. Using these will probably decrease the processing load by 60%! Basically, for the 2 camera sets with disparity maps, you can directly generate a PointCloud2. For additional camera sets, you can generate disparity maps using cv::StereoBM. To further free up the CPU, you can use CUDA-accelerated OpenCV to generate the point clouds from the stereo images.

Here is how I implemented 4 point clouds. I actually don't recommend using the depth image generated by the guidance core. Rather, you should grab the disparity image from the guidance core and generate an OpenNI-format depth map (16-bit int, in mm) yourself.

Here is the code to enable disparity maps instead of depth maps:

    //err_code = select_depth_image(CAMERA_ID);
    //RETURN_IF_ERR(err_code);
    err_code = select_disparity_image(CAMERA_ID);
    RETURN_IF_ERR(err_code);

Here you need to change the if statement in the callback to look for the disparity image:

//      if ( data->m_depth_image[CAMERA_ID] ){
//          memcpy(g_depth.data, data->m_depth_image[CAMERA_ID], IMAGE_SIZE * 2);
        if ( data->m_disparity_image[CAMERA_ID] ){
            memcpy(g_depth.data, data->m_disparity_image[CAMERA_ID], IMAGE_SIZE * 2);

To convert the disparity to depth in mm, use the equation depth = baseline * focal / disparity. So it should look something like this:

// the guidance core provides the baseline in meters, so multiply by 1000 to get mm
// the guidance core also provides the focal length
// the 16x is because the disparity image has 4-bit fractional values
// do some post-filtering of the disparity image here
g_depth = baseline * 1000 * focal * 16 / g_depth;
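Written out more explicitly, here is a sketch of that conversion, assuming g_depth holds the raw CV_16SC1 disparity and that baseline and focal were read from the SDK calibration:

cv::Mat disp_f;
g_depth.convertTo(disp_f, CV_32F, 1.0 / 16.0);            // undo the 4-bit fractional part
cv::Mat depth_mm(disp_f.size(), CV_16UC1, cv::Scalar(0)); // OpenNI-style depth in mm
for (int r = 0; r < disp_f.rows; ++r) {
    for (int c = 0; c < disp_f.cols; ++c) {
        float d = disp_f.at<float>(r, c);
        if (d > 0.0f) // leave invalid (zero or negative) disparities at 0
            depth_mm.at<unsigned short>(r, c) = (unsigned short)(baseline * 1000.0f * focal / d);
    }
}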

For additional cameras, use StereoBM to generate the disparity maps, then use the same steps as above to generate a depth map:

cv::Ptr<cv::StereoBM> sbm = cv::StereoBM::create(64, 9);
sbm->compute(g_greyscale_image_left, g_greyscale_image_right, g_depth);
// do some post-filtering of the disparity image here
g_depth = baseline * 1000 * focal * 16 / g_depth;

You can use the depth_image_proc package to generate your point clouds from the depth images. This process will offload a lot of the CPU strain to the guidance core versus using stereo_image_proc directly (where you'll be wasting cycles generating image_rect and the disparity via stereo matching).
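For example, a minimal launch sketch of that wiring (the manager and nodelet names and the guidance topic names are assumptions; depth_image_proc/point_cloud_xyz subscribes to camera_info and image_rect and publishes points):

<launch>
<node pkg="nodelet" type="nodelet" name="depth_manager" args="manager" />
<node pkg="nodelet" type="nodelet" name="point_cloud_xyz_front" args="load depth_image_proc/point_cloud_xyz depth_manager">
    <remap from="camera_info" to="/guidance/front/left/camera_info" />
    <remap from="image_rect" to="/guidance/front/depth/image_raw" />
    <remap from="points" to="/guidance/front/points" />
</node>
</launch>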

tngan commented 8 years ago

@uwleahcim Yes, I also disabled the noisy depth image. In fact, select_disparity_image can only be used in the DIY work type; otherwise it will return an error. stereo_image_proc handles the time synchronization of the left and right image topics; would you like to share how you expose all the greyscale images from the additional cameras and run StereoBM on them at the same time?

uwleahcim commented 8 years ago

Just FYI, I found the calibration generated by the guidance core through the Guidance assistant app to be pretty poor quality, so the rectified images aren't that great. Also, you can't change the pre-filtering settings for the disparity images. You can definitely improve the stereo matching if you manually calibrate the cameras and manually generate the disparity with the right pre-filter settings.
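For example, these are the kinds of pre-filter and speckle knobs OpenCV's StereoBM exposes when you generate the disparity yourself (left_rect, right_rect, and disparity are hypothetical cv::Mat variables; the values mirror the stereo_image_proc parameters in the launch snippet further down):

cv::Ptr<cv::StereoBM> sbm = cv::StereoBM::create(64, 15); // disparity range, correlation window size
sbm->setPreFilterType(cv::StereoBM::PREFILTER_XSOBEL);
sbm->setPreFilterCap(31);
sbm->setTextureThreshold(10);
sbm->setUniquenessRatio(10);
sbm->setSpeckleWindowSize(100);
sbm->setSpeckleRange(4);
sbm->compute(left_rect, right_rect, disparity);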

Anyways, on the Manifold your processing resources are scarce, so you probably want to stick with the rectified images and the disparity image the guidance core generates. The data is good enough.

As for how to enable multiple cameras at once: it is pretty straightforward.

Make sure you have a publisher for each set. Personally I used arrays of publishers, so it looks something like this:

ros::Publisher left_image_pub[CAMERA_PAIR_NUM];
ros::Publisher right_image_pub[CAMERA_PAIR_NUM];
ros::Publisher depth_image_pub[CAMERA_PAIR_NUM];
ros::Publisher cam_info_left_pub[CAMERA_PAIR_NUM]; 
ros::Publisher cam_info_right_pub[CAMERA_PAIR_NUM];

You can either hard-code how many cameras you need, or, even better, read in a parameter listing the cameras you want.
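For instance, a sketch of the parameter approach (the "cameras" parameter name is made up here):

// read the list of camera pairs to enable from the parameter server;
// default to all five pairs if the parameter isn't set
std::vector<int> selected_cams;
if (!my_node.getParam("cameras", selected_cams))
    for (int i = 0; i < CAMERA_PAIR_NUM; ++i) selected_cams.push_back(i);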

In main(), you'll need to set up the publishers for all the images you want:

for (int i : selected_cams) // the camera pairs you want
{
    left_image_pub[i]           = my_node.advertise<sensor_msgs::Image>("/guidance/"+camera_position[i]+"/left/image_rect",1);
    right_image_pub[i]          = my_node.advertise<sensor_msgs::Image>("/guidance/"+camera_position[i]+"/right/image_rect",1);

    cam_info_left_pub[i]        = my_node.advertise<sensor_msgs::CameraInfo>("/guidance/"+camera_position[i]+"/left/camera_info",1);
    cam_info_right_pub[i]       = my_node.advertise<sensor_msgs::CameraInfo>("/guidance/"+camera_position[i]+"/right/camera_info",1);
}

for (int i : selected_depths) // the depth/disparity pairs you want
{
    depth_image_pub[i]          = my_node.advertise<sensor_msgs::Image>("/guidance/"+camera_position[i]+"/depth/image_raw",1);
}

Next in main(), you'll need to enable all the camera images you want from the guidance core:

    for (int i : selected_cams) // note: the SDK takes an e_vbus_index here (I'm not sure if you can directly pass in a corresponding int)
    {
        /* select data */
        err_code = select_greyscale_image((e_vbus_index)i, true);  // left
        RETURN_IF_ERR(err_code);
        err_code = select_greyscale_image((e_vbus_index)i, false); // right
        RETURN_IF_ERR(err_code);

        // for setting exposure
        exposure_param para;
        para.m_is_auto_exposure = 1;
        para.m_step = 10;
        para.m_expected_brightness = 120;
        para.m_camera_pair_index = i;
        set_exposure_param(&para);
    }
    for (int i : selected_depths) // same e_vbus_index note as above
    {
        err_code = select_depth_image((e_vbus_index)i);
        RETURN_IF_ERR(err_code);
    }

Finally, in my_callback(), you'll want to grab all the enabled images (you can simply loop through everything, since the if statements check whether each image is enabled):

for (int i = 0; i < CAMERA_PAIR_NUM; i++) // note: you might need to map i to the corresponding e_vbus_index value; this can be done with an array that stores the list of indices
{
    if ( data->m_greyscale_image_left[i] ){
        memcpy(g_greyscale_image_left.data, data->m_greyscale_image_left[i], IMAGE_SIZE);
        if (show_images) {
            imshow("left",  g_greyscale_image_left);
        }
        // publish left greyscale image
        cv_bridge::CvImage left_8;
        g_greyscale_image_left.copyTo(left_8.image);
        left_8.header.frame_id  = "guidance_"+camera_position[i]+"_left";
        left_8.header.stamp = current_time; // single stamp taken after g_lock.enter(), per the note above
        left_8.encoding     = sensor_msgs::image_encodings::MONO8;
        //left_image_pub.publish(left_8.toImageMsg());
        //you'll need an array of publisher for each camera set 
        left_image_pub[i].publish(left_8.toImageMsg());
        // publish camerainfo here too
    }
    if ( data->m_greyscale_image_right[i] ){
        memcpy(g_greyscale_image_right.data, data->m_greyscale_image_right[i], IMAGE_SIZE);
        if (show_images) {
            imshow("right", g_greyscale_image_right);
        }
        // publish right greyscale image
        cv_bridge::CvImage right_8;
        g_greyscale_image_right.copyTo(right_8.image);
        right_8.header.frame_id  = "guidance_"+camera_position[i]+"_right";
        right_8.header.stamp     = current_time; // single stamp, per the note above
        right_8.encoding     = sensor_msgs::image_encodings::MONO8;
        right_image_pub[i].publish(right_8.toImageMsg());
        // publish camerainfo here too
    }
    if ( data->m_depth_image[i] ){
        memcpy(g_depth.data, data->m_depth_image[i], IMAGE_SIZE * 2);
        g_depth.convertTo(depth8, CV_8UC1);
        if (show_images) {
            imshow("depth", depth8);
        }
        //publish depth image
        cv_bridge::CvImage depth_16;
        g_depth.copyTo(depth_16.image);
        depth_16.header.frame_id  = "guidance_"+camera_position[i]+"_left";
        depth_16.header.stamp     = current_time; // single stamp, per the note above
        depth_16.encoding     = sensor_msgs::image_encodings::MONO16;
        depth_image_pub[i].publish(depth_16.toImageMsg());
    }
}

Disclaimer: the above isn't my actual code (my real code is a lot messier, with tons of flag variables, loggers, and commented-out stuff), but the structure is basically the same. Thus the code bits above most likely won't compile without some small modifications.

From here you'll want to use a separate ROS node that subscribes to the topics (left image, right image, disparity) to generate disparity (for those stereo sets without a disparity map) and point clouds. Don't do the processing inside my_callback, or you risk blocking (i.e. your sensor readout from the guidance core will be slowed down if the processing inside the callback takes longer than 1/20 of a second).
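A minimal sketch of such a node (the node and topic names are assumptions; it approximately syncs the left and right rectified images and does the heavy processing in its own callback, off the guidance readout path):

#include <ros/ros.h>
#include <boost/bind.hpp>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <sensor_msgs/Image.h>

typedef message_filters::sync_policies::ApproximateTime<
    sensor_msgs::Image, sensor_msgs::Image> StereoSyncPolicy;

void stereoCallback(const sensor_msgs::ImageConstPtr& left,
                    const sensor_msgs::ImageConstPtr& right)
{
    // run cv::StereoBM / point cloud generation here, outside my_callback()
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "guidance_stereo_proc");
    ros::NodeHandle nh;
    message_filters::Subscriber<sensor_msgs::Image> left_sub(nh, "/guidance/front/left/image_rect", 1);
    message_filters::Subscriber<sensor_msgs::Image> right_sub(nh, "/guidance/front/right/image_rect", 1);
    message_filters::Synchronizer<StereoSyncPolicy> sync(StereoSyncPolicy(10), left_sub, right_sub);
    sync.registerCallback(boost::bind(&stereoCallback, _1, _2));
    ros::spin();
    return 0;
}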

wkyoun commented 8 years ago

@uwleahcim

Thank you for the reply; I would really appreciate your kind help.

I have one more question to ask. (What I really need is the PointCloud2 data (/guidance/points2 [sensor_msgs/PointCloud2]) generated by the guidance.)

According to others' comments, the previous problem might be due to an incomplete stereo calibration.

In detail, the following topics carrying the stereo calibration parameters were not properly published:

* /guidance/left/camera_info [sensor_msgs/CameraInfo] 1 publisher
* /guidance/right/camera_info [sensor_msgs/CameraInfo] 1 publisher

Regarding the following webpage:

http://wiki.ros.org/camera_calibration/Tutorials/StereoCalibration

these are the questions I want to ask:

1) Do I need to calibrate all 5 of the Guidance stereo pairs using the above package (camera_calibration/Tutorials/StereoCalibration)?

2) When the stereo calibration is complete, I am supposed to click the save button to create camera_params_left.yaml and camera_params_right.yaml. Where can I find the camera_params_left.yaml and camera_params_right.yaml files?

I think those camera_params_left.yaml and camera_params_right.yaml files are required for me to generate the following topics, so that I can get the PointCloud2 topic (/guidance/points2 [sensor_msgs/PointCloud2]) by connecting the raw guidance topics [sensor_msgs/Image] to the stereo_image_proc package:

* /guidance/left/camera_info [sensor_msgs/CameraInfo] 1 publisher
* /guidance/right/camera_info [sensor_msgs/CameraInfo] 1 publisher

If the answer to question 1) is yes, I guess I am supposed to have 5 pairs of camera_params_left.yaml and camera_params_right.yaml files, one for each of the 5 Guidance stereo pairs.

Lastly, would you please share your working code on GitHub for generating PointCloud2 from the Guidance?

Thank you in advance

uwleahcim commented 8 years ago

Here is a quick and dirty method to generate point clouds if you need them right away and don't mind some performance hit.

Supposedly the cameras are already rectified through the Guidance assistant app. In the example GuidanceNode.cpp, there is a line of code that retrieves cv and cu (the principal point), f (the focal length), and T (I think DJI used b for the baseline). Using these values you can generate a reasonable camera_info.

cam_info_left.header.frame_id = "guidance_front_left"; 
cam_info_left.height = 240; // guidance images are 320x240 (width x height)
cam_info_left.width = 320;
cam_info_left.distortion_model = "plumb_bob";
cam_info_left.K = {f, 0, cu, 0, f, cv, 0, 0, 1};
cam_info_left.R = {1, 0, 0, 0, 1, 0, 0, 0, 1};
cam_info_left.P = {f, 0, cu, 0, 0, f, cv, 0, 0, 0, 1, 0};
cam_info_left.D.push_back(0); // all zeros since there is no distortion in the rectified images
cam_info_left.D.push_back(0);
cam_info_left.D.push_back(0);
cam_info_left.D.push_back(0);
cam_info_left.D.push_back(0);

cam_info_left.roi.x_offset = 0;
cam_info_left.roi.y_offset = 0;
cam_info_left.roi.height = 0;
cam_info_left.roi.width = 0;
cam_info_left.roi.do_rectify = false;

cam_info_right = cam_info_left;

// Tx = -1 * focal * baseline
cam_info_right.header.frame_id = "guidance_front_right";
cam_info_right.P[3] = -f * b;

Do this for each camera pair (there are 5).
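Then publish it each frame alongside the images, with matching stamps (a sketch reusing the publisher arrays from the earlier comment):

// give the camera_info the same stamp and frame as the image it describes
cam_info_left.header.stamp  = left_8.header.stamp;
cam_info_right.header.stamp = left_8.header.stamp;
cam_info_left_pub[i].publish(cam_info_left);
cam_info_right_pub[i].publish(cam_info_right);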

Anyways, the above method is quick, since calibration with the DJI assistant app is nearly instant (the camera_calibration package takes forever, even with such low-resolution guidance cameras).

Once you have published the camera_info, you can generate point clouds with stereo_image_proc. Here is the syntax for a ROS launch file. You can run 5 instances of stereo_image_proc if you have all the images being published from the guidance (warning: it eats up about 50% of one core on an Intel i5).

<node pkg="stereo_image_proc" type="stereo_image_proc" name="stereo_image_proc_front" output="screen" ns = "/guidance/right">
    <param name="prefilter_size" type="int" value="9"/>
    <param name="prefilter_cap" type="int" value="31"/>
    <param name="correlation_window_size" type="int" value="15"/>
    <param name="min_disparity" type="int" value="0"/>
    <param name="disparity_range" type="int" value="64"/>
    <param name="uniqueness_ratio" type="int" value="10"/>
    <param name="texture_threshold" type="int" value="10"/>
    <param name="speckle_size" type="int" value="100"/>
    <param name="speckle_range" type="int" value="4"/>
</node>

Anyways, I'm still messing around with how to get the best performance while generating decent point cloud data, so my code is very messy and not final. Note, if you want to use CUDA OpenCV on your Manifold, you have to modify a bunch of files in ROS Indigo (here is the guide: http://wiki.ros.org/NvidiaJetsonTK1). If you need the code for the above quick and dirty method, I can provide it (surprisingly, it is readable).

Mara2020 commented 7 years ago

@uwleahcim Were you able to properly record all the stereo and depth images from the 4 directions? Did you have any bandwidth limitations? I can only record disparity/depth images from 1-2 camera pairs at a time. Once I try to record from 3 or more cameras, the additional depth image topics just provide black images.

andre-nguyen commented 7 years ago

@Mara2020 The FPGA can only process two depth maps at a time. If you want a third depth map you can request two camera images and process them yourself.

andre-nguyen commented 7 years ago

BTW, we're from the lab upstairs. We rewrote the driver here: https://github.com/MRASL/mrasl_guidance_ros. Some of it is hardcoded, but it can help you get started.