Open · zikprid0 opened this issue 4 years ago
Hey @zikprid0, thanks for sharing an example bag file to reproduce and confirm the issue.
I triggered the segfault using the bag file in a debug workspace, with the stack trace captured here:
pcl::OrganizedMultiPlaneSegmentation<pcl::PointXYZRGBA, pcl::Normal, pcl::Label>::refine organized_multi_plane_segmentation.hpp:348
pcl::OrganizedMultiPlaneSegmentation<pcl::PointXYZRGBA, pcl::Normal, pcl::Label>::segmentAndRefine organized_multi_plane_segmentation.hpp:280
cogrob::OrganizedSegmentationTBB<pcl::PointXYZRGBA>::computePlanes organized_segmentation_tbb.cpp:396
cogrob::OrganizedSegmentationTBB<pcl::PointXYZRGBA>::spinOnce()::{lambda()#2}::operator()() const organized_segmentation_tbb.cpp:179
tbb::internal::function_task<cogrob::OrganizedSegmentationTBB<pcl::PointXYZRGBA>::spinOnce()::{lambda()#2}>::execute() task.h:926
<unknown> 0x00007f2a05a46b46
tbb::task::wait_for_all task.h:760
tbb::internal::task_group_base::wait task_group.h:140
cogrob::OrganizedSegmentationTBB<pcl::PointXYZRGBA>::spinOnce organized_segmentation_tbb.cpp:188
cogrob::OrganizedSegmentationTBB<pcl::PointXYZRGBA>::spinThread organized_segmentation_tbb.cpp:158
boost::_mfi::mf0<void, cogrob::OrganizedSegmentationTBB<pcl::PointXYZRGBA> >::operator() mem_fn_template.hpp:49
boost::_bi::list1<boost::_bi::value<cogrob::OrganizedSegmentationTBB<pcl::PointXYZRGBA>*> >::operator()<boost::_mfi::mf0<void, cogrob::OrganizedSegmentationTBB<pcl::PointXYZRGBA> >, boost::_bi::list0> bind.hpp:259
boost::_bi::bind_t<void, boost::_mfi::mf0<void, cogrob::OrganizedSegmentationTBB<pcl::PointXYZRGBA> >, boost::_bi::list1<boost::_bi::value<cogrob::OrganizedSegmentationTBB<pcl::PointXYZRGBA>*> > >::operator() bind.hpp:1294
boost::detail::thread_data<boost::_bi::bind_t<void, boost::_mfi::mf0<void, cogrob::OrganizedSegmentationTBB<pcl::PointXYZRGBA> >, boost::_bi::list1<boost::_bi::value<cogrob::OrganizedSegmentationTBB<pcl::PointXYZRGBA>*> > > >::run thread.hpp:116
<unknown> 0x00007f2a11ac7bcd
start_thread 0x00007f2a1546e6db
clone 0x00007f2a1124588f
Due to the lockdown, we've been unable to fully use the robots in the university lab, so we're working towards setting up a Gazebo simulation environment for testing as well. Could I ask which Gazebo plugin you used to generate the point clouds recorded in the bag file, and how you configured your world file for the depth sensor model?
The is_dense: false field in the message is expected, given that points in the field of view that fall beyond the visible range are invalid.
However, the height should not be one; it should instead correspond to the image resolution (and aspect ratio) of the virtual depth camera.
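For reference, a minimal sketch of the kind of check one could run on the incoming message to confirm this (a hypothetical standalone helper, not code from omnimapper_ros):

#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>

// Hypothetical helper: an organized cloud has height > 1 and a buffer
// geometry consistent with its width x height layout.
void report_cloud_shape(
  const sensor_msgs::msg::PointCloud2 & cloud, const rclcpp::Logger & logger)
{
  const bool organized = cloud.height > 1;
  const bool consistent =
    cloud.row_step == cloud.width * cloud.point_step &&
    cloud.data.size() == static_cast<size_t>(cloud.row_step) * cloud.height;
  RCLCPP_INFO(
    logger, "width=%u height=%u is_dense=%d organized=%d consistent=%d",
    cloud.width, cloud.height, cloud.is_dense, organized, consistent);
}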
Are you using an external node to do the RGB-depth registration onto the point cloud? It seems like that's something the gazebo_ros_depth_camera plugin can already do:
Hello, @ruffsl. I just used the plugin "libgazebo_ros_camera.so" for the point cloud, with no other node for registration. This is the sensor definition from the model SDF.
<sensor name="d435_depth_sensor" type="depth">
  <pose>0 0 0 0 0 0</pose>
  <update_rate>15</update_rate>
  <always_on>true</always_on>
  <depth_camera>
  </depth_camera>
  <camera>
    <horizontal_fov>1.59174</horizontal_fov>
    <!-- <horizontal_fov>2.9</horizontal_fov> -->
    <image>
      <width>848</width>
      <height>480</height>
      <format>R8G8B8</format>
    </image>
    <clip>
      <near>0.001</near>
      <far>10</far>
    </clip>
  </camera>
  <noise>
    <type>gaussian</type>
    <mean>0.0</mean>
    <stddev>0.07</stddev>
  </noise>
  <plugin name="plugin_name" filename="libgazebo_ros_camera.so">
    <ros>
      <argument>/front_camera/depth/image_raw:=/front_camera/depth/image_raw</argument>
      <argument>/front_camera/depth/camera_info:=/front_camera/depth/camera_info</argument>
      <argument>/front_camera/image_raw:=/front_camera/rgb/image_raw</argument>
      <argument>/front_camera/image_raw/compressed:=/front_camera/rgb/image_raw/compressedDepth</argument>
      <argument>/front_camera/image_raw/compressedDepth:=/front_camera/rgb/image_raw/compressedDepth</argument>
      <argument>/front_camera/image_raw/theora:=/front_camera/rgb/image_raw/theora</argument>
      <argument>/front_camera/camera_info_raw:=/front_camera/camera_info_raw</argument>
      <argument>/front_camera/points:=/front_camera/depth/points</argument>
    </ros>
    <!-- Set camera name. If empty, defaults to sensor name (i.e. "sensor_name") -->
    <camera_name>front_camera</camera_name>
    <!-- Set TF frame name. If empty, defaults to link name (i.e. "link_name") -->
    <hack_baseline>0.07</hack_baseline>
    <!-- No need to repeat distortion parameters or to set autoDistortion -->
    <min_depth>0.11</min_depth>
    <max_depth>10.0</max_depth>
  </plugin>
</sensor>
I think the problem is that libgazebo_ros_camera generates the point cloud data in a non-organized format.
From the PCL docs:
An organized point cloud dataset is the name given to point clouds that resemble an organized image (or matrix) like structure, where the data is split into rows and columns. Examples of such point clouds include data coming from stereo cameras or Time Of Flight cameras. The advantages of an organized dataset is that by knowing the relationship between adjacent points (e.g. pixels), nearest neighbor operations are much more efficient, thus speeding up the computation and lowering the costs of certain algorithms in PCL.
Maybe we can use depth_image_proc::PointCloudXyzrgbNode to register the point cloud from /front_camera/depth/image_raw and /front_camera/rgb/image_raw. Previously we did this: https://github.com/CogRob/lge_deliverables/blob/5487e7ee815ecc2eadafb2610e144b3e6b2ce25c/mission_control_freight53/always_on_ros2/intel_realsense/intel_realsense.launch.py#L15-L24
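Roughly speaking, what a node like that does is back-project each depth pixel through the camera intrinsics, so the output cloud keeps the image's row/column layout (height = image height, width = image width). A simplified sketch of the idea, assuming a 32FC1 depth image in meters, a pinhole model, and no row padding (the real depth_image_proc implementation handles encodings, units, and invalid values more carefully):

#include <cmath>
#include <cstdint>
#include <limits>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <sensor_msgs/msg/camera_info.hpp>
#include <sensor_msgs/msg/image.hpp>

// Simplified back-projection of a 32FC1 depth image (meters) into an
// organized pcl::PointCloud that preserves the width x height layout.
pcl::PointCloud<pcl::PointXYZ>::Ptr depth_to_organized_cloud(
  const sensor_msgs::msg::Image & depth, const sensor_msgs::msg::CameraInfo & info)
{
  const double fx = info.k[0], fy = info.k[4], cx = info.k[2], cy = info.k[5];
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  cloud->width = depth.width;
  cloud->height = depth.height;  // height > 1 keeps the cloud organized
  cloud->is_dense = false;       // invalid depths become NaN points
  cloud->points.resize(static_cast<size_t>(depth.width) * depth.height);

  // Assumes depth.data is tightly packed float32, i.e. step == width * 4.
  const float * d = reinterpret_cast<const float *>(depth.data.data());
  const float nan = std::numeric_limits<float>::quiet_NaN();
  for (uint32_t v = 0; v < depth.height; ++v) {
    for (uint32_t u = 0; u < depth.width; ++u) {
      const float z = d[v * depth.width + u];
      pcl::PointXYZ & p = cloud->points[v * depth.width + u];
      if (!std::isfinite(z) || z <= 0.0f) {
        p.x = p.y = p.z = nan;
      } else {
        p.x = static_cast<float>((u - cx) * z / fx);
        p.y = static_cast<float>((v - cy) * z / fy);
        p.z = z;
      }
    }
  }
  return cloud;
}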
@zikprid0, were you able to test the suggestion of using depth_image_proc with your simulation setup, and did reshaping the sensor data from Gazebo into a correctly organized point cloud resolve the issue?
@BillWSY, your linked page returned a 404 Not Found error. I think it may be temporary, but I still could not reach the page. Alternatively, could you tell me the reshaped format of the point cloud (width, height, point_step, fields, etc.)? @ruffsl, I have not tried reshaping yet because of other work. I will try it next week. Thank you.
@zikprid0 That repo is the one we used to deliver Omnimapper to you last November -- @yjkim046 and @hyunseok have access to it. I just added you as well (you probably need to accept the invitation to be able to access it). It's just that height should be > 1. Something like width = 1280, height = 720 would look reasonable.
Hello. I just tested Omnimapper with depth_image_proc. Actually, I used my own program that works like depth_image_proc, because I could not find it in ROS 2. I changed the width and height of the point cloud data to 640 and 636.
Omnimapper did not die anymore, but I cannot be certain it works correctly; it did not show anything like plane boundaries. We will send you a screenshot and the terminal log.
Thank you.
Did you download the latest code? We encountered similar issues before but fixed them with https://github.com/CogRob/omnimapper_ros/pull/3
By the way, the LGe deliverable repo contains a Dockerfile that patches the current ROS 2 code and builds depth_image_proc. May I ask how you implemented your own depth image processing?
@BillWSY I think I downloaded the latest code, but I will check it and tell you. About depth_image_proc, I will check inside the Docker image. I implemented it like this, with a simple callback:
msg->fields = org_msg->fields;
msg->header = org_msg->header;
msg->point_step = org_msg->point_step;
msg->is_bigendian = org_msg->is_bigendian;
msg->is_dense = org_msg->is_dense;
int height = 636;
int width = org_msg->width/height;
msg->height = height;
msg->width = width;
msg->row_step = org_msg->row_step/height;
msg->data = org_msg->data;
I re-cloned the omnimapper_ros repository and tested it. Omnimapper died with this error just seconds after starting:
[omnimapper_ros_node-1] [pcl::OrganizedNeighbor::radiusSearch] Input dataset is not from a projective device!
[omnimapper_ros_node-1] Residual (MSE) 0.035808, using 1088 valid points
[omnimapper_ros_node-1] [pcl::OrganizedNeighbor::radiusSearch] Input dataset is not from a projective device!
[omnimapper_ros_node-1] Residual (MSE) 0.035808, using 1088 valid points
[omnimapper_ros_node-1] [pcl::OrganizedNeighbor::radiusSearch] Input dataset is not from a projective device!
[omnimapper_ros_node-1] Residual (MSE) 0.035808, using 1088 valid points
[omnimapper_ros_node-1] [pcl::OrganizedNeighbor::radiusSearch] Input dataset is not from a projective device!
[omnimapper_ros_node-1] Residual (MSE) 0.035808, using 1088 valid points
[omnimapper_ros_node-1] [INFO] [omnimapper_ros_node]: GetTransformFunctorTF: waiting for camera_link to base_footprint to be ready: First attempt.
[omnimapper_ros_node-1]
[omnimapper_ros_node-1] [INFO] [omnimapper_ros_node]: OmniMapperROS: Installing BoundedPlanePlugin callback.
[omnimapper_ros_node-1] BoundedPlanePlugin: Got 1 regions.
[omnimapper_ros_node-1] border before: 2069
[omnimapper_ros_node-1] border after: 2043
[omnimapper_ros_node-1] BoundedPlanePlugin: regions->measurements: GOT INVALID MEASUREMENT!
[omnimapper_ros_node-1] Model: -0.000220 -0.999984 -0.005600 1.029942, Base Model: -0.140601 0.990051 -0.005600 -3.091874
[omnimapper_ros_node-1] BoundedPlanePlugin: Have 1 measurements
[omnimapper_ros_node-1] BoundedPlanePlugin: Processing planes for pose 8646911284551352320
[omnimapper_ros_node-1] BoundedPlanePlugin: map_meas: GOT INVALID MEASUREMENT!
[omnimapper_ros_node-1] BoundedPlanePlugin: Got 1 regions.
[omnimapper_ros_node-1] border before: 2076
[omnimapper_ros_node-1] border after: 2047
[omnimapper_ros_node-1] BoundedPlanePlugin: regions->measurements: GOT INVALID MEASUREMENT!
[omnimapper_ros_node-1] Model: -0.000219 -0.999984 -0.005587 1.029897, Base Model: -0.140602 0.990050 -0.005587 -3.091918
[omnimapper_ros_node-1] BoundedPlanePlugin: Have 1 measurements
[omnimapper_ros_node-1] BoundedPlanePlugin: Processing planes for pose 8646911284551352321
[omnimapper_ros_node-1] BoundedPlanePlugin: map_meas: GOT INVALID MEASUREMENT!
[omnimapper_ros_node-1] BoundedPlanePlugin: Got 1 regions.
[omnimapper_ros_node-1] border before: 2061
[omnimapper_ros_node-1] border after: 2035
[omnimapper_ros_node-1] BoundedPlanePlugin: regions->measurements: GOT INVALID MEASUREMENT!
[omnimapper_ros_node-1] Model: 0.000221 0.999984 0.005598 -1.029939, Base Model: -0.140600 0.990051 -0.005598 1.029939
[omnimapper_ros_node-1] BoundedPlanePlugin: Have 1 measurements
[omnimapper_ros_node-1] BoundedPlanePlugin: Processing planes for pose 8646911284551352322
[omnimapper_ros_node-1] ERROR: Initializing boundary at bad place: Point is 4.121816 from plane.
[ERROR] [omnimapper_ros_node-1]: process has died
Thanks.
Hi -- thanks for the feedback. I think the error message Input dataset is not from a projective device! gives us a hint that the input data is malformed. Unless you can be sure the data in your msg is organized and flattened sequentially, I think it is a bit risky to just change the header -- it will be much safer if we can use depth_image_proc.
It is very simple: you can use the script here to build it -- just be sure to apply this patch. Then you can launch it using this launch file.
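At a minimum, if you do reshape the header by hand, the new geometry has to stay consistent with the buffer that is already there, and the points must actually have been written in row-major image order. A rough sanity-check sketch (a hypothetical helper, not code from any of these repos):

#include <cstdint>
#include <sensor_msgs/msg/point_cloud2.hpp>

// Hypothetical guard: only rewrite width/height if the point count and the
// underlying buffer size remain consistent with the new geometry.
bool reshape_cloud_header(
  sensor_msgs::msg::PointCloud2 & msg, uint32_t new_width, uint32_t new_height)
{
  const uint64_t old_points = static_cast<uint64_t>(msg.width) * msg.height;
  const uint64_t new_points = static_cast<uint64_t>(new_width) * new_height;
  const uint64_t new_row_step = static_cast<uint64_t>(new_width) * msg.point_step;
  if (new_points != old_points || new_row_step * new_height != msg.data.size()) {
    return false;  // geometry would no longer match the data buffer
  }
  msg.width = new_width;
  msg.height = new_height;
  msg.row_step = static_cast<uint32_t>(new_row_step);
  return true;
}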
Hello, I tested depth_image_proc. It seems to create the point cloud data using the depth image and the RGB image, but it did not publish the points data. I opened the source code of the point_cloud_xyzrgb.cpp file. It uses depth (depth_registered/image_rect), rgb (rgb/image_rect_color), and info (rgb/camera_info), and I checked that these topics are published properly. I also think I remapped the topics correctly in the launch file. This is part of my launch file:
container = ComposableNodeContainer(
    node_name='realsense_container',
    node_namespace='',
    package='rclcpp_components',
    node_executable='component_container',
    composable_node_descriptions=[
        ComposableNode(
            package='depth_image_proc',
            node_plugin='depth_image_proc::PointCloudXyzrgbNode',
            node_name='realsense_depth_image',
            remappings=[('rgb/camera_info', '/front_camera/camera_info_raw'),
                        ('rgb/image_rect_color', '/front_camera/rgb/image_raw'),
                        ('depth_registered/image_rect',
                         '/front_camera/depth/image_raw'),
                        ('points', '/front_camera/depth/points2')],
        ),
    ],
    output='screen',
)
I checked the topic echoes of /front_camera/camera_info_raw, /front_camera/rgb/image_raw, and /front_camera/depth/image_raw. However, the PointCloudXyzrgbNode::imageCb method is not being called; I added some 'cout' statements to that method but nothing was printed in the console. I could not find the reason because I am not familiar with the image_transport package. Have you seen this problem?
Thanks.
Hi -- thanks for the feedback. Did you happen to encounter any error messages while running depth_image_proc? I am worried it is not running at all.
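For context, PointCloudXyzrgbNode-style nodes typically gate their work on a time synchronizer over the depth image, the RGB image, and the camera info, so if any one of the three subscriptions never delivers data (or has incompatible QoS), the combined callback never fires and nothing is published. A minimal sketch of that pattern using plain message_filters subscribers (the actual depth_image_proc code goes through image_transport and may differ in detail):

#include <functional>
#include <memory>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/camera_info.hpp>
#include <sensor_msgs/msg/image.hpp>

using sensor_msgs::msg::CameraInfo;
using sensor_msgs::msg::Image;
using SyncPolicy = message_filters::sync_policies::ApproximateTime<Image, Image, CameraInfo>;

class SyncDemo : public rclcpp::Node
{
public:
  SyncDemo() : Node("sync_demo")
  {
    depth_sub_.subscribe(this, "depth_registered/image_rect");
    rgb_sub_.subscribe(this, "rgb/image_rect_color");
    info_sub_.subscribe(this, "rgb/camera_info");
    sync_ = std::make_shared<message_filters::Synchronizer<SyncPolicy>>(
      SyncPolicy(10), depth_sub_, rgb_sub_, info_sub_);
    sync_->registerCallback(std::bind(
      &SyncDemo::imageCb, this,
      std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
  }

private:
  // Fires only when all three inputs arrive with close enough timestamps.
  void imageCb(
    const Image::ConstSharedPtr & depth, const Image::ConstSharedPtr & rgb,
    const CameraInfo::ConstSharedPtr & info)
  {
    (void)depth; (void)rgb; (void)info;
    RCLCPP_INFO(get_logger(), "synchronized depth/rgb/camera_info triple received");
  }

  message_filters::Subscriber<Image> depth_sub_;
  message_filters::Subscriber<Image> rgb_sub_;
  message_filters::Subscriber<CameraInfo> info_sub_;
  std::shared_ptr<message_filters::Synchronizer<SyncPolicy>> sync_;
};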
Hello. I did not see any error messages. This is a capture of the log:
[INFO] [launch]: All log files can be found below /root/.ros/log/2020-04-28-23-24-14-624456-8a2403e528d6-2426
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container-1]: process started with pid [2437]
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/realsense_depth_image' in container '/realsense_container'
[component_container-1] [INFO] [realsense_container]: Load Library: /opt/tmp_ws/install/depth_image_proc/lib/libdepth_image_proc.so
[component_container-1] [INFO] [realsense_container]: Found class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::PointCloudXyzrgbNode>
[component_container-1] [INFO] [realsense_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::PointCloudXyzrgbNode>
Could you list the output from ros2 node info and ros2 topic info for the respective topics and nodes in question, to check that everything is being mapped correctly?
Also, using markdown code blocks helps readability and avoids word wrapping: https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks
Hello. This is the info of the nodes and topics:
zikprid@master-node:~/work$ ros2 node info /realsense_depth_image
/realsense_depth_image
Subscribers:
/front_camera/camera_info_raw: sensor_msgs/msg/CameraInfo
/front_camera/depth/image_raw: sensor_msgs/msg/Image
/front_camera/rgb/image_raw: sensor_msgs/msg/Image
/parameter_events: rcl_interfaces/msg/ParameterEvent
Publishers:
/front_camera/depth/points2: sensor_msgs/msg/PointCloud2
/parameter_events: rcl_interfaces/msg/ParameterEvent
/rosout: rcl_interfaces/msg/Log
Services:
/realsense_depth_image/describe_parameters: rcl_interfaces/srv/DescribeParameters
/realsense_depth_image/get_parameter_types: rcl_interfaces/srv/GetParameterTypes
/realsense_depth_image/get_parameters: rcl_interfaces/srv/GetParameters
/realsense_depth_image/list_parameters: rcl_interfaces/srv/ListParameters
/realsense_depth_image/set_parameters: rcl_interfaces/srv/SetParameters
/realsense_depth_image/set_parameters_atomically: rcl_interfaces/srv/SetParametersAtomically
zikprid@master-node:~/work$ ros2 node info /realsense_container
/realsense_container
Subscribers:
/parameter_events: rcl_interfaces/msg/ParameterEvent
Publishers:
/parameter_events: rcl_interfaces/msg/ParameterEvent
/rosout: rcl_interfaces/msg/Log
Services:
/realsense_container/_container/list_nodes: composition_interfaces/srv/ListNodes
/realsense_container/_container/load_node: composition_interfaces/srv/LoadNode
/realsense_container/_container/unload_node: composition_interfaces/srv/UnloadNode
/realsense_container/describe_parameters: rcl_interfaces/srv/DescribeParameters
/realsense_container/get_parameter_types: rcl_interfaces/srv/GetParameterTypes
/realsense_container/get_parameters: rcl_interfaces/srv/GetParameters
/realsense_container/list_parameters: rcl_interfaces/srv/ListParameters
/realsense_container/set_parameters: rcl_interfaces/srv/SetParameters
/realsense_container/set_parameters_atomically: rcl_interfaces/srv/SetParametersAtomically
zikprid@master-node:~/work$ ros2 node list
/gazebo
/plugin_name
/launch_ros_2426
/realsense_container
/realsense_depth_image
zikprid@master-node:~/work$ ros2 topic list
/clock
/front_camera/camera_info_raw
/front_camera/depth/camera_info
/front_camera/depth/image_raw
/front_camera/depth/image_raw/compressed
/front_camera/depth/image_raw/compressedDepth
/front_camera/depth/image_raw/theora
/front_camera/depth/points
/front_camera/depth/points2
/front_camera/rgb/image_raw
/front_camera/rgb/image_raw/compressedDepth
/front_camera/rgb/image_raw/theora
/parameter_events
/rosout
zikprid@master-node:~/work$ ros2 topic info /front_camera/camera_info_raw
Topic: /front_camera/camera_info_raw
Publisher count: 1
Subscriber count: 1
zikprid@master-node:~/work$ ros2 topic info /front_camera/rgb/image_raw
Topic: /front_camera/rgb/image_raw
Publisher count: 1
Subscriber count: 1
zikprid@master-node:~/work$ ros2 topic info /front_camera/depth/image_raw
Topic: /front_camera/depth/image_raw
Publisher count: 1
Subscriber count: 1
Can we check the QoS policies on all the publishers/subscribers to see if there is a mismatch? What DDS vendor were you using?
Good suggestion @BillWSY. @zikprid0, could you verify that the respective subscribers and publishers are using compatible QoS? For example, by default omnimapper_ros attempts to subscribe to sensor topics using ROS's sensor-data QoS profile. There is a parameter in omnimapper_ros that can be used to change the QoS, e.g. when using rosbag, which publishes via the system default QoS: https://github.com/CogRob/omnimapper_ros/commit/f1ee4fc5daec425adcd84215b584fb7a13645cb5
The latter should be less of an issue in ROS 2 Foxy now that rosbag has support for preserving the QoS settings used when recording: https://github.com/ros2/ros2_documentation/pull/627
However, configuring QoS at runtime is still an open design question:
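As a concrete illustration of the compatibility rule (a minimal sketch, not omnimapper_ros code): a best-effort subscription such as SensorDataQoS can match both best-effort and reliable publishers, but a reliable subscription will never receive data from a publisher that only offers best effort.

#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>

class QosDemo : public rclcpp::Node
{
public:
  QosDemo() : Node("qos_demo")
  {
    // Best-effort subscription (sensor-data profile): compatible with both
    // best-effort and reliable publishers.
    sensor_sub_ = create_subscription<sensor_msgs::msg::PointCloud2>(
      "points", rclcpp::SensorDataQoS(),
      [](sensor_msgs::msg::PointCloud2::SharedPtr) {});

    // Reliable subscription: will NOT match a publisher that was created with
    // a best-effort profile (e.g. rclcpp::SensorDataQoS on the publisher side).
    reliable_sub_ = create_subscription<sensor_msgs::msg::PointCloud2>(
      "points", rclcpp::QoS(10).reliable(),
      [](sensor_msgs::msg::PointCloud2::SharedPtr) {});
  }

private:
  rclcpp::Subscription<sensor_msgs::msg::PointCloud2>::SharedPtr sensor_sub_;
  rclcpp::Subscription<sensor_msgs::msg::PointCloud2>::SharedPtr reliable_sub_;
};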
Hello @ruffsl, @BillWSY. I changed the QoS of pub_point_cloud from rclcpp::SensorDataQoS to rclcpp::SystemDefaultsQoS, but it had no effect.
So I checked the depth_image_proc and image_transport source code, and I printed some data in depth_image_proc like this:
int depth_num_publisher = 0;
int count = 0;
while (depth_num_publisher == 0 && count < 10) {
  depth_num_publisher = sub_depth_.getNumPublishers();
  std::cout << "AAA:: " << sub_depth_.getTopic() << ", " << sub_depth_.getNumPublishers() << std::endl;
  std::cout << "AAA:: " << sub_depth_.getTransport() << ", " << sub_depth_.getSubscriber() << std::endl;
  std::cout << "AAA:: " << sub_rgb_.getTopic() << ", " << sub_rgb_.getNumPublishers() << std::endl;
  std::cout << "AAA:: " << sub_rgb_.getTransport() << ", " << sub_rgb_.getSubscriber() << std::endl;
  std::cout << "AAA:: " << sub_info_.getTopic() << ", " << sub_info_.getSubscriber() << std::endl;
  rclcpp::sleep_for(std::chrono::seconds(1));
  count++;
}
And I got this result:
[component_container-1] AAA:: /front_camera/depth/image_raw, 0
[component_container-1] AAA:: raw, 0x1
[component_container-1] AAA:: /front_camera/rgb/image_raw, 0
[component_container-1] AAA:: raw, 0x1
[component_container-1] AAA:: /front_camera/camera_info_raw, 0x5579575c9520
[component_container-1] AAA:: /front_camera/depth/image_raw, 1
[component_container-1] AAA:: raw, 0x1
[component_container-1] AAA:: /front_camera/rgb/image_raw, 1
[component_container-1] AAA:: raw, 0x1
[component_container-1] AAA:: /front_camera/camera_info_raw, 0x5579575c9520
The image_transport::Subscriber of sub_depth_ and sub_rgb_ printed 0x1. I think it should print something similar to sub_info_'s. Can you guess why?
Thank you.
> I changed the QoS of pub_point_cloud from rclcpp::SensorDataQoS to rclcpp::SystemDefaultsQoS.
Could you step back and detail exactly what you changed: which launch file are you using, and what values did you set for which parameters? For example, the realsense.launch.xml launch file has a different default QoS setting than the base omnimapper_ros.launch.xml launch file, given it's for a sensor. If you didn't change the QoS for the subscribers in omnimapper to match what you set for the publisher in pub_point_cloud, then data exchange may not be successful.
> But it had no effect.
If that's the case, then I'm thinking ros2 topic info may only be aggregating telemetry from the ROS graph at a matching-topic level, and not at a matching-QoS level. If we have a minimal example to reproduce, we should file a ticket over on the ros2cli repo as well.
To repeat @BillWSY's question:
> What DDS vendor were you using?
- Which RMW? (e.g. rmw_fastrtps_cpp?)
@BillWSY I used rmw_fastrtps_cpp.
@ruffsl I have not run Omnimapper yet, because depth_image_proc could not publish the point cloud message. I changed this
pub_point_cloud_ = create_publisher<PointCloud2>("points", rclcpp::SensorDataQoS());
to
pub_point_cloud_ = create_publisher<PointCloud2>("points", rclcpp::SystemDefaultsQoS());
Looking back at the world file you provided: what are you trying to do with the <argument> tags within the <ros> parent element? Are you trying to specify or remap these topic namespaces?
<plugin name="plugin_name" filename="libgazebo_ros_camera.so">
  <ros>
    <argument>/front_camera/depth/image_raw:=/front_camera/depth/image_raw</argument>
    <argument>/front_camera/depth/camera_info:=/front_camera/depth/camera_info</argument>
    <argument>/front_camera/image_raw:=/front_camera/rgb/image_raw</argument>
    <argument>/front_camera/image_raw/compressed:=/front_camera/rgb/image_raw/compressedDepth</argument>
    <argument>/front_camera/image_raw/compressedDepth:=/front_camera/rgb/image_raw/compressedDepth</argument>
    <argument>/front_camera/image_raw/theora:=/front_camera/rgb/image_raw/theora</argument>
    <argument>/front_camera/camera_info_raw:=/front_camera/camera_info_raw</argument>
    <argument>/front_camera/points:=/front_camera/depth/points</argument>
  </ros>
If that's the case, then I think you should be using the <namespace> tag instead:
Could you please check that the gazebo plugin is publishing the correct data to the expected topics, regardless of whether depth_image_proc is being used? Using rqt's node graph view or rviz2 should help check.
Hello. I checked the RGB and depth image topics (/front_camera/rgb/image_raw and /front_camera/depth/image_raw). Rviz2 shows the images from these topics fine. I used this gazebo plugin in another project and there were no problems, so I think these topics work correctly.
Thanks.
Hi -- would you be able to record a bag file for all of the image_raw topics? We can try to run it here to see if we can find the issue.
Hi @zikprid0, I was inspecting the ros2 bag file you sent us, and it seems the depth image is malformed. In rviz I can only see a black image with a few bright pixels at the top. Do you have the same issue? Were you able to use rviz to visualize /front_camera/depth/image_raw?
To @BillWSY: I sent you an email with some screenshots because I cannot attach images here. I will check and re-record the bag file.
Thanks.
I've been waiting on Foxy here, as the current LTS ships with QoS improvements to rosbag now that https://github.com/ros2/rosbag2/issues/125 has been resolved. I had been suspecting that the differing QoS between recording and playback may have been subtly affecting your setup: before Foxy, you would have had to switch between a reliable QoS publication to record bag files and the unreliable QoS publication you'd need for Omnimapper to subscribe. With Foxy, rosbag2 can now record and play back with the same original QoS, helping us determine whether you've set your publication QoS options correctly.
I had also been hoping that https://github.com/ros-simulation/gazebo_ros_pkgs/issues/1080 would be resolved by now, but one can still build the gazebo ROS 2 packages from source, and that is working, as I've been doing over on https://github.com/ros-swg/turtlebot3_demo/pull/34 . There is an open PR for adding a depth-only option to avoid serializing a color-registered depth cloud, https://github.com/ros-simulation/gazebo_ros_pkgs/pull/1118 , so the reduction in the number of channels may help debug any density-forming issues.
With https://github.com/CogRob/omnimapper/pull/30 , omnimapper and omnimapper_ros are now building with Foxy, so we could try to debug your QoS issues further with the current LTS distro.
It seems your image does not include a visualization of the /front_camera/depth/image_raw topic in rviz2. Would it be possible for you to check that? Thanks.
Hello. I made new progress today. I tried to find out why depth_image_proc is not working and modified its source code; it works after I comment out the camera_info topic subscription. Omnimapper then shows this log and dies:
[omnimapper_ros_node-1] BoundedPlanePlugin: Constructor.
[omnimapper_ros_node-1] [INFO] [omnimapper_ros_node]: OmniMapperROS: Constructing... Loading ROS Params...
[omnimapper_ros_node-1] [INFO] [omnimapper_ros_node]: Waiting for initial pose from odom to base_footprint
[omnimapper_ros_node-1] [INFO] [omnimapper_ros_node]: GetTransformFunctorTF: waiting for camera_link to base_footprint to be ready: First attempt.
[omnimapper_ros_node-1] [INFO] [omnimapper_ros_node]: OmniMapperROS: Installing BoundedPlanePlugin callback.
[omnimapper_ros_node-1] BoundedPlanePlugin: Got 3 regions.
[omnimapper_ros_node-1] border before: 0
[omnimapper_ros_node-1] border after: 0
[omnimapper_ros_node-1] BoundedPlanePlugin: GOT VALID MEASUREMENT!
[omnimapper_ros_node-1] Model: -0.510358 -0.000267 0.859962 -0.355007, Base Model: -0.464451 -0.211544 -0.859962 0.355007
[omnimapper_ros_node-1] border before: 2360
[omnimapper_ros_node-1] border after: 2360
[omnimapper_ros_node-1] BoundedPlanePlugin: GOT VALID MEASUREMENT!
[omnimapper_ros_node-1] Model: -0.510144 -0.000424 0.860089 -0.355090, Base Model: -0.464191 -0.211598 -0.860089 0.355090
[omnimapper_ros_node-1] border before: 544
[omnimapper_ros_node-1] border after: 538
[omnimapper_ros_node-1] BoundedPlanePlugin: GOT VALID MEASUREMENT!
[omnimapper_ros_node-1] Model: -0.511623 0.015263 0.859074 -0.358054, Base Model: -0.472033 -0.197931 -0.859074 0.358054
[omnimapper_ros_node-1] BoundedPlanePlugin: Have 3 measurements
[omnimapper_ros_node-1] BoundedPlanePlugin: Processing planes for pose 8646911284551352320
[omnimapper_ros_node-1] BoundedPlanePlugin: Creating new plane 7061644215716937728: 0.510358 0.000279 -0.859962 4.392852
[omnimapper_ros_node-1] Adding new symbol: b0
[omnimapper_ros_node-1] BoundedPlanePlugin: Added factor!
[omnimapper_ros_node-1] ERROR: Initializing boundary at bad place: Point is 4.035764 from plane.
I tried again after moving the robot here and there, but it died with the same error, differing only in the point's distance value. I will check the omnimapper source code. If you have any ideas, please tell me.
Thanks for the update. We had seen similar issues before and fixed them with this commit: https://github.com/CogRob/omnimapper/commit/79c8b1243f22ee3bf86092edc5b0bc798d7e68e0
Can you check that you are using the latest code?
@BillWSY Thank you. I made progress thanks to your suggestion. I re-cloned the omnimapper source (eloquent-devel branch) and built it. It is not perfect yet, but I got a plausible picture in rviz2. I think there is a problem with the tf messages between Docker and the host, and I will fix it soon. Thanks.
Hello. I tested Omnimapper with the Docker image cogrob/omnimapper_ros:latest.
Omnimapper died at the first cloud callback.
I tested it with ROS Dashing, Ubuntu Bionic, and the Gazebo simulator.
This is the error log of Omnimapper.
It seems the PCL library could not compute the normal cloud from the input cloud.
This is the PointCloud2 ROS message from that test.
I also tested making the 'is_dense' flag true by moving the robot,
but it was not helpful.
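This would be consistent with the organized-input requirement discussed above: PCL's integral-image normal estimation (the usual choice in organized segmentation pipelines) only accepts organized clouds with height > 1, so a flat cloud yields no normals. A minimal illustrative sketch of standalone PCL usage (assuming omnimapper uses this or a similar estimator; not omnimapper code):

#include <pcl/features/integral_image_normal.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Integral-image normal estimation only works on organized clouds
// (height > 1); on a flat cloud it reports an error and produces nothing.
pcl::PointCloud<pcl::Normal>::Ptr estimate_normals(
  const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr & cloud)
{
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::IntegralImageNormalEstimation<pcl::PointXYZRGBA, pcl::Normal> ne;
  ne.setNormalEstimationMethod(ne.COVARIANCE_MATRIX);
  ne.setMaxDepthChangeFactor(0.02f);
  ne.setNormalSmoothingSize(20.0f);
  ne.setInputCloud(cloud);  // expects cloud->isOrganized() == true
  ne.compute(*normals);
  return normals;
}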