ethz-asl / maplab

A Modular and Multi-Modal Mapping Framework
https://maplab.asl.ethz.ch
Apache License 2.0

Using ROVIOLI on a pan-tilt-zoom camera sensor dataset #44

Closed kashmoneygt closed 6 years ago

kashmoneygt commented 6 years ago

Hello, I am trying to use Rovioli (in VIO mode) on a dataset (rosbag) to create a VI map. However, I'm not sure if I'm utilizing Rovioli correctly. My dataset was acquired by an Autonomous Surface Vessel surveying a lakeshore with a pan-tilt-zoom camera. At the moment, I am using the default EuRoC calibration files.

Here are the topics of my rosbag: [screenshot of rostopic list]

Here is how I am calling rovioli, and its output: [screenshot]

Once I open the map folder created by rovioli, it contains an empty resources directory and that's it.

My main question is: is rovioli failing to recognize the topics of my dataset (such as the imu measurements) because of incorrect calibration files, or because I am not using rovioli correctly? Please give me advice, and thank you!

NOTES: Maplab is successfully installed on Ubuntu 14.04 with the following plugins:

mfehr commented 6 years ago

Hi @kashmoneygt

Thank you for the report. There are several crucial steps missing before you can run ROVIOLI on your dataset:

ZacharyTaylor commented 6 years ago

Also, just adding to this: I noticed your IMU is only running at 20 Hz. It's possible this will work, but I have never seen anyone run rovio with an IMU rate of less than 100 Hz.
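
A quick way to confirm the actual rate (assuming the IMU topic is /imu/data, as the kalibr output later in this thread suggests) is to measure it while the bag plays:

```bash
# In one terminal, play the dataset (bag name is a placeholder):
rosbag play your_dataset.bag
# In another terminal, measure the publishing rate of the IMU topic:
rostopic hz /imu/data
```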

kashmoneygt commented 6 years ago

Thanks for all that useful information. I made the following changes accordingly.

The images are on the rostopic /axis/image_raw; however, they are compressed, as you can see in the picture of all rostopics above. Is there any sort of parameter I can pass into ROVIOLI for the transport type to be compressed? I do not have access to the original image_raw topic.

If not, may I get some help on how to feed individual image frames into ROVIOLI rather than rosbags?

Thank you once again.

ZacharyTaylor commented 6 years ago

You can start an additional node that will republish compressed images in an uncompressed format:

```xml
<node name="republish" type="republish" pkg="image_transport" output="screen"
      args="compressed in:=/axis/image_raw/compressed raw out:=/axis/image_raw" />
```

Note, however, that any artifacts induced by the compression will still be present, and this will detrimentally impact the quality of the tracking and localization.

mfehr commented 6 years ago

Unfortunately, there is no parameter in ROVIOLI to account for compressed images. You will have to either do what @ZacharyTaylor suggested, convert your ROS data prior to feeding it to ROVIOLI (e.g. have a look here), or change the interfaces (DataSourceRosbag or DataSourceRosTopic).
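
For the offline-conversion route, a minimal sketch using the rosbag and cv_bridge Python APIs might look like the following; the topic names match this thread, while the bag file names are placeholders:

```python
#!/usr/bin/env python
# Sketch: rewrite a bag so the compressed stream also exists as raw
# sensor_msgs/Image messages. Bag file names are placeholders.
import cv2
import numpy as np
import rosbag
from cv_bridge import CvBridge

bridge = CvBridge()
with rosbag.Bag('converted.bag', 'w') as outbag:
    for topic, msg, t in rosbag.Bag('input.bag').read_messages():
        if topic == '/axis/image_raw/compressed':
            # Decode the compressed payload and re-wrap it as a raw image.
            cv_image = cv2.imdecode(
                np.fromstring(msg.data, np.uint8), cv2.IMREAD_COLOR)
            raw_msg = bridge.cv2_to_imgmsg(cv_image, encoding='bgr8')
            raw_msg.header = msg.header  # preserve the original timestamps
            outbag.write('/axis/image_raw', raw_msg, t)
        else:
            outbag.write(topic, msg, t)
```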

If you would like to add a new way to input data to ROVIOLI, you should have a look at its datasource system. It consists of a parent class (link) and two descendants for inputting live ROS data (link) or reading rosbags (link). You could add a third option that parses data in a specific format from the file system. You will then need to extend the DataSourceType and the factory function so that your own datasource is created when the gflag is set accordingly (--datasource_type, defined here). The DataSourceFlow creates one of the DataSource classes (rostopic, rosbag, or e.g. your version) based on the gflag and then starts streaming the data.
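
Schematically, such a third datasource could be a subclass along the following lines. This is only an illustrative sketch: the actual virtual methods, measurement types, and callback registration are the ones defined in ROVIOLI's datasource.h, and the names used here are placeholders.

```cpp
// Illustrative sketch of a filesystem-backed datasource for ROVIOLI.
// Method names and types are placeholders; see rovioli/datasource.h
// for the real base-class interface.
class DataSourceFilesystem : public DataSource {
 public:
  explicit DataSourceFilesystem(const std::string& dataset_path)
      : dataset_path_(dataset_path) {}

  // Invoked by the DataSourceFlow once the pipeline is attached.
  virtual void startStreaming() {
    // 1. Parse image files and an IMU log from dataset_path_.
    // 2. Build image/IMU measurement objects in timestamp order.
    // 3. Hand them to the callbacks registered on this datasource,
    //    analogous to what DataSourceRosbag does per bag message.
  }

  virtual std::string getDatasetName() const { return dataset_path_; }

 private:
  const std::string dataset_path_;
};
```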

kashmoneygt commented 6 years ago

I appreciate the help in getting started from both of you.

I created an uncompress_img.launch launch file.

When I created the launch file using what @ZacharyTaylor specified, I got the following warning, so I updated the .launch file as above.

```
[ WARN] [1519133772.177988434]: [image_transport] It looks like you are trying to subscribe directly to a transport-specific image topic '/axis/image_raw/compressed', in which case you will likely get a connection error. Try subscribing to the base topic '/axis/image_raw' instead with parameter ~image_transport set to 'compressed' (on the command line, _image_transport:=compressed). See http://ros.org/wiki/image_transport for details.
```
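
The updated launch file itself was only attached as a screenshot; following the hint in the warning, it presumably looked something like this (a reconstruction, not the original file), with the node subscribing to the base topic so that the compressed transport resolves to /axis/image_raw/compressed internally:

```xml
<!-- Hypothetical reconstruction of uncompress_img.launch; the actual
     file was posted as a screenshot. -->
<launch>
  <node name="republish" type="republish" pkg="image_transport" output="screen"
        args="compressed in:=/axis/image_raw raw out:=/axis/image_raw" />
</launch>
```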

I created a topic_to_map script and placed it alongside the other scripts:

```bash
#!/usr/bin/env bash
LOCALIZATION_MAP_OUTPUT=$1
NCAMERA_CALIBRATION="$ROVIO_CONFIG_DIR/camera_calibration.yaml"
IMU_PARAMETERS_MAPLAB="$ROVIO_CONFIG_DIR/imu-adis16488.yaml"
IMU_PARAMETERS_ROVIO="$ROVIO_CONFIG_DIR/imu-sigmas-rovio.yaml"
REST=$@

rosrun rovioli rovioli \
  --alsologtostderr=1 \
  --v=2 \
  --ncamera_calibration=$NCAMERA_CALIBRATION \
  --imu_parameters_maplab=$IMU_PARAMETERS_MAPLAB \
  --imu_parameters_rovio=$IMU_PARAMETERS_ROVIO \
  --datasource_type="rostopic" \
  --save_map_folder="$LOCALIZATION_MAP_OUTPUT" \
  --map_builder_save_image_as_resources=false \
  --optimize_map_to_localization_map=false \
  $REST
```

I did not specify a --vio_camera_topic_suffix="" flag because I am republishing to the default topic (image_raw).

In terminal 1, I ran `roslaunch ~/uncompress_img.launch`, which started roscore and the image_transport node successfully. [screenshot]

In terminal 2, I ran `rosrun rovioli topic_to_map ~/bags/maps/`.

In terminal 3, I played the rosbag.

However, every 5 seconds rovioli says that it has not received any camera messages or imu messages. [screenshot]

I checked that the raw images were being published to image_raw by playing the rosbag and running `rostopic echo image_raw`. I also checked that all filepaths are found: when I Ctrl+C the ROVIOLI process, it reports any filepath it did not find, and I fixed all such errors.

I was wondering what else I can debug to ensure that ROVIOLI can receive the imu data and image_raw data from playing the rosbag with the roslaunch file and live script running.

Also @mfehr I will most definitely look into adding a third option. Thanks for all the info on how I can go about doing that!

schneith commented 6 years ago
> I was wondering what else I can debug to ensure that ROVIOLI can receive the imu data and image_raw data from playing the rosbag with the roslaunch file and live script running.

If you enable verbosity level >= 3 (e.g. by adding the argument -v=3) you should see the output of the following lines:

- https://github.com/ethz-asl/maplab/blob/master/applications/rovioli/src/datasource-rosbag.cc#L163
- https://github.com/ethz-asl/maplab/blob/master/applications/rovioli/src/datasource-rosbag.cc#L178

If you see those outputs then the sensor data is fed properly to the system and the issue must lie somewhere else.
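
Since the topic_to_map script above forwards extra arguments via `REST=$@`, and gflags lets a later occurrence of a flag override an earlier one, the verbosity can presumably be raised without editing the script:

```bash
# --v=3 is appended after the script's own --v=2 and takes precedence.
rosrun rovioli topic_to_map ~/bags/maps/ --v=3
```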

kashmoneygt commented 6 years ago

Hello,

I am having trouble visualizing the map in maplab_console once I have created the map with ROVIOLI. Side note: thank you for all the tips and help you gave me in creating the map. I'm very much a beginner, but your advice has helped me out very much.

I created the map with the following:

Terminal 1: running the roslaunch file that subscribes to the compressed image topic and broadcasts the raw image topic:
`$ roslaunch ~/uncompress_img.launch`

File contents of uncompress_img.launch: [screenshot]

Terminal 2: running the rovioli live script:
`$ rosrun rovioli topic_to_map ~/bags/maps/`

Contents of topic_to_map: [screenshot]

Terminal 3: playing the rosbags:
`$ rosbag play -s 120 *.bag`

Notes:

- 103 rosbags in total; each bag contains 30 to 43 seconds of recording
- the rosbags are continuous recordings of an unmanned vessel moving around a lake automatically, staying 10 meters away from the shoreline
- the first 120 s are skipped (-s 120) because they consist of getting the vessel ready

`rosbag info *.bag` output: [screenshot]

ROVIOLI is successfully subscribing to the rosbag's broadcasted messages: [screenshot]

Creation of the map in ROVIOLI looks like the following: [screenshot]


When I try to visualize the map via maplab_console and rviz, it does not display properly. I am assuming this is because of my lack of understanding of rviz, so I consulted rviz wikis and forums but was not able to get anywhere.

Terminal 1: after maplab_console successfully starts:
`$ load --map_folder ~/bags/maps/_02`
`$ visualize`

Terminal 2: running rviz:
`$ rosrun rviz rviz`

I followed the steps from here: https://github.com/ethz-asl/maplab/wiki/Console-map-management

> On the left side, click the Add button and select the data you want to visualize. In the tab By topic, select Marker from /vi_map_edges/viwls and PointCloud2 from vi_map_landmarks, should give you a view similar to this:

However, my view looks like the following.

Close-up: [screenshot]

A little bit zoomed out: [screenshot]

Zoomed out very far: [screenshot]

I read up on fixed frames, and I know that I should select 'map' from the two options I'm given: 'map' and 'camera_fe01'.

My question is: what can I do so that the map I created can possibly resemble the lake that the unmanned vehicle went around?

mfehr commented 6 years ago

Hi,

So first of all, the problem you have is certainly unrelated to visualizing the map: your rovio(li) screenshot makes it evident that the estimation itself is not working. No estimation, no meaningful map to visualize. If you want to debug this, you need to focus on getting rovio working. That means looking at the ROVIO window and at the tf and stamped transforms in RVIZ while ROVIOLI is running. Increase the verbosity level to get more output from rovio, as @schneith suggested. Since the image and imu topics are now connected to rovio, the problem lies elsewhere. In our posts above we already listed the most likely problems:

Please also post your current calibration files (camera, imu maplab, imu rovio) as suggested above.

kashmoneygt commented 6 years ago

Thank you for the quick response. Sorry that I did not post the calibration files when previously asked. The transformation between the camera and the imu is constant. The imu sensor is situated 10 cm below and 5 cm to the left of the camera, but I am not sure how to translate these measurements into the camera calibration file.
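
For illustration only: in the EuRoC-style ncamera YAML, the camera-to-IMU extrinsics enter as a 4x4 homogeneous transform (the T_B_C block in the example file). Assuming, purely as an example, an identity rotation and IMU axes where x points right and z points up, a camera 5 cm to the right of and 10 cm above the IMU would give the block below; the real axis conventions and the rotation have to come from an actual calibration, e.g. with kalibr:

```yaml
# Hypothetical extrinsics block. The last column is the camera position in
# the IMU/body frame in meters; identity rotation is for illustration only.
T_B_C:
  cols: 4
  rows: 4
  data: [1.0, 0.0, 0.0, 0.05,
         0.0, 1.0, 0.0, 0.00,
         0.0, 0.0, 1.0, 0.10,
         0.0, 0.0, 0.0, 1.0]
```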

camera: [screenshot]
imu maplab: [screenshot]
imu rovio: [screenshot]

I originally had a problem running kalibr a few weeks back, but I will try it once again.

The problem:

```
kpatel2@gtlpc101:~/Downloads/kalibr-cde$ ./cde-exec kalibr_calibrate_imu_camera --bag ~/bags/2015-07-23-09-44-08_50.bag --cam ~/camera_calib/camchain.yaml --imu ~/camera_calib/imu.yaml --target ~/camera_calib/checkerboard.yaml
importing libraries
Initializing IMUs:
  Update rate: 20.0
  Accelerometer:
    Noise density: 0.00186
    Noise density (discrete): 0.0083181728763
    Random walk: 0.000433
  Gyroscope:
    Noise density: 0.000187
    Noise density (discrete): 0.000836289423585
    Random walk: 2.66e-05
Initializing imu rosbag dataset reader:
  Dataset: /home/GTL/kpatel2/bags/2015-07-23-09-44-08_50.bag
  Topic: /imu/data
  Number of messages: 878
Reading IMU data (/imu/data)
  Read 878 imu readings over 44.3 seconds
Initializing calibration target:
  Type: checkerboard
  Rows
    Count: 7
    Distance: 0.06 [m]
  Cols
    Count: 6
    Distance: 0.06 [m]
Initializing camera chain:
Camera chain - cam0:
  Camera model: pinhole
  Focal length: [461.629, 460.152]
  Principal point: [362.68, 246.049]
  Distortion model: radtan
  Distortion coefficients: [-0.27695497, 0.06712482, 0.00087538, 0.00011556]
  baseline: no data available
Initializing camera rosbag dataset reader:
  Dataset: /home/GTL/kpatel2/bags/2015-07-23-09-44-08_50.bag
  Topic: /axis/camera_info
  Number of images: 444
Extracting calibration target corners
Traceback (most recent call last):
  File "/var/kalibr-build/devel/bin/kalibr_calibrate_imu_camera", line 5, in <module>
    exec(fh.read())
  File "<string>", line 206, in <module>
  File "<string>", line 149, in main
  File "/var/kalibr-build/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_imu_camera_calibration/IccSensors.py", line 391, in __init__
    showOneStep=parsed.extractionstepping) )
  File "/var/kalibr-build/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_imu_camera_calibration/IccSensors.py", line 59, in __init__
    self.targetObservations = kc.extractCornersFromDataset(self.dataset, self.detector, multithreading=multithreading)
  File "/var/kalibr-build/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_common/TargetExtractor.py", line 73, in extractCornersFromDataset
    raise RuntimeError("Exception during multithreaded extraction: {0}".format(e))
RuntimeError: Exception during multithreaded extraction: '_sensor_msgs__CameraInfo' object has no attribute 'encoding'
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook
    from apport.fileutils import likely_packaged, get_recent_crashes
ImportError: No module named apport.fileutils

Original exception was:
Traceback (most recent call last):
  File "/var/kalibr-build/devel/bin/kalibr_calibrate_imu_camera", line 5, in <module>
    exec(fh.read())
  File "<string>", line 206, in <module>
  File "<string>", line 149, in main
  File "/var/kalibr-build/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_imu_camera_calibration/IccSensors.py", line 391, in __init__
    showOneStep=parsed.extractionstepping) )
  File "/var/kalibr-build/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_imu_camera_calibration/IccSensors.py", line 59, in __init__
    self.targetObservations = kc.extractCornersFromDataset(self.dataset, self.detector, multithreading=multithreading)
  File "/var/kalibr-build/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_common/TargetExtractor.py", line 73, in extractCornersFromDataset
    raise RuntimeError("Exception during multithreaded extraction: {0}".format(e))
RuntimeError: Exception during multithreaded extraction: '_sensor_msgs__CameraInfo' object has no attribute 'encoding'
```

Thanks again.

mfehr commented 6 years ago

So based on your calibration files, you adapted only the camera intrinsics so far:

To get this IMU-to-camera transformation you will need a proper calibration dataset of your sensor, which I assume you have, since you tried using kalibr. So I assume the public dataset you're using does not provide this transformation already? Have you checked the tf tree to see if there is an IMU frame?

The error you got at least claims that your CameraInfo message is missing the encoding field. I'm not sure that is the real reason, but you should check.
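
One quick, hedged check: the corner extractor fails while reading the encoding field, which only exists on sensor_msgs/Image messages, and the kalibr output above shows it reading the /axis/camera_info topic. Listing the topics and message types stored in the bag would confirm which topic actually carries images:

```bash
# rosbag info prints every topic in the bag together with its message type;
# the camera topic handed to kalibr should be of type sensor_msgs/Image.
rosbag info ~/bags/2015-07-23-09-44-08_50.bag
```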

SYSUHPK commented 5 years ago

@kashmoneygt Have you solved your problem? I also encountered a similar problem to yours. I ran the following command:

```
kalibr_calibrate_cameras --bag /home/nvidia/kalibr_workspace/output.bag --topics /android/image_raw/compressed --models pinhole-equi --target /home/nvidia/kalibr_workspace/ap.yaml
```

But it reported an error:

```
importing libraries
Initializing cam0:
  Camera model: pinhole-equi
  Dataset: /home/nvidia/kalibr_workspace/output.bag
  Topic: /android/image_raw/compressed
  Number of images: 877
Extracting calibration target corners
Traceback (most recent call last):
  File "/home/nvidia/kalibr_workspace/devel/bin/kalibr_calibrate_cameras", line 15, in <module>
    exec(compile(fh.read(), python_script, 'exec'), context)
  File "/home/nvidia/kalibr_workspace/src/Kalibr/aslam_offline_calibration/kalibr/python/kalibr_calibrate_cameras", line 447, in <module>
    main()
  File "/home/nvidia/kalibr_workspace/src/Kalibr/aslam_offline_calibration/kalibr/python/kalibr_calibrate_cameras", line 178, in main
    noTransformation=True)
  File "/home/nvidia/kalibr_workspace/src/Kalibr/aslam_offline_calibration/kalibr/python/kalibr_common/TargetExtractor.py", line 73, in extractCornersFromDataset
    raise RuntimeError("Exception during multithreaded extraction: {0}".format(e))
RuntimeError: Exception during multithreaded extraction: '_sensor_msgs__CompressedImage' object has no attribute 'encoding'
```

What should I do?
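
This error mirrors the earlier one in this thread: the corner extractor reads the encoding field of raw sensor_msgs/Image messages, while /android/image_raw/compressed carries sensor_msgs/CompressedImage. The republish approach suggested above should apply here as well, for example by recording the republished raw topic into a new bag before running kalibr (the output topic name below is arbitrary):

```bash
# Republish the compressed stream as raw images, then record the raw topic.
rosrun image_transport republish compressed in:=/android/image_raw raw out:=/android/image_raw_uncompressed
rosbag record /android/image_raw_uncompressed
```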