mslavescu opened 6 years ago
@mslavescu said: If anyone makes it work, please share it there, and let's talk more in #1-mvp-vbacc and #3-mvp-smart-camera about how to integrate VINS cameras. I'll try to collect some datasets with the Lattice Embedded Vision Kit (dual camera) and a phone accelerometer/gyro, so we can test scenarios from the car at faster speeds.
I just finished a first pass of the paper and it is very interesting (no deep nets, nothing but good old dynamics and filtering). Its GitHub code requires ROS Kinetic, and I am a bit confused about which Docker image I should pull. Should I get kinetic-robot, kinetic-perception, kinetic-ros-core, or kinetic-ros-base? BTW, what is the best recommended resource for picking up ROS in general?
I don't have experience with ROS. It would be good to decouple https://github.com/KumarRobotics/msckf_vio from ROS.
Seems like I am able to build the code within a ROS Kinetic image; it was relatively painful, maybe due to my lack of experience with rosdep. I am planning to see how to "run" it in some meaningful way, then pack it up with snap and see if it is as good as it advertises.
Update: a Docker image is available at https://hub.docker.com/r/ppirrip/msckf_vio/, the package is installed in /msckf_vio, and the roslaunch script runs without errors.
Today I'll try to collect a rosbag with images from a PS4Eye stereo camera and 2 x PS3Eye cameras (in a stereo setup).
Will also try to add GPS and accel/gyro from an Android phone.
Then we should try to test this algorithm, live too.
I downloaded one of the bags from the GitHub repo last night (~10 GB) and will try to play it back today. We will need a config file set up for the PS3Eye too, I think.
I got a ROS launch file for the 2 x PS3Eye cameras.
I calibrated them individually, but I could not do it in stereo mode.
This should help; I'll try it tonight: https://github.com/ossdc/stereovision
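In case the stereo step keeps failing, plain OpenCV can do it too. A minimal sketch, assuming synchronized chessboard image pairs saved under left/ and right/ (the paths, board size, and square size are placeholders):

```python
import glob
import cv2
import numpy as np

# Assumed chessboard: 9x6 inner corners, 25 mm squares (adjust to your target).
pattern = (9, 6)
square = 0.025
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts = [], [], []
size = None
for lf, rf in zip(sorted(glob.glob('left/*.png')), sorted(glob.glob('right/*.png'))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:  # keep only pairs where the board is seen in both views
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)
        size = gl.shape[::-1]  # (width, height)

# Calibrate each camera individually first, like we already did.
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)

# With intrinsics fixed, stereoCalibrate solves only for the
# rotation/translation between the two cameras.
rms, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, D1, K2, D2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print('RMS reprojection error:', rms)
print('R =', R)
print('T =', T)
```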
For the IMU/GPS from the Android phone, I'll try to make this work:
https://github.com/OSSDC/ros_android_sensors
I tried to record a rosbag this morning, but the laptop hung after a few minutes; I need to reduce the number of topics and compress the images faster.
I'll post the scripts and datasets I have so far tonight.
I like the PS3Eye more than the PS4Eye: with the PS3Eye, auto exposure works pretty well and it also seems to focus farther away.
I played back the flight data bag this morning, but I'm not sure how it interacts with the nodes started by the program. I have installed rviz now and will try it later. BTW, the two nodes subscribe to and publish the following topics:
image_processor node

Subscribed topics:
- imu (sensor_msgs/Imu): IMU messages, used for compensating rotation in feature tracking and for 2-point RANSAC.
- cam[x]_image (sensor_msgs/Image): Synchronized stereo images.

Published topics:
- features (msckf_vio/CameraMeasurement): Records the feature measurements on the current stereo image pair.
- tracking_info (msckf_vio/TrackingInfo): Records the feature tracking status for debugging purposes.
- debug_stereo_img (sensor_msgs/Image): Draws current features on the stereo images for debugging purposes. Note that this debugging image is only generated upon subscription.

vio node

Subscribed topics:
- imu (sensor_msgs/Imu): IMU measurements.
- features (msckf_vio/CameraMeasurement): Stereo feature measurements from the image_processor node.

Published topics:
- odom (nav_msgs/Odometry): Odometry of the IMU frame, including a proper covariance.
- feature_point_cloud (sensor_msgs/PointCloud2): Shows the current features in the map, which are used for estimation.
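For a quick sanity check while a bag plays, a minimal rospy listener on the vio output could look like this (an untested sketch; the launch files may remap the topic into a namespace, so check rostopic list first):

```python
#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry

def on_odom(msg):
    # Print the estimated IMU-frame position as it updates.
    p = msg.pose.pose.position
    rospy.loginfo('odom: x=%.2f y=%.2f z=%.2f', p.x, p.y, p.z)

if __name__ == '__main__':
    rospy.init_node('vio_odom_listener')
    # Topic name per the list above; adjust if the launch file remaps it.
    rospy.Subscriber('odom', Odometry, on_odom)
    rospy.spin()
```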
I finally got a dataset with phone sensor data and (compressed) images from 4 cameras; I will publish it in the evening.
We need to figure out a way to collect uncompressed images; the PS4Eye ROS method is not reliable. I'll try OpenCV to save lossless PNGs.
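The OpenCV idea, as a minimal sketch, assuming the PS3Eye shows up as a regular video device (the device index and output directory are placeholders, and frames/ must already exist):

```python
import cv2

cap = cv2.VideoCapture(0)  # placeholder device index for one PS3Eye
i = 0
while i < 10000:  # arbitrary cap for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    # PNG is lossless; a low compression level (1) favors write speed over
    # file size, which matters when the laptop can't keep up with recording.
    cv2.imwrite('frames/%06d.png' % i, frame,
                [cv2.IMWRITE_PNG_COMPRESSION, 1])
    i += 1
cap.release()
```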
How do you hook up the PS cameras to your computer (a laptop, I assume), anyway? Just curious. I will play with your bag file later then. Any good ROS reference material you are aware of, especially on publish/subscribe topics?
I use 2 USB2 ports to connect the 2 x PS3Eye and one powered USB3 hub for the PS4Eye (the USB3 port on the computer doesn't provide enough power). Everything is connected to my laptop, which was plugged in, as I have a 110V outlet in my car. The USB3 hub was powered from that outlet too.
I will just provide a zip with images and we will generate the ROS messages on the fly if needed.
Large rosbags are not easy to use, because rosbag play scans them fully on every playback, which is very slow.
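One workaround is to read the bag programmatically with the rosbag Python API and pull out only the topics you need. A minimal sketch (the bag path and topic names are placeholders; check them with rosbag info):

```python
import rosbag

bag = rosbag.Bag('flight_data.bag')  # placeholder bag path
# Iterate only over the topics of interest instead of replaying everything.
for topic, msg, t in bag.read_messages(topics=['/imu', '/cam0_image']):
    print(t.to_sec(), topic, type(msg).__name__)
bag.close()
```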
See here for a simple example with ROS pub/sub images:
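Roughly, such an example with rospy and cv_bridge looks like the following minimal sketch, assuming an OpenCV capture source (the topic name and device index are placeholders):

```python
#!/usr/bin/env python
# Run as `python image_pubsub.py pub` in one shell and `... sub` in another.
import sys
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

TOPIC = 'camera/image_raw'  # placeholder topic name
bridge = CvBridge()

def run_pub():
    rospy.init_node('image_pub')
    pub = rospy.Publisher(TOPIC, Image, queue_size=1)
    cap = cv2.VideoCapture(0)  # placeholder device index
    rate = rospy.Rate(30)
    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if ok:
            # Convert the OpenCV BGR frame into a ROS Image message.
            pub.publish(bridge.cv2_to_imgmsg(frame, encoding='bgr8'))
        rate.sleep()

def on_image(msg):
    # Convert back to an OpenCV image on the subscriber side.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    rospy.loginfo('got %dx%d frame', frame.shape[1], frame.shape[0])

def run_sub():
    rospy.init_node('image_sub')
    rospy.Subscriber(TOPIC, Image, on_image)
    rospy.spin()

if __name__ == '__main__':
    run_pub() if sys.argv[1:2] == ['pub'] else run_sub()
```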
This would be good to try also:
NanoMap: Fast, Uncertainty-Aware Proximity Queries with Lazy Search of Local 3D Data - https://www.youtube.com/watch?v=zWAs_Djd_hA https://github.com/peteflorence/nanomap_ros
This would be good to try also:
OKVIS: Open Keyframe-based Visual-Inertial SLAM. https://github.com/ethz-asl/okvis
Project code:
Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight https://github.com/KumarRobotics/msckf_vio
Amazing demo video here: https://youtu.be/jxfJFgzmNSw