laurimi opened this issue 7 years ago
Hi there, glad it worked for you with an Xtion.
Intel RealSense SR300s are also of interest to us. I think possibly @5yler has taken SR300 data through the pipeline?
There are a couple different parts of the pipeline that currently assume the raw data is in an LCM log. The interfaces to the raw data could be expanded to support other raw data formats, like a ROS bag. What type of raw data format would you like to use?
@5yler A ROS-LCM converter sounds perfect for my use case! I would be glad to give it a try.
@laurimi @peteflorence I used SR300 data by using the RealSense ROS driver and writing a custom ROS-LCM converter. Let me know if you want the source code for it.
@peteflorence I am thinking LabelFusion might be of interest outside the robotics community too. I believe having support for RGB+D input as a sequence of (possibly already registered?) pairs of images saved in separate files would be an interface many people could adapt to (also possible from ROS by dumping images from a bag to files). A sequence of binary pointcloud files would be another option that might potentially support stereo cameras as well. To me personally a ROS-LCM converter is also reasonable.
I'll leave it to you if you want to keep this issue open for future discussion, but my original question was answered.
For the record on this issue, I now have a direct ROSbag --> ElasticFusion driver, although it's not merged into this git codebase or in the docker hub image. If people want it I can merge it into LabelFusion.
@peteflorence is this the one at pf-rosbagreader? I tried that out earlier and was able to run the LabelFusion pipeline with Kinect2 data from a bagfile (up to camera parameter issues as mentioned in #17 ). I think it would be good to make people aware of it at least! In some sense, this is essentially the functionality that would also be provided by a ROS-LCM converter.
@peteflorence @laurimi After an unfortunately long and drawn-out approval process my ROS-LCM converter, rgbd_ros_to_lcm is now public!
Woohoo!! Cool
On Fri, Mar 16, 2018 at 8:44 AM 5yler notifications@github.com wrote:
@peteflorence https://github.com/peteflorence @laurimi https://github.com/laurimi After an unfortunately long and drawn-out approval process my ROS-LCM converter, rgbd_ros_to_lcm https://github.com/MobileManipulation/rgbd_ros_to_lcm is now public!
— You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub https://github.com/RobotLocomotion/LabelFusion/issues/19#issuecomment-373702143, or mute the thread https://github.com/notifications/unsubscribe-auth/AFYQqA6VS4h4GgVsPmQsEhv5TxTS9bn2ks5te7OmgaJpZM4Qkbuq .
Yes I agree that having a ROS interface would be useul. The tricky part is that everyone will have different conventions on what the topics of the color and depth streams are, and getting synchronized pairs of images is not trivial. Maybe it would be simpler to also support reading from a folder where the color and depth image pairs have already been extracted., i.e. something of the form
log_folder\
000001_rgb.png
000001_depth.png
This removes the dependency on ROS/LCM but at the expense of much larger log folders.
Hi, thank you for releasing this interesting software! I tried it out with an ASUS Xtion camera, and it works perfectly.
I am interested to try to use input RGBD data from an Intel RealSense SR300 camera in the pipeline. As far as I understand, your pipeline assumes the input data to be in some LCM format. Do you have any details for the specs the input data is required to fulfill, or a suggestion how to use alternative (raw RGB+D) forms of input?