tum-vision / lsd_slam

LSD-SLAM
GNU General Public License v3.0

Running on onboard computers [Odroid U3 Hydro 12.04 success] #1

Closed mhkabir closed 10 years ago

mhkabir commented 10 years ago

Hey Guys,

I've been eagerly following your progress for a long time. I had two MAVs on standby waiting for LSD-SLAM integration as soon as it was released :)

My system consists of a large copter with two cameras, bottom- and front-facing. I'm running SVO on the downward camera, with a tightly coupled optical-flow approach for velocity recovery in case of map loss. The forward camera was initially supposed to be a stereo head, but then I decided to try out LSD-SLAM. In the near future, I will probably also run optical flow on the front camera for robustness.

LSD-SLAM will be used for vehicle state estimation and navigation. The pose estimate from LSD-SLAM is fused with the SVO pose and IMU data in an EKF for metric scaling, and the result is used for the MAV's control. Other sensors (GPS, rangefinder, etc.) are also integrated in the EKF.

Presently, the onboard computer is an Odroid U3 running Ubuntu 12.04 and ROS Hydro. I will test LSD-SLAM independently on this testbed before moving on to the final copter and its much more powerful onboard i7 computer.

I had to make some small hacks to get LSD-SLAM running, mostly due to missing dependencies. I wanted to write a wiki entry to help others, but was unsure whether I should do so without your permission:

  1. Remove lsd_slam_viewer due to the QGLViewer dependency. I couldn't get the package, and couldn't compile it from source either.
  2. Remove viewer dependency from core
  3. Move external keyframeGraph messages from the viewer into core.
  4. Use -mfpu=neon and -DENABLE_NEON compilation flags to enable NEON on the Odroid.
  5. Disable debugWindow as there is a problem with XInitThreads. Any idea on this???

Hope you will be glad to see my work :)

P.S. - What is the coordinate system with respect to the camera frame? X outward from the lens, Y left, Z down?

Kabir

efernandez commented 10 years ago

@mhkabir :+1: for your list of 5 items. I think you have done more changes than the ones I applied in #4; it would be great if you sent a separate PR.

mhkabir commented 10 years ago

@efernandez I have GTK-related problems when starting live_slam on Ubuntu 12.04 Desktop with ROS Hydro; they appear intermittently and seemingly at random. Also, I won't PR my modifications yet as they are very specific to my hardware and software setup, but they may still be useful as a reference for anyone else trying out LSD-SLAM.

Kabir

mhkabir commented 10 years ago

Calling XInitThreads() in live_slam and linking against libX11 fixes most GTK errors on all platforms. Also, I still prefer my own image debug topic via ROS, but I need to make it less hacky.
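For reference, a minimal sketch of that fix (this is not the actual live_slam main, just the shape of it; the node name is illustrative):

    // Sketch: call XInitThreads() before anything else touches Xlib
    // (OpenCV highgui windows, QGLViewer, ...), i.e. first thing in main().
    #include <X11/Xlib.h>   // XInitThreads
    #include <ros/ros.h>

    int main(int argc, char** argv)
    {
        XInitThreads();

        ros::init(argc, argv, "lsd_slam_live");   // illustrative node name

        // ... rest of the live_slam startup ...

        ros::spin();
        return 0;
    }

The matching link step is adding X11 to the executable's libraries, e.g. target_link_libraries(live_slam X11) in lsd_slam_core's CMakeLists.txt; that is what the "DSO missing from command line" / "undefined reference to symbol 'XInitThreads'" linker error quoted further down in this thread is about.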

mhkabir commented 10 years ago

@JakobEngel Do you plan to support standard pose and pointcloud output topics in the future? I'm not yet ready to start rolling my own support...

JakobEngel commented 10 years ago

@mhkabir yeah, we haven't had the time to try it out on quadrocopters ourselves, but I'm quite excited to see the results! To answer some of your questions:

  • GTK stuff: yes, we've had problems with that too, in particular when trying to compile qglviewer and OpenCV image display into the same binary. I don't really know how to fix it, to be honest.

  • lsd_slam_viewer / messages: That's actually why we split the viewer into a separate package / executable - if you have a WiFi connection to the quadrotor, you can just run the viewer on a ground station to get the 3D visualization. The cleanest solution is probably to put the messages into a separate - otherwise empty - package, so you can compile the viewer without the core and vice versa.

  • standard pose / pointclouds: No, we do not plan to support them. In my experience the default ROS visualization tools are too slow to handle millions of points gracefully, and additionally (as far as I know) they don't support Sim(3) pointcloud poses. We're pretty happy to stay as independent of the ROS ecosystem as possible ;)

  • coordinate frame: It's the standard image coordinate frame, i.e., z is outwards, x is right and y is downwards.

mhkabir commented 10 years ago

Thanks for the reply :)

I need to get everything into standard message types to integrate between stacks, so what do you propose would be best?

The navigation system (OctoMap) needs PointCloud2 messages, and I need a pose message for the EKF. I'm happy to adapt both as required, but your recommendations would be greatly appreciated. The pose message should be easy, I think...? The pointcloud could be a problem.

Kabir

JakobEngel commented 10 years ago

Converting should be easy, but it wouldn't behave as you'd expect. First off, the current camera pose will jump with large loop closures, which will mess up your EKF if not handled properly. Second, each keyframe's position, orientation and scale changes with each new loop closure, so the points will move around - which I guess your navigation system is not built for either. You could try to disable global mapping (eliminating those issues), but then you'd lose the globally consistent map.
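One way a downstream consumer can guard against the first issue is to gate implausible jumps before feeding the pose into the filter. A hedged sketch (not LSD-SLAM code; the threshold is illustrative, and since LSD-SLAM's scale is arbitrary until fused, the bound has to be expressed in the same scaled units):

    #include <Eigen/Core>

    // Reject a new camera position as an EKF measurement if it moved further
    // since the last frame than the vehicle plausibly could; a large loop-closure
    // correction then triggers a filter re-initialization instead of a bad update.
    bool isPlausiblePoseUpdate(const Eigen::Vector3d& previousPosition,
                               const Eigen::Vector3d& currentPosition,
                               double dtSeconds,
                               double maxSpeed)   // in (scaled) SLAM units per second
    {
        const double distanceMoved = (currentPosition - previousPosition).norm();
        return distanceMoved <= maxSpeed * dtSeconds;
    }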

mhkabir commented 10 years ago

Thanks Jakob. Handling the navigation will be a bit difficult; after some experimentation, I find that to navigate in real time on a MAV, some modifications will be needed to my mapping stack. So that's a future goal.

I think position holding will be fairly easy with the poses, and can work as a starting point. The EKF should be able to handle jumps, not that I'm building large maps initially :) LSD-SLAM tracks at 15 Hz or so on the quad-core ARM, which should be okay for positioning.

So, how would we get the pose into standard messages? With your recommendations, I'll send in a PR if you'd like when I'm done.

I went on a short mapping run with the MAV's onboard camera yesterday, and the results were fairly good given the small-FOV lens and non-global-shutter camera. There was motion blur, and thus some outliers in the pointcloud, since it was indoors in poor lighting. The Firefly MV combined with a fisheye lens should work just great, although the onboard system cannot track at higher rates as of now.

Kabir

JakobEngel commented 10 years ago

For getting ROS standard message types, it is probably best to create your own outputWrapper, analogously to src/IOWrapper/ROS/ROSOutput3DWrapper. That would allow very easy switching between different "output modes", and hence allow integrating it into the main repository without breaking anything else.
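As a rough illustration of that approach (the header path, base class and method name below are assumptions modeled on the existing ROSOutput3DWrapper; check src/IOWrapper/Output3DWrapper.h for the real interface before copying this):

    #include <ros/ros.h>
    #include <geometry_msgs/PoseStamped.h>

    #include "IOWrapper/Output3DWrapper.h"   // assumed location of the base class

    namespace lsd_slam
    {

    class Frame;

    // Hypothetical wrapper that only publishes a PoseStamped topic. Because all
    // output goes through the wrapper interface, swapping this in or out does
    // not touch the tracking / mapping code itself.
    class PoseOutputWrapper : public Output3DWrapper
    {
    public:
        PoseOutputWrapper()
        {
            ros::NodeHandle nh;
            posePublisher_ = nh.advertise<geometry_msgs::PoseStamped>("lsd_slam/pose", 5);
        }

        // Method name assumed from ROSOutput3DWrapper: called for every tracked frame.
        virtual void publishTrackedFrame(Frame* frame)
        {
            // Build a PoseStamped from the frame's camToWorld here (see the
            // Sim3-to-PoseStamped sketch further down in this thread) and publish it.
        }

    private:
        ros::Publisher posePublisher_;
    };

    }  // namespace lsd_slam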

For running on ARM, my guess would be that you'll get better performance if you down-sample the image to get at least 30 fps. If your lens has distortion, the image is re-sampled anyway - you'd just have to change the calibration file.

You could also think about disabling global mapping (set doSLAM to false), which should greatly decrease RAM and CPU load.

Jakob

mhkabir commented 10 years ago

@JakobEngel I've been playing around, and I think my first goal would be to achieve pure tracking with this. That is, just visual odometry.

I looked into the IOWrapper code, but I am unsure how I would get camToWorld into standard XYZ coordinates and rotations. A brief description would be very helpful - if you could point out the bits which are important... :)

mhkabir commented 10 years ago

@JakobEngel I have the main framework sorted. I integrated it into the present IOWrapper, just extending it for the PoseStamped message. I've got the initial stuff in my fork; please check: https://github.com/mhkabir/lsd_slam/commit/278ce45a4335cfa6b8a4cbc6dd28ccb7a8f57906

I'm not very sure about the coordinate frames for the rotation and translation of camToWorld, so please add comments on the file and let me know.

mhkabir commented 10 years ago

@JakobEngel All coordinate transforms look alright. :)

Need to integrate with EKF now. I will probably add some parameters to rotate the camera frame into IMU frame as required.

Opened PR : https://github.com/tum-vision/lsd_slam/pull/13

mhkabir commented 10 years ago

@JakobEngel, if I'm not mistaken, CamToWorld is the camera's pose in the world frame, right?

JakobEngel commented 10 years ago

CamToWorld is the transformation such that, for a point X, X_InWorldCoordinates = CamToWorld * X_InCamCoordinates. I use that naming convention throughout the code.

mhkabir commented 10 years ago

@JakobEngel I'm still a bit confused. Can you check my PR and tell me the corrections needed to get the X, Y, Z values of the Pose message in world coordinates? Some line comments would help :) I think the present implementation is incorrect, as it doesn't behave as it should.

Dvad commented 10 years ago

Hi,

With this convention, in order to access the pose coordinates (X, Y, Z) in the world frame, you need to take the quantity CamToWorld.translation(). You can derive that by taking the coordinates of the origin in the camera frame and transforming them to the world frame (with Sophus/Eigen notation):

    position_cam_center_in_world = CamToWorld * zero 
    position_cam_center_in_world =  CamToWorld.rxso3() * zero + CamToWorld.translation()
    position_cam_center_in_world  =  CamToWorld.translation()

I looked at your implementation and it seems OK to me. What is wrong? Are you sure there isn't a problem elsewhere?
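Putting that convention into code, a minimal sketch of the conversion (assuming camToWorld is a Sophus Sim3, as in the snippet above; the header path and frame_id are illustrative):

    #include <ros/ros.h>
    #include <geometry_msgs/PoseStamped.h>

    #include "sophus/sim3.hpp"   // Sophus as bundled with lsd_slam; path may differ

    // Build a PoseStamped from camToWorld. The position is the camera centre in
    // world coordinates (CamToWorld * 0, as derived above); the orientation is
    // the rotational part of rxso3(), with the scale normalized away.
    geometry_msgs::PoseStamped camToWorldToPose(const Sophus::Sim3d& camToWorld,
                                                const ros::Time& stamp)
    {
        geometry_msgs::PoseStamped msg;
        msg.header.stamp = stamp;
        msg.header.frame_id = "world";   // illustrative frame name

        const Eigen::Vector3d t = camToWorld.translation();
        msg.pose.position.x = t.x();
        msg.pose.position.y = t.y();
        msg.pose.position.z = t.z();

        // rxso3() stores rotation and scale in one non-unit quaternion;
        // normalizing leaves the pure rotation.
        const Eigen::Quaterniond q = camToWorld.rxso3().quaternion().normalized();
        msg.pose.orientation.x = q.x();
        msg.pose.orientation.y = q.y();
        msg.pose.orientation.z = q.z();
        msg.pose.orientation.w = q.w();
        return msg;
    }

Note the pose follows the image-coordinate convention quoted above (z outwards, x right, y downwards), so it usually still needs to be rotated into the IMU/body frame before fusing.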

mhkabir commented 10 years ago

@JakobEngel Upgraded the whole system to ROS Indigo on Ubuntu 14.04 today. Working fine.

I will replace the OpenCV image display system with a standard image message which can be visualised on ground computers. Any suggestions on how I should change the key input functionality?
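For reference, a minimal sketch of publishing a cv::Mat debug image as a standard sensor_msgs/Image via cv_bridge and image_transport (topic name and encoding are illustrative):

    #include <ros/ros.h>
    #include <image_transport/image_transport.h>
    #include <cv_bridge/cv_bridge.h>
    #include <std_msgs/Header.h>
    #include <opencv2/core/core.hpp>

    // Publishes a debug image as a standard ROS image message so it can be viewed
    // on a ground station with rqt_image_view / image_view instead of a local
    // OpenCV window.
    class DebugImagePublisher
    {
    public:
        explicit DebugImagePublisher(ros::NodeHandle& nh)
            : it_(nh), pub_(it_.advertise("lsd_slam/debug_image", 1)) {}

        void publish(const cv::Mat& image)
        {
            std_msgs::Header header;
            header.stamp = ros::Time::now();
            // "bgr8" is illustrative; it has to match what the cv::Mat actually holds.
            pub_.publish(cv_bridge::CvImage(header, "bgr8", image).toImageMsg());
        }

    private:
        image_transport::ImageTransport it_;
        image_transport::Publisher pub_;
    };

On the ground station, rosrun image_view image_view image:=/lsd_slam/debug_image (or rqt_image_view) can then display the stream over WiFi.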

mhkabir commented 10 years ago

@JakobEngel What are the image encodings of the cv::Mat images passed via util::displayImage?

RDmitrich commented 10 years ago

@mhkabir Could you explain, please, how I can move external keyframeGraph messages from the viewer into core?

I'm trying to compile lsd_slam on an XU3 with Ubuntu 14.04, but right now I only get an internal compiler error - it seems that there is not enough RAM. I had the same problem with svo_ros, but after restarting the build several times after failures, it compiled without any errors. I think enabling swap in the kernel could help solve it, but that is bad for eMMC life, so I'm still searching.

mhkabir commented 10 years ago

@RDmitrich, you need to do 3 things:

  1. Copy lsd_slam_viewer's "msg" folder to lsd_slam_core
  2. In lsd_slam_core/src/IOWrapper/ROSOutput3DWrapper.cpp, replace all instances of "lsd_slam_viewer" with "lsd_slam_core"
  3. Add this line to CMakeLists.txt, after gencfg(): rosbuild_genmsg()

Regarding your compile problem, you don't need swap. Simply limit the number of parallel compilation jobs. With catkin, this can be done using:

catkin_make -j2

For rosmake, as used by lsd_slam, set this environment variable:

export ROS_PARALLEL_JOBS=-j2

The -jN flag limits the number of jobs to N. Usually 2 is okay.

mhkabir commented 10 years ago

And for others referring to this for help, you can set the NEON flags properly in CMakeLists.txt like this:

# NEON flags
add_definitions("-DUSE_ROS")
add_definitions("-DENABLE_NEON")

# Also add some useful compiler flags
set(CMAKE_CXX_FLAGS
   "${CMAKE_CXX_FLAGS} -march=armv7-a -mfpu=neon -std=c++0x"
)

RDmitrich commented 10 years ago

@mhkabir I have successfully built it with "rosmake --pjobs=2", thank you again for the help.

mhkabir commented 10 years ago

@RDmitrich Good to hear that :) Although, in my experience, --pjobs often doesn't work for unknown reasons, so it's better to use the environment variable.

RDmitrich commented 10 years ago

@mhkabir Still trying to run lsd-slam on the XU3 with live_slam, but after several seconds the program just freezes. I receive only this output:

    ~/ros_packages/lsd_slam$ rosrun lsd_slam_core live_slam image:=/image_mono _calib:=pinhole_example_calib.cfg
    Reading Calibration from file pinhole_example_calib.cfg ... not found!
    Trying /home/odroid/ros_packages/lsd_slam/lsd_slam_core/calib/pinhole_example_calib.cfg ... found!
    found ATAN camera model, building rectifier.
    Input resolution: 640 480
    In: 0.527334 0.827306 0.473568 0.499436 0.000000
    NO RECTIFICATION
    Output resolution: 640 480
    Prepped Warp matrices
    Started constraint search thread!
    Started mapping thread!
    Started optimization thread

No error appears, and if I run "rostopic list" in another terminal it shows me the /lsd-slam/.. topics, but I couldn't connect them to rviz (on a desktop computer).

Where could the problem be?

QichaoXu commented 10 years ago

My system configuration is ROS Fuerte + Ubuntu 12.04. When compiling lsd_slam (by typing ‘rosmake lsd_slam’), there is always a failure. I then tried to compile the packages one by one: first lsd_slam_core, where the same failure occurs again; then lsd_slam_viewer, which compiles without failure.

The failure is: "/tmp/ccN1VfD5.s:1612: Error: no such instruction: 'vfmadd312sd'".

Where could the problem be?

mhkabir commented 10 years ago

@RDmitrich It's probably running fine. Try echoing the /lsd_slam/pose topic with rostopic (rostopic echo /lsd_slam/pose).

debugWindow was disabled, so you don't see the GUI - but you don't need to disable it anymore. With the latest fixes on master, you can run with debugWindow :) The user-input system parses key input via debugWindow.

Enable the window to get the GUI :) It's probably running just fine in the background.

mhkabir commented 10 years ago

@QichaoXu Compile with NEON enabled, instead of SSE. Those are unavailable SSE instruction errors. See my comment above on how to do it properly in CMakeLists.txt

JakobEngel commented 10 years ago

@QichaoXu I have a similar error on one system setup (Ubuntu 12.04 + Haswell i7 CPU), which is fixed by removing -march=native from the compiler flags. The reason is that the gcc shipped with Ubuntu 12.04 is simply too old.

mhkabir commented 10 years ago

@JakobEngel Is there any way to get the framerates a bit better on the Odroid? Presently, it is rather slow and totally unsuitable for actual flying. The frames get blurred, etc., which I do not see with the same camera on my desktop under similar conditions.

JakobEngel commented 10 years ago

Hmm, the frames getting blurred sounds like a camera driver issue. If lsd_slam just runs slowly, try reducing the resolution (to e.g. 320x240).

QichaoXu commented 10 years ago

When compiling lsd_slam (ROS Indigo + Ubuntu 14.04), a linker error about an undefined symbol appears:

    Linking CXX executable ../bin/live_slam
    /usr/bin/ld: CMakeFiles/live_slam.dir/src/main_live_odometry.cpp.o: undefined reference to symbol 'XInitThreads'
    //usr/lib/x86_64-linux-gnu/libX11.so.6: error adding symbols: DSO missing from command line
    collect2: error: ld returned 1 exit status
    make[3]: *** [../bin/live_slam] Error 1
    make[3]: Leaving directory `/home/user/ros_workspace/lsd_slam/lsd_slam_core/build'
    make[2]: *** [CMakeFiles/live_slam.dir/all] Error 2
    make[2]: Leaving directory `/home/user/ros_workspace/lsd_slam/lsd_slam_core/build'
    make[1]: *** [all] Error 2
    make[1]: Leaving directory `/home/user/ros_workspace/lsd_slam/lsd_slam_core/build'

How can I correct it?

JakobEngel commented 10 years ago

Please see issue https://github.com/tum-vision/lsd_slam/issues/29

Chao1155 commented 7 years ago

Thank you @mhkabir for explaining how to separate the core from the viewer. I have managed to compile the core alone on a Raspberry Pi 3 with ROS Kinetic and Ubuntu MATE 16. But I cannot run lsd_slam_core smoothly. It seems it needs to connect to a local viewer to display the result (which is indeed the intention of the original version). So what command did you use to run lsd_slam_core without calling the viewer? Something other than this? rosrun lsd_slam_core live_slam image:=/image_raw camera_info:=/camera_info

Thanks.