RobotLocomotion / director

A robotics interface and visualization framework, with extensive applications for working with http://drake.mit.edu
BSD 3-Clause "New" or "Revised" License

General Purpose Lidar Rendering #340

Open mauricefallon opened 8 years ago

mauricefallon commented 8 years ago

Feature request for general purpose lidar rendering in Director - in particular a set of N planar scans.

Currently, Director supports rendering 360-degree sweeps of Multisense SL data (about 3-6 seconds of Hokuyo scans) for Val and Atlas.

I recently added 2D lidar rendering: https://www.dropbox.com/s/vghhtrwi7iojuzo/2016-10-lidar_height_control.mp4?dl=0

And I extended this by hard-coding multiple instances of the class for two sensors: https://github.com/robotperception/director/blob/rpg-director/src/python/director/perception.py#L814 In this case it handles a horizontal and a vertical LIDAR (SICK_SCAN and HORIZONTAL_SCAN).
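For what it's worth, the hard-coded-two-instances approach could be generalized by parameterizing the scan source on its LCM channel name, so N planar lidars can be registered from configuration. A minimal Python sketch; the class and function names here (`PlanarLidarSource`, `makeLidarSources`) are hypothetical, not director's actual API:

```python
class PlanarLidarSource(object):
    """Renders one planar lidar scan, identified by its LCM channel.

    Hypothetical sketch: real rendering code would convert ranges to
    points and look up the sensor frame via bot-frames.
    """

    def __init__(self, channel, sensor_frame):
        self.channel = channel            # e.g. 'SICK_SCAN'
        self.sensor_frame = sensor_frame  # frame name for transform lookup
        self.points = []

    def onScanMessage(self, msg):
        # Placeholder for converting ranges into points in the sensor frame.
        self.points = msg


def makeLidarSources(channels):
    """Build one source per configured channel instead of N copies of code."""
    return {ch: PlanarLidarSource(ch, frame) for ch, frame in channels.items()}


# Register the two sensors mentioned above from a single config dict.
sources = makeLidarSources({'SICK_SCAN': 'SICK',
                            'HORIZONTAL_SCAN': 'HORIZONTAL_LIDAR'})
```

Adding a third scanner then becomes a one-line config change rather than another copy of the class.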

What is currently implemented has a few issues:

Any thoughts, or are you interested in this?

patmarion commented 8 years ago

Can you upload some log snippets?

patmarion commented 8 years ago

How does the bot-viewer handle the lagging bot frame? Does it use the closest available bot frame in time, and redraw continually? So the scan line is initially in the wrong position, but it is moved to the right position when new bot frame messages are received (milliseconds later, I guess)? Or does it wait until the bot frame has been received before drawing the scan line at all?

mauricefallon commented 8 years ago

> Does it use the closest available bot frame in time, and redraw continually? So the scan line is initially in the wrong position, but is quickly moved to the right position?

Yes, that is what is going on. In this video you can actually see it: https://www.dropbox.com/sh/lt7956mqv4nmzmb/AABaLNRXu8GiLJeY-LJXDeWfa?dl=0&preview=laser_odometry_director_bad_pronto_viewer_good.ogv

If you look closely, the bot-viewer occasionally draws a stray scan, but it's later redrawn in the right place.
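The bot-viewer behavior described here (draw with the nearest available transform, redraw once newer poses arrive) could be sketched as a transform cache queried on every render pass. This is an illustrative sketch, not bot-viewer's actual implementation:

```python
import bisect


class FrameCache(object):
    """Keeps (utime, transform) pairs sorted by time.

    A scan is drawn immediately with the closest-in-time transform
    available, and simply redrawn on the next render pass once newer
    pose messages have been added, matching the behavior seen in the
    video above (stray scan, corrected moments later).
    """

    def __init__(self):
        self.utimes = []
        self.transforms = []

    def add(self, utime, transform):
        i = bisect.bisect(self.utimes, utime)
        self.utimes.insert(i, utime)
        self.transforms.insert(i, transform)

    def closest(self, utime):
        """Return the transform whose utime is nearest to the query."""
        if not self.utimes:
            return None
        i = bisect.bisect_left(self.utimes, utime)
        if i == 0:
            return self.transforms[0]
        if i == len(self.utimes):
            return self.transforms[-1]
        before, after = self.utimes[i - 1], self.utimes[i]
        if utime - before <= after - utime:
            return self.transforms[i - 1]
        return self.transforms[i]
```

The render loop would call `closest(scan.utime)` every frame, so a late pose update automatically corrects the scan's position on the next redraw.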

I'll send a log snippet of this video, probably in a day or so.

mauricefallon commented 8 years ago

For Director, I think the code just captures the currently available world-to-lidar frame at a certain timestamp. This happens before the world-to-body transform is estimated by the state estimator and transmitted to Director.

In this example (running lidar state estimation) Director will always call get_frame_with_timestamp too early.

patmarion commented 8 years ago

One idea would be to have the receiving thread block until the POSE_BODY update utime is newer than the planar lidar msg utime. This might be reasonable, but I'll have to try it on the log. This would prevent the viewer from ever drawing the scan line in the wrong place, even for one frame. It would slave the scan line visualization to the state est update.
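The blocking idea could be sketched with a condition variable: the receiving thread holds each scan until a POSE_BODY update at least as new as the scan's utime has arrived. A hypothetical Python sketch, not director code:

```python
import threading


class PoseSlavedScanGate(object):
    """Sketch of the blocking idea above: hold each scan until a POSE_BODY
    update at least as new as the scan's utime has arrived, so a scan line
    is never drawn with a stale body pose, not even for one frame.

    Class and method names are hypothetical.
    """

    def __init__(self):
        self.cond = threading.Condition()
        self.latest_pose_utime = 0

    def onPoseBody(self, utime):
        # Called from the POSE_BODY subscriber thread.
        with self.cond:
            self.latest_pose_utime = utime
            self.cond.notify_all()

    def waitForPose(self, scan_utime, timeout=1.0):
        """Block the scan-receiving thread until pose utime >= scan utime.

        Returns True if the pose caught up, False on timeout (so a dead
        state estimator cannot wedge the viewer forever).
        """
        with self.cond:
            return self.cond.wait_for(
                lambda: self.latest_pose_utime >= scan_utime,
                timeout=timeout)
```

The timeout is a design choice worth keeping: without it, the viewer would hang whenever the state estimator stops publishing.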

Implementing the bot-viewer behavior, which recomputes transformed scan lines on each render, would be feasible, but it's a different architecture.

I remember Matt and I investigated this very issue during the DRC to make sure the multisense scan lines were using the correct SCAN_to_local, and at the time we decided it was correct. Perhaps the multisense driver is slaved to the state est somehow, or maybe it was just by chance?

patmarion commented 8 years ago

Another detail, by the way, related to the planar lidar message utime but specific to the Multisense: vtkMultisenseSource does this in order to reconstruct the scan lines using an interpolated transform across the sampling time:

```cpp
get_trans_with_utime("SCAN", "local", msg->utime, scanToLocalStart);
get_trans_with_utime("SCAN", "local", msg->utime + 1e6*3/(40*4), scanToLocalEnd);
```

So note that it is asking bot-frames for SCAN_to_local using msg->utime plus an offset (the 1e6*3/(40*4) microseconds seen above).
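To illustrate how the two bracketing transforms get used, here is a sketch of per-ray interpolation across the sampling window. For simplicity it interpolates translation only; an actual implementation would interpolate the full transform, including rotation. Function and parameter names are hypothetical:

```python
def interpolate_scan_transforms(t_start, t_end, num_rays):
    """Linearly interpolate a translation across the scan sampling window.

    Sketch only: t_start and t_end are 3-vectors standing in for
    scanToLocalStart and scanToLocalEnd above. Ray i (0..num_rays-1)
    gets the pose at fraction i/(num_rays-1) through the scan, so rays
    captured later in the sweep use a correspondingly later transform.
    """
    transforms = []
    n = max(num_rays - 1, 1)
    for i in range(num_rays):
        a = float(i) / n  # fraction of the sampling window elapsed
        transforms.append(tuple(s + a * (e - s)
                                for s, e in zip(t_start, t_end)))
    return transforms
```

This is what makes the reconstructed scan line stay straight while the robot moves during the 1/40-second capture, rather than being smeared by a single fixed transform.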

mauricefallon commented 8 years ago

That's fine when those two times are valid, but I don't think either transform is actually correct if the state estimator has some latency and the POSE_BODY at a given utime is received AFTER this function is called.

For Pronto and other estimators (e.g. IHMC and BDI), the state estimation has less latency (4-5 msec) than the capture time of the LIDAR (1/40 second + transmission time). But for others it's much higher (e.g. when the LIDAR is actually used in the state estimator).

mauricefallon commented 8 years ago

As requested, here is a snippet of data: https://www.dropbox.com/sh/8cpv3hs975ql9j9/AABNkAIomfGulcMA4NnXBOTqa?dl=0

You need to check out the husky config from here: https://github.com/oh-dev/oh-distro-private/tree/mf-husky-sim

Due to the lack of meshes and the hard-coding of LIDAR channel names in master:

then run:

patmarion commented 8 years ago

Cool thanks, I'll give it a go!

patmarion commented 8 years ago

I'm interested in resolving that; it will improve the codebase to support more general objects.

We should discuss implementation details a little further; I can see a few ways to go. Is there someone on your side interested in tackling it if I help advise (after we agree on implementation details)?