chicagoedt / revo_robot

Code for EDT's IGVC entry, Revo.
http://www.igvc.org/

unit tests for line_detection #26

Open bsubei opened 10 years ago

bsubei commented 10 years ago

Start coming up with things to test in the line_detection node.

Maybe testing valid camera input, training image input, and valid parameters from dynamic_reconfigure...

Can't think of ways to unit test the output though... needs more head-scratching.
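
One possible angle on testing output, sketched below: render a synthetic image with a line at a known position and assert the detector reports something close to it. `detect_lines()` here is a hypothetical stand-in (a plain Canny + probabilistic Hough pass), not the actual line_detection code; swap in the real entry point.

```python
# Sketch only: detect_lines() is a hypothetical stand-in for the real
# line_detection entry point, here a plain Canny + probabilistic Hough pass.
import unittest

import cv2
import numpy as np


def detect_lines(image):
    """Return detected segments as [x1, y1, x2, y2] lists."""
    edges = cv2.Canny(image, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=100, maxLineGap=10)
    # reshape(-1, 4) handles both OpenCV 2.x and 3.x output layouts.
    return [] if lines is None else lines.reshape(-1, 4).tolist()


class TestLineDetectionOutput(unittest.TestCase):
    def test_detects_known_line(self):
        # Black 480x640 image with one white horizontal line at y=240.
        img = np.zeros((480, 640), dtype=np.uint8)
        cv2.line(img, (50, 240), (590, 240), 255, 3)
        segments = detect_lines(img)
        self.assertTrue(segments, 'no lines detected at all')
        # Every detected segment should lie near the known line.
        for x1, y1, x2, y2 in segments:
            self.assertLess(abs(y1 - 240), 5)
            self.assertLess(abs(y2 - 240), 5)

    def test_blank_image_detects_nothing(self):
        img = np.zeros((480, 640), dtype=np.uint8)
        self.assertEqual(detect_lines(img), [])


if __name__ == '__main__':
    unittest.main()
```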

bsubei commented 10 years ago

Start looking here: http://wiki.ros.org/unittest
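
For the node-level side, a rostest-style sketch might look like the following. The topic names (`camera/image_raw`, `line_detection/output`) are assumptions, not the actual revo_robot topics; substitute whatever the node really subscribes to and publishes.

```python
#!/usr/bin/env python
# Node-level sketch: run under rostest with the line_detection node up.
# Topic names below are assumptions, not the actual revo_robot topics.
import unittest

import rospy
import rostest
from sensor_msgs.msg import Image


class TestLineDetectionInput(unittest.TestCase):
    def test_valid_image_produces_output(self):
        received = []
        rospy.Subscriber('line_detection/output', Image, received.append)
        pub = rospy.Publisher('camera/image_raw', Image, queue_size=1)
        # Build a minimal valid 640x480 mono8 image.
        img = Image()
        img.height, img.width = 480, 640
        img.encoding = 'mono8'
        img.step = img.width
        img.data = b'\x00' * (img.step * img.height)
        # Keep publishing until the node answers or we time out.
        deadline = rospy.Time.now() + rospy.Duration(5.0)
        rate = rospy.Rate(10)
        while not received and rospy.Time.now() < deadline:
            img.header.stamp = rospy.Time.now()
            pub.publish(img)
            rate.sleep()
        self.assertTrue(received, 'node published nothing for a valid image')


if __name__ == '__main__':
    rospy.init_node('test_line_detection_input')
    rostest.rosrun('line_detection', 'test_input', TestLineDetectionInput)
```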

bsubei commented 10 years ago

Build a unit test for generic_linedetection that tries every parameter value and sees if anything breaks (testing our validation function). Not really something we need, but a good exercise in learning testing.
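
A sketch of that exercise, assuming a validation function like the hypothetical `validate_params()` below (the parameter names and ranges are made up; swap in whatever generic_linedetection actually validates):

```python
# Exercise sketch: sweep boundary, in-range, and out-of-range values through
# the validation function and make sure nothing breaks. validate_params()
# and the parameter ranges are hypothetical placeholders.
import itertools
import unittest

# Hypothetical ranges, e.g. from the dynamic_reconfigure .cfg file.
PARAM_RANGES = {'hue_low': (0, 179), 'hue_high': (0, 179),
                'blur_size': (1, 31)}


def validate_params(params):
    """Stand-in validator: True iff every parameter is within its range."""
    return all(lo <= params[name] <= hi
               for name, (lo, hi) in PARAM_RANGES.items())


class TestParamValidation(unittest.TestCase):
    def grid(self, name):
        # Boundary values, one value inside, and one outside on each end.
        lo, hi = PARAM_RANGES[name]
        return [lo - 1, lo, (lo + hi) // 2, hi, hi + 1]

    def test_sweep_never_crashes(self):
        names = sorted(PARAM_RANGES)
        for combo in itertools.product(*(self.grid(n) for n in names)):
            # Must return a plain bool for every combination, never raise.
            result = validate_params(dict(zip(names, combo)))
            self.assertIn(result, (True, False))

    def test_rejects_out_of_range(self):
        self.assertFalse(validate_params(
            {'hue_low': -1, 'hue_high': 90, 'blur_size': 15}))

    def test_accepts_in_range(self):
        self.assertTrue(validate_params(
            {'hue_low': 0, 'hue_high': 179, 'blur_size': 15}))


if __name__ == '__main__':
    unittest.main()
```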

bsubei commented 9 years ago

OK, I just thought of a way to test the performance of different line detection nodes using Gazebo runs. I need @l0g1x to help set this up. Here's how it works:

  1. First, rosbag a run of Scipio inside Gazebo on the simulated IGVC course. It doesn't have to be autonomous; we just need to record the odometry topics (to track where the robot is). Let's remove all obstacles and barrels for now (the PCL ground filter takes care of those anyway).
  2. Then, replay that rosbag in one special run that has only the white line textures (remove the grass layer so the ground is entirely black except for the lines). We make the robot move around (by replaying the odometry bag) and rosbag record the camera data. We keep this as the reference camera data (containing only lines). We might have to run a skeletonizing filter on the resulting lines if it turns out we need single-pixel width (depends on our line detection output).
  3. Now, we can run each line detection node in Gazebo using the same odometry bags and rosbag record the line detection output. That output bag can then be compared to the reference bag (the camera sees the same things as in the reference run), and we can calculate how close our line detection results are to ground truth (something like the RMSD between the reference and line detection images; see the sketch below).

This way, we can make 5-6 different line detection nodes (each with different filters or tweaked values), run them all, and check which is better. We can then keep improving the line detection, and we KNOW we're improving because we have a metric to compare against.
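
A sketch of the step-3 metric, assuming both bags yield same-size 8-bit grayscale line masks at matching timestamps (bag reading and timestamp alignment are left out; only the comparison is shown):

```python
# Sketch of the step-3 comparison: RMSD between a reference lines-only frame
# and the corresponding line detection output frame. Assumes both are
# same-size 8-bit grayscale masks already pulled out of the two bags.
import numpy as np


def image_rmsd(reference, detected):
    """Root-mean-square difference between two same-size grayscale images.

    0.0 means a perfect match; 255.0 is the worst case for 8-bit masks.
    """
    diff = reference.astype(np.float64) - detected.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))


def score_run(reference_frames, detected_frames):
    """Mean RMSD over a whole run; lower is better."""
    scores = [image_rmsd(ref, det)
              for ref, det in zip(reference_frames, detected_frames)]
    return sum(scores) / len(scores)
```

Run `score_run()` once per candidate node against the same reference bag, and the node with the lowest score wins.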

bsubei commented 9 years ago

I think this should take top priority in line detection. Without it, it's pointless to try to figure out which line detection algorithm is better: we can't measure it currently, we can only eyeball it and guess. For example, is backprojection grass filtering + Hough better than just grayscale + threshold + RANSAC? We don't know, because we can't measure it. But with this, we CAN.