Closed: dswelch closed this issue 2 years ago
Get a message from a ROS topic that contains the image from the ZED camera (which is already distorted). The perspective_transform node computes the correct points on initialization, then uses those points to perform a perspective transform. It then publishes the transformed image on a topic so that another node can consume it and build the occupancy grid.
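The math behind the "compute points once, then transform" step can be sketched without OpenCV. This is a minimal, dependency-free illustration, not the actual node: the real pipeline would subscribe via cv_bridge and call cv2.getPerspectiveTransform / cv2.warpPerspective, and all point values below are made-up placeholders, not our calibrated quads.

```python
def solve_homography(src, dst):
    """Solve for the 3x3 perspective matrix H (with h33 = 1) mapping
    four src points onto four dst points.

    src, dst: lists of four (x, y) tuples. Builds the standard 8x8
    linear system and solves it by Gaussian elimination with partial
    pivoting, so no external libraries are needed.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    n = 8
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back-substitution.
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def warp_point(H, x, y):
    """Apply homography H to a single point (x, y)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Computing H once at node startup and reusing it for every frame is exactly why the points only need to be created on initialization.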
Need to figure out how to import cv_bridge; try to configure Python 3 with ROS Melodic.
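One commonly cited workaround for cv_bridge under Python 3 on Melodic (which ships a Python 2 build) is compiling cv_bridge from source against Python 3. The sketch below is untested setup guidance, not our verified procedure; the Python version suffix and library paths are assumptions that depend on the machine.

```shell
# Build cv_bridge against python3 in a separate workspace (paths are assumptions).
sudo apt-get install python3-pip python3-dev python3-numpy python-catkin-tools
pip3 install rospkg catkin_pkg

mkdir -p ~/cv_bridge_ws/src && cd ~/cv_bridge_ws/src
git clone https://github.com/ros-perception/vision_opencv.git
cd vision_opencv && git checkout melodic && cd ~/cv_bridge_ws

# Point the build at the python3 interpreter, headers, and library.
catkin config --cmake-args \
    -DPYTHON_EXECUTABLE=/usr/bin/python3 \
    -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m \
    -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so
catkin build cv_bridge

# Overlay this workspace on top of the existing environment.
source devel/setup.bash --extend
```

After sourcing, `from cv_bridge import CvBridge` should resolve to the Python 3 build in that shell.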
Finished the perspective transform ROS node and started organizing files within the catkin workspace and cvStack. RTAB-Map is no longer working for the occupancy map; the odometry data seems to be the problem.
Got all ROS nodes working to visualize our lane detection algorithm, including the perspective transform. Still need to figure out how to convert the result to an occupancy grid.
Using the dst values to look at only the selected portion of the transformed image and ignore the black space. We do this by publishing those values to the next step in the pipeline.
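The masking idea can be sketched as a point-in-polygon test against the published quad. This is a pure-Python illustration on a small 2D-list "image"; the actual node would use cv2.fillPoly on a NumPy array, and the quad coordinates here are placeholders rather than our real dst_quad values.

```python
def point_in_quad(px, py, quad):
    """Ray-casting point-in-polygon test for a quad given as four (x, y) corners."""
    inside = False
    for i in range(len(quad)):
        x1, y1 = quad[i]
        x2, y2 = quad[(i + 1) % len(quad)]
        # Count crossings of a horizontal ray extending to the right of (px, py).
        if (y1 > py) != (y2 > py):
            if px < (x2 - x1) * (py - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def mask_to_quad(image, quad, fill=0):
    """Return a copy of image with every pixel outside quad set to `fill`,
    leaving only the warped region and discarding the black border."""
    return [[px if point_in_quad(x, y, quad) else fill
             for x, px in enumerate(row)]
            for y, row in enumerate(image)]
```

Because the quad is published alongside the image, the downstream node can rebuild the same mask without re-deriving the transform.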
We have a document in the Computer Vision drive that gives the steps to get all of the ROS pieces up and running. We now pass the dst_quad values properly, and when we run the Hough transform the only lines picked up from the edge of the image are at the very top corners, which should be fairly easy to fix. Next we need to work on lane detection and pothole detection.
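One easy fix for those corner artifacts, sketched below, is to discard any Hough segment whose endpoints both hug the image border, where the warp leaves hard black edges. The function names, the margin value, and the segment format are assumptions; in practice the segments would come from cv2.HoughLinesP.

```python
def near_border(x, y, width, height, margin=5):
    """True if (x, y) lies within `margin` pixels of the image border."""
    return x < margin or y < margin or x >= width - margin or y >= height - margin

def drop_border_lines(segments, width, height, margin=5):
    """segments: list of (x1, y1, x2, y2) line segments.

    Keep only segments where at least one endpoint is away from the
    border, filtering out edges detected along the warp boundary.
    """
    return [(x1, y1, x2, y2) for (x1, y1, x2, y2) in segments
            if not (near_border(x1, y1, width, height, margin)
                    and near_border(x2, y2, width, height, margin))]
```

An alternative with the same effect would be cropping the image to the dst_quad bounding box before running the Hough transform at all.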
Fixed the previous issue: we were using the wrong points from the dst and masking oddly, but it is working now. We looked into more lane detection techniques and still need to work on them.
Using open-source code to calculate the points needed for a perspective transform.