Autonomous rover project for the EEC193AB course and the University Rover Challenge. Key features included:
Tested each mechanism separately to isolate faults and avoid integration complications
Utilized the rtabmap package in ROS to integrate visual SLAM (vSLAM) into our system, mainly so that the rover's throttle and steering respond to a set goal. Navigation proceeds through a series of local goals (set automatically by the rover) that step toward a global goal (set by a human operator).
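The local-goal idea above can be sketched in plain Python. This is a minimal, hypothetical illustration (the function name and straight-line stepping are assumptions; the real rover derives local goals from the rtabmap map rather than a straight line):

```python
import math

def next_local_goal(position, global_goal, step=1.0):
    """Return an intermediate waypoint at most `step` meters from the
    current position, stepping along the line toward the global goal.
    (Hypothetical helper for illustration only.)"""
    dx = global_goal[0] - position[0]
    dy = global_goal[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return global_goal  # close enough: head straight to the global goal
    scale = step / dist
    return (position[0] + dx * scale, position[1] + dy * scale)

# A rover at the origin heading to (10, 0) first aims 1 m ahead.
print(next_local_goal((0.0, 0.0), (10.0, 0.0)))  # → (1.0, 0.0)
```

Repeatedly feeding each reached waypoint back in as the new position walks the rover toward the global goal one local goal at a time.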
Leveraged the OpenCV package in Python to perform depth perception with our 3D stereo camera. OpenCV is also used to cluster regions of "deeper" pixels in the depth map, detecting potential pits for the rover to avoid.
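The pit-detection step can be sketched as a threshold-and-cluster pass over a depth map. This toy version uses a plain flood fill on a small grid for clarity (the actual pipeline computes depth from the stereo pair with OpenCV and would cluster with functions like `cv2.connectedComponents`; all names and thresholds here are assumptions):

```python
def find_pits(depth, threshold, min_size=2):
    """Cluster connected 'deeper' pixels (depth > threshold) and keep
    clusters large enough to count as pits. Illustrative sketch only."""
    rows, cols = len(depth), len(depth[0])
    seen = [[False] * cols for _ in range(rows)]
    pits = []
    for r in range(rows):
        for c in range(cols):
            if depth[r][c] > threshold and not seen[r][c]:
                stack, cluster = [(r, c)], []
                seen[r][c] = True
                while stack:  # flood fill over 4-connected neighbors
                    y, x = stack.pop()
                    cluster.append((y, x))
                    for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and depth[ny][nx] > threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(cluster) >= min_size:
                    pits.append(cluster)
    return pits

depth = [[1, 1, 5, 5],
         [1, 1, 5, 1],
         [1, 1, 1, 1]]
print(len(find_pits(depth, threshold=3)))  # one 3-pixel pit → 1
```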
Basic AR tag detection: if the rover detects an AR tag, it responds by moving slowly toward it
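The "move slowly toward it" behavior amounts to a small proportional controller on the tag's position in the image. A minimal sketch, assuming hypothetical names and a 640-pixel-wide frame (the real thresholds would be tuned on the rover):

```python
def approach_tag(tag_center_x, tag_area, frame_width=640,
                 slow_throttle=0.2, stop_area=20000):
    """Return (throttle, steering) for a detected AR tag: creep forward
    slowly, steering toward the tag's horizontal offset, and stop once
    the tag looks close (large in the frame). Hypothetical sketch."""
    if tag_area >= stop_area:  # tag fills the frame: we have arrived
        return 0.0, 0.0
    # steering in [-1, 1]: negative steers left, positive steers right
    steering = (tag_center_x - frame_width / 2) / (frame_width / 2)
    return slow_throttle, steering

print(approach_tag(tag_center_x=480, tag_area=5000))  # → (0.2, 0.5)
```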
Designed a fixed sequence of recovery maneuvers for the rover to attempt to "escape" an emergency situation (e.g., getting stuck). An emergency is detected by comparing the commanded throttle against the rover's actual measured movement.
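The throttle-versus-movement check can be sketched as follows. This is an illustrative stand-in, not the rover's actual code; the window length and thresholds are assumptions that would be tuned on hardware:

```python
def is_stuck(throttle_cmds, distances, throttle_min=0.3, move_min=0.05):
    """Flag an emergency when the rover has been commanded to move over a
    window of control ticks but its measured displacement stays near zero.
    throttle_cmds: commanded throttle per tick (0..1)
    distances: odometry displacement per tick, in meters
    Thresholds are illustrative."""
    driving = all(t >= throttle_min for t in throttle_cmds)
    moving = sum(distances) >= move_min * len(distances)
    return driving and not moving

# High throttle for five ticks but almost no odometry movement: stuck.
print(is_stuck([0.8] * 5, [0.0, 0.01, 0.0, 0.0, 0.01]))  # → True
```

Once `is_stuck` fires, the rover would run its fixed escape sequence (e.g., reverse, turn, retry) before resuming normal navigation.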
For a simple demo, please watch this video.
For our poster presentation, refer to this.