Andrews2024 / lemur

Code repository for Learning and Educating ML/CV in Undergraduate Robotics

Paper Prototype of Module #4

Open Andrews2024 opened 10 months ago

Andrews2024 commented 10 months ago

This issue covers developing a lab activity and writing a document with instructions and pictures so that a student could complete the lab on their own.

Module 0 - Set up hardware, install libraries, light up an LED with CV
Module 1 - PyTorch and OpenCV tutorials
Module 2 -
Module 3 -
Module 4 -
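The "light up an LED with CV" step in Module 0 could be sketched as below. To keep the sketch hardware-free, the camera frame is a NumPy array and the LED is a boolean decision; on the real robot the frame would come from `cv2.VideoCapture(0).read()` and the decision would drive a GPIO pin (e.g. `gpiozero.LED(17).on()`). The threshold value is a placeholder to tune.

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 100  # mean pixel value (0-255); placeholder, tune per setup

def led_should_be_on(frame: np.ndarray) -> bool:
    """Return True when the grayscale frame is bright enough.

    On hardware, this decision would toggle a GPIO pin, and the frame
    would be a camera capture rather than a synthetic array.
    """
    return float(frame.mean()) > BRIGHTNESS_THRESHOLD

# Synthetic frames standing in for camera captures:
dark = np.zeros((480, 640), dtype=np.uint8)        # all-black frame
bright = np.full((480, 640), 200, dtype=np.uint8)  # uniformly bright frame

print(led_should_be_on(dark))    # False
print(led_should_be_on(bright))  # True
```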

Andrews2024 commented 10 months ago

Ideas:

ajmckeen commented 10 months ago

Lab 0 --> Set up hardware, run initial tests.
Lab 1 --> Take a pre-built model and adapt it to do something else (likely retraining a model on new data rather than modifying the data, since this would be early in the course). This would also need to be some functionality that could be used in the final lab -- maybe object detection?
Lab 2 --> Drive the robot using sensor(s) with some sort of control (e.g. PID), then control it with ML and compare the implementations and results. (Archie did line following? Don't have the report for that yet.) Could have students design an experiment to quantitatively compare the two approaches: for example, they could measure the time taken to complete a task, the accuracy of the final position, or some other metric to compare traditional (PID?) control with the ML model.
Labs 3-5 --> Semantic segmentation? Reinforcement learning? Anomaly detection? HRI (gestures?)
Final lab --> Navigation and Control: the robot should be able to navigate the environment effectively using either traditional control (like PID) or ML-based control, or a combination of both. Students should justify their choice of control method.
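For the Lab 2 comparison, the traditional-control side could start from a minimal discrete PID like the sketch below. The gains, time step, and the toy "plant" (a lateral offset from the line that the control output reduces directly) are illustrative assumptions, not tuned values for any real robot.

```python
class PID:
    """Minimal discrete PID controller (illustrative, untuned gains)."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy line-following loop: the state is the lateral offset from the line,
# and the control output nudges that offset each step. ki is left at zero
# because this toy plant has no steady bias to integrate away.
pid = PID(kp=0.8, ki=0.0, kd=0.05, dt=0.1)
offset = 1.0  # start 1 unit off the line
for _ in range(50):
    offset += pid.update(setpoint=0.0, measurement=offset) * pid.dt
print(abs(offset) < 0.1)  # True: the controller converges near the line
```

The same loop gives a natural hook for the quantitative comparison: log `offset` per step under PID and under the ML controller, then compare settling time and final error.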

Object Recognition and Interaction: The robot should be able to recognize certain objects in the environment (using skills developed in Lab 1). Upon recognizing a specified object, the robot should be able to interact with it in a certain way (pick it up, push it, avoid it, etc.). The nature of interaction would depend on the specific objects and the robot's capabilities.

Anomaly Detection: The robot should be able to detect anomalies in the environment (new objects, unexpected changes, etc.) using techniques learned in previous labs. Upon detecting an anomaly, the robot should be able to respond appropriately (stop, avoid, investigate, etc.).
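One deliberately simple baseline for the anomaly-detection behavior is a z-score check on a scalar sensor stream (distance readings, mean frame brightness, etc.): flag any reading far outside recent history. The window contents and threshold below are placeholder values to tune per sensor.

```python
from statistics import mean, stdev

def is_anomaly(history, reading, z_threshold=3.0):
    """Flag a reading whose z-score against recent history is large.

    history: recent 'normal' readings; z_threshold is a placeholder to tune.
    """
    if len(history) < 2:
        return False  # not enough data to estimate spread
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > z_threshold

# Placeholder distance-sensor readings during normal driving:
history = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3]
print(is_anomaly(history, 10.1))  # False: within normal variation
print(is_anomaly(history, 25.0))  # True: sudden jump, e.g. a new object
```

On detection, the robot's response (stop, avoid, investigate) can branch on this boolean inside the main control loop.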

Human-Robot Interaction (Optional): If possible, the robot should also be able to recognize and respond to certain human gestures or commands, making the task more interactive.
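A lightweight way to sketch the gesture-recognition option is nearest-centroid classification over a small hand-feature vector. The gesture names, the two hypothetical features ("openness" and "spread" in [0, 1]), and the centroid values are all made up for illustration; in practice the features might come from hand landmarks (e.g. MediaPipe).

```python
import math

# Toy gesture classifier: nearest centroid over a hypothetical
# [openness, spread] feature vector, both in [0, 1].
CENTROIDS = {
    "stop": (0.9, 0.9),  # open palm: fingers extended and spread
    "go":   (0.9, 0.1),  # fingers extended, held together
    "fist": (0.1, 0.1),  # hand closed
}

def classify_gesture(features):
    """Return the gesture whose centroid is closest to the features."""
    return min(CENTROIDS, key=lambda g: math.dist(features, CENTROIDS[g]))

print(classify_gesture((0.85, 0.8)))  # stop
print(classify_gesture((0.2, 0.15)))  # fist
```

Each recognized gesture can then map to a robot command (stop, go, turn), keeping the HRI extension decoupled from the navigation code.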

Design Choices: Students will have to make several design choices in this lab, including:

- Choice of control method: traditional control, ML-based control, or a combination of both.
- Choice of object recognition model: which model to use for object recognition, and how to adapt it for the task.
- How to handle anomalies: how to define and detect anomalies, and how to respond to them.
- How to handle human-robot interaction: which gestures or commands to recognize, and how to respond to them.