Closed tatarchm closed 1 week ago
Considering the demo concept above, I see two avenues we could go down for the 'strategic' course structure:
Option 1: Simulation first

Basically, we wouldn't hand out real robots until around lesson 5-6, once the students are familiar enough with the programming language and algorithmics. This division roughly matches the coursework/projects division.

Pros:
- The deployment lifecycle is simpler, since you don't have to physically reconnect/re-flash the microcontroller.
- We can delay releasing a final version of the library until we actually hand out the real robots. On the one hand this simply gives us more time; on the other hand we can (within reason) incorporate feedback and ideas voiced by the students during the sim part.
- The (optional) lack of state keeping lets students jump right into new lesson-scoped projects without having to re-read old code that possibly references APIs unrelated to the current lesson.
- Simulations allow setting up many different environments/use-case scenarios without any physical modifications to the robot.

Cons:
- Naturally, this approach introduces a sim2real gap; possible API mismatches may prevent direct code transfer.
- Modeling scenarios for each lesson is more time-consuming than deploying (possibly ad hoc) on a real system.
- Some students may learn better with hands-on development.
Option 2: Mixed simulation and real

Have an optional simulation part in each lesson which ultimately leads to real deployment later in that lesson.

Pros:
- Students get more hands-on experience earlier.
- All previously written code can be stored directly in a git repository; reusing code across lessons is much simpler.
- The hardware is tested against the API at all points of development.

Cons:
- Checking/fixing the hardware and wiring of the physical robots becomes an additional ongoing task for course instructors.
- Deploying often on real hardware is time-consuming and hinders tweaking of parameters.
TODO: implement (at least) one of the options above
needs reviews/discussion @tatarchm
@voshch thanks a lot for looking into this. I really like the simulator. Depending on how we proceed, we might not need to deploy it ourselves. There is, for example, this solution, which is a ready-to-use browser app.
I would not even try to replicate all hardware features in simulation. The basic components (LEDs, screen, buzzer, servos) already provide a good basis for the first part of the course. The fact that one can upload custom libraries makes it super flexible: we can show how to work with standard libraries first, and later introduce our board and the corresponding API. This way they will (hopefully) better understand what's going on under the hood.
Regarding the course strategy, I think I'm more in favour of the second one, i.e., mixing simulated and real.
I'm going to be away until Aug. 11. Let's meet after that and discuss once more. We can also do this online if it's easier. @NColdGit
Side note: Library uploads are a paid feature (~$8.50/mo). Not sure if account sharing is possible, but it shouldn't be a problem either way.
The other issue I have with Wokwi's web app is the lack of motion simulation. I think motion is crucial to understanding robot perception and to simulating state machines.
Thoughts on Simulation
Arduino-Robot-Virtual-Lab (source) is a really cool demo of small practice-oriented motion planning exercises. It's a bit text-heavy imo (especially the intro) but I love the idea of having small simulations.
Technical Spec
The core simulator, avr8js, is FOSS and entirely browser-based.
It supports
This specific demo doesn't keep any state at all, but it's trivial to save the projects to browser/session storage to revisit later. If required, I can also add rudimentary user management and cloud storage very cheaply (<$5/yr). The organization behind the simulator offers a paid hosted solution, but I honestly see no real benefit in it for us.
Personally, I really like the simplicity of the simulation. You don't have to worry about locomotion variances and stuff breaking in general. We can dedicate a session to sim2real transfer, which is a good exposure to the (least fun part of the) robotics development lifecycle.
Possible Issues
Debugging (breakpoints) is not supported and I don't see myself integrating it easily. What I can offer in the meantime is state visualization below the rendered window.
A conceptual issue we'll have with any simulator is that we need to have simulatable components on our robot. In this case we've already seen proofs of concept for simplified locomotion, distance sensors, displays, and LEDs. What's probably not trivial to model is anything involving servo motors, especially manipulation and interacting with objects. There are a few counterexamples from last time that we could easily simulate:
While the simulator generally supports MicroPython, it's not integrated in the demo.
The 'collecting coins' task in the simulation scenarios does not transfer easily to real-life applications. (One possible approach is putting RFID cards on the floor and reading them as the robot passes over them. This needs to be prototyped; I have 2 sets of hardware for this at home.)