FRC-7913 / Crescendo-2024

Our first attempt at robot code for the 2024 season. Rewritten in the linked fork:
https://github.com/FRC-7913/Crescendo-2024-Rewrite

AI-Powered shooter based on field location #8

Closed. BruceMcRooster closed this issue 9 months ago.

BruceMcRooster commented 9 months ago

Based on the robot's estimated field location (taken from Limelight and swerve odometry data, though perhaps not entirely accurate), we might be able to train a model using evolutionary learning in a simulator (I'm eyeing Isaac Sim and PhysX by NVIDIA, but a game engine might work) to learn the angle and shooter wheel speeds needed to score a note into the speaker from a distance (and potentially, with a secondary model, into the amp). Of course, this only works if the robot design includes a shooter with an adjustable angle. But I think the note's soft-body physics can be accounted for by a good simulator.
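A rough sketch of what the training loop might look like, as a plain-Python evolutionary search over (angle, speed) for a given distance. Everything here is a stand-in: `simulate_shot` is a toy drag-free projectile model in place of whatever simulator we pick, and the constants (`EXIT_MPS_PER_RPM`, `SPEAKER_HEIGHT_M`, the parameter ranges) are guesses, not measured values.

```python
import math
import random

# Hypothetical constants; real values would come from the mechanism design.
ANGLE_RANGE = (15.0, 65.0)       # shooter pivot limits, degrees
SPEED_RANGE = (1000.0, 5000.0)   # wheel speed limits, RPM
EXIT_MPS_PER_RPM = 0.0025        # guessed note exit velocity per wheel RPM
SPEAKER_HEIGHT_M = 2.0           # rough height of the speaker opening

def simulate_shot(distance_m, angle_deg, speed_rpm):
    """Toy stand-in for the real simulator: drag-free projectile, scored by
    how close the note is to the speaker opening when it reaches the target
    distance. Higher is better. The real version would call Isaac Sim,
    MuJoCo, or PyBullet instead."""
    v = speed_rpm * EXIT_MPS_PER_RPM
    theta = math.radians(angle_deg)
    vx, vy = v * math.cos(theta), v * math.sin(theta)
    if vx <= 0.1:
        return float("-inf")
    t = distance_m / vx
    height_m = vy * t - 0.5 * 9.81 * t * t
    return -abs(height_m - SPEAKER_HEIGHT_M)

def clamp(x, lo, hi):
    return min(max(x, lo), hi)

def evolve(distance_m, population=32, generations=50, sigma=(2.0, 150.0)):
    """Simple evolutionary search: keep the best quarter each generation,
    refill the population with Gaussian mutations of the survivors."""
    pop = [(random.uniform(*ANGLE_RANGE), random.uniform(*SPEED_RANGE))
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=lambda ind: simulate_shot(distance_m, *ind), reverse=True)
        parents = pop[: population // 4]
        children = []
        while len(parents) + len(children) < population:
            a, s = random.choice(parents)  # mutate a random survivor
            children.append((
                clamp(a + random.gauss(0, sigma[0]), *ANGLE_RANGE),
                clamp(s + random.gauss(0, sigma[1]), *SPEED_RANGE),
            ))
        pop = parents + children
    return max(pop, key=lambda ind: simulate_shot(distance_m, *ind))

angle, speed = evolve(distance_m=3.0)
print(f"best at 3 m: {angle:.1f} deg, {speed:.0f} RPM")
```

Running this over a grid of distances would give the lookup the robot code needs at match time; the expensive evolutionary part all happens offline.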

BruceMcRooster commented 9 months ago

This does seem quite complex. I also don't have access to NVIDIA GPUs for Isaac Sim or PhysX, so those might be slow or impossible to use. Something else, like MuJoCo or PyBullet, might work, but it would take a decent amount of work to figure out how to model everything with materials that deform accurately. It comes down to a cost-benefit analysis.

BruceMcRooster commented 9 months ago

This could allow me to take an OnShape model of our shooter, the game piece, and the speaker and convert them to the URDF format that MuJoCo or PyBullet can load. I'm concerned that note deformation over the course of an event might affect the model's accuracy, but that problem applies to any approach we take with the shooter, and I might be able to simulate it during training.
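For a sense of the effort involved, loading an exported URDF into PyBullet and stepping the simulation is only a few lines. The file names below are hypothetical placeholders for whatever the OnShape export produces.

```python
import pybullet as p
import pybullet_data

# Headless physics server; use p.GUI instead to watch the shot.
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())  # for plane.urdf
p.setGravity(0, 0, -9.81)

p.loadURDF("plane.urdf")
# Hypothetical file names for the exported OnShape models.
shooter = p.loadURDF("shooter.urdf", basePosition=[0, 0, 0])
note = p.loadURDF("note.urdf", basePosition=[0, 0, 0.5])

# Step at 240 Hz (PyBullet's default fixed timestep) for one second.
for _ in range(240):
    p.stepSimulation()

print(p.getBasePositionAndOrientation(note))
p.disconnect()
```

The catch is that a rigid URDF won't capture the note's squish; that's where MuJoCo's deformable support, or carefully tuned contact parameters, would have to come in.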

BruceMcRooster commented 9 months ago

An alternative that does seem simpler is to program a set of positions with known values for shooter speed, shooter angle, and robot heading, and give the driver a selection to choose from. I don't know how advanced the widgets on SmartDashboard can get, but with the touchscreen driver station and a second driver, we should be fine. The drivers could have an indicator (on-screen, or through LEDs on the robot) showing how close the robot is to the selected position, and the driver could hit a button to have the robot do the final piloting once it's close enough.
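A minimal sketch of that preset table and the "close enough" indicator logic, in plain Python. The names and numbers are placeholders; real values would come from field testing.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class ShotPreset:
    """Hypothetical preset: a field pose plus the shooter values tuned for it."""
    name: str
    x_m: float           # field-relative position
    y_m: float
    heading_deg: float   # robot heading for the shot
    angle_deg: float     # shooter pivot angle
    speed_rpm: float     # shooter wheel speed

# Placeholder values; the real numbers would come from on-field tuning.
PRESETS = [
    ShotPreset("subwoofer", 1.4, 5.5, 0.0, 55.0, 3000.0),
    ShotPreset("podium", 2.9, 4.1, -15.0, 35.0, 4200.0),
    ShotPreset("wing line", 5.8, 6.7, -10.0, 25.0, 5000.0),
]

CLOSE_ENOUGH_M = 0.25  # how near the preset pose the indicator turns "ready"

def indicator(preset, x_m, y_m):
    """What the driver display / LEDs would show for the selected preset."""
    d = math.hypot(preset.x_m - x_m, preset.y_m - y_m)
    return d, d <= CLOSE_ENOUGH_M

# Example, with (3.0, 4.2) standing in for the Limelight + odometry estimate.
d, ready = indicator(PRESETS[1], 3.0, 4.2)
print(f"{d:.2f} m from '{PRESETS[1].name}', ready={ready}")
```

On the real robot this would live in the WPILib code (something like a SendableChooser feeding the selected preset to the shooter subsystem), but the lookup logic would be the same.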

BruceMcRooster commented 9 months ago

The one drawback is that we likely can't take moving shots (unless those were added as extra presets). But at least for the first few competitions, missing this capability won't be detrimental. For context, BOB is considering this method. While it's less impressive and less robust, as long as we can get time on the field to test all of these positions, we should be fine.

BruceMcRooster commented 9 months ago

I'm going to sweep this idea under the rug. Judging by the complexity, it's unlikely our competition will be doing anything like this unless we reach a very high level. Simulating everything involves so many unknowns, so I'd say we focus our resources on something less complex and more repeatable.