This is a tentative plan for things that need to be changed next year. Note that this is not necessarily exhaustive or final.
Summary
Overall, we will be moving away from machine learning towards a traditional planning-based approach. This will utilise as much of the existing infrastructure as possible, as well as the entirety of the current embedded system. Some new hardware will also be required to support this. Overall this approach is a hybrid between purely reactive and purely planning-based approaches. This is optimal for our competition: we can still enroll in the reactive competition (since we don't prebuild a map), while gaining the greater optimisations that come with planning, without taking on long-term map maintenance (we essentially have a local map, but we don't do loop closure).
Notes
We do not have a map frame, only an odom frame. This is because things in the map frame are supposed to be corrected by SLAM, which we do not have. Things that we track should be in the odom frame.
I'm making the gamble that not having SLAM is fine, since we replan frequently. The only error that matters is the local drift relative to where we were when we last planned. Planning should be cheap, frequent, and local, so long-term drift should be irrelevant.
New Hardware
VectorNav VN-300: This was used on Ohm, and is a great GPS/IMU combo unit with a ROS2 driver.
SICK LIDAR: This is the lidar used on both Yeti and Ohm, with an ISC-built driver. It is a 2D lidar and very reliable, so it should make things pretty easy to work with.
TODO
Removals
[x] All of pheonix_training: While this was a good learning exercise, it is no longer needed. Note that we don't need to delete it, as things like run_manager may actually be useful for future training. (NOTE: data_logger can stay around, since we don't have a better solution yet and it is already tested.)
[x] Inference node: This node was never actually created, but we need to adjust the docs and the overall system to account for its removal.
Modifications
[x] Gazebo: Currently, the Gazebo model is still quite poor. While it is capable of driving (and properly now that we no longer use pedal inputs), it still needs a bit of work.
[x] STL: Phoenix's model will need to be updated to include the ~IMU~, lidar, and GPS sensors.
[ ] phnx_io_ros: This node should now also output encoder values as a traditional odom message, alongside the odom_ack message. This allows us to fuse it in a Kalman filter. Additionally, we need to move this node off the old form of ackermann_msgs, treating the speed in the control message as a set speed rather than a raw CAN signal. This will require the creation of a control loop, running in PIR, that outputs throttle and brake commands to CAN to maintain our desired set speed. PIR will need to subscribe to /odom to close this controller's loop (see the sketch after this list). Additionally, we can now remove the max_x parameters from this node, as we no longer encode pedal location as velocity.
[ ] oak_d: We need to make this node output depth images.
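Below is a minimal sketch of the set-speed control loop described in the phnx_io_ros item above: a PID on the error between the commanded speed and the speed reported on /odom. The gains, the /ack_vel topic name, and the send_to_can stand-in are all placeholder assumptions, not the final design.

```python
# Sketch of the PIR set-speed loop: PID on (set_speed - measured_speed).
# Gains, topic names, and the throttle/brake mapping are placeholders.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry
from ackermann_msgs.msg import AckermannDrive


class SpeedController(Node):
    def __init__(self):
        super().__init__('speed_controller')
        self.kp, self.ki, self.kd = 0.5, 0.1, 0.0  # placeholder gains, tune on kart
        self.set_speed = 0.0   # m/s, commanded by the upstream controller
        self.measured = 0.0    # m/s, from wheel encoders via /odom
        self.integral = 0.0
        self.prev_error = 0.0
        self.create_subscription(AckermannDrive, '/ack_vel', self.on_command, 10)
        self.create_subscription(Odometry, '/odom', self.on_odom, 10)
        self.create_timer(0.05, self.step)  # 20 Hz control loop

    def on_command(self, msg: AckermannDrive):
        self.set_speed = msg.speed

    def on_odom(self, msg: Odometry):
        self.measured = msg.twist.twist.linear.x

    def step(self):
        dt = 0.05
        error = self.set_speed - self.measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        effort = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Positive effort -> throttle, negative -> brake; clamp to [0, 1].
        throttle = min(max(effort, 0.0), 1.0)
        brake = min(max(-effort, 0.0), 1.0)
        self.send_to_can(throttle, brake)  # hypothetical stand-in for the CAN write

    def send_to_can(self, throttle: float, brake: float):
        self.get_logger().debug(f'throttle={throttle:.2f} brake={brake:.2f}')


def main():
    rclpy.init()
    rclpy.spin(SpeedController())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```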
Additions
[x] Object Detector Node: AKS is planning to have something lining the track, although exactly what is not yet decided. At the moment they speculate small retroreflective chips, but that's just a guess. Regardless, we will need a node that detects those objects and outputs an array of poses representing their centroids. This way downstream nodes are loosely coupled to the kind of object we are detecting.
[ ] Object Planner Node: Given the tracked objects, we create a path (vector of poses) by combining the midpoints formed by the objects (as in https://blogs.mathworks.com/student-lounge/2022/10/03/path-planning-for-formula-student-driverless-cars-using-delaunay-triangulation/). These poses will exist in our odom frame, as they are relative to the camera. This path should be published only when it needs to be updated, i.e. when 1) an object has left the frame, 2) a new object has entered the frame, or 3) no path has been published yet. This is only possible because we have IDs on the objects, and can thus tell when a cone has left by tracking the IDs in a set or similar (see the midpoint sketch after this list).
[x] Object Tracker Node: Given detected object poses, we need to track these objects over time. This will involve assigning an ID to each object and performing some kind of state estimation to track it (see the association sketch after this list). This will be useful downstream, as we know whether an object is new or not. Additionally, I believe we can use the displacement between pose updates to derive what is essentially a VIO source from this node, which we can feed into the Kalman filter as part of our fusion.
[x] Hybrid Pure Pursuit Node: This controller will take the points from the planner (remember to transform these from odom->base_link!), as well as our corrected pose from the Kalman filter's odom, to find the control to be sent to CAN to actually follow the path. I propose we use a hybrid pure pursuit approach with added collision avoidance. This would work by using the traditional pure pursuit algorithm https://thomasfermi.github.io/Algorithms-for-Automated-Driving/Control/PurePursuit.html to find the arc to the nearest pose in front of us (see the steering sketch after this list). After this, we then run my old AEB algorithm on that arc with the current scan from the lidar to detect whether we will collide with something (we will need to decide our TTC tolerance). If we detect a collision, we should fall back to one of the TK planners, like find-the-biggest-gap. This planner should run on a fast loop (like 20 Hz), since it directly controls what the kart does. The output of this controller will be AckermannDrive messages, containing the wheel angle needed to take the arc (given by the control algorithm itself), as well as the velocity to take the arc at (we can apply dynamics here to avoid wheel slip). This message will be taken in by phnx_io_ros to actually be sent to CAN, and finally move the actuators.
[x] robot_localisation: This package contains the all-important Kalman filter. This Kalman filter will fuse the camera IMU, the IMU-GNSS from the GPS, the wheel encoder velocity, and the pose from the tracker into one giant corrected odom. The use of a Kalman filter allows each of these sensors to correct one another, and lets us predict our location if a sensor drops out (see the fusion sketch after this list). This node will output both a corrected odom and the odom->base_link transform. Note that because we do not have SLAM, this dead reckoning will be our 'localisation'.
[ ] Lidar driver: Just plop this in
[x] GPS driver: Copy the config for this from KiloOhm
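A minimal sketch of the Object Planner's midpoint construction and republish condition, assuming the tracked objects can already be split into a left and a right line (the MathWorks post uses Delaunay triangulation to find these pairings more robustly); the Pose2D type and index-wise pairing are illustrative only.

```python
# Sketch: build a centerline path from paired left/right track objects,
# and decide when it needs republishing based on tracked object IDs.
from dataclasses import dataclass


@dataclass
class Pose2D:
    x: float
    y: float


def midpoint_path(left: list[Pose2D], right: list[Pose2D]) -> list[Pose2D]:
    # Pair objects index-by-index along each line and take the midpoint.
    return [
        Pose2D((l.x + r.x) / 2.0, (l.y + r.y) / 2.0)
        for l, r in zip(left, right)
    ]


def should_republish(current_ids: set[int], last_ids: set[int],
                     have_path: bool) -> bool:
    # Republish only when an object left the frame, a new one entered,
    # or no path has been published yet -- the three conditions above.
    return (not have_path) or current_ids != last_ids
```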
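A minimal sketch of the Object Tracker's ID assignment, using greedy nearest-neighbour association with a distance gate. The gate value is made up, and a real version would add per-object state estimation (e.g. a small Kalman filter per track) rather than just storing the last pose.

```python
# Sketch: greedy nearest-neighbour data association for object tracking.
# Each detection is matched to the closest existing track within a gate;
# unmatched detections spawn new IDs. The gate distance is a placeholder.
import math
from itertools import count

GATE = 0.5  # metres; max distance to consider a detection the same object
_next_id = count()


def associate(tracks: dict[int, tuple[float, float]],
              detections: list[tuple[float, float]]) -> dict[int, tuple[float, float]]:
    updated: dict[int, tuple[float, float]] = {}
    unmatched = set(tracks)
    for det in detections:
        # Find the nearest still-unmatched track within the gate.
        dist, tid = min(
            ((math.dist(det, tracks[t]), t) for t in unmatched),
            default=(math.inf, None),
        )
        if tid is not None and dist < GATE:
            unmatched.discard(tid)  # same object, keep its ID
        else:
            tid = next(_next_id)    # new object entered the frame
        updated[tid] = det
    return updated
```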
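A minimal sketch of the pure pursuit steering computation from the linked reference, applied to a goal pose already transformed into base_link; the wheelbase value is a placeholder to be measured on Phoenix.

```python
# Sketch: classic pure pursuit steering for a goal point in base_link.
# delta = atan(2 * L * y / ld^2), where L is the wheelbase, (x, y) the
# goal point ahead of the kart, and ld the lookahead distance to it.
import math

WHEELBASE = 1.2  # metres; placeholder, measure on Phoenix


def pure_pursuit_steering(goal_x: float, goal_y: float) -> float:
    lookahead_sq = goal_x ** 2 + goal_y ** 2
    # Curvature of the arc through the kart origin and the goal point.
    return math.atan2(2.0 * WHEELBASE * goal_y, lookahead_sq)
```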
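We intend to use the stock robot_localization EKF rather than writing our own, but as a conceptual sketch of why fusion helps, here is a one-dimensional Kalman filter that predicts with encoder velocity (dead reckoning) and corrects with an absolute position measurement, e.g. one derived from the tracker. All noise values are invented.

```python
# Conceptual 1D Kalman filter: predict position from encoder velocity,
# then correct with an absolute position measurement. The variances are
# invented; robot_localization provides the real full-state EKF.

class KF1D:
    def __init__(self):
        self.x = 0.0   # position estimate
        self.p = 1.0   # estimate variance
        self.q = 0.01  # process noise (encoder drift per step)
        self.r = 0.25  # measurement noise (tracker pose uncertainty)

    def predict(self, velocity: float, dt: float):
        # Dead-reckon forward; uncertainty grows until a correction arrives.
        self.x += velocity * dt
        self.p += self.q

    def update(self, measured_x: float):
        # Blend measurement and prediction by their relative confidence.
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (measured_x - self.x)
        self.p *= (1.0 - k)


kf = KF1D()
kf.predict(velocity=2.0, dt=0.05)  # encoders report 2 m/s for 50 ms
kf.update(measured_x=0.12)         # tracker-derived position corrects drift
print(kf.x, kf.p)
```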