nairakhilendra opened 9 months ago
Our Current Workplan for the simple-line-follower
@chenyenru will work on the ROS2 camera interface and the data pipeline that feeds into the midpoint finder. @nairakhilendra will work on the midpoint finder and the controls part with the VESC.
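For reference, here's roughly the shape of the midpoint finder — a minimal sketch assuming the camera pipeline already gives us a binary mask of the line; the function names and the proportional gain are placeholders, not our actual code:

```python
import numpy as np

def find_line_midpoint(mask, row):
    """Normalized horizontal offset (-1..1) of the line's midpoint in
    one image row of a binary mask (H x W, 1 = line pixel).
    Returns None if no line pixels are visible in that row."""
    cols = np.flatnonzero(mask[row])
    if cols.size == 0:
        return None
    mid = (cols[0] + cols[-1]) / 2.0
    half_width = mask.shape[1] / 2.0
    return (mid - half_width) / half_width  # -1 = far left, +1 = far right

def steering_from_midpoint(offset, gain=0.5):
    """Simple proportional steering command from the lateral offset;
    this would get mapped to a VESC servo value downstream."""
    return float(np.clip(-gain * offset, -1.0, 1.0))
```

The gain and the choice of scan row would need tuning on the actual car.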
The issue with using Follow the Gap is that it's such an easy and popular algorithm that the F1Tenth organizers have placed traps on the track where Follow the Gap goes in the wrong direction. For ICRA you would still have to use a localization-based approach or a behavior-cloning-based approach.
Thanks Sid. You saved us from going off the wrong track.
The team is currently split into two parts: (1) Behavioral Cloning with donkeycar and (2) Alternative approach with ROS2 to see if there's a better way to race.
Within "Alternative approach with ROS2", we're divided into two subparts: (1) LiDAR only and (2) Camera only. https://discord.com/channels/974049852382126150/1174597043898040391/1206820066214150174.
@nairakhilendra and I are working on camera only.
However, I currently cannot find a camera-only approach that performs comparably to the LiDAR-based approaches. And I am not sure pursuing a camera-only approach will be worth it, given we want to reach the level of performance needed to compete in ICRA F1Tenth in May.
Given these, what's your recommendation on where to start with the non-donkey approach for the ICRA F1Tenth indoor race?
Thanks!
Personally, for our team the challenge has always been localizing at speed. Since it's just a 2-car race, the planning is actually pretty simple: you could choose between 2-3 lines and have the car drive toward those areas. For controls, the winning team has mostly just used pure pursuit and tuned it. Not saying these areas can't be improved, but without localization you would waste your time on them, since they would need to be constantly retuned and it's hard to get them to work unless localization is accurate.
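For anyone following along, the pure pursuit controller mentioned above is just a geometric formula: delta = atan(2 L sin(alpha) / l_d). A minimal sketch, with the wheelbase and lookahead values as placeholders to be tuned:

```python
import math

def pure_pursuit_steering(goal_x, goal_y, wheelbase, lookahead):
    """Steering angle (rad) toward a goal point given in the car's frame
    (x forward, y left), from the standard pure pursuit geometry:
    delta = atan(2 * L * sin(alpha) / l_d), where alpha is the angle
    to the goal and l_d is the lookahead distance."""
    alpha = math.atan2(goal_y, goal_x)  # heading error to the goal point
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)
```

The "tuning" Sid mentions is mostly the lookahead distance: too short oscillates at speed, too long cuts corners.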
So LiDAR / camera / LiDAR + camera localization at speed is of importance. The LiDARs we have at UCSD traditionally run between 10-20 Hz, while the UPenn teams come with 40 Hz LiDARs, which are super important for high-speed localization to reduce the effect of distortion. For that, Autoware has traditionally used https://github.com/SteveMacenski/slam_toolbox. The F1Tenth community has used that to make maps and then used https://github.com/f1tenth/particle_filter to localize. You can follow along with the Autoware Racing WG meeting notes here: https://github.com/orgs/autowarefoundation/discussions?discussions_q=label%3Ameeting%3Aracing-wg. Again, I think this is a good area to research, but we are limited by the LiDAR frequency unless additional work is done to integrate the IMU for short-term position and distortion correction. Also, since you do have the OAK-D depth as a substitute for LiDAR at a higher frequency, it would be interesting to see if the localization can be done with that.
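To make the IMU distortion-correction idea concrete: the car rotates while the LiDAR sweeps, so each beam is measured from a slightly different heading. A rough sketch of deskewing a 2D sweep, under the simplifying assumption of constant yaw rate during the scan — function and parameter names are mine, not from any of the linked repos:

```python
import numpy as np

def deskew_scan(ranges, angle_min, angle_increment, scan_time, yaw_rate):
    """Undo in-sweep rotation distortion for a 2D LiDAR scan.

    Each beam i is rotated back by the yaw the car accumulated between
    the start of the sweep and beam i (yaw_rate assumed constant over
    one sweep, e.g. taken from the IMU gyro). Returns an (N, 2) array
    of corrected (x, y) points in the frame at sweep start.
    """
    ranges = np.asarray(ranges, dtype=float)
    n = ranges.size
    beam_angles = angle_min + angle_increment * np.arange(n)
    t = scan_time * np.arange(n) / n          # time offset of each beam
    corrected = beam_angles - yaw_rate * t    # subtract accumulated yaw
    return np.stack([ranges * np.cos(corrected),
                     ranges * np.sin(corrected)], axis=1)
```

At 10 Hz and a few hundred deg/s of yaw, the last beam can be tens of degrees off without this, which is why the 40 Hz units matter so much.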
The camera is another interesting area. While there is not much work done in the F1Tenth community, there is tons of research in drone racing on camera + IMU solutions for high-speed flight. I don't have any papers / repos I am actively following for them, but I would look at drone solutions to start with and see what applies.
Hi Sid, thank you for pointing us to these sources!
I have summarized your points — could you confirm whether I understood them correctly?
What is less important: planning (though it can still be improved)
What is more important: Accurate Localization
Examples to look to for localization
Ideas for doing accurate localization with our constraints on LiDAR
Looks good
I did actually omit perception, but that is also an important challenge. Not sure what approaches are currently being pursued for that on the AW end. We use their Euclidean Clustering in IAC, but that is 3D-LiDAR-specific.
Thank you Sid.
I'll look into Autoware in a bit.
Yeah, Euclidean Clustering is probably meant for higher-frequency 3D LiDAR. For research, I'll generally look into papers with one of the following keywords: "low-latency LiDAR", "visual SLAM", "RGB-D SLAM", "visual odometry".
This information piece from NVIDIA looks like a good starter.
F1Tenth listed publications related to F1Tenth Car: Link
Kinematics-Based Trajectory Tracking
A recently released paper that might be helpful: https://arxiv.org/abs/2402.18558
Thank you so much Sid — I can't believe there's a published paper discussing exactly what we want to know. We'll read into it and see what it suggests!
Due date is February 26 (~1 week)
https://navigation.ros.org/setup_guides/odom/setup_odom.html
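The odometry setup that guide walks through ultimately comes down to integrating velocity into a pose at each timestep. A minimal dead-reckoning sketch of that step (plain Python, no ROS; names are mine):

```python
import math

def integrate_odom(x, y, yaw, v, omega, dt):
    """One dead-reckoning step: advance the pose (x, y, yaw) given
    forward speed v (m/s, e.g. from VESC ERPM) and yaw rate omega
    (rad/s, e.g. from the IMU gyro) over dt seconds. This is the same
    integration that ends up published as nav_msgs/Odometry on /odom."""
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += omega * dt
    return x, y, yaw
```

In the ROS2 node this would run in a timer callback, stamping the result into an Odometry message and the odom->base_link TF.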