Open · varun-un opened this issue 2 months ago
Can you add more context about which camera we use, what its resolution is, what device we use to get video out of it, whether there are space or power limitations, etc.?
If we use the same hardware as last year (which is the assumption), it's an innomaker 5MP Raspberry Pi camera, which in theory is capable of 1080p, probably at 30 fps. The idea is that we could split off the same camera feed that's already going to the encoder and live video system and use it for this as well.
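For reference, a minimal sketch of what splitting the feed could look like, assuming the camera is driven through the picamera2 library (the stream sizes and the encoder hookup here are placeholder assumptions, not our actual config):

```python
# Sketch: one camera, two streams -- full-res "main" for the encoder /
# live video path, low-res "lores" as a cheap copy for CV experiments.
# Assumes picamera2; sizes below are illustrative, not our real settings.
from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_video_configuration(
    main={"size": (1920, 1080)},                      # encoder / live video
    lores={"size": (640, 480), "format": "YUV420"},   # visual odometry feed
)
picam2.configure(config)
picam2.start()

while True:
    # Grab the low-res frame without disturbing the main encoder stream
    frame = picam2.capture_array("lores")
    # ...hand `frame` to the visual odometry pipeline here...
```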
The big limitation is the actual processing, however: I'm not sure how much spare processing power the RPi will have for this, and if we have to add another Raspberry Pi, the space and power draw become too much. Realistically, this task would be a side project, an experimental thing, since making it operational would be hard.
Especially if you're trying to use something existing like NVIDIA Isaac, the processing requirements far exceed what we have. So part of this would be determining whether this is even feasible on a model rocket at all.
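As a first-pass feasibility check, just timing feature extraction on the Pi would tell us a lot. A rough sketch using OpenCV's ORB detector (the frame size and feature count are arbitrary assumptions):

```python
# Rough benchmark: how fast can the Pi detect/describe ORB features?
# If this step alone can't keep up with the frame rate, full VO is off
# the table. Frame size and feature count are assumed for illustration.
import time
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in image

n_frames = 100
start = time.perf_counter()
for _ in range(n_frames):
    keypoints, descriptors = orb.detectAndCompute(frame, None)
elapsed = time.perf_counter() - start

print(f"{n_frames / elapsed:.1f} fps for ORB alone "
      f"({len(keypoints)} keypoints per frame)")
```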
Some robots and even rockets use visual odometry to augment or even replace their gyroscope systems for orientation determination, and potentially for position estimation as well. Since Avionics has a camera anyway, research what a system using this camera feed for visual odometry would look like, and whether it's even possible given our use case and camera views.
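For a sense of scale, the core of a classical monocular VO step is only a handful of OpenCV calls: match features between consecutive frames, estimate the essential matrix, and recover the relative rotation/translation. A minimal sketch below; the intrinsics matrix `K` is a placeholder (a real system would need calibration of our actual camera, scale recovery, and much more outlier handling):

```python
# Minimal monocular visual odometry step with OpenCV:
# match ORB features between two frames, then recover relative pose.
# K is a placeholder -- real values would come from calibrating the
# innomaker camera, not from this sketch.
import cv2
import numpy as np

K = np.array([[700.0,   0.0, 320.0],   # fx,  0, cx  (assumed values)
              [  0.0, 700.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_pose(prev_gray, curr_gray):
    """Estimate rotation R and unit-scale translation t between frames."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects the worst mismatches; note monocular VO only gives
    # the direction of translation, not its magnitude (scale is unobservable).
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

Even this bare-bones loop has to run detection, matching, and RANSAC every frame, which is exactly where the RPi's spare compute becomes the question.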