AeroRust / WorkingGroup


Project: Drone demo #16

Open elpiel opened 4 years ago

elpiel commented 4 years ago

Non-Critical Aeronautics / Drones / AeroRust Drones

Current work

Many of you may already know that we are currently working with Parrot drones and building a Rust SDK called arsdk-rs, which lets you connect to, control, and use other features of the drone from Rust.
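To give a feel for what this enables, here is a minimal connect-and-fly sketch. The exact API is an assumption on my part (treat `Drone::connect`, `take_off`, and `landing` as placeholders), so check the arsdk-rs README for the real entry points:

```rust
// Rough sketch only: treat every arsdk-rs name below as an assumption
// and check the crate's README/examples for the real API surface.
use std::net::{IpAddr, Ipv4Addr};
use std::thread::sleep;
use std::time::Duration;

use arsdk_rs::prelude::*; // assumed prelude re-exporting `Drone`

fn main() -> anyhow::Result<()> {
    // Parrot drones expose a Wi-Fi access point; the controller usually
    // reaches the drone at a fixed address on that network.
    let drone_addr = IpAddr::V4(Ipv4Addr::new(192, 168, 42, 1));

    // Assumed API: perform the ARSDK connection handshake with the drone.
    let drone = Drone::connect(drone_addr)?;

    // Assumed API: take off, hover for a few seconds, then land.
    drone.take_off()?;
    sleep(Duration::from_secs(5));
    drone.landing()?;

    Ok(())
}
```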

Proposal

With this in mind, I want to propose creating a demo project that integrates different components into a single system, showing what is currently possible with Rust and open source.

Social part

The other aspect of the project is the opportunity to connect and form bonds with other communities, such as the Rust-ML (machine learning) WG and Rust-CV (Computer Vision).

Tech stack

Many components are already in place that we can integrate to build this project.

Components (legend: :heavy_check_mark: = already available):


- :heavy_check_mark: Sphinx simulation provided by Parrot
- o0Ignition0o/airsim-rs: simulation integration for much more complete and advanced simulations

Objectives

vadixidav commented 4 years ago

@elpiel asked me to write up what capabilities we have today at Rust CV. You can see that now in our goals section.

I think the simplest way to use Rust CV today to do something interesting with the drone is the technique found here: https://github.com/rust-cv/vslam-sandbox/blob/0a0bd760ceee2da38f0626a8a8678b9e98a657e1/src/main.rs. This code performs some very rudimentary indirect visual odometry. We can make it a bit more robust by allowing it to use older frames in case of failure. This will let you approximate how much the drone has moved and rotated on each frame.
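To make the shape of that loop concrete, here is a minimal sketch of frame-to-frame matching in the spirit of vslam-sandbox. `Akaze::extract` and the Hamming `distance` on `BitArray` are real rust-cv APIs (signatures from memory), while `camera_frames` and `estimate_relative_pose` are hypothetical placeholders for the drone video feed and the eight-point + ARRSAC stage that vslam-sandbox implements:

```rust
use akaze::{Akaze, KeyPoint};
use bitarray::BitArray;
use image::DynamicImage;

/// Hypothetical relative motion between two frames.
#[derive(Debug)]
struct RelativePose;

/// Hypothetical: yields decoded frames from the drone's video stream.
fn camera_frames() -> impl Iterator<Item = DynamicImage> {
    std::iter::empty()
}

/// Brute-force Hamming matching with a Lowe-style ratio test.
fn match_descriptors(a: &[BitArray<64>], b: &[BitArray<64>]) -> Vec<(usize, usize)> {
    let mut matches = Vec::new();
    for (ia, da) in a.iter().enumerate() {
        let mut best = (u32::MAX, 0usize);
        let mut second = u32::MAX;
        for (ib, db) in b.iter().enumerate() {
            let d = da.distance(db);
            if d < best.0 {
                second = best.0;
                best = (d, ib);
            } else if d < second {
                second = d;
            }
        }
        // Keep only distinctive matches (best clearly beats second best).
        if (best.0 as f32) < 0.7 * second as f32 {
            matches.push((ia, best.1));
        }
    }
    matches
}

/// Hypothetical: eight-point essential-matrix estimation inside ARRSAC,
/// decomposed into a relative rotation + unit-scale translation.
fn estimate_relative_pose(
    _matches: &[(usize, usize)],
    _prev: &[KeyPoint],
    _curr: &[KeyPoint],
) -> Option<RelativePose> {
    todo!("see vslam-sandbox for a working implementation")
}

fn main() {
    let akaze = Akaze::default();
    let mut prev: Option<(Vec<KeyPoint>, Vec<BitArray<64>>)> = None;

    for frame in camera_frames() {
        // Detect AKAZE keypoints and their binary descriptors.
        let (kps, descs) = akaze.extract(&frame);
        if let Some((prev_kps, prev_descs)) = &prev {
            let matches = match_descriptors(prev_descs, &descs);
            if let Some(pose) = estimate_relative_pose(&matches, prev_kps, &kps) {
                // Rotation is recovered exactly; translation only up to scale.
                println!("relative motion this frame: {:?}", pose);
            }
        }
        prev = Some((kps, descs));
    }
}
```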

Since the drone probably already has basic equipment onboard to detect motion, this could be used in a different way. You could use a reference image of a particular object with some features on it, such as a large sheet of paper (or, since this is a simulation, just a flat rendering of an image). The drone could then continuously estimate its pose relative to this object.

Alternatively, you could trigger the drone to hold steady, at which point it remembers a snapshot from that location. From then on it can hold that position, since it can work out how its pose has changed by comparing each new frame against that snapshot. This would require a human to have a trigger that tells the drone "hold position". It would provide interesting value: the onboard sensors (accelerometers and gyroscopes) can't tell you absolute position, but computer vision can, as long as objects in the scene do not move (and even if they do, that likely won't throw it off unless the change is large).
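Sketched as a control loop, the "hold position" idea could look like the following. Everything here is hypothetical glue: `capture_frame`, `relative_pose`, and `send_correction` stand in for the drone video feed, the pose estimation sketched above, and an arsdk-rs piloting command respectively:

```rust
use std::time::Duration;

use image::DynamicImage;

/// Hypothetical pose of the camera relative to the stored snapshot.
struct RelativePose {
    yaw: f32,
    translation: [f32; 3],
}

/// Hypothetical: grab the latest frame from the drone's camera.
fn capture_frame() -> DynamicImage {
    todo!("drone video feed")
}

/// Hypothetical: feature matching + pose estimation, as sketched above.
fn relative_pose(_reference: &DynamicImage, _current: &DynamicImage) -> Option<RelativePose> {
    todo!("eight-point + ARRSAC against the snapshot")
}

/// Hypothetical: send a small velocity/attitude correction to the drone.
fn send_correction(_x: f32, _y: f32, _z: f32, _yaw: f32) {
    todo!("arsdk-rs piloting command")
}

fn hold_position() {
    // 1. Human triggers "hold": remember a snapshot of the current view.
    let snapshot = capture_frame();

    loop {
        let frame = capture_frame();
        // 2. Estimate how far we have drifted from the snapshot pose.
        let Some(drift) = relative_pose(&snapshot, &frame) else {
            // Tracking failed (e.g. too few feature matches); rely on the
            // onboard IMU until features are visible again.
            continue;
        };
        // 3. Command a small correction opposite to the drift. A simple
        //    proportional controller is enough for a demo.
        let gain = 0.5;
        send_correction(
            -gain * drift.translation[0],
            -gain * drift.translation[1],
            -gain * drift.translation[2],
            -gain * drift.yaw,
        );
        std::thread::sleep(Duration::from_millis(50));
    }
}
```

One caveat worth noting: the translation recovered from two views alone is only known up to scale, which is one more reason fusing the onboard sensors mentioned above would help.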

There are other interesting things you could do with other algorithms, but I think this is the simplest way to integrate computer vision into the simulation today. The easiest of the above options is likely the "hold in place" concept, since no data needs to be prepared in advance and all the information comes from the drone camera alone.

Let me know if you are interested in me writing up a demo of this capability, and I can do that right away.