Copyright (C) 2024 ETH Zürich, University of Bologna. All rights reserved.
Subscribe to our PULP Platform YouTube channel!
If you use PULP-Dronet in an academic or industrial context, please cite the listed publications:
@article{lamberti2024pulpdronetIOTJ,
author={Lamberti, Lorenzo and Bellone, Lorenzo and Macan, Luka and Natalizio, Enrico and Conti, Francesco and Palossi, Daniele and Benini, Luca},
journal={IEEE Internet of Things Journal},
title={Distilling Tiny and Ultra-fast Deep Neural Networks for Autonomous Navigation on Nano-UAVs},
year={2024},
pages={1-1},
keywords={Navigation;Task analysis;Artificial intelligence;Internet of Things;Autonomous robots;Throughput;Collision avoidance;Autonomous Nano-UAV;Embedded Devices;Ultra-low-power;Artificial Intelligence;Mobile and Ubiquitous Systems},
doi={10.1109/JIOT.2024.3431913}
}
@inproceedings{lamberti2022tinypulpdronetAICAS,
author={Lamberti, Lorenzo and Niculescu, Vlad and Barciś, Michał and Bellone, Lorenzo and Natalizio, Enrico and Benini, Luca and Palossi, Daniele},
booktitle={2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS)},
title={Tiny-PULP-Dronets: Squeezing Neural Networks for Faster and Lighter Inference on Multi-Tasking Autonomous Nano-Drones},
year={2022},
pages={287-290},
doi={10.1109/AICAS54282.2022.9869931}
}
@article{niculescu2021pulpdronetJETCAS,
author={Niculescu, Vlad and Lamberti, Lorenzo and Conti, Francesco and Benini, Luca and Palossi, Daniele},
journal={IEEE Journal on Emerging and Selected Topics in Circuits and Systems},
title={Improving Autonomous Nano-drones Performance via Automated End-to-End Optimization and Deployment of DNNs},
year={2021},
pages={1-1},
doi={10.1109/JETCAS.2021.3126259}
}
@inproceedings{niculescu2021pulpdronetAICAS,
author={V. {Niculescu} and L. {Lamberti} and D. {Palossi} and L. {Benini}},
booktitle={2021 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)},
title={Automated Tuning of End-to-end Neural Flight Controllers for Autonomous Nano-drones},
keywords={autonomous navigation, nano-size UAVs, deep learning, CNN, heterogeneous computing, parallel ultra-low power, bio-inspired},
year={2021},
}
@article{palossi2019pulpdronetIoTJ,
author={D. {Palossi} and A. {Loquercio} and F. {Conti} and E. {Flamand} and D. {Scaramuzza} and L. {Benini}},
title={A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones},
journal={IEEE Internet of Things Journal},
doi={10.1109/JIOT.2019.2917066},
ISSN={2327-4662},
year={2019}
}
@inproceedings{palossi2019pulpdronetDCOSS,
author={D. {Palossi} and F. {Conti} and L. {Benini}},
booktitle={2019 15th International Conference on Distributed Computing in Sensor Systems (DCOSS)},
title={An Open Source and Open Hardware Deep Learning-Powered Visual Navigation Engine for Autonomous Nano-UAVs},
pages={604-611},
keywords={autonomous navigation, nano-size UAVs, deep learning, CNN, heterogeneous computing, parallel ultra-low power, bio-inspired},
doi={10.1109/DCOSS.2019.00111},
ISSN={2325-2944},
month={May},
year={2019},
}
PULP-Dronet is a deep learning-powered visual navigation engine that enables autonomous navigation of a pocket-size quadrotor in previously unseen environments. Thanks to PULP-Dronet, the nano-drone can explore its surroundings and avoid collisions, even with dynamic obstacles, in complete autonomy -- no human operator, no ad-hoc external signals, and no remote laptop! All the complex computations are performed directly aboard the vehicle, and fast. The visual navigation engine comprises both a software and a hardware part.
Software component: The software part builds on the earlier DroNet project developed by the Robotics and Perception Group (RPG) at the University of Zürich (UZH). DroNet is a shallow convolutional neural network (CNN) that was used to control a standard-size quadrotor in a set of environments via remote computation.
Hardware components: The hardware core of PULP-Dronet is an ultra-low-power visual navigation module embodied by a pluggable PCB (called a shield or deck) for the Crazyflie 2.0/2.1 nano-drone. The shield features a Parallel Ultra-Low-Power (PULP) GAP8 System-on-Chip (SoC) from GreenWaves Technologies (GWT), an ultra-low-power HiMax HBM01 camera, and off-chip Flash/DRAM memory. This pluggable PCB has evolved over time, from the PULP-Shield, the first custom-made prototype developed at ETH Zürich, to its commercial off-the-shelf evolution, the AI-deck.
The first version of PULP-Dronet gave birth to the PULP-Shield: a lightweight, modular, and configurable printed circuit board (PCB) with a highly optimized layout and a form factor compatible with the Crazyflie nano-sized quadrotor. We developed a general methodology for deploying state-of-the-art deep learning algorithms on ultra-low-power embedded computation nodes, such as a miniaturized drone, and then automated the whole process. This novel methodology allowed us first to deploy DroNet on the PULP-Shield and then to demonstrate execution of the CNN aboard the Crazyflie 2.0 within only 64-284 mW and at a throughput of 6-18 frames per second! Finally, we field-proved our methodology with a closed-loop, fully working demonstration of vision-driven autonomous navigation relying only on onboard resources, within an ultra-low-power budget. See the videos on the PULP Platform YouTube channel: video.
Summary of characteristics:
Hardware: PULP-Shield
Deep learning framework: Tensorflow/Keras
Quantization: 16-bit fixed point, hand-crafted
Deployment tool: AutoTiler (early release, developed in collaboration with GreenWaves Technologies)
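The hand-crafted 16-bit fixed-point arithmetic mentioned above can be illustrated with a minimal sketch. The Q3.12 split (`FRAC_BITS = 12`) is an assumption for illustration only; PULP-Dronet v1 tunes the format per layer.

```python
# Minimal sketch of 16-bit fixed-point (Q3.12) arithmetic; the actual
# per-layer formats used in PULP-Dronet v1 may differ.

FRAC_BITS = 12          # fractional bits (assumed Q3.12 split)
SCALE = 1 << FRAC_BITS  # 4096

def quantize(x: float) -> int:
    """Map a float to a signed 16-bit fixed-point integer, with saturation."""
    q = round(x * SCALE)
    return max(-32768, min(32767, q))

def dequantize(q: int) -> float:
    """Map a fixed-point integer back to a float."""
    return q / SCALE

def fixmul(a: int, b: int) -> int:
    """Fixed-point multiply: the wide product is rescaled back to Q3.12."""
    return (a * b) >> FRAC_BITS

w, x = quantize(0.5), quantize(-1.25)
y = fixmul(w, x)
print(dequantize(y))  # -0.625
```

On GAP8-class cores, such multiplications map onto integer MAC instructions, which is what makes the 16-bit format attractive over floating point.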
We release here, as open source, all our code, hardware designs, datasets, and trained networks.
This follow-up takes advantage of a new commercial off-the-shelf PCB based on the PULP-Shield design, now developed and distributed by Bitcraze: the AI-deck. Our work focused on automating the whole deployment process of a convolutional neural network, which previously required significant complexity reduction and fine-grained hand-tuning to run aboard a flying nano-drone. We therefore introduce methodologies and software tools to streamline and automate all the deployment stages on a low-power commercial multicore SoC, investigating both academic (NEMO + DORY) and industrial (GAPflow by GreenWaves) toolsets. Employing 8-bit fixed-point quantization, we reduced the memory footprint of PULP-Dronet v1 by 2× and achieved a 1.6× speedup in inference time compared to the original hand-crafted CNN, with the same prediction accuracy. Our fully automated deployment methodology allowed us first to deploy DroNet on the AI-deck and then to demonstrate execution of the CNN aboard the Crazyflie 2.1 within only 35-102 mW and at a throughput of 9-17 frames/s!
Summary of characteristics:
Hardware: AI-deck
Deep learning framework: Pytorch
Quantization: 8-bit fixed point, fully automated (with both the academic NEMO and the industrial NNTool)
Deployment: fully automated (with both the academic DORY and the industrial AutoTiler)
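Moving from 16-bit to 8-bit weights halves the memory footprint of each parameter. A minimal sketch of symmetric per-tensor 8-bit quantization follows; this is a common linear scheme in automated tools, not a reproduction of the exact algorithm NEMO or NNTool applies.

```python
# Sketch of symmetric per-tensor 8-bit quantization: a float tensor is mapped
# to int8 with a single scale factor, halving storage vs. 16-bit fixed point.

def choose_scale(values):
    """Pick a scale so the largest magnitude maps to the int8 range edge."""
    return max(abs(v) for v in values) / 127.0

def quantize8(values):
    s = choose_scale(values)
    q = [max(-128, min(127, round(v / s))) for v in values]
    return q, s

def dequantize8(q, s):
    return [qi * s for qi in q]

weights = [0.31, -0.12, 0.04, -0.27]  # illustrative weight values
q, s = quantize8(weights)
approx = dequantize8(q, s)
err = max(abs(a - b) for a, b in zip(weights, approx))
print(q, err)  # per-weight error stays below half a quantization step (s/2)
```

In practice the automated flows also fold batch-norm parameters and calibrate activation ranges, so the accuracy of the deployed network matches the float model, as reported above.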
We release here, as open source, all our code, hardware designs, and trained networks.
Achieving multi-tasking AI perception on a nano-UAV presents significant challenges. The extremely limited payload restricts nano-UAVs to ultra-low-power microcontroller units with stringent computational and memory constraints, which have so far prevented the deployment of multiple AI tasks onboard. We therefore focus on optimizing and minimizing the AI workloads without compromising the drone's behavior when stressed in real-world testing scenarios. We achieve an 8.5× speedup in inference time compared to PULP-Dronet v2, with an inference throughput of 139 frames/s. Moreover, we develop a methodology for dataset collection on a nano-UAV: we collect unified collision-avoidance and steering information using only the nano-UAV's onboard resources, without depending on external infrastructure. The resulting PULP-Dronet v3 dataset consists of 66k labeled images.
We release all our open-source code here, including the PULP-Dronet v3 dataset, our dataset collection framework, and our trained networks.
Summary of characteristics:
Hardware: AI-deck
Deep learning framework: Pytorch
Quantization: 8-bit fixed point, fully automated with the academic NEMO.
Deployment: fully automated with the academic DORY.
Dataset: custom made, collected with the nano-drone.
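Each dataset sample pairs an image with a steering label and a binary collision label, reflecting the unified collision-avoidance and steering information described above. The loader sketch below is illustrative only: the CSV field names and layout are assumptions, not the actual on-disk format of the PULP-Dronet v3 dataset.

```python
# Illustrative sketch: one record = image path + steering label + collision
# flag. The CSV layout and field names here are hypothetical examples.
import csv
import io
from dataclasses import dataclass

@dataclass
class Sample:
    image_path: str
    steering: float   # steering label, assumed normalized to [-1, 1]
    collision: int    # 1 = obstacle ahead, 0 = path clear

def load_labels(fp) -> list[Sample]:
    """Parse a labels CSV into a list of Sample records."""
    rows = csv.DictReader(fp)
    return [Sample(r["image"], float(r["steering"]), int(r["collision"]))
            for r in rows]

csv_text = """image,steering,collision
frames/000001.jpg,0.10,0
frames/000002.jpg,-0.55,1
"""
samples = load_labels(io.StringIO(csv_text))
print(len(samples), samples[1].collision)  # 2 1
```

Keeping both labels in one record is what lets a single network be trained for the two tasks jointly.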
All files in this repository are original and licensed under Apache-2.0. See LICENSE.
We release the dataset (zenodo.org/records/13348430) under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The licenses of external modules are described in LICENSE_README.md.