georgeslabreche / opssat-smartcam

On November 8, 2020, this project achieved the first use of deep convolutional neural networks (CNN) on-board a spacecraft.
https://www.esa.int/opssat
MIT License

OPS-SAT SmartCam Logo

First in Space!

On November 8, 2020, this project achieved the first use of deep convolutional neural networks (CNN) on-board a spacecraft. Many other firsts followed; here are some highlights:

Background

The SmartCam software on-board the OPS-SAT-1 spacecraft is the first use of Artificial Intelligence (AI) by the European Space Agency (ESA) for autonomous planning and scheduling on-board a flying mission. The software's geospatial capability autonomously triggers image acquisitions when the spacecraft is above areas of interest.

Inferences from on-board Machine Learning (ML) models classify the captured pictures for downlink prioritization. This capability is enabled by the spacecraft's powerful processors, which can run open-source software originally developed for terrestrial systems, notably the GEOS Geometry Engine for geospatial computations and the TensorFlow Lite framework for ML model inferences. Additional image classification can be enabled with unsupervised learning using k-means clustering. These features provide new perspectives on how space operations can be designed for future missions given greater in-orbit compute capabilities.

The SmartCam's image classification pipeline is designed to be 'open', allowing it to be constructed from crowdsourced, trained ML models. These third-party models can be uplinked to the spacecraft and chained into a sequence with configurable branching rules for hyper-specialized classification and subclassification through an autonomous decision-making tree. This mechanism enables open innovation methods to extend on-board ML beyond its original mission requirement while stimulating knowledge transfer from established AI communities into space applications. The use of an industry-standard ML framework de-risks and accelerates the development of AI for future missions by broadening OPS-SAT's accessibility to AI experimenters established outside of the space sector.

Third-party executable binaries and scripts can also be injected into the pipeline and needn't be limited to ML and classification operations.

ESA OPS-SAT-1 Spacecraft
Figure 1: The OPS-SAT spacecraft in the clean room with deployed solar arrays (TU Graz).

Citation

We appreciate citations if you reference this work in your scientific publication. Thank you!

APA

Labrèche, G., Evans, D., Marszk, D., Mladenov, T., Shiradhonkar, V., Soto, T., & Zelenevskiy, V. (2022). OPS-SAT Spacecraft Autonomy with TensorFlow Lite, Unsupervised Learning, and Online Machine Learning. 2022 IEEE Aerospace Conference. https://doi.org/10.1109/AERO53065.2022.9843402.

BibTeX

@inproceedings{labreche2022_ieee_aeroconf,
  title         =   {{OPS-SAT Spacecraft Autonomy with TensorFlow Lite, Unsupervised Learning, and Online Machine Learning}},
  author        =   {Labrèche, Georges and Evans, David and Marszk, Dominik and Mladenov, Tom and Shiradhonkar, Vasundhara and Soto, Tanguy and Zelenevskiy, Vladimir},
  booktitle     =   {{2022 IEEE Aerospace Conference}},
  year          =   {2022},
  doi           =   {10.1109/AERO53065.2022.9843402}
}

Instructions

Table of Contents:

  1. Neural Networks
  2. Contribute
  3. How It Works
  4. Configuration
  5. Image Metadata

1. Neural Networks

The app can use any .tflite neural network image classification model file trained with TensorFlow.

1.1. Inference

The default model's labels are "earth", "edge", and "bad". The SmartCam's image classification program uses the TensorFlow Lite C API for model inference. TensorFlow Lite inference is thus available to any experimenter and is not restricted to image classification.
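
For ground-based experimentation, a model such as the default one can also be exercised from Python. The following is a minimal sketch assuming the tflite_runtime package, a float input tensor, and illustrative file paths; the on-board program itself calls the TensorFlow Lite C API:

import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

# Load the model and query its expected input shape.
interpreter = Interpreter(model_path="models/default/model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
_, height, width, _ = input_details["shape"]

# Resize the image and normalize it with the default input_mean/input_std values.
image = Image.open("input.jpeg").convert("RGB").resize((width, height))
data = (np.asarray(image, dtype=np.float32) - 0.0) / 255.0
interpreter.set_tensor(input_details["index"], np.expand_dims(data, axis=0))

# Run inference and map the highest score to its label.
interpreter.invoke()
scores = np.squeeze(interpreter.get_tensor(output_details["index"]))
labels = [line.strip() for line in open("models/default/labels.txt")]
print(labels[int(np.argmax(scores))], float(np.max(scores)))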

1.2. Training New Models

Scripts and instructions to train new models are available here.

2. Contribute

Ways to contribute:

Join the OPS-SAT community platform and apply to become an experimenter; it's quick and easy!

3. How It Works

The app is designed to run on the Satellite Experimental Processing Platform (SEPP) payload onboard the OPS-SAT spacecraft. The SEPP is a powerful Altera Cyclone V SoC with an 800 MHz CPU clock and 1 GB of DDR3 RAM.

3.1. Overview

The SmartCam's app configuration is set in the config.ini file. The gist of the application's logic is as follows:

  1. Acquire ims_rgb (raw) and png image files using the spacecraft's HD camera.
  2. Create a thumbnail jpeg image.
  3. Create an input jpeg image for the image classifier.
  4. Label the image using the entry point model file specified by entry_point_model in config.ini.
  5. If the applied label is part of the model's labels_keep in config.ini, then label the image further with the next model in the image classification pipeline (see the sketch after this list).
  6. Repeat step 5 until either the applied label is not part of the current model's configured labels_keep or the last model of the pipeline has been applied.
  7. Move the labeled image into the experiment's and the filestore's toGround folders, depending on the keep images and downlink configurations set in config.ini.
  8. Subclassify the labeled images into cluster folders via k-means clustering (or train the clustering model if not enough training images have been collected yet).
  9. Repeat steps 1 through 8 until the image acquisition loop has gone through the number of iterations set by gen_number in config.ini.
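
A minimal, self-contained Python sketch of the label-chaining logic in steps 4 through 6 follows; the models dictionary and the classify() stub are illustrative stand-ins, not the actual smartcam.py internals:

# Illustrative stand-ins for two models configured in config.ini.
models = {
    "default": {"labels_keep": ["earth:kmeans_imgseg", "edge"]},
    "kmeans_imgseg": {"labels_keep": ["features"]},
}

def classify(image_path, model_name):
    # Placeholder for a real TensorFlow Lite or executable-binary inference.
    return "earth" if model_name == "default" else "features"

def run_pipeline(image_path, entry_point_model="default"):
    model_name, applied_labels = entry_point_model, []
    while model_name:
        label = classify(image_path, model_name)
        applied_labels.append(label)
        # Keep the image only if the label appears in labels_keep;
        # a "label:next_model" entry chains the next model in the pipeline.
        next_model = None
        for entry in models[model_name]["labels_keep"]:
            kept_label, _, follow_up = entry.partition(":")
            if kept_label == label:
                next_model = follow_up or None
                break
        else:
            return applied_labels, False  # label not kept: image is discarded
        model_name = next_model
    return applied_labels, True

print(run_pipeline("input.jpeg"))  # (['earth', 'features'], True)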

For certain operations, the app invokes external executable binaries that are packaged with the app. These are included in the bin folder. Their source code is hosted in separate repositories:

3.2. Installation

The app can run on a local development environment (64-bit) as well as onboard the spacecraft's SEPP processor (ARM 32-bit). For the former, the app reads its configuration parameters from the config.dev.ini file whereas for the latter it reads them from the config.ini file.

3.2.1. Local Development Environment

These instructions are written for Ubuntu and were tested on Ubuntu 18.04 LTS. Install the development tools:

sudo apt install python3-dev
sudo apt install virtualenv

Create the symbolic links for the TensorFlow Lite shared objects. Execute the following bash script from the project's home directory:

./scripts/create_local_dev_symlinks.sh

Create a Python virtual environment and install the Python package dependencies. From the project's home directory:

cd home/exp1000/
virtualenv -p python3 venv
source venv/bin/activate
pip install -r requirements.txt

Edit the smartcam.py file to enable debug mode and indicate that the app must execute binaries compiled for the 64-bit local dev environment. These binaries were built with the k8 architecture. Enabling debug mode simply generates mock data (e.g. acquired pictures) in the absence of spacecraft hardware (e.g. the onboard camera).

DEBUG = True
DEBUG_ARCH = 'k8'

Before running the app, make sure that the virtual environment is still active. If it isn't, then re-execute source venv/bin/activate. Run the app:

python3 smartcam.py

3.2.2. Onboard the Spacecraft

The SmartCam app and its dependencies are packaged for deployment as opkg ipk files, ready to be installed on the SEPP via the opkg install command.

3.2.2.1. Dependencies

The SEPP runs the Ångström distribution of Linux. The following packages are dependencies that need to be installed on the SEPP's Linux operating system prior to installing and running the app. They can be found in the deps directory of this repository:

Other dependencies are the tar and split programs, which are invoked by the app.

3.2.2.2. The App

Package the app into an ipk for the engineering model (EM):

./scripts/ipk_create.sh em

Package the app into an ipk for the spacecraft:

./scripts/ipk_create.sh

3.3. Building an Image Classification Pipeline

  1. Each model consists of a labels.txt file and either a .tflite model file or an executable program. These files are located in a model folder under /home/exp1000/models, e.g. /home/exp1000/models/default and /home/exp1000/models/kmeans_imgseg.
  2. If the model is an executable binary then it must implement the following input arguments:
    • -i the file path of the input image
    • -w the write mode of the output image (optional)
  3. Create a config.ini section for each model. Prefix the section name with model_, e.g. [model_default] and [model_kmeans_imgseg].
  4. Each model's config section will specify which label to keep via the labels_keep property. For instance, if the default model can label an image as either "earth", "edge", or "bad", but we only want to keep images classified with the first two labels, then labels_keep = ["earth", "edge"].
  5. If another image classification needs to follow after an image was previously classified with a certain label, then the follow-up model name can be appended after a colon, e.g. ["earth:kmeans_imgseg", "edge"].
  6. The entry point model that will be the first model applied in the image classification pipeline is specified in the config.ini's entry_point_model property, e.g. entry_point_model = default.

See the kmeans-image-segmentation project as an example of point 2; a minimal sketch of such a program's command-line interface is shown below.
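
The following Python skeleton illustrates the -i and -w arguments from point 2. The labeling logic is a placeholder, and how a real program reports its chosen label back to the pipeline is an assumption here (it simply prints it):

import argparse

parser = argparse.ArgumentParser(description="Illustrative pipeline executable")
parser.add_argument("-i", required=True, help="file path of the input image")
parser.add_argument("-w", default=None, help="write mode of the output image (optional)")
args = parser.parse_args()

# Placeholder classification: a real program would analyze args.i here
# (e.g. k-means image segmentation) and optionally write an output image
# according to the -w write mode before reporting a label.
print("features")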

4. Configuration

This section describes the app's configuration parameters in the config.ini file.

4.1. General

4.2. Image Acquisition

There are two types of image acquisition that can be set: Polling or Area-of-Interest (AOI).

4.2.1. Area-of-Interest GeoJSON Files

4.2.2. Camera Settings

4.2.3. Acquisition Type

4.3. Images

4.4. Model

All model properties in the SmartCam's config file are prefixed by the name of the model. For instance, the config section for the default model is [model_default] and its properties are default.tflite_model, default.labels, etc. A model can either be a TensorFlow Lite model (with a tflite_model property) or an executable program (with a bin_model property). The config properties for these two model types differ slightly, e.g.:

[model_default]
default.tflite_model                = /home/exp1000/models/default/model.tflite
default.labels                      = /home/exp1000/models/default/labels.txt
default.labels_keep                 = ["earth:kmeans_imgseg","edge","bad"]
default.input_height                = 224
default.input_width                 = 224
default.input_mean                  = 0
default.input_std                   = 255
default.confidence_threshold        = 0.70

[model_kmeans_imgseg]
kmeans_imgseg.bin_model             = bin/armhf/kmeans/image_segmentation
kmeans_imgseg.labels                = models/kmeans_imgseg/labels.txt
kmeans_imgseg.labels_keep           = ["cloudy_0_25","cloudy_26_50","cloudy_51_75","cloudy_76_100","features"]
kmeans_imgseg.input_format          = jpeg
kmeans_imgseg.write_mode            = 1
kmeans_imgseg.args                  = -k 2 -p BW
kmeans_imgseg.confidence_threshold  = 0.70

4.4.1. TF Lite

A note on what input_mean and input_std are for, taken verbatim from this blogpost:

Since the dataset contains a range of values from 0 to 255, the dataset has to be normalized. Data Normalization is an important preprocessing step which ensures that each input parameter (pixel, in this case) has a similar data distribution. This fastens the process of convergence while training the model. Also Normalization makes sure no one particular parameter influences the output significantly. Data normalization is done by subtracting the mean from each pixel and then dividing the result by the standard deviation. The distribution of such data would resemble a Gaussian curve centered at zero. For image inputs we need the pixel numbers to be positive. So the image input is divided by 255 so that input values are in range of [0,1].
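
With the default model's values above (input_mean = 0, input_std = 255), this normalization reduces to scaling each pixel into [0, 1]; a minimal illustration:

# Per-pixel normalization before inference, using the [model_default] values above.
input_mean, input_std = 0.0, 255.0
pixel_value = 128
normalized = (pixel_value - input_mean) / input_std
print(normalized)  # 0.50196..., i.e. scaled into the [0, 1] range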

4.4.2. Executable Binary

4.5. Clustering

4.6. Raw Image Compression

4.6.1. FAPEC

The FAPEC compression binary is provided by DAPCOM DataServices and is not included in this repository. The compressor can only be used with a valid license (free of charge if exclusively used for OPS-SAT purposes). Free decompression licenses (with some limitations) can be obtained from the DAPCOM website or upon request to fapec@dapcom.es.

4.6.2. Others

No other image compression algorithms are currently supported.

5. Image Metadata

A CSV file is created and downlinked when collect_metadata is set to yes. Each row contains metadata for an image acquired during the SmartCam app's execution; metadata for images that were discarded is also included. The following information is collected:

ESA OPS-SAT-1 Mission Patch
European Space Agency