
---
description: >-
  Segmentation, Instance Tracking, and data Fusion Using multi-SEnsor imagery
  (SIT-FUSE)
---

🛰️ SIT-FUSE Docs

SIT-FUSE uses self-supervised machine learning (ML) to segment instances of objects in single- and multi-sensor scenes with minimal human intervention, even in low- and no-label environments. It can be used with both image-like and non-image-like data.
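To make the idea concrete, the toy sketch below assigns per-pixel cluster labels to a multi-band scene with no training labels at all. It uses scikit-learn k-means purely as a stand-in for label-free segmentation; it is not SIT-FUSE's actual model or API, which are documented elsewhere in these docs.

```python
# Toy illustration of label-free, per-pixel segmentation by clustering.
# This is NOT SIT-FUSE's implementation; it only sketches the idea of
# assigning every pixel a cluster label without any training labels.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
scene = rng.random((128, 128, 6))      # stand-in for a 6-band radiance scene

h, w, bands = scene.shape
pixels = scene.reshape(-1, bands)      # one feature vector per pixel

# Cluster the per-pixel spectra; the cluster IDs act as a segmentation map.
kmeans = MiniBatchKMeans(n_clusters=20, random_state=0)
labels = kmeans.fit_predict(pixels)
segmentation_map = labels.reshape(h, w)

print(segmentation_map.shape, segmentation_map.max())
```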

Currently, this technology is being used with remotely sensed Earth data to identify objects of interest, including:

Figure 1 depicts the full SIT-FUSE workflow, and Figures 2 and 3 show segmentation maps and the information extracted for instance tracking across scenes. SIT-FUSE’s multi-sensor fire and smoke segmentation detects anomalous observations from instruments with varying spatial and spectral resolutions, effectively creating a sensor web from observations across multiple satellite-based and suborbital missions. The framework’s output also supports smoke plume and fire front tracking, a capability currently under development by the SIT-FUSE team.
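As a rough illustration of what scene-to-scene instance tracking can look like, the sketch below links binary object masks across two scenes by intersection-over-union (IoU). This is a simplified, hypothetical matching scheme, not the tracking method the SIT-FUSE team is developing.

```python
# Minimal sketch of linking object masks across consecutive scenes by
# intersection-over-union (IoU). Illustrative only: SIT-FUSE's own
# tracking approach is under active development and may differ.
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 0.0

def match_instances(prev_masks, curr_masks, threshold=0.3):
    """Greedily match instances in the current scene to the previous scene."""
    matches = {}
    for j, curr in enumerate(curr_masks):
        scores = [iou(prev, curr) for prev in prev_masks]
        best = int(np.argmax(scores)) if scores else -1
        if best >= 0 and scores[best] >= threshold:
            matches[j] = best  # current instance j continues previous instance `best`
    return matches

# Two toy scenes, each with one binary instance mask that drifts slightly.
prev = [np.zeros((64, 64), bool)]
prev[0][10:30, 10:30] = True
curr = [np.zeros((64, 64), bool)]
curr[0][12:32, 12:32] = True
print(match_instances(prev, curr))  # {0: 0}
```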

Figure 1. The flow diagram for SIT-FUSE


Figure 2. The first row contains scenes from different instruments/instrument sets used as input. The second row shows SIT-FUSE’s output segmentation maps for each input scene, and the third row shows the retrieved objects of interest, in this case fire and smoke.


Figure 3. Each four-image set is generated from a separate GOES-17 scene over an observed fire in 2019. The top row of each set depicts radiances and the associated SIT-FUSE clustering output. The second row shows the radiances with an overlay of the subset of clusters assigned to the contexts of smoke and fire. The bottom row shows the input radiances with shape approximations for smoke and fire generated via the OpenCV contour functionality. The green arrows depict the products that can be used for instance tracking. For cross-instrument instance tracking, we will use contrastive learning to map the instance signatures across the different domains.
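The sketch below illustrates the post-processing step described in this caption under stated assumptions: the cluster map and the cluster IDs assigned to the smoke context are placeholders, and the shape approximation uses standard OpenCV contour calls rather than SIT-FUSE's exact code.

```python
# Hedged sketch of the Figure 3 post-processing: subset the cluster map
# to the cluster IDs associated with a context (e.g. smoke), then
# approximate object shapes with OpenCV contours. The cluster map and
# cluster IDs here are placeholders, not real SIT-FUSE output.
import cv2
import numpy as np

rng = np.random.default_rng(0)
cluster_map = rng.integers(0, 20, size=(128, 128), dtype=np.int32)  # stand-in output
smoke_clusters = [3, 7, 12]                                         # hypothetical context subset

# Binary mask of pixels whose cluster label falls in the context subset.
mask = np.isin(cluster_map, smoke_clusters).astype(np.uint8) * 255

# Extract contours and simplify them into polygonal shape approximations.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
shapes = [cv2.approxPolyDP(c, 2.0, True) for c in contours]

print(f"{len(shapes)} candidate smoke shapes")
```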


Recent Talks:

{% embed url="https://vimeo.com/771105424/c1379bc387" %} 2022 ECMWF–ESA Workshop on Machine Learning for Earth Observation and Prediction {% endembed %}

{% embed url="https://www.youtube.com/watch?v=-cYSpBQVQi4" %} 2022 TIES Annual Meeting {% endembed %}

References: