A multi-service edge-AI architecture based on self-supervised learning

    Paper ID

    87460

    Authors

    • Enrico Magli
    • Simone Angarano
    • Stefano Bassetti
    • Tiziano Bianchi
    • Piero Boccardo
    • Silvia Bucci
    • Marcello Chiaberge
    • Gabriele Inzerillo
    • Davide Lisi
    • Matteo Mergè
    • Cristina Monaco
    • Martina Pasturensi
    • Davide Piccinini
    • Diego Valsesia
    • Giacomo Zema

    Company

    Politecnico di Torino; Ithaca; Argotec; ASI - Italian Space Agency

    Country

    Italy

    Year

    2024

    Abstract

    In recent years, machine learning has made significant strides, enabling the extraction of information from data and images with excellent precision. Onboard image analysis on satellites enables the selection of relevant images to transmit to ground stations, as well as the detection of events of interest and the generation of alerts for potentially dangerous events such as fires and floods. However, several factors have thus far limited the adoption of Edge-AI on satellites, including the energy consumption of computing accelerators and the limited availability of annotated data; in practice, the only application that has gained traction is cloud screening. This paper presents the work done in the framework of the $E=(AI)^2$ project, funded by the Italian Space Agency and carried out by Politecnico di Torino, Ithaca and Argotec. The project aims to develop AI methodologies for onboard processing of optical images. In particular, the proposed Edge-AI system consists of a fast and efficient neural network applied to radiometrically corrected multispectral images directly onboard a satellite. The architecture has two main components: a backbone feature extractor, which generates semantically meaningful, multi-purpose feature representations of the input multispectral images, and a set of task-specific heads, each devoted to one application (such as image classification, image segmentation, or object detection). Since the backbone must generate features that generalize across a variety of tasks, it will be trained with a self-supervised learning procedure, which mostly requires unlabeled data, apart from a small labeled dataset for each application head. The architecture is also general enough that new task-specific heads can be developed for other tasks without retraining the entire model from scratch and without major changes to the model code. The architecture is specifically designed to run onboard, and hence it is based on efficient operations and modules that are further quantized and optimized to achieve high throughput and low energy consumption. The applications selected for the project are cloud segmentation, fire detection and flood detection, representing practical use cases with very low latency requirements for early warning and damage assessment. The deep learning models will be trained with data derived from Copernicus. The project will demonstrate an implementation of the architecture on space-relevant hardware that integrates a SoC (x86 CPU with integrated GPU) with an external AI accelerator.
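
To make the shared-backbone, multi-head architecture described in the abstract concrete, here is a minimal PyTorch sketch. All module names, channel counts, and the number of spectral bands are illustrative assumptions, not the project's actual design.

```python
# Minimal sketch of a shared-backbone, multi-head layout; names,
# channel counts, and band counts are illustrative assumptions.
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Multi-purpose feature extractor for multispectral images."""
    def __init__(self, in_bands: int = 13, feat_ch: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)

class SegmentationHead(nn.Module):
    """Per-pixel head, e.g. for cloud, fire, or flood masks."""
    def __init__(self, feat_ch: int = 64, n_classes: int = 2):
        super().__init__()
        self.classifier = nn.Conv2d(feat_ch, n_classes, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.classifier(feats)

class MultiServiceModel(nn.Module):
    """One shared backbone feeding several task-specific heads.

    A new task is added by registering a new head; the backbone
    is not retrained from scratch.
    """
    def __init__(self, backbone: Backbone, heads: dict[str, nn.Module]):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleDict(heads)

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        feats = self.backbone(x)        # shared representation
        return self.heads[task](feats)  # task-specific output

backbone = Backbone()
model = MultiServiceModel(backbone, {
    "cloud": SegmentationHead(),
    "fire": SegmentationHead(),
    "flood": SegmentationHead(),
})
x = torch.randn(1, 13, 256, 256)       # one 13-band multispectral tile
cloud_mask = model(x, task="cloud")    # (1, 2, 256, 256) logits
```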
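
The abstract does not name the self-supervised method used to pretrain the backbone; a SimCLR-style contrastive objective (NT-Xent) is one common choice, shown here purely as an assumption. The `aug`, `unlabeled_loader`, and `optimizer` names in the commented loop are hypothetical.

```python
# Hypothetical self-supervised pretraining objective for the backbone;
# the project's actual SSL procedure is not specified in the abstract.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """NT-Xent contrastive loss between two augmented views of a batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D) unit vectors
    sim = z @ z.t() / tau                         # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))         # exclude self-pairs
    # Row i < n has its positive at i + n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def embed(backbone, x):
    """Pool backbone features to one vector per image."""
    return backbone(x).mean(dim=(2, 3))           # global average pool

# for x in unlabeled_loader:                      # unlabeled Copernicus tiles
#     z1, z2 = embed(backbone, aug(x)), embed(backbone, aug(x))
#     loss = nt_xent(z1, z2)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```

After pretraining, each application head would be fitted on its small labeled dataset while reusing the frozen (or lightly fine-tuned) backbone, matching the abstract's description.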
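
Likewise, the abstract only states that modules are quantized and optimized; one possible post-training quantization flow, using PyTorch's FX graph mode with the fbgemm int8 backend (consistent with the x86 target mentioned above), is sketched here. `calibration_tiles` is a hypothetical iterable of sample inputs.

```python
# One possible post-training quantization flow; the project's actual
# optimization toolchain is an assumption, not confirmed by the abstract.
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

model.eval()
qconfig_mapping = get_default_qconfig_mapping("fbgemm")  # int8, x86 backend
example = torch.randn(1, 13, 256, 256)

prepared = prepare_fx(model.backbone, qconfig_mapping, example_inputs=(example,))
with torch.no_grad():                        # calibrate observers on sample data
    for tile in calibration_tiles:           # hypothetical calibration set
        prepared(tile)
quantized_backbone = convert_fx(prepared)    # int8 weights and activations
```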