Meet the Research

A key aspect of the Kanyini mission is a portfolio of research and development projects managed by SmartSat CRC, including projects in artificial intelligence, onboard processing and machine learning. These aim to develop innovative applications that address challenges in agriculture, water management and the environment.

ONBOARD HYPERSPECTRAL AI: CAL, PANOPTIC SEGMENTATION, ESTIMATION

This project will develop capabilities for onboard AI processing and analysis of hyperspectral imagery on smart satellite platforms. In particular, it will tackle the key modules of onboard AI processing of hyperspectral data: calibration, segmentation, fine-grained analysis and joint space-ground inference.

This provides the ability to process the rich, multidimensional spectral data onboard in an end-to-end manner, creating new opportunities for accurate, efficient and reliable automated detection and classification of natural phenomena and human activities over wide areas of Earth.
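
As a rough, hypothetical sketch only (the module interfaces, band indices and thresholds below are illustrative assumptions, not the project's design), an onboard pipeline of this kind could be organised as a chain of swappable stages, with only a compact summary passed down for joint space-ground inference:

```python
# Minimal sketch of a modular onboard hyperspectral pipeline (illustrative only;
# the stage names mirror the modules listed above, not the project's actual code).
import numpy as np

def calibrate(cube, dark, gain):
    """Radiometric calibration: subtract a dark frame and apply per-band gains."""
    return (cube - dark) * gain

def segment(cube, threshold=0.5):
    """Toy segmentation: label pixels via a simple band-ratio score."""
    score = cube[..., 10] / (cube[..., 40] + 1e-6)   # hypothetical band indices
    return (score > threshold).astype(np.uint8)      # 1 = class of interest

def fine_grained_analysis(cube, mask):
    """Per-segment statistics that a downstream model could refine."""
    pixels = cube[mask == 1]
    return {"n_pixels": int(mask.sum()),
            "mean_spectrum": pixels.mean(axis=0) if pixels.size else None}

def onboard_pipeline(cube, dark, gain):
    cal = calibrate(cube, dark, gain)
    mask = segment(cal)
    summary = fine_grained_analysis(cal, mask)
    # Only this compact summary (not the full cube) would be passed on for
    # joint space-ground inference.
    return summary

# Synthetic 64x64 scene with 100 spectral bands.
cube = np.random.rand(64, 64, 100).astype(np.float32)
dark = np.zeros_like(cube)
gain = np.ones(100, dtype=np.float32)
print(onboard_pipeline(cube, dark, gain)["n_pixels"])
```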

PROJECT PARTIES:



ROBUST PREDICTIVE AI: ADVANCED SAT HS BAND REGISTRATION AND RELIABLE EVENT PREDICTION

This project aims to develop a novel deep learning pipeline for robust and reliable prediction of events such as landslides, flooding and bushfires from hyperspectral satellite imagery. Its output will be robust and trustworthy predictive AI capabilities that can accurately predict events further in advance using better-aligned hyperspectral data cubes.

This project is a collaboration with the European Space Agency's Phi-lab and builds on its algorithmic strengths and onboard processing experience with Phi-Sat 1. It has both onboard and ground-only processing elements. While its primary use will be in disaster management applications, including early fire detection and volcanic eruption monitoring, it has other end-use applications such as predicting the impact of urban growth on vegetation and road infrastructure, as well as security and defence-related event prediction.
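
As a loose illustration of the band-alignment step mentioned above (phase correlation is used here purely as a stand-in, the project may well use a learned registration method, and the synthetic data and parameters are assumptions), bands of a hyperspectral cube can be co-registered like this:

```python
# Minimal sketch of band-to-band registration for a hyperspectral cube using
# phase correlation (an assumption, not the project's pipeline).
# Requires scikit-image and scipy.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_bands(cube, reference_band=0):
    """Align every spectral band of an (H, W, B) cube to the reference band."""
    ref = cube[..., reference_band]
    aligned = np.empty_like(cube)
    for b in range(cube.shape[-1]):
        # Estimate the (row, col) translation that registers band b to the reference.
        offset, _, _ = phase_cross_correlation(ref, cube[..., b], upsample_factor=10)
        aligned[..., b] = nd_shift(cube[..., b], offset, order=1, mode="nearest")
    return aligned

# Synthetic cube: band 1 is a shifted copy of band 0.
base = np.random.rand(128, 128).astype(np.float32)
cube = np.stack([base, np.roll(base, (2, -3), axis=(0, 1))], axis=-1)
aligned = register_bands(cube)
print("mean abs residual after alignment:", np.abs(aligned[..., 1] - base).mean())
```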


DEVELOPING CAPABILITY TO ASSESS LIVE CORAL COVER AND SEAGRASS SPECIES USING SATELLITE BASED HYPERSPECTRAL IMAGERY

The ability to accurately map live coral cover and differentiate among seagrass species is crucial for monitoring the health of coral reef habitats. It requires detailed spectral information that hyperspectral sensors can provide, unlike the more widely used multispectral sensors. Despite global mapping efforts such as the Allen Coral Atlas, limitations stemming from the use of multispectral sensors have hindered the differentiation of seagrass species and the assessment of live coral cover.

With the launch of more hyperspectral satellites since 2021, there is now an opportunity to overcome these challenges. This project aims to leverage high-quality archived hyperspectral data and field observations to evaluate the potential of these satellites for detailed mapping of coral and seagrass habitats. The research will help develop remote sensing frameworks capable of identifying specific coral and seagrass characteristics, setting the stage for future large-scale mapping efforts and providing valuable insights for scientists and reef managers worldwide.
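
For illustration only, one classical way to exploit this detailed spectral information is spectral-angle matching of image pixels against reference spectra; the sketch below is a generic example with made-up class names, band count and data, not the project's method:

```python
# Minimal sketch of spectral-angle matching of hyperspectral pixels against
# reference spectra (illustrative assumptions throughout).
import numpy as np

def spectral_angle(pixels, references):
    """Angle (radians) between each pixel spectrum and each reference spectrum.

    pixels:     (N, B) array of N pixel spectra with B bands
    references: (C, B) array of C reference spectra (e.g. field measurements)
    returns:    (N, C) array of spectral angles
    """
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    cos = np.clip(p @ r.T, -1.0, 1.0)
    return np.arccos(cos)

classes = ["live coral", "dead coral/rubble", "seagrass A", "seagrass B"]
bands = 150
references = np.abs(np.random.rand(len(classes), bands))   # stand-in for a spectral library
pixels = np.abs(np.random.rand(1000, bands))               # stand-in for image pixels

angles = spectral_angle(pixels, references)
labels = angles.argmin(axis=1)    # assign each pixel to its closest reference spectrum
print({c: int((labels == i).sum()) for i, c in enumerate(classes)})
```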

SMALL SAT ENERGY-EFFICIENT ONBOARD AI PROCESSING OF HYPERSPECTRAL IMAGERY FOR EARLY FIRE-SMOKE DETECTION

This research aims to provide an energy-efficient, AI-based solution for on-board processing of hyperspectral imagery that supports automated early detection of fire smoke, while meeting the on-board processing limitations and uplink/downlink data transfer restrictions of Kanyini.

Expected outputs of this project include on-board and ground AI algorithms for fire smoke detection, applicable to a variety of multispectral and hyperspectral imagery datasets.

PROJECT PARTIES:

EM VERIFICATION OF ONBOARD SMOKE DETECTION MODEL AND ALGORITHMS

This project builds on the earlier project Small Satellite Energy-Efficient On-Board AI Processing of Hyperspectral Imagery for Early Fire-Smoke Detection, which used simulated imagery and a low-cost emulation system to replicate HyperScout 2 AI onboard processing for fire smoke detection.

Specifically, it addressed methods to reduce raw data volumes by using AI processing to automatically detect smoke locations (identifying and separating out cloud and landscape imagery) before downlink transfer. The imagery was partitioned into tiles and fed into the AI model, with only the tiles containing smoke then downlinked to the ground for further processing. This greatly enhanced downlink transfer efficiency, which is crucial for early fire detection.

This second phase of the project aims to adjust and test all onboard imagery processing tasks and verify them for error-free deployment on the Kanyini/HyperScout 2 system.
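
A minimal sketch of that tile-and-filter idea is shown below; the tile size, threshold and scoring function are placeholder assumptions standing in for the project's trained onboard model:

```python
# Partition a scene into tiles, score each tile with a (stand-in) smoke
# classifier, and queue only smoke-flagged tiles for downlink.
import numpy as np

TILE = 64  # pixels per tile edge (hypothetical)

def tile_scene(cube, tile=TILE):
    """Yield (row, col, tile_cube) blocks of an (H, W, B) hyperspectral cube."""
    h, w, _ = cube.shape
    for r in range(0, h - h % tile, tile):
        for c in range(0, w - w % tile, tile):
            yield r, c, cube[r:r + tile, c:c + tile, :]

def smoke_score(tile_cube):
    """Stand-in classifier: a real system would run a trained onboard model."""
    return float(tile_cube.mean())  # placeholder score for synthetic data

def select_for_downlink(cube, threshold=0.5):
    """Return only the tiles whose smoke score exceeds the threshold."""
    return [(r, c, t) for r, c, t in tile_scene(cube) if smoke_score(t) > threshold]

scene = np.random.rand(512, 512, 40).astype(np.float32)   # synthetic scene
queued = select_for_downlink(scene)
print(f"{len(queued)} of {(512 // TILE) ** 2} tiles queued for downlink")
```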

PROJECT PARTIES:

SPACE ANALYTICS ENGINE FOR ON-BOARD MACHINE LEARNING AND MULTIMODAL DATA FUSION

The current approach to Intelligence, Surveillance and Reconnaissance (ISR) satellite operations, where data collection is passive and processing occurs on the ground, leads to significant delays and hinders real-time coordination between satellites and end users. Without real-time data processing and analysis in space, opportunities for more intelligent, adaptive sensing are missed.

The project aims to develop advanced algorithms and workflows for on-board machine learning on nanosatellites, utilizing multi-modal sensors for ISR missions. It addresses challenges such as the limited computing power of satellites compared to ground hardware, the need for model optimization, and the difficulty of updating models in space. The project's goal is to produce a novel space analytics engine that is reconfigurable after launch, which significantly increases the value proposition of on-board processing.
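
Purely as an illustration of the reconfigurable, multimodal idea (the framework, architecture and dimensions below are assumptions, not the project's engine), a late-fusion model with a swappable task head might look like this:

```python
# Minimal sketch of late fusion of two sensor modalities with a replaceable
# task head, loosely illustrating "reconfigurable after launch".
import torch
import torch.nn as nn

class FusionEngine(nn.Module):
    def __init__(self, img_dim=256, rf_dim=64, fused_dim=128, n_classes=5):
        super().__init__()
        self.img_encoder = nn.Sequential(nn.Linear(img_dim, fused_dim), nn.ReLU())
        self.rf_encoder = nn.Sequential(nn.Linear(rf_dim, fused_dim), nn.ReLU())
        # The task head is kept separate so it can be swapped for an uplinked
        # replacement without touching the encoders.
        self.head = nn.Linear(2 * fused_dim, n_classes)

    def forward(self, img_feat, rf_feat):
        fused = torch.cat([self.img_encoder(img_feat), self.rf_encoder(rf_feat)], dim=-1)
        return self.head(fused)

engine = FusionEngine()
logits = engine(torch.randn(4, 256), torch.randn(4, 64))   # batch of 4 synthetic samples
print(logits.shape)                                        # torch.Size([4, 5])

# Reconfiguration after launch: replace only the task head (e.g. for a new task).
engine.head = nn.Linear(2 * 128, 3)
```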

PROJECT PARTIES: