
In-Space Computing for IoT Data Processing via Low Earth Orbit Satellites / Shinde, Swapnil Sadashiv; Guruvayoorappan, Gayathri; De Cola, Tomaso; Tarchi, Daniele. - ELECTRONIC. - (2025), pp. 1171-1176. (2025 IEEE International Conference on Communications Workshops (ICC Workshops), Montreal, Canada, 08-12 June 2025) [10.1109/iccworkshops67674.2025.11162441].

In-Space Computing for IoT Data Processing via Low Earth Orbit Satellites

Tarchi, Daniele
2025

Abstract

Internet of Things (IoT) technology has become widespread in numerous domains, especially in remote regions where resources are scarce. Its deployment has been facilitated by distributed edge computing (EC) systems spanning terrestrial to aerospace environments. This paper focuses on optimizing data routing, the selection of computing nodes, and buffering policies for processing data in space by exploiting Low Earth Orbit (LEO) satellites. We introduce a constrained optimization problem to reduce the combined energy and latency costs associated with space data processing. This problem is structured as a hierarchical decision-making model using the Markov Decision Process (MDP) methodology and is addressed through Reinforcement Learning (RL) techniques. The proposed hierarchical RL (HRL) framework employs two RL agents leveraging Deep Q-Networks (DQN) to enhance data processing by optimizing node selection and improving buffering strategies for task execution. Simulation results show that the proposed framework outperforms existing methods in latency and energy efficiency.
2025 IEEE International Conference on Communications Workshops (ICC Workshops)
Montreal, Canada
08-12 June 2025
Goal 9: Industry, Innovation, and Infrastructure
Goal 13: Climate action
Goal 17: Partnerships for the goals
Shinde, Swapnil Sadashiv; Guruvayoorappan, Gayathri; De Cola, Tomaso; Tarchi, Daniele
Files in this record:
In-Space_Computing_for_IoT_Data_Processing_via_Low_Earth_Orbit_Satellites.pdf

Access: Closed access
Type: Publisher's PDF (Version of record)
License: All rights reserved
Size: 1.51 MB
Format: Adobe PDF

Documents in FLORE are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1436133
Citations
  • Scopus: 0