Hierarchical Reinforcement Learning for Multi-layer Multi-Service Non-Terrestrial Vehicular Edge Computing / Shinde, Swapnil Sadashiv; Tarchi, Daniele. - In: IEEE TRANSACTIONS ON MACHINE LEARNING IN COMMUNICATIONS AND NETWORKING. - ISSN 2831-316X. - Electronic. - 2 (2024), pp. 1045-1061. [DOI: 10.1109/TMLCN.2024.3433620]

Hierarchical Reinforcement Learning for Multi-layer Multi-Service Non-Terrestrial Vehicular Edge Computing

Tarchi, Daniele
2024

Abstract

Vehicular Edge Computing (VEC) is a recent advancement within the Internet of Vehicles (IoV). Despite its implementation through Road Side Units (RSUs), VEC frequently falls short of satisfying the escalating demands of Vehicle Users (VUs) for new services, necessitating supplementary computational and communication resources. Non-Terrestrial Networks (NTNs) with onboard Edge Computing (EC) facilities are gaining a central place in the 6G vision, enabling future services to be extended to otherwise uncovered areas. This scenario, composed of a multitude of VUs and terrestrial and non-terrestrial nodes, and characterized by mobility and stringent service requirements, introduces very high complexity. Machine Learning (ML) is a well-suited tool for solving such problems. Integrated Terrestrial and Non-Terrestrial (T-NT) EC, supported by intelligent solutions enabled through ML, can boost VEC capacity, coverage range, and resource utilization. Therefore, building on integrated T-NT EC platforms, we design a multi-EC-enabled vehicular networking platform with a heterogeneous set of services. Next, we model the latency and energy requirements for processing VU tasks through partial computation offloading operations. We aim to minimize the overall latency and energy cost of processing the VU data by selecting the appropriate edge nodes and the offloading amounts. The problem is formulated as a multi-layer sequential decision-making problem through Markov Decision Processes (MDPs). A Hierarchical Reinforcement Learning (HRL) method, implemented through Deep Q-Networks, is used to optimize the network selection and offloading policies. Simulation results are compared with different benchmark methods, showing performance gains in terms of overall cost and reliability.
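To make the modeling concrete, below is a minimal sketch of the kind of weighted latency-energy cost that partial computation offloading formulations typically use; the symbols, the max-based latency coupling, and the normalization weights are illustrative assumptions, not the paper's exact formulation.

```latex
% Sketch of a weighted latency-energy cost for partial offloading.
% D: task data size [bit], C: computation density [CPU cycles/bit],
% f_l, f_n: local and edge-node CPU frequencies [cycles/s],
% r_n: uplink data rate toward edge node n [bit/s], p_t: transmit power [W],
% kappa: effective switched capacitance of the local CPU,
% alpha in [0,1]: offloaded fraction, w in [0,1]: latency-energy trade-off.
\begin{align}
  T(\alpha, n) &= \max\Big\{ \underbrace{\tfrac{(1-\alpha) D C}{f_l}}_{\text{local computing}},\;
                 \underbrace{\tfrac{\alpha D}{r_n} + \tfrac{\alpha D C}{f_n}}_{\text{offloading + edge computing}} \Big\}, \\
  E(\alpha, n) &= \kappa f_l^2 (1-\alpha) D C + p_t \tfrac{\alpha D}{r_n}, \\
  J(\alpha, n) &= w\, \tfrac{T(\alpha, n)}{T_{\max}} + (1-w)\, \tfrac{E(\alpha, n)}{E_{\max}}.
\end{align}
```

Along the same lines, here is a minimal sketch of a hierarchical decision step in which an upper-level Deep Q-Network selects the edge node/layer and a lower-level one selects a discretized offloading amount. All class names, state dimensions, hyperparameters, and the three-layer (RSU / HAP / LEO) split are hypothetical; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class DQN(nn.Module):
    """Simple feed-forward Q-network mapping a state to per-action values."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Hypothetical dimensions: VU state (position, task size, deadline, ...),
# 3 candidate edge layers (RSU / HAP / LEO), 11 offloading levels (0%..100%).
STATE_DIM, N_NODES, N_LEVELS = 8, 3, 11

upper_q = DQN(STATE_DIM, N_NODES)              # selects the edge node/layer
lower_q = DQN(STATE_DIM + N_NODES, N_LEVELS)   # selects alpha given that choice

def act(state: torch.Tensor, eps: float = 0.1):
    """Two-stage epsilon-greedy action: node first, then offloading fraction."""
    if torch.rand(1).item() < eps:
        node = torch.randint(N_NODES, (1,)).item()
    else:
        node = upper_q(state).argmax().item()
    one_hot = torch.zeros(N_NODES)
    one_hot[node] = 1.0
    if torch.rand(1).item() < eps:
        level = torch.randint(N_LEVELS, (1,)).item()
    else:
        level = lower_q(torch.cat([state, one_hot])).argmax().item()
    alpha = level / (N_LEVELS - 1)  # offloaded fraction in [0, 1]
    return node, alpha

# Example: one decision for a random VU state.
node, alpha = act(torch.randn(STATE_DIM))
```

Splitting the action space this way keeps each network's output head small, which is the usual motivation for a hierarchical decomposition over flat Q-learning in multi-layer node-selection plus offloading problems.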
Year: 2024
Volume: 2
Pages: 1045-1061
Authors: Shinde, Swapnil Sadashiv; Tarchi, Daniele
Files in this product:
- rdshdwfkmygcvqjpzrpwwscwprrxnwrf.pdf (closed access; license: all rights reserved; 2.24 MB, Adobe PDF; copy available on request)
- Hierarchical_Reinforcement_Learning_for_Multi-Layer_Multi-Service_Non-Terrestrial_Vehicular_Edge_Computing.pdf (closed access; license: all rights reserved; 2 MB, Adobe PDF; copy available on request)

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1381020