Age-Oriented Resource Allocation for IoT Computational Intensive Tasks in Edge Computing Systems / Benedetta Picano; Enzo Mingozzi. - In: IEEE INTERNET OF THINGS JOURNAL. - ISSN 2327-4662. - ELECTRONIC. - (2025), pp. 0-0. [10.1109/JIOT.2025.3525997]
Age-Oriented Resource Allocation for IoT Computational Intensive Tasks in Edge Computing Systems
Benedetta Picano; Enzo Mingozzi
2025
Abstract
Current edge computing (EC) solutions face the significant challenge of limited computational capacity. Effectively allocating resources and controlling the system to ensure task timeliness remains an open problem. The age of information (AoI) is a metric that measures the freshness of the information circulating in a system. While the AoI is primarily influenced by the packet generation rate, transmission latency, and queuing delays, processing time becomes notably significant for computationally intensive Internet-of-Things (IoT) tasks, since such IoT applications require processing before the embedded information can emerge and the status can be acquired. This paper proposes a combined system control and resource assignment policy for new-generation EC environments in which edge nodes have limited capacity and task flows are computationally intensive. The objective is to assign task flows to dedicated resource capacity so as to minimize the worst AoI experienced by any flow. For this purpose, three formulations of the flow-resource assignment problem are considered: i) the assignment with fixed arrival and service processes; ii) the service process control problem; and iii) the arrival process control problem. For each formulated problem, a matching game with externalities is designed, and preference lists are built from the mean AoI of an M/G/1 system, exploited here as a reference model for each computation partition. The stability of the proposed matching games is investigated, and experimental results are presented to demonstrate the validity of the matching approaches, with a critical discussion of the performance impact of the three problems addressed, also in comparison with a reservoir learning approach. The proposed matching algorithm surpasses the state-of-the-art Deferred Acceptance method by achieving a lower maximum AoI, thereby meeting the optimization objective. It also outperforms the data-driven approach.
While the data-driven approach can attain comparable maximum AoI values given sufficiently large training datasets, the proposed algorithm consistently yields superior results.
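To make the abstract's ingredients concrete, the sketch below illustrates how AoI-based preference lists can drive the Deferred Acceptance baseline that the paper compares against. It is not the paper's algorithm: the mean-AoI expression used is the well-known closed form for an M/M/1 FCFS queue (a special case of the M/G/1 model the paper uses), externalities are ignored, an equal number of flows and resources with unit capacity is assumed, and the resource-side tie-break rule (prefer flows with a lower arrival rate) is a hypothetical choice for illustration only.

```python
def mean_aoi_mm1(lam, mu):
    """Mean AoI of an M/M/1 FCFS queue with arrival rate lam and service rate mu.

    Special case of the M/G/1 mean AoI with exponential service:
    (1/mu) * (1 + 1/rho + rho^2 / (1 - rho)), where rho = lam / mu.
    """
    rho = lam / mu
    assert 0.0 < rho < 1.0, "queue must be stable (lam < mu)"
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho ** 2 / (1.0 - rho))


def deferred_acceptance(flow_rates, service_rates):
    """Flow-proposing Deferred Acceptance with AoI-based preference lists.

    flow_rates:    {flow_id: arrival rate lambda}
    service_rates: {resource_id: service rate mu}
    Assumes len(flow_rates) <= len(service_rates), unit capacity per resource.
    Returns {flow_id: resource_id}.
    """
    # Each flow ranks resources by the mean AoI it would experience there
    # (lower mean AoI = more preferred).
    prefs = {
        f: sorted(service_rates, key=lambda r: mean_aoi_mm1(lam, service_rates[r]))
        for f, lam in flow_rates.items()
    }
    free = list(flow_rates)          # flows still proposing
    next_choice = {f: 0 for f in flow_rates}
    match = {}                       # resource_id -> flow_id
    while free:
        f = free.pop()
        r = prefs[f][next_choice[f]]
        next_choice[f] += 1
        if r not in match:
            match[r] = f             # resource is free: tentatively accept
        elif flow_rates[f] < flow_rates[match[r]]:
            free.append(match[r])    # hypothetical rule: keep the lighter flow
            match[r] = f
        else:
            free.append(f)           # rejected: propose to next resource later
    return {f: r for r, f in match.items()}
```

For example, with two flows (rates 1.0 and 2.0) and two resources (rates 3.0 and 10.0), both flows rank the faster resource first; Deferred Acceptance resolves the conflict via the resource-side rule and produces a stable one-to-one assignment.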