Pain Level Estimation From Videos by Analyzing the Dynamics of Facial Landmarks With a Spatio-Temporal Graph Neural Network / Alhamdoosh, Fatemah; Pala, Pietro; Berretti, Stefano. - In: IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE. - ISSN 2637-6407. - PRINT. - 7:(2025), pp. 610-619. [10.1109/tbiom.2025.3592836]

Pain Level Estimation From Videos by Analyzing the Dynamics of Facial Landmarks With a Spatio-Temporal Graph Neural Network

Alhamdoosh, Fatemah; Pala, Pietro; Berretti, Stefano
2025

Abstract

Developing effective and accurate methods for the automatic estimation of pain level is vital, particularly for monitoring individuals who cannot communicate verbally, such as newborns and patients in intensive care units. This paper introduces a novel video-based approach for sequence-level pain estimation that addresses two primary challenges in the existing literature. First, we address privacy concerns in methods that rely on full facial images, which expose patient identities and thereby limit their applicability in healthcare. Our approach uses facial landmarks, which offer insight into facial expressions while preserving privacy, as they do not suffice for personal identification. Second, pain is a dynamic state whose intensity varies over time. Our approach analyzes temporal features at both short- and long-term levels, adapting to continuous frame sequences. In essence, we develop a regression model with two components: 1) a Short-term Dynamics Network, in which a Spatio-temporal Attention Graph Convolution Network (STAGCN) extracts short-term features from a spatio-temporal graph whose nodes represent the facial landmarks extracted from each frame, and 2) a Long-term Dynamics Network, in which a Gated Recurrent Unit (GRU) processes the sequence of short-term features to learn long-term patterns across the entire sequence. We validated our approach on the BioVid Heat Pain dataset (Parts A, B, and D) and on MIntPain, assessing performance in multi-class and binary (pain vs. no pain) classification. Results demonstrate the approach's potential, even with partially occluded faces.
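To make the two-component design concrete, below is a minimal PyTorch-style sketch of the pipeline the abstract describes: a spatio-temporal graph convolution over facial-landmark nodes yields one short-term feature vector per chunk of frames, and a GRU aggregates these features across the whole sequence into a scalar pain-level regression. All layer sizes, the 68-landmark count, the chunk length, the identity adjacency, and the omission of STAGCN's attention mechanism are illustrative assumptions, not the paper's published configuration.

import torch
import torch.nn as nn


class STGraphConvBlock(nn.Module):
    """One spatial graph convolution followed by a temporal convolution.

    A simplified stand-in for the paper's STAGCN block: the learned
    spatio-temporal attention over edges is omitted here.
    """

    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        self.register_buffer("adj", adj)            # (V, V) fixed adjacency (assumed)
        self.spatial = nn.Linear(in_dim, out_dim)   # node-wise feature map
        self.temporal = nn.Conv1d(out_dim, out_dim, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        # x: (B, T, V, C) -- batch, frames in a short chunk, landmarks, coords
        x = self.adj @ self.spatial(x)               # aggregate over landmark neighbors
        B, T, V, C = x.shape
        x = x.permute(0, 2, 3, 1).reshape(B * V, C, T)
        x = self.act(self.temporal(x))               # mix features along time
        return x.reshape(B, V, C, T).permute(0, 3, 1, 2)


class PainRegressor(nn.Module):
    """Short-term graph features per chunk, then a GRU over the chunks."""

    def __init__(self, adj, coord_dim=2, hidden=64):
        super().__init__()
        self.short_term = STGraphConvBlock(coord_dim, hidden, adj)
        self.long_term = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)             # scalar pain level

    def forward(self, chunks):
        # chunks: (B, N, T, V, C) -- N short chunks per video sequence
        B, N, T, V, C = chunks.shape
        feats = self.short_term(chunks.reshape(B * N, T, V, C))
        feats = feats.mean(dim=(1, 2)).reshape(B, N, -1)  # one vector per chunk
        out, _ = self.long_term(feats)                # long-term dynamics
        return self.head(out[:, -1])                  # regress from the last state


# Toy usage: 68 landmarks, 4 chunks of 16 frames each, 2D coordinates.
V = 68
adj = torch.eye(V)                                    # placeholder adjacency
model = PainRegressor(adj)
video = torch.randn(2, 4, 16, V, 2)
print(model(video).shape)                             # torch.Size([2, 1])

In the actual STAGCN, the adjacency would encode facial-landmark connectivity and be modulated by learned spatio-temporal attention; the identity matrix above is only a placeholder that keeps the sketch runnable.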
Volume: 7
Pages: 610-619
Goal 9: Industry, Innovation, and Infrastructure
Files in this product:
File: tbiom_pain2025.pdf
Access: Closed (request a copy)
Type: Publisher's PDF (Version of record)
License: All rights reserved
Size: 4.7 MB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1436380
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: ND