
Explainable AI in nuclear medicine / Holm, Sune; Ferrara, Daria; Pepponi, Miriam; Abenavoli, Elisabetta; Frille, Armin; Duke, Shaul; Grünert, Stefan; Hacker, Marcus; Hennig, Bengt; Hesse, Swen; Hofmann, Lukas; Lund, Thomas B; Sabri, Osama; Sandøe, Peter; Sciagra, Roberto; Sundar, Lalith Kumar Shiyam; Yu, Josef; Beyer, Thomas. In: European Journal of Nuclear Medicine and Molecular Imaging. ISSN 1619-7089. 53 (2026), pp. 2648-2651. [doi: 10.1007/s00259-025-07675-4]

Explainable AI in nuclear medicine

Pepponi, Miriam; Abenavoli, Elisabetta; Sciagra, Roberto
2026

Abstract

Purpose: In this short communication, we consider the need for explainable AI from the perspective of a large multi-disciplinary research project for predicting cachexia in cancer patients. Materials and methods: In a series of meetings, comprising expertise from medicine, data science, sociology, and philosophy, project participants discussed the need for explainability. Results: We distinguish between contexts in which a black box AI tool undertakes tasks that users can perform or validate themselves and contexts in which this is not the case. Conclusion: We conclude that explanations are likely required when a black box AI tool undertakes tasks that users cannot perform or validate themselves. If the user can verify outputs manually, documented reliability and accuracy may suffice, but explainability can still add value when outputs are uncertain or errors occur. More generally, close collaboration among physicians, AI developers, and other stakeholders is crucial to ensure that AI tools are trustworthy and useful in clinical practice.
Files in this record:

259_2025_Article_7675.pdf

Open access

Type: Publisher's PDF (Version of record)
License: Open Access
Size: 653.32 kB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1462378
Citations
  • PMC 1
  • Scopus 0
  • Web of Science 0