Hyper-selective explainability: an empirical case study of the utility of explainability in a clinical decision support system / Duke, Shaul A; Sandøe, Peter; Lund, Thomas Bøker; Abenavoli, Elisabetta Maria; Beyer, Thomas; Ferrara, Daria; Frille, Armin; Gruenert, Stefan; Sabri, Osama; Sciagra, Roberto; Pepponi, Miriam; Hesse, Swen; Tönjes, Anke; Wirtz, Hubert; Yu, Josef; Shiyam Sundar, Lalith Kumar; Holm, Sune. - In: AI AND ETHICS. - ISSN 2730-5961. - ELECTRONIC. - 6:(2026), pp. 0-0. [10.1007/s43681-025-00837-y]
Hyper-selective explainability: an empirical case study of the utility of explainability in a clinical decision support system
Abenavoli, Elisabetta Maria; Sciagra, Roberto; Pepponi, Miriam
2026
Abstract
Explainability is a leading solution offered to address the challenge of AI's black-box nature. However, much can go wrong when applying explainability, and its success is far from certain. Moreover, there is insufficient empirical data on the effectiveness of concrete explainability efforts. We examined an explainability scenario for an AI decision support tool under development for the early detection of cancer-related cachexia, a potentially fatal metabolic syndrome. We conducted 13 interviews with clinicians who deal with cachexia, asked about their prior experience with AI tools and their views on explainability, and presented them with an explainability scenario based on the Shapley Additive Explanations (SHAP) method. Most clinicians we interviewed had limited prior experience with AI tools, and a majority of them believed that the explainability of such an AI system for the early detection of cachexia is essential. When presented with the SHAP explainability scheme, they had limited familiarity with the features that contributed to the tool's ruling, and only a minority of the clinicians (nuclear medicine experts) stated that they could utilize these features in a meaningful manner. Paradoxically, it is the clinicians who come into contact with patients who cannot make use of this specific SHAP explanation. This study highlights the challenges of offering a hyper-selective explainability tool in clinical settings. It also shows the challenge of developing explainable-by-design AI systems.

Supplementary information: The online version contains supplementary material available at 10.1007/s43681-025-00837-y.
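For readers unfamiliar with the SHAP method named in the abstract: SHAP attributes a model's prediction to its input features using Shapley values from cooperative game theory. The sketch below is purely illustrative and is not the tool described in the study; it computes exact Shapley values by coalition enumeration for a small, made-up linear model (the weights, baseline, and function names are all assumptions for the example).

```python
from itertools import combinations
from math import factorial

def model(x, w=(2.0, -1.0, 0.5), b=0.1):
    # Toy linear model standing in for a black-box predictor.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def shapley_values(x, baseline):
    """Exact Shapley attribution of model(x) - model(baseline) to each feature.

    Features absent from a coalition are set to their baseline value.
    Exponential in the number of features, so only viable for tiny inputs;
    the SHAP library approximates this efficiently for real models.
    """
    n = len(x)
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size |S|.
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                x_with = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                x_without = [x[j] if j in S else baseline[j] for j in range(n)]
                total += weight * (model(x_with) - model(x_without))
        phi.append(total)
    return phi
```

For a linear model with this baseline-replacement scheme, each attribution reduces to `w[i] * (x[i] - baseline[i])`, and the attributions sum to `model(x) - model(baseline)` (the efficiency property) — which is exactly the kind of per-feature breakdown clinicians in the study were shown.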
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| 43681_2025_Article_837.pdf | Open access | Publisher's PDF (Version of record) | Open Access | 985.76 kB | Adobe PDF |
Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.



