
Optimization Methods for Interpretable Machine Learning / Tommaso Aldinucci. - (2024).

Optimization Methods for Interpretable Machine Learning

Tommaso Aldinucci
2024

Abstract

This dissertation is concerned with optimization problems related to transparent machine learning. In the contemporary landscape of machine learning, interpretability has become essential, resonating across diverse industries and applications. As complex machine learning models, often characterized as "black boxes," continue to demonstrate remarkable predictive capabilities, there is a pressing need to explain their decision-making processes. Interpretability is crucial not only for building trust in automated systems but also for meeting ethical, legal, and regulatory requirements. In healthcare, interpretable models can offer insights into diagnostic reasoning, aiding healthcare professionals in decision-making and enhancing patient trust. In finance, transparent models are essential for ensuring fair lending practices and regulatory compliance. Legal systems increasingly demand explainability to justify algorithmic decisions, underscoring the significance of interpretable machine learning in justice. Within this setting, we first briefly describe the most widely used interpretable models, and then we discuss novel optimization strategies for their learning problems. More precisely, we review state-of-the-art approaches to learning decision tree models and, to this aim, propose two new techniques based on genetic algorithms and mixed-integer programming. Afterwards, we present work on risk score models, enhancing their expressiveness by extending these estimators to a more general framework. Finally, in the context of local explanations, we propose a recommendation system for random forests that dynamically maps each point to a single shallow tree in the ensemble.
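To give a concrete flavor of the last contribution, the sketch below shows one plausible way to map a query point to a single shallow tree in a random forest. This is a minimal illustration, not the dissertation's actual algorithm: the selection rule (keep only trees that agree with the full ensemble on the point, then take the most confident one) and the function name select_explainer_tree are assumptions introduced here.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fit a forest of shallow trees so that any single tree is small
# enough to be read as a local explanation.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
forest = RandomForestClassifier(n_estimators=100, max_depth=3, random_state=0)
forest.fit(X_train, y_train)

def select_explainer_tree(x):
    """Return the index of one shallow tree used as a local surrogate for x.

    Assumed rule: among trees whose prediction on x agrees with the full
    ensemble, pick the one with the highest predicted-class probability.
    """
    x = x.reshape(1, -1)
    ensemble_label = forest.predict(x)[0]
    best_idx, best_conf = None, -1.0
    for i, tree in enumerate(forest.estimators_):
        proba = tree.predict_proba(x)[0]
        tree_label = forest.classes_[np.argmax(proba)]
        if tree_label == ensemble_label and proba.max() > best_conf:
            best_idx, best_conf = i, proba.max()
    return best_idx

idx = select_explainer_tree(X_test[0])
print(f"test point 0 -> tree {idx}, depth {forest.estimators_[idx].get_depth()}")

Capping the tree depth keeps each surrogate small enough to read, while the per-point selection makes the explanation local rather than a single global surrogate for the whole forest.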
Supervisor: Prof. Fabio Schoen
Files in this record:

thesis.pdf (open access)
  • Type: Publisher's PDF (Version of record)
  • License: Open Access
  • Size: 2.05 MB
  • Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1399745