Position Paper - Bringing Classifiers into Critical Systems: Are We Barking up the Wrong Tree? / Zoppi T.; Kohkar F.A.; Ceccarelli A.; Bondavalli A. - ELECTRONIC. - 14989:(2024), pp. 351-357. (Paper presented at the 19th Workshop on Dependable Smart Embedded and Cyber-Physical Systems and Systems-of-Systems, DECSoS 2024, the 11th International Workshop on Next Generation of System Assurance Approaches for Critical Systems, SASSUR 2024, Towards A Safer Systems architecture Through Security, TOASTS 2024, and the 7th International Workshop on Artificial Intelligence Safety Engineering, WAISE 2024, held in conjunction with the 43rd International Conference on Computer Safety, Reliability, and Security, SAFECOMP 2024, held in Italy in 2024) [10.1007/978-3-031-68738-9_27].
Position Paper - Bringing Classifiers into Critical Systems: Are We Barking up the Wrong Tree?
Zoppi T.; Ceccarelli A.; Bondavalli A.
2024
Abstract
Domain experts are desperately looking to solve classification problems by designing and training Machine Learning algorithms with the highest possible accuracy. No matter how hard they try, classifiers will always be prone to misclassifications due to a variety of reasons that make the decision boundary unclear. This complicates the integration of classifiers into critical systems and infrastructures, where misclassifications could directly impact the health of people, infrastructures, or the environment. Instead, a classifier should be considered a component to be deployed into a system, and never in isolation. This provides more flexibility to the classifier, which can even afford to reject those outputs that are likely to be misclassifications, triggering system-level mitigation strategies instead. The resulting fail-controlled classifier will output a noticeably lower number of misclassifications, making this system-level conceptualization and design of ML classifiers an actual step toward the deployment of classifiers in real, critical systems. Evaluation metrics should be adapted to cope with this paradigm change, scoring rejections differently from predictions, which are either correct or incorrect.
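As a minimal illustrative sketch (not taken from the paper), the rejection idea described in the abstract can be approximated by wrapping a probabilistic classifier so that low-confidence outputs are abstained from, and by scoring rejections separately from correct and incorrect predictions. The `predict_with_reject` wrapper, the 0.8 confidence threshold, and the coverage/accuracy metric below are assumptions chosen for illustration, using a scikit-learn-style classifier with `predict_proba`:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

REJECT = -1  # sentinel label for rejected (abstained) outputs


def predict_with_reject(clf, X, threshold=0.8):
    """Return the predicted class, or REJECT when the top-class
    probability falls below the (hypothetical) confidence threshold."""
    proba = clf.predict_proba(X)
    preds = clf.classes_[np.argmax(proba, axis=1)]
    confident = np.max(proba, axis=1) >= threshold
    return np.where(confident, preds, REJECT)


def coverage_accuracy(y_true, y_pred):
    """Score rejections differently from predictions: report how often the
    classifier answers (coverage) and how accurate it is when it answers."""
    answered = y_pred != REJECT
    coverage = answered.mean()
    accuracy = (y_pred[answered] == y_true[answered]).mean() if answered.any() else 0.0
    return coverage, accuracy


if __name__ == "__main__":
    X, y = make_classification(n_samples=1000, n_informative=5, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    y_hat = predict_with_reject(clf, X_te, threshold=0.8)
    cov, acc = coverage_accuracy(y_te, y_hat)
    print(f"coverage={cov:.2f}, accuracy on answered inputs={acc:.2f}")
```

In such a sketch, rejected inputs would be handed over to a system-level mitigation strategy rather than counted as ordinary errors, which is why coverage and accuracy-when-answering are reported as separate scores.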
| File | Size | Format | |
|---|---|---|---|
| Position-Paper--Bringing-Classifiers-into-Critical-Systems-Are-We-Barking-up-the-Wrong-TreeLecture-Notes-in-Computer-Science-including-subseries-Lecture-Notes-in-Artificial-Intelligence-and-Lecture-Notes-in-Bioinfo.pdf (Closed access; Type: Publisher's PDF (Version of record); License: All rights reserved) | 347.06 kB | Adobe PDF | Request a copy |
Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.