Automated vehicles ( "AV ") can greatly improve road safety and societal welfare, but legal systems have struggled with the prospect of whom to hold criminally liable for resulting harm, and how. This difficulty is derived from the characteristics of modern artificial intelligence ( "AI ") used in AV technology. Singapore, France and the UK have pioneered legal models tailored to address criminal liability for AI misbehaviour. In this article, we analyse the three models comparatively both to determine their individual merits and to draw lessons from to inform future legislative efforts. We first examine the roots of the problem by analysing the characteristics of modern AI vis-a-vis basic legal foundations underlying criminal liability. We identify several problems, such as the epistemic problem, a lack of control, the issue of generic risk, and the problem of many hands, which discommode the building blocks of criminal negligence such as awareness, foreseeability and risk taking - a condition we refer to as negligence failures. Subsequently, we analyse the three models on their ability to address these issues. We find diverging philosophies as to where to place the central weight of criminal liability, but nevertheless identify common themes such as drawing bright-lines between liability and immunity, and the introduction of novel vocabulary necessary to navigate the new legal landscape sculpted by AI. We end with specific recommendations for future legislation, such as the importance of implementing an AI training and licensing regime for users, and that transition demands must be empirically tested to allow de facto control.
Giannini, Alice and Jonathan Kwik, "Negligence Failures and Negligence Fixes. A Comparative Analysis of Criminal Regulation of AI and Autonomous Vehicles", Criminal Law Forum 34 (2023), pp. 43-85. ISSN 1046-8374 (electronic). DOI: 10.1007/s10609-023-09451-1
Negligence Failures and Negligence Fixes. A Comparative Analysis of Criminal Regulation of AI and Autonomous Vehicles
Giannini, Alice; Kwik, Jonathan
2023
Abstract
Automated vehicles ("AV") can greatly improve road safety and societal welfare, but legal systems have struggled with the question of whom to hold criminally liable for resulting harm, and how. This difficulty derives from the characteristics of the modern artificial intelligence ("AI") used in AV technology. Singapore, France and the UK have pioneered legal models tailored to address criminal liability for AI misbehaviour. In this article, we analyse the three models comparatively, both to determine their individual merits and to draw lessons that can inform future legislative efforts. We first examine the roots of the problem by analysing the characteristics of modern AI vis-à-vis the basic legal foundations underlying criminal liability. We identify several problems, such as the epistemic problem, a lack of control, the issue of generic risk, and the problem of many hands, which discommode the building blocks of criminal negligence such as awareness, foreseeability and risk-taking, a condition we refer to as negligence failures. Subsequently, we assess the three models on their ability to address these issues. We find diverging philosophies as to where to place the central weight of criminal liability, but nevertheless identify common themes, such as drawing bright lines between liability and immunity and introducing the novel vocabulary necessary to navigate the new legal landscape sculpted by AI. We end with specific recommendations for future legislation, such as implementing an AI training and licensing regime for users and empirically testing transition demands to ensure de facto control.
File: s10609-023-09451-1 (1).pdf (open access)
Type: Publisher's PDF (Version of record)
License: Open Access
Size: 370.46 kB
Format: Adobe PDF
Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.