AI Regulatory Sandboxes between the AI Act and the GDPR: the role of Data Protection as a Corporate Social Responsibility / Davide Baldini; Kate Francis. - ELECTRONIC. - (2024), pp. 1-13.

AI Regulatory Sandboxes between the AI Act and the GDPR: the role of Data Protection as a Corporate Social Responsibility

Davide Baldini; Kate Francis
2024

Abstract

This paper investigates the potential of regulatory sandboxes, a novel regulatory instrument, to improve the cybersecurity posture of high-risk AI systems. First, the paper introduces AI regulatory sandboxes and their relevance under both the AI Act and the GDPR, with particular attention to the overlapping cybersecurity requirements arising from the two pieces of legislation. It then outlines two emerging challenges for AI cybersecurity: a factual challenge, relating to the still under-developed state of the art of AI cybersecurity, and a legal challenge, relating to the overlapping and uncoordinated cybersecurity requirements for high-risk AI systems stemming from the AI Act and the GDPR. The paper argues that AI regulatory sandboxes are well suited to addressing both challenges, which in turn is likely to promote their uptake. It is subsequently argued that this novel legal instrument aligns well with emerging trends in the field of data protection, including Data Protection as Corporate Social Responsibility and Cybersecurity by Design. Building on this ethical dimension, the many ethical risks connected with the uptake of AI regulatory sandboxes are assessed. The paper finally suggests that the ethical and corporate social responsibility dimension may offer a solution to the risks and pitfalls of regulatory sandboxes, although further research on the topic is needed.
2024
Proceedings of the 8th Italian Conference on Cyber Security (ITASEC 2024)
pp. 1-13
Davide Baldini; Kate Francis
Files in this product:
There are no files associated with this product.

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1372612