Artificial Intelligence (AI), particularly with the introduction of LLMs, is transforming our society while raising significant regulatory concerns. In response, the EU introduced the EU AI Act.
The AI Act classifies AI systems according to risk levels and introduces detailed transparency and compliance requirements, together with a high-level indication of potential enforcement mechanisms. Organisations must ensure compliance to avoid legal sanctions or suspension of their systems. Given the complexity of the AI Act, effective compliance requires collaboration among AI experts, IT professionals, legal specialists, and other stakeholders involved in the design, deployment and use of AI systems.
The challenges, costs and effort required to comply with the AI Act are likely to resemble those experienced with the introduction of GDPR, which imposed a demanding privacy-by-design approach (Tsohou et al. 2020; Piras et al. 2020). GDPR compliance required staff training, process redesign and collaboration among diverse experts to adapt systems and practices. Consequently, new tools and methods were developed to support organisations (Bhalavat et al. 2024; Piras et al. 2020). Similar support mechanisms will be necessary to reduce the effort, cost and time needed for AI Act compliance (Kulkarni et al. 2021).
Current approaches supporting AI Act compliance rely mainly on specific tools and sandboxes designed to detect particular AI biases and risks. While valuable, such solutions are often limited in scope, do not account for collaboration among heterogeneous professionals, and rarely address the entire AI lifecycle. However, AI systems require continuous monitoring, especially when updated or retrained, as new biases or risks may emerge and affect compliance. What is still missing is an integrated environment that supports organisations holistically: considering regulatory, technical and organisational aspects, enabling collaboration among diverse stakeholders, and continuously monitoring AI systems. Such an environment, potentially leveraging LLMs, could assist analysts in promoting compliance and anticipating potential non-compliance, for instance through predictive mechanisms such as a digital twin.
The candidate may focus on designing the concept and a prototype of such an environment for supporting organisations towards AI Act compliance: identifying some of the most important aspects contributing to compliance, creating an environment that supports collaboration among different professional roles, and evaluating it in realistic or real settings, potentially with some of our industry partners, using critical and relevant scenarios.
The AI Act is currently the most relevant regulation to consider; however, this research may also explore compliance with other regulations (e.g., NIS2, EHDS, CRA, DORA), and potentially cross-regulatory compliance.