Supporting AI Act Compliance via an Intelligent Holistic Environment

Università degli Studi di Trento

Information Engineering and Computer Science
Cycle: 42

Artificial Intelligence (AI), particularly with the introduction of LLMs, is transforming our society while raising significant regulatory concerns. In response, the EU introduced the EU AI Act.

The AI Act is a complex regulation that classifies AI systems according to risk levels and introduces demanding transparency and compliance requirements, as well as high-level indications of potential enforcement mechanisms. Organisations must ensure compliance to avoid legal sanctions or system suspension. Given the complexity of the AI Act, effective compliance requires collaboration among AI experts, IT professionals, legal specialists, and other stakeholders involved in the design, deployment and use of AI systems.

Challenges, costs and effort required to comply with the AI Act are likely to resemble those experienced with the introduction of GDPR, which imposed a demanding privacy-by-design approach (Tsohou et al. 2020; Piras et al. 2020). GDPR compliance required staff training, process redesign and collaboration among diverse experts to adapt systems and practices. Consequently, new tools and methods were developed to support organisations (Bhalavat et al. 2024; Piras et al. 2020). Similar support mechanisms will be necessary to reduce effort, cost, and time needed for AI Act compliance (Kulkarni et al. 2021).

Current approaches supporting AI Act compliance rely mainly on specific tools and sandboxes designed to detect particular AI biases and risks. While valuable, such solutions are often limited in scope, do not account for collaboration among heterogeneous professionals, and rarely address the entire AI lifecycle. Yet AI systems require continuous monitoring, especially when updated or retrained, as new biases or risks may emerge and affect compliance. What is still missing is an integrated environment that supports organisations holistically: considering regulatory, technical and organisational aspects, enabling collaboration among diverse stakeholders, and continuously monitoring AI systems. Such an environment, potentially leveraging LLMs, could assist analysts in promoting compliance and anticipating potential non-compliance, for instance through predictive mechanisms such as a digital twin.
The candidate may focus on designing the concept and a prototype of such an environment to support organisations towards AI Act compliance: identifying some of the most important aspects contributing to compliance, building the environment to support collaboration among different professional roles, and evaluating it in realistic or real settings, potentially with some of our industry partners, using critical and relevant scenarios.

The AI Act is currently the most relevant regulation to consider; however, this research may also explore compliance with other regulations (e.g., NIS2, EHDS, CRA, DORA), and potentially cross-regulatory compliance.

FBK Contact

SaFEWaRe
