This doctoral thesis investigates the design and development of Artificial Intelligence (AI) systems characterized by variable risk levels according to Art. 6 and Annex III in [2], with the aim of ensuring their compliance with the legal and ethical standards respectively imposed and recommended by the European Union, with particular reference to the regulatory frameworks [1] and [2], as well as the ethical guidelines [3]. The research alternates between theoretical exploration, both technological and regulatory, and operational phases in which the acquired knowledge is implemented and translated into concrete, applicable results. The adopted approach is highly transversal, ranging from Machine Learning to Deep Learning and encompassing a wide range of tasks (classification, regression, time-series forecasting, object detection), various data types (tabular data, time-series, XCA and US images), and different domains (industrial decision support, municipal tax revenue, and healthcare) characterized by varying degrees of risk and multiple compliance requirements. Compliance has been implemented, often on a voluntary basis and always in accordance with the principle of proportionality, without ever becoming redundant or obstructive, and has specifically addressed the following profiles: Privacy by Design (Recital 78, Art. 25(1)) including Data Anonymization, Data Minimization, Computational and Output Privacy; Privacy by Default (Recital 78, Art. 25(2)) in [1]; Technical Robustness (Art. 16), Data Governance (Art. 10), Transparency (Art. 13), Human Oversight (Art. 14) in [2]. From a methodological perspective, the scientific-technological and regulatory dimensions develop in parallel, and both are considered from the earliest stages of the AI system life-cycle, in an extended “by design” perspective.
The five case studies, which belong to the industrial decision support, tax revenue, and healthcare sectors, constitute the operational core of the research, each representing a specific combination of task, application domain, data type, and risk category. Results are presented in five publications, either accepted or under review. The sixth contribution focuses on the operationalization of the regulatory frameworks [1][2], proposing the Cohesive Impact Assessment (COHESIA), an integrated methodological framework that unifies the Data Protection Impact Assessment (DPIA) and the Fundamental Rights Impact Assessment (FRIA) into a semi-quantitative compliance model. COHESIA operationalizes the concept of trustworthy AI in concrete scenarios, supporting the systematic evaluation of AI applications across different risk levels and regulatory dimensions and facilitating comparison between the DPIA and the FRIA, both diachronically (across successive versions of the same document) and synchronically (across multiple DPIAs or FRIAs, or between the DPIA and the FRIA for the same system). To demonstrate its practical utility, COHESIA is applied to the FAITH-RSDD AI system, designed to support municipal fiscal efficiency, as well as to a series of AI systems inspired by the other four contributions developed within the thesis. The need for a tool facilitating the coordinated drafting of the DPIA and the FRIA arises from the principle that these two documents should not be regarded as mere formal obligations to be completed only at deployment or produced in case of a data breach, but as operational instruments accompanying the entire system life-cycle [1][2][4], continuously guiding design choices to enhance the quality of the final product and foster a culture of responsible know-how.
To support this approach, key compliance requirements are also evaluated for AI systems not classified as high-risk, or in contexts where they are not explicitly mandatory, in line with voluntary alignment (Recitals 7 and 27, Art. 95 in [2], Art. 40 in [1]) and [3]. In this context, it is necessary to determine, on a case-by-case basis, the optimal trade-off between regulatory compliance and operational utility, in order to maximize both without conflict, according to the principle of proportionality. In conclusion, this research constitutes an integrated experience and leads to the concrete development of AI systems oriented toward trustworthiness, a crucial ethical principle that enables stakeholders to adopt new AI-based solutions with confidence, awareness, and safety.
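The kind of semi-quantitative, per-dimension comparison that COHESIA is described as supporting can be illustrated with a deliberately simplified sketch. The dimension names, the 0-3 scoring scale, and the `gap` helper below are hypothetical illustrations chosen for this example, not the actual COHESIA model:

```python
# Hypothetical sketch of a semi-quantitative DPIA/FRIA comparison.
# The dimensions and the 0-3 scale are illustrative assumptions only.

DIMENSIONS = ["data_minimization", "transparency", "human_oversight", "robustness"]

def gap(assessment_a: dict, assessment_b: dict) -> dict:
    """Per-dimension score difference between two assessments."""
    return {d: assessment_b[d] - assessment_a[d] for d in DIMENSIONS}

# Diachronic comparison: two successive versions of the same DPIA.
dpia_v1 = {"data_minimization": 1, "transparency": 2, "human_oversight": 1, "robustness": 2}
dpia_v2 = {"data_minimization": 3, "transparency": 2, "human_oversight": 2, "robustness": 2}

print(gap(dpia_v1, dpia_v2))  # positive entries mark dimensions where v2 improved
```

The same per-dimension difference reads diachronically (two versions of one DPIA, as above) or synchronically (e.g., the DPIA of one system against the FRIA of the same system, scored on a shared scale).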
This thesis investigates the design and development of Artificial Intelligence (AI) systems characterized by variable risk levels according to Art. 6 and Annex III in [2], with the aim of ensuring their compliance with the legal and ethical standards respectively imposed and recommended by the European Union, with particular reference to the regulatory frameworks [1] and [2], as well as the ethical guidelines [3]. The research alternates phases of theoretical study, both technological and regulatory, with operational phases in which the acquired knowledge is implemented and translated into concrete, applicable results. The adopted approach is highly transversal, ranging from Machine Learning to Deep Learning and encompassing a wide range of tasks (classification, regression, time-series forecasting, object detection), various data types (tabular data, time-series, radiographic and ultrasound images), and different domains (fashion, taxation, healthcare) characterized by varying degrees of risk and multiple compliance requirements. Compliance was established, often on a voluntary basis but always in accordance with the principle of proportionality, without ever becoming redundant or obstructive, and specifically addressed the following profiles: Privacy by Design (Recital 78, Art. 25(1)) including Data Anonymization, Data Minimization, Computational and Output Privacy; Privacy by Default (Recital 78, Art. 25(2)) in [1]; Technical Robustness (Art. 16), Data Governance (Art. 10), Transparency (Art. 13), Human Oversight (Art. 14) in [2]. From a methodological standpoint, the scientific-technological and regulatory dimensions develop in parallel and are attended to from the earliest stages of the AI system life-cycle, in an extended by-design perspective.
The five case studies, belonging to the fashion, taxation, and healthcare sectors, constitute the operational core of the research, each representing a specific combination of task, application domain, data type, and risk category; they have led to five publications, either accepted or under review. The sixth contribution focuses on the operationalization of the regulatory frameworks [1][2], proposing the COHEsive Impact Assessment (COHESIA), an integrated methodological framework that unifies the Data Protection Impact Assessment (DPIA) and the Fundamental Rights Impact Assessment (FRIA) into a semi-quantitative compliance model. COHESIA operationalizes the concept of trustworthy AI in concrete scenarios, supporting the systematic evaluation of AI applications across different risk levels and regulatory dimensions and fostering comparison between the DPIA and the FRIA, both diachronic (across successive versions of the documents) and synchronic (between DPIAs and FRIAs drafted for similar but distinct AI systems, or between the DPIA and the FRIA drafted for the same system, possibly by different operators). To demonstrate its practical utility, COHESIA was applied to the FAITH-FDSS AI system, designed to support municipal fiscal efficiency, as well as to a series of AI systems inspired by the other four case studies developed in the thesis. The need for a tool facilitating the coordinated drafting of the DPIA and the FRIA stems from the principle that these two documents should not be regarded as mere formal obligations to be completed only at deployment or produced in case of a data breach, but as operational instruments accompanying the entire system life-cycle [1][2][4], continuously guiding design choices to the benefit of the quality of the final product and grounding a culture of responsible know-how.
In support of this approach, the key compliance requirements were also evaluated for AI systems not classified as high-risk, or in contexts where they were not explicitly mandatory, in line with voluntary alignment (Recitals 7 and 27, Art. 95 in [2], Art. 40 in [1]) and [3]. In this context, it was necessary to determine, on a case-by-case basis, the optimal trade-off between regulatory compliance and operational utility, in order to maximize both without conflict, according to the principle of proportionality. In conclusion, this research constitutes an integrated experience and leads to the concrete development of AI systems oriented toward trustworthiness, a crucial ethical principle that enables stakeholders to adopt new AI-based solutions with confidence, awareness, and safety.
Feasibility and Legal Compliance of AI Systems Across Domains: Five Case Studies in Industrial Decision Support, Municipal Tax Revenue, and Healthcare under EU Regulations 2016/679 and 2024/1689, and a Cohesive Impact Assessment Framework for DPIA and FRIA / Migliorelli, G. - (2026 Apr 24).
| File | Size | Format |
|---|---|---|
| MIGLIORELLI_Tesi_compressed.pdf (open access; doctoral thesis; Creative Commons license) | 5.28 MB | Adobe PDF |