Inequalities and artificial intelligence
S. Tiribelli
2023-01-01
Abstract
This paper focuses on one of the most urgent risks raised by artificial intelligence (AI): the risk of AI perpetuating or exacerbating unfair social inequalities. Specifically, the paper argues for the need to decolonize the ethical principles underpinning current AI design through relational theories, in order to overcome the limits of the oversimplified, mainstream, and mainly Western understanding of ethics in AI, which is hampering the design of AI systems as a force for a fairer and more just society.
File | Description | Type | License | Access | Size | Format
---|---|---|---|---|---|---
Filosofia morale_Moral philosophy_Tiribelli.pdf | Article | Published version (publisher's layout) | Publisher's copyright | Authorized users only | 121.13 kB | Adobe PDF