Inequalities and artificial intelligence

S. Tiribelli
2023-01-01

Abstract

This paper focuses on one of the most urgent risks raised by artificial intelligence (AI): the risk that AI perpetuates or exacerbates unfair social inequalities. Specifically, it argues for the need to decolonize the ethical principles underpinning current AI design through relational theories, in order to overcome the limits of an oversimplified, predominantly Western mainstream understanding of ethics in AI, which is hampering the design of AI systems as a force for a fairer and more just society.
ISBN: 9791222303239
Files in this item:

Filosofia morale_Moral philosophy_Tiribelli.pdf (authorized users only)
Description: Article
Type: Publisher's version (published version with the publisher's layout)
License: Publisher's copyright
Size: 121.13 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11393/312990
