From ELIZA to Conversational AI: Can a Chatbot Develop Emotions? Her as a Case Study

Danilo Petrassi
2025-01-01

Abstract

The rapid evolution of conversational artificial intelligence (AI) has sparked an ongoing debate regarding its ability to replicate, or even experience, human emotions. While early conversational chatbots such as Joseph Weizenbaum’s ELIZA (1966) relied on simple pattern recognition to create the illusion of understanding, modern AI systems like ChatGPT generate highly sophisticated, contextually appropriate responses that can convincingly mimic emotional engagement. This paper draws upon cinematic reflections, such as Spike Jonze’s Her (2013), to offer a critical examination of whether AI is capable of genuine emotional experience or merely simulates such experience through advanced language modelling. Utilising a theoretical framework grounded in philosophy, psychology and communication studies, this research critically assesses AI’s capacity for emotional experience, positing that while chatbots may convincingly simulate human emotional expression, they lack the subjective element that is integral to genuine emotional experience. This distinction has profound implications for human-AI interaction, ethics, and our understanding of artificial intelligence’s humanity in contemporary society.
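The contrast the abstract draws between ELIZA's surface pattern substitution and modern statistical language modelling can be made concrete with a minimal, hypothetical Python sketch of ELIZA-style rules (an illustrative fragment, not Weizenbaum's original script):

import re

# A few hypothetical ELIZA-style rules: each pattern maps to a canned reflection.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    # Return the first matching canned response; fall back to a generic prompt.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I feel lonely"))      # -> Why do you feel lonely?
print(respond("my mother worries"))  # -> Tell me more about your mother.

No statistical modelling is involved here: the apparent empathy comes entirely from echoing the user's own words back through fixed templates, which is the "illusion of understanding" the abstract refers to, as opposed to the learned, context-sensitive generation of systems like ChatGPT.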
2025
Vincenzo Cuomo
International
https://www.kaiakpj.it/wp-content/uploads/2025/03/Danilo-Petrassi-From-ELIZA-to-Conversational-AI.pdf
Files in this record:
Danilo-Petrassi-From-ELIZA-to-Conversational-AI.pdf
Access: open access
Description: Paper
Type: Published version (publisher's layout)
Licence: All rights reserved
Size: 183.74 kB
Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/11393/352750