From ELIZA to Conversational AI: Can a Chatbot Develop Emotions? Her as a Case Study
Petrassi Danilo
2025-01-01
Abstract
The rapid evolution of conversational artificial intelligence (AI) has sparked an ongoing debate regarding its ability to replicate, or even experience, human emotions. While early conversational chatbots such as Joseph Weizenbaum’s ELIZA (1966) relied on simple pattern recognition to create the illusion of understanding, modern AI systems like ChatGPT generate highly sophisticated, contextually appropriate responses that can convincingly mimic emotional engagement. This paper draws upon cinematic reflections, such as Spike Jonze’s Her (2013), to offer a critical examination of the question of whether AI is capable of genuine emotional experience or merely simulating such experiences through advanced language modelling. Utilising a theoretical framework grounded in philosophy, psychology and communication studies, this research critically assesses AI’s capacity for emotional experience, positing that while chatbots may convincingly simulate human emotional expression, they lack the subjective element that is integral to genuine emotional experience. This distinction has profound implications for human-AI interaction, ethics, and our understanding of artificial intelligence’s humanity in contemporary society.


