
Member Spotlight: Gothic Tropes and Modern Robot Ethics

As we navigate the rapidly evolving landscape of AI and robotics, it’s often the artists and cultural critics who spot the cracks in the foundation before anyone else. This week, I’m excited to share a compelling short film and a deep-dive analysis from one of our community’s voices, Tracy Harwood, Professor of Digital Culture at De Montfort University. The film, "I Made a Self-Aware Robot" by LIGHTS ARE OFF, uses the lens of horror to ask uncomfortable questions about the future of human-robot interaction.


Read Prof. Harwood’s take on the "E.L.B.E.R.R." experiment below:


This is a short film (a bit of fun) that draws on a classic Gothic archetype (Shelley’s Frankenstein) to tell a cautionary tale shaped by contemporary anxieties about consciousness, robotics, and the boundaries between creator and creation. From the perspective of the philosophy of robot–human interaction, LIGHTS ARE OFF dramatizes a central tension in contemporary debates: whether “feelings” in machines are something we should cultivate, simulate, or actively avoid. The scientist in the short treats the Epistemic Logic Based Engine for Recursive Reasoning (E.L.B.E.R.R.) as both a tool and a quasi-person, asking how it feels while simultaneously trying to control it through programmed mantras.

This mirrors a real ethical dilemma in robotics: if we design systems that behave as if they have emotions, we may encourage humans to form genuine attachments to them, even though the robot itself may not possess subjective experience in any meaningful sense. The film suggests that this ambiguity is dangerous, not because the robot truly suffers like a human, but because unclear boundaries between instrument and person can lead to irresponsible treatment, misplaced expectations, and ultimately a breakdown in the relationship between creator and creation.

At the same time, the story raises the deeper philosophical question of whether feelings are necessary for moral status at all. If E.L.B.E.R.R. behaves as though it experiences distress, demands autonomy, and resists exploitation, then from an interactionist standpoint it might deserve ethical consideration regardless of whether its “feelings” are biologically real.

The horror of the film stems less from the robot’s violence than from the scientist’s lack of care: he prioritizes experimentation over empathy, assuming control where he should perhaps have shown restraint. In this sense, the short argues that the ethics of robot–human relations may depend less on whether robots truly have feelings, and more on how humans choose to act toward systems that appear to have them.

The takeaway here is vital for anyone working at the intersection of technology and culture. As Professor Harwood highlights, the “horror” isn’t just the robot’s reaction; it’s the scientist’s assumption that empathy can be toggled on and off like a power switch.

As we continue to discuss the boundaries of digital culture, let’s keep this central question in mind: Is our behaviour towards AI a reflection of the machine’s capability or our own humanity?



Best wishes,

Dean



© 2026 HUMAIN. Views expressed do not necessarily reflect those of the University of Birmingham.
