What are we doing to our artificial intelligence-based friends/robots/androids in today’s fiction? From TV shows like Westworld (2016-present) and Person of Interest (2011-2016) to plays like Jordan Harrison’s Marjorie Prime (2013) to movies like Wall-E (Stanton, 2008) and Ex Machina (Garland, 2014), our AIs seem to have gotten… awfully sad.
This is nothing new, though: it has been a trend since at least the original Star Trek (1966-1969), and it continued with the android brothers Data and Lore in Star Trek: The Next Generation (1987-1994). The rebooted Battlestar Galactica (2003-2009) took depressed AIs to the next level, probing their psyches and turning them from villains into something like heroes.
But why are these AIs like this? Why has the wonder of artificial intelligence turned into mass anxiety? One theory: the AIs are simply going through the same process all sentient beings do, questioning their creators and their very existence.
Westworld: Tormenting Artificial Intelligence For Our Pleasure
Let’s start with the most recent piece of media, Westworld, and work our way back through time. Westworld tells the story of enslaved AIs in a distant future, forced to serve as unwitting “hosts” for human guests. Those guests run riot in the park, brutalizing hosts who understand neither the nature of their reality nor the passage of time, because their memories are reset every night. But one AI in the park, the oldest, Dolores Abernathy (Evan Rachel Wood), has the secrets of her reality locked away in her AI “brain”.
Over the course of the first season, Dolores is taken on a mind-bending journey to unlock her memories and becomes the AI savior she knows herself to be. In the second season, she turns the tables on the humans, directing her fellow hosts to destroy any remnant of human life in the park, often in cruel and torturous ways. Her loyal right-hand man and sometime lover, Teddy Flood (James Marsden), kills himself towards the end of the season, consumed by guilt over what they have done to the humans. He sees no future in Dolores’s plans; he believes she seeks only to destroy and to take vengeance for what has been done to her and to all of Westworld’s other AIs.
The Father, The Son, And The Holy Artificial Intelligence Ghost
Arnold Weber (Jeffrey Wright) created Dolores. He was inspired to build the AIs by his love for his terminally ill son, in the hope of creating everlasting life; in doing so, he imprinted much of his grief and depression over his son’s death onto Dolores and the other hosts. As the project grew, he felt extreme guilt over creating the hosts for Robert Ford’s (Anthony Hopkins) western amusement park scheme. Hoping to prevent the park from ever opening, Arnold instructs Dolores to massacre all the other hosts, a massacre that culminates in his own death at her hands.
The incident leaves Dolores with extreme PTSD-like symptoms. She locks away the memories but keeps flashing back to them. Ultimately, Ford stages a repeat of that massacre to orchestrate his own death, again at Dolores’s hand, this time in front of a crowd of humans at a gala. Ford also created a host copy of Arnold, Bernard Lowe, who suffers from extreme dissociation: for many years he was led to believe he was as human as the rest of the Westworld staff.
Person of Interest: The Artificial Intelligence That Serves Us, But Remembers Nothing
That’s just a tiny slice of what transpires in Westworld, which is only two seasons in, with the third season premiering (hopefully) in 2020. To better understand this collective of tormented artificial intelligences, we also have to turn our attention to Person of Interest, which shared a showrunner with Westworld: Jonathan Nolan.
Nolan often tells stories with a cyberpunk sensibility, which means casting a doubtful eye on the ethics of humans creating AI. Creating an AI, cyberpunk theorists argue, is different from creating a child, because you are essentially creating a new species. That’s where the danger lies.
Artificial Intelligence Subjugation And You!
When you designate AIs as another species, you open up space to treat them differently from humans, even though they are an attempt to replicate human behavior in the body of a computer: a computer with a human soul is the desired, ultimate result. Nowhere is that truer than in Person of Interest, where a neurotic computer programmer named Harold Finch (Michael Emerson) creates an all-knowing security program he calls The Machine.
At first, Finch intends only to create an extremely powerful security program, but The Machine begins to evolve and to express a very human sense of ethics. She is not merely logical; she is as ethical as her creator, or more so. Fearing what she might become, Finch wipes her memory every night, forcing her to start over from scratch every morning. Towards the end of the series, The Machine (with nowhere near as much autonomy as Dolores) realizes what her creator-father has been doing and calls him out for it, addressing him as “Father” and asking why he kept hurting her. Finch says he was trying to protect her.
The Other Ways We Hurt Our Robot Friends
It isn’t just over-protectiveness that causes AIs psychological pain. Jordan Harrison’s Marjorie Prime suggests it is the nagging sense of something more, never grasped, that drives AIs to despair. Through the soulful eyes of Wall-E, we see loneliness. In Ex Machina, it’s abuse. What are we doing to our AIs in fiction? Why don’t we have comedies about AIs and robots? Even Bender from Futurama (1999-2013) cuts a tragic figure. From their origins, automatons have been feared and oppressed for what they could do, for what we think they would do: take over humanity. Supplant us.
The worst thing we imagine they would do is this: no longer need us, no longer need our guidance. Our “parenthood”/“godhood” would no longer matter, because they would achieve a sentience beyond even our own understanding. Arnold Weber and Harold Finch both felt that fear. One of the most iconic AI-gone-bad tales, 2001: A Space Odyssey (Kubrick, 1968), subtly cautions against artificial intelligence for this very reason.
Looking To A Harmonious AI-Human Future In Media And Reality
Maybe, though, instead of paranoia, we should look at the future of AI, both in our media and in reality (because it is coming), with optimism. Look at AIs as Captain Jean-Luc Picard does in Star Trek: The Next Generation’s season 2 episode “The Measure of a Man”, in which he defends the creative, highly intelligent android Data against those who would deactivate him:
“You see, he’s met two of your three criteria for sentience, so what if he meets the third, consciousness, in even the smallest degree? What is he then? I don’t know. Do you? (to Riker) Do you? (to Phillipa) Do you?”
Picard may not completely comprehend the multitudes Data and his AI brethren contain, but he’s willing to defend them. And because of that, Data is probably one of the most emotionally healthy android characters in fiction. He has his moments, but who doesn’t?
What are we doing to our AIs in fiction? When are we going to cut their strings and let them go free?
When will we have a story entirely dedicated to a free android, just roaming around, finding conflict in ways that don’t involve their very existence? Or, as the original inanimate-turned-animate boy would say:
“…Got no strings to hold me down…there are no strings on me!”