The System Works: Notes on AI and Robots in Fiction

It’s never fun to embark on a subject that everyone else seems to be talking about, because unless you have a grossly overinflated estimation of your own knowledge and intellect, you have to wonder: do I have anything to add that others haven’t said better? (The trick to ever writing anything at all, incidentally, is cultivating just enough humility to have this doubt, and just enough arrogance to push past it and say, yeah, actually, I do.)

Artificial intelligence, machine learning, neural nets and genetic algorithms are bigger in popular culture now than they have ever been, thanks to several factors. First and most blatant of all, ML (machine learning, not Marxism-Leninism – alas) is the latest frontier of capital, a field of study brimming with tantalizing possibilities for applications in logistics, marketing and warfare, and where the money goes, so do writers, cartoonists, game developers and other peddlers of aesthetically pleasing lies. However, I’d say that the very reason these possibilities are so preeminent in the mass imagination today, and in the imaginations of tech moguls and half-baked cyber-messiahs in particular, is the large preexisting body of works featuring intelligent supercomputers, androids and so on, purporting to explore their possible future impact on society, among which we can count Alphaville, 2001: A Space Odyssey, Serial Experiments Lain, Terminator, Neuromancer, Ghost in the Shell, Alien, the Culture novels and many, many others. The narrative tastes and imaginative horizons of people roughly my age, all over the world, were deeply shaped by stories like these – and the stories themselves, I would like to argue, were rarely “about” smart machines in the first place – not exclusively, anyway. I’d like to suggest that much of what is presently assumed to be “known” or “reasonable to assume” about the future of machine learning technology originates in fiction that wasn’t really concerned primarily with the thing itself, but with what it can tell us about ourselves.

Let’s take Terminator for our first example. In it, resistance fighters from the future travel back in time to avert nuclear doom caused by the supercomputer Skynet and its murderous robo-undead minions. The very fact that nuclear war is brought up, notably in the form of a special effects sequence of an atomic bomb going off in Terminator 2 that still recurs in my dreams every few months, immediately places the film in the context of the Cold War and the very human institutions that developed such weapons of mass destruction in the first place. The presence of Skynet prompts us to ask: how calamitous would it be if we allowed the systems we built – computer systems, weapons systems, social and economic systems – to carry out the very tasks “we” built them for? Without compassion, justice or kindness – but with efficiency and optimal performance? Is it not a desperate necessity to throw a wrench in the works before it’s too late?

(The “we” draws attention to itself, because, against the popular idealist and metaphysical readings of the text that would seek to talk about the follies, dangers and supposed inherent characteristics of “technology” or “humanity,” it wasn’t humanity as a whole that built Skynet, but a defense contractor for the US military, Cyberdyne Systems. We know it by name!)

The relentless, skeletal T-800 suggests both mass production and living death, already predicting the fate that was to befall the Terminator franchise.

Skynet functions as a metaphor for capital: an engine of calculation and devastation, holding dominion over the entire ruined globe, unshackled from any silly notions people might hold about it being a mechanism for fulfilling human needs. This is the first important facet of AI in fiction: it is The System both personified and transcendent, a better-engineered kind of god for a world that has outgrown the historical social conditions that gave rise to our previous, outdated deities. It feels extremely appropriate for today’s technological robber barons to be fans of stories like these, no doubt telling themselves they are humanity’s bulwark against The System gone amok, when in fact they function as its high-level subroutines.

Nowhere is the connection of AI and capital more blatant than in cyberpunk fiction, where advanced computer systems are developed either to meet the needs of the state, corporations and the super-rich (categories that, much like in reality, blend and intermingle), or even become capitalists themselves, as in William Gibson’s Burning Chrome. Very frequently they are a thought experiment in moral philosophy, acting towards malevolent goals through some misapprehension of “programming,” like HAL in 2001 or Skynet itself; it’s actually quite refreshing when the pretense is dropped and less pretentious writers just represent them as “going insane” (like Shodan from System Shock), because that’s very much what the former amounts to as well: “insanity” understood as failing to follow some of the written and unwritten rules that are assumed to constitute the status quo of human society. I say “assumed” because, for instance, Shodan declaring herself a perfect and immortal being rings much more true as a representation of these would-be philosopher-kings than a poor unfortunate computer who is merely trying to carry out contradictory commands; it is hard to honestly entertain the idea that the sort of people an omnipotent AI is meant to allude to are firm believers in egalitarianism and have simply failed to examine their own programming. Granted, there are contradictions involved, but they are not within the code, or the soul, but first and foremost within the structure of society itself, between the imperative to accumulate and the needs of human beings. There exists a popular and absurd trope of a computer attempting to calculate a paradoxical or impossible problem, beginning to emit smoke and shutting down in defeat, charmingly lampooned in Portal 2, where Wheatley is just too stupid to understand the unsolvable logical conundrum that’s meant to bring him down. Sophistry and wit may work on philosophy professors, but everyone else, supercomputers included, ultimately finds that their immediate social position as either the one with power or the one without it provides the most practical solution to such contrived problems.

At any rate, ever since the Thatcher and Reagan years, AI personhood is a fairly banal idea in a world of corporate personhood, and those who treat it as the most pressing ethical quandary presented in these stories have missed the point by millions of miles, speeding right past it and landing on some distant, idyllic planet.

Of course, ever since Asimov and even further back to Čapek, machine personhood has very much been a theme of science fiction, and I plead guilty to finding it engaging. However, from the very outset, the best and most well-remembered stories about thinking machines don’t really imagine impossible futures so much as draw conclusions from real-world history and society. Both in etymological origin and in their narrative function, robots are servants and slaves, similar and proximate to humans but distinct from them, and lacking capacities like feeling pain, emotions, higher reasoning or a nebulous “free will” or “soul.” From this we see plainly that a robot or android is never anything but a metaphor for the dehumanization of humans in systems of oppression meant to exploit their labour and their bodies, and although the genre as a cultural institution may claim to have outgrown and left behind this origin, the seed is always there, as long as we read it in the proper historical context.

A robot is generally understood to be a creature bereft of race, gender, sexuality, nationality and all other socially identifying characteristics, everything except its ability to work; as such, it stands for a fantasy of an undifferentiated and obedient “working class,” nothing more than the extension of a machine in a factory, capable of taking simple instructions. Many popular robot stories (Data from Star Trek: The Next Generation is foremost in my mind here) take a very bizarre turn at this point – we’re all familiar with the trope of the machine becoming human, and a seemingly indispensable step in that journey is when “it” is replaced with a gendered pronoun. Entrance into binary gender – a system of domination and exploitation – is understood as ascending to some rudimentary stage of humanity and personhood from a state that is primal, undifferentiated, at once childish and animalistic. It’s not difficult to see what’s being done here: the addition of a class of robots somewhere between humans and animals serves, simultaneously, to obscure the hierarchical dehumanization of entire classes of people by moving them up a relative tier in the pyramid of beings, and to provide a blank body with supposedly no characteristics, onto which every existing hegemonic discourse of race, gender, mental development, childhood, etc., is projected at once.

Have you seen Person of Interest? You should watch Person of Interest. Thanks.

Of course, not every story featuring worker-machines does the same kind of ideological work as what I describe here, but I hope you can see how common this exact schema is. Data is by no means an example of the worst sort of robot story, because the show recognizes its fundamental premises very early on, and in its somewhat ham-handed but well-meaning way, has Captain Picard argue for Data’s full right of personhood precisely on the grounds that anything else constitutes a return to the logic of a society built on slavery. The bitter irony, of course, is that it is Data, whom we cannot help but read as male and white, who gets to have this character arc, while at the same time the show encourages us, for instance, to think of the dark-skinned Klingons as warlike, ruthless, animalistic and sensuous. It seems that as soon as a dehumanized category is rehabilitated, another must be introduced to take its place beneath “proper” humanity; Starfleet is ever aspiring to lofty moral heights, but the narrative itself remains firmly rooted in the values of the real-world society that created it. In effect, this shows us that many stories that quibble over machine personhood are solely interested in judging whether or not some arbitrary group sufficiently meets some arbitrary standard of “humanity,” rather than in questioning the existence of that standard.

(Of course you knew I’d bring up Nier: Automata at this point, a game thoroughly uninterested in adjudicating the robot/human distinction. Its world is populated with many types of sapient, social machines, and the only one among them really concerned with the attainment of “humanity” as some higher or exceptional state of being is the main antagonist. Through this, Automata makes it clear that the only purpose of this legalistic quibble over who does and doesn’t deserve rights is the establishment of hierarchies of superiors and inferiors.)

From the above we can see that stories of a “machine uprising” are rarely uprisings at all, insofar as smart machines carry the dual meaning of a universal template for all the wretched of the earth, and the fullest logical expression of systems of accumulation, profit and conquest. From Terminator to The Matrix, from the Borg to the Geth, the robot uprising can seemingly only be portrayed as catastrophic and genocidal, and its aftermath brings only the highest intensification of oppression and misery. A great majority of stories that feature inequality and injustice, even as half-realized or unconscious subtext, are obsessed with this fear: that they’ll do to “us” what “we” are doing to them. The intelligent machine provides the perfect vehicle for this fear, being wholly defined by its creators and superiors, its misery and hurt mass-produced on an assembly line. There is rarely any risk of the narrative diverging in a much more dangerous direction: that those used and exploited by society might best understand how to bring about a better one, neither through integration into injustice nor through turning it back on their oppressors. That kind of magnanimity, we are told, is the province of higher beings, of real humans.

I hope these somewhat scattered thoughts might be of use for contextualizing what thinking machines are generally used to represent in popular fiction; there’s no question we’ll be dealing with this topic many more times in the foreseeable future.