I wrote this essay as part of a Strategic Multilayer Assessment (SMA) Periodic Publication entitled AI, China, Russia, and the Global Order: Technological, Political, Global, and Creative Perspectives, edited by Nicholas D. Wright and Mariah C. Yager. For the full paper, visit the NSI website.
In my recent reflections about the exponential growth in artificial intelligence and the potential implications for humanity and the global order, a pulse fired across the synapses in my brain. Seemingly out of nowhere, I began humming a familiar tune set to Lewis Carroll’s famous poem entitled Jabberwocky.
“Beware the Jabberwock, my son! The jaws that bite, the claws that catch! Beware the Jubjub bird, and shun the frumious Bandersnatch!”
The poem depicts a terrifying beast called the Jabberwock and a valiant hero who takes up arms in a violent confrontation. For some strange reason, my brain substituted “AI Monster” for the Jabberwock in Carroll’s poem, leading musical notes from the distant past to enter my mind. I hadn’t sung or even thought about the tune since my days of singing in the St. Cecilia Youth Chorale—more than twenty-five years ago. What mysterious links was my brain connecting here?
Naturally, I turned to Google’s powerful search algorithm for answers. I’d forgotten that Lewis Carroll wrote the nonsensical poem for Through the Looking-Glass (1871), the sequel to his more famous novel Alice’s Adventures in Wonderland (1865). Both works were written by the mathematician under a pen name. Though considered children’s books today, they were intended as scathing critiques of prevailing trends in the field of mathematics and designed to parody several of his colleagues. Immediately, I connected the dots between our perception of impending doom at the hands of AI and the dark atmosphere and intense feelings of disorientation and angst in Carroll’s stories. The tale of Alice traveling down the rabbit hole to meet a sequence of demented, off-kilter, and nonsensical characters gives me a jarring sense of discomfort to this day—not unlike my fears regarding the rise of AI.
After a moment of awe for the mystifying inner workings of the human brain, I felt another curious tug at my consciousness after reading Carroll’s poem. I’d set out a pile of my favorite sci-fi films from which to draw inspiration for my next fiction project—a dystopian science fiction trilogy rooted in current digital trends. The movies were stacked in no particular order, and I decided to watch my all-time favorite, The Matrix.
My pulse quickened as Neo receives a message on his computer screen: “Follow the white rabbit.” That’s not Alice’s white rabbit, is it? Shortly afterwards, Neo spots a white rabbit tattoo on a woman’s shoulder and follows her crowd to an all-night rave. Then a thought crossed my mind. Is my brain showing me the link to the old musical tune? I reached the part where Morpheus meets Neo, and my buried memories started to surface. I froze in my chair, my heart now pounding against my chest. The blue and red pills are analogous to “Drink Me” and “Eat Me” in Alice in Wonderland, aren’t they? A few moments later, Morpheus says to Neo: “I imagine you’re feeling a bit like Alice… tumbling down the rabbit hole.” Yes, Morpheus. Yes, I do.
By now, my mind was blown. In making my film choice, I didn’t realize my brain was doing its thing again. It was drawing connections from the depths of my complex neural network and bringing them to the surface.
For the umpteenth time, this experience reinforced what I’ve always known to be true—that science fiction plays an important role in shaping our understanding of the implications of science and technology and helping us to cope with things to come. My brain was leading me down a rabbit hole to confront the horrifying AI monsters depicted in science fiction as one day disrupting the global order and destroying humanity—the automation monster, the supermachine monster, and the data monster.
The Automation Monster
In the first and oldest nightmare AI scenario, the future is automated. Humans have been completely sidelined by robots—stronger, tireless, and inexpensive versions of themselves—as depicted in Kurt Vonnegut’s Player Piano (1952). Fears about robotics have pervaded pop culture since Karel Čapek, a Czech playwright, coined the term “robot” in 1920 in his play Rossum’s Universal Robots (R.U.R.). The satire depicts robots performing the activities that humans typically find undesirable—the dirty, dull, and dangerous. As demonstrated at the end of Čapek’s play, when the robots rebel against humans and eliminate nearly all of humanity, automation, though more convenient, cheaper, and faster, presents new dangers.
In a series of short stories entitled I, Robot (1950), Isaac Asimov effectively demonstrates how humans may lose control of robots, even if they are programmed not to harm humans according to his three famous laws. He warned that as automated systems become more complex, humans will not be able to anticipate all the unintended consequences of rule-based systems.
Potential scenarios about the loss of control were also featured in several classic films during the Cold War period. In Stanley Kubrick’s Dr. Strangelove (1964), a doomsday device thwarts efforts by the US and the USSR to prevent nuclear war, leading to the destruction of both countries and a devastating nuclear winter. The removal of human meddling through automation was intended to increase the credibility of mutual assured destruction. The strategy goes awry because the Soviets fail to communicate their new capability to the US in a timely manner. Once the doomsday device is activated, it cannot be deactivated, since automation is the essential property of the system.
Another Kubrick film, 2001: A Space Odyssey (1968), features the HAL 9000 supercomputer (aka “Hal”), which was designed to automate most of the Discovery spaceship’s operations. Although the computer is considered foolproof, the human crew discovers that Hal has made an error in diagnosing a broken part. The crew decides to disconnect the supercomputer, but not before Hal discovers their plan and manages to kill off most of the crew.
In WarGames (1983), doubts surface about military officers’ willingness to launch a missile strike. Consequently, the government decides to turn over the control of the nuclear attack plan to the War Operation Planned Response (WOPR), a NORAD supercomputer, capable of running simulations and predicting outcomes of nuclear war. A young hacker inadvertently accesses the computer and launches a nuclear attack simulation, which begins to have real-world effects. To stop the computer from carrying out its automated nuclear attack, the system’s original programmer and the young hacker must first teach the computer the concept of mutual assured destruction in which there is no winner.
The predicted outcomes of the automation monster range from terrible to apocalyptic. In the most likely scenario, robots will destroy our jobs, leaving humans out of work and without any hope for economic mobility. The impact on the global order would be devastating, potentially leading to mass migrations, societal unrest, and violent conflict between nation-states. These fears appear to be substantiated by a wide range of studies from companies, think tanks, and research institutions, which predict as many as 800 million jobs will be lost to automation by 2030 (Winick, 2018).
Another frightening scenario involves autonomous weapons going awry. In an era of autonomous weapons, warfare will increasingly leverage machine speed and pose a challenge to the need for human control. Whereas humans require time to process complex information and reach decisions, machines can achieve the same in nanoseconds. Despite advantages in analyzing complex datasets, however, the decisions reached by machines may not be optimal due to the nature of information— its inaccuracy, incompleteness, bias, missing context, etc. To prevent some nightmare scenarios, humans must remain in the loop. To prevent others, humans might need to step aside to let the machines lead the action… because speed can kill (Scharre, 2018).
In another terrifying scenario, portrayed in Ghost Fleet (2016) by P. W. Singer and August Cole, overdependence on automation technologies creates critical vulnerabilities that can be exploited by adversaries. Recent news headlines regarding the vulnerabilities of US weapons systems and supply chains suggest that this scenario is a near-term possibility (GAO, 2018). US superiority in automation technologies offers our adversaries powerful incentives for conducting first-move asymmetric attacks that exploit these vulnerabilities (Schneider, 2018).
Taken to the worst extreme, automation combined with machine intelligence could potentially lead to the destruction of the world by autonomous machines and networks of machines—the supermachine monster.
The Supermachine Monster
In recent years, the supermachine monster has dominated the tech headlines as the scariest potential AI scenario. A number of public figures, including Elon Musk and the late Stephen Hawking, have issued dramatic warnings about the prospect of reaching the singularity in 2045—the point at which futurist Ray Kurzweil suggests machine intelligence will match and inevitably exceed human intelligence.
Inspired by fears about supermachines, The Terminator (1984) tackles the theme of a coming war between humans and machines, a result of an automation scenario gone awry. A defense contractor builds the Global Digital Defense Network, an AI computer system later referred to as Skynet. The system is designed to control all US computerized military hardware and software systems including the B-2 bomber fleet and the nuclear weapons arsenal. Built with a high level of machine intelligence, Skynet becomes self-aware, determines humanity to be a threat to its existence, and sets out to annihilate the human race using nuclear weapons and a series of lethal autonomous and intelligent machines called terminators.
The Matrix (1999) picks up where The Terminator leaves off, depicting the aftermath of war between humans and machines, the initial triumph of the machines, and the enslavement of humans. The majority of humans are prisoners in a virtual reality system called the matrix and are farmed in pods as a source of energy for the machines. A small number of freed humans live in a deep underground colony called Zion and carry on a violent struggle against the machines’ sentinels. By the end of the trilogy, Neo convinces the machines to make peace with Zion and to fight against a common enemy—a malignant computer program called Agent Smith.
There are few scenarios more frightening than apocalyptic wars between humans and machines. Indeed, we are so afraid of the automation and supermachine monsters these days that we’re failing to see the scariest monster of them all—lurking beneath the surface of our consciousness—the data monster.
The Data Monster
My brain made connections that were deep beneath my consciousness, linking Carroll’s poem to Alice in Wonderland’s rabbit hole and The Matrix to the AI monster that keeps me up at night—the data monster. Lately, I’ve been wondering whether we are already controlled by the machines and just aren’t fully aware of it yet.
In Plato’s Republic, Socrates describes a group of people chained to the wall of a cave who think the shadows on the wall are real because it’s all they’ve ever seen; they are prisoners of their own reality. How is it that we are not seeing the dangers of the data monster, even while the pernicious beast stalks us everywhere, lurking in the corners, ready to enslave us at any moment? Or are we already its prisoners and unable to see the truth?
For me, the real Jabberwock is the three-headed data monster combo of the Internet, digitization, and algorithms. Somewhere deep down, we realize the data monster is stealthily assaulting our sense of truth, our right to privacy, and our freedoms. Most of us sense this is happening, but we suppress such concerns in favor of obsessing over the other, sexier AI monsters. But if we don’t take the red pill now and wake up from our digital slumber, we may end up prisoners in the matrix—controlled by our machines.
Much has changed since The Matrix was first released in 1999—particularly our inextricable relationship with smartphones, the rapidly accumulating crumbs of our digital trail, and our growing interconnectedness through the Internet of Things. The image of sleeping humans imprisoned in pods, connected to the machine world by thick, black cables attached to their spines, and ruthlessly exploited as an energy source hits home in a whole new way in 2018. At its essence, the matrix is a digital world designed by the machines to fool humans into thinking it is real. Are we in a matrix?
Our common sense of truth has been eroding for the past few years at the hands of endless political spin, outright lies, and allegations of fake news. The propaganda has gotten bad enough to invoke images from George Orwell’s dystopian novel 1984, in which Party member Winston Smith works diligently at Oceania’s Ministry of Truth to rewrite history based on the ever-changing truth propagated by the Party. The bleak world of newspeak and doublethink created by Orwell in 1949 resonates so well today that the novel became an Amazon bestseller in 2017. Although Winston rebelled against the Party, he was in the end compelled to reject the evidence of his eyes and ears. “It was their final, most essential command.”
French philosopher Jean Baudrillard, a muse of the Wachowskis, argued that in a postmodern world dominated by digital technology and mass media, people no longer interact with physical things, but rather with imitations of things. Technology has thus altered our perceptions of reality and made it more difficult to identify truth. Our growing interdependence with machines causes intense confusion about which parts of our human experience on this earth are more real—those in the physical world or those in the digital one. How do we know that what we know is true or real?
At the beginning of the movie, Neo asks, “Do you ever have the feeling where you’re not sure if you’re awake or you’re dreaming?” Deep down, he senses the pernicious illusion of the matrix. When Morpheus meets Neo for the first time, he gives Neo a choice: take the blue pill and wake up as if nothing ever happened, or take the red pill and learn the truth. Later in the story, Neo’s power as “The One” derives from his ability to see the matrix for what it really is. At times in the movie, it’s unclear which form of existence is preferable—the matrix or the real world. Indeed, one villain of the movie, a freed human by the name of Cypher, betrays Morpheus for a chance to get back into the matrix and deny the truth of his existence.
But truth is not the only vital element under siege by the data monster. Slowly but surely, the data monster has been jealously chipping away at our right to privacy. Here again, we are partners in our own demise. With every digital action, each one of us produces new data points—every email, text, and phone call, Internet download, online purchase, GPS input, social media post and contact, daily step count, and camera selfie. The list could go on and on. With all the data we produce, we are essentially handing over the tools of surveillance and control. But to whom?
In 1984, George Orwell creates a world in which the citizens of Oceania are monitored via telescreens, hidden microphones, and networks of informants. The notion that Big Brother is always watching keeps most citizens in line. For those who rebel, the Thought Police take extraordinary measures to bring them back in line. Such a social control experiment, leveraging technology, is happening in the real world as we speak.
Leveraging the data trail of its population, the Chinese government has begun testing a social credit system which assigns a trustworthiness score to citizens based on their behavior—including their social network, debt, tax returns, bill payment, tickets, legal issues, travel and purchase habits, and even disturbances caused by pets. Blacklisted Chinese citizens with low scores face limitations in their freedoms, ability to travel, employment opportunities, and much more. As such a credit system takes effect, citizens will conform their behavior to avoid negative outcomes.
Perhaps many of us can breathe a sigh of relief—at least we don’t live in China. Thus far, most democracies have resisted the alluring pull of monitoring technologies in the name of protecting privacy. Or have we? If our data trail is not being funneled to our government, then to whom are we giving the power? And do we trust them to do the right thing?
In Future Crimes (2015), Marc Goodman describes in compelling detail how we fail to see the reality of our digital actions and are gambling away our privacy: we are the product of the tech giants. We have grown accustomed to exchanging small pieces of our privacy for free services every day by clicking the box to “agree to terms and conditions.” Most of us skip the pages of legalese to download the app and get access to the convenient and “free” services. When we use Gmail from Google, update our status on Facebook to share news with our friends, or purchase stuff from Amazon to avoid going to the store, we agree to the use and tracking of our data.
All of this data is out there somewhere, waiting to be mined and exploited. Until something bad happens, like a stolen credit card number or identity theft, most people don’t think about the consequences. But if we’re being honest with ourselves, the data monster probably knows us better than we know ourselves. And that means there are private-sector companies that know us, too. Tech giants such as Facebook and Twitter already assign their users reputation scores based on activity and social networks. Big Brother is watching you.
But the power of data goes far beyond monitoring and surveillance to allow for predictive control. In “The Minority Report” (1956), a short story by Philip K. Dick, a set of precogs is able to see and predict all crime before it occurs, eliminating crime in a future society. Instead, people are arrested and tried for precrimes based solely on the logical progression of their thoughts. We may shudder at the notion of such a world, but AI and big data are already being used to forecast our behavior on a daily basis—and to shape our future behavior. For example, Amazon tracks every purchase you make on its website and uses its algorithm to predict what item you are most likely to buy next. This seems harmless enough. For now.
But what happens when machine learning tools begin making more important decisions than our retail purchases? The data we produce today will shape the future, possibly even control it. What is the nature of that data? How reliable is it? Has someone accounted for false information, missing information, partial truths, and bias?
Last year, the British police began using “predictive crime mapping” to determine where and when crime will take place. Some allege the system has learned racism and bias, leading to increased policing in areas with high crime rates and to self-fulfilling prophecies.
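The self-fulfilling prophecy at issue is, at bottom, a feedback loop, and a toy simulation makes the mechanism concrete. In this hedged sketch (all numbers and the allocation rule are hypothetical, not any real system), two districts have identical true crime, but each day's patrol is sent to the district with the most recorded arrests; because police record crime wherever they patrol, a one-arrest head start hardens into a permanent "hotspot":

```python
# Toy model of a predictive-policing feedback loop (hypothetical numbers).
# Both districts have identical true crime; only the starting counts differ.
arrests = [10, 11]  # district A, district B: B starts one arrest ahead

for day in range(100):
    # Send the day's patrol to the current "hotspot" (most past arrests)...
    hotspot = arrests.index(max(arrests))
    # ...where it inevitably records new arrests, reinforcing the map.
    arrests[hotspot] += 1

print(arrests)  # → [10, 111]: district B absorbs every patrol
```

The model never consults actual crime rates, yet its output looks like confirmation that district B is more dangerous, which is precisely the bias critics allege.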
Machine learning tools analyze data, but they cannot determine what is true and what is false unless they’ve been trained to do so. If it’s difficult for humans to identify truth these days, how can we expect machine learning tools to do it better? In a recent example, Amazon attempted to use a machine learning algorithm to simplify its hiring process. The training data included resumes submitted to Amazon over ten years, the majority of which came from male candidates. By using this dataset, the algorithm learned to prefer male applicants over female ones and downgraded the latter in making its recommendations.
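The mechanism behind such cases is easy to reproduce in miniature. In this hedged sketch (the data, feature tokens, and tiny perceptron are all illustrative inventions, not Amazon's actual system), a classifier trained on a hiring history that favored male candidates learns a negative weight for a gender-correlated resume feature:

```python
# Hedged sketch of how a biased hiring history produces a biased model.
# The data, feature tokens, and model are illustrative, not Amazon's system.

# Each "resume" is a set of feature tokens with a hire/reject label drawn
# from a skewed history in which mostly male candidates were hired.
history = [
    ({"python", "chess_club"}, 1),
    ({"java", "chess_club"}, 1),
    ({"python", "womens_chess_club"}, 0),
    ({"java", "womens_chess_club"}, 0),
] * 10  # repeated to stand in for ten years of resumes

weights = {}
for _ in range(20):  # simple perceptron training loop
    for features, hired in history:
        score = sum(weights.get(f, 0.0) for f in features)
        predicted = 1 if score > 0 else 0
        for f in features:  # nudge weights toward the historical label
            weights[f] = weights.get(f, 0.0) + 0.1 * (hired - predicted)

# The gender-correlated token ends up with a negative weight: the model
# has faithfully learned the bias baked into its training data.
print(weights["womens_chess_club"] < 0)  # → True
```

Nothing in the code mentions gender; the model simply finds whatever features best reconstruct past decisions, bias included.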
Although Armageddon-like scenarios do not loom large for the data monster, its impact could be far more pernicious to us in the near-term.
Overcoming the Monsters
My brain was not merely connecting the dots across disparate images stored in my memory bank. It was also providing me with a primal emotional response to my fears about AI. Carroll’s poem offers a good example of an “overcoming the monster” plot in which characters find themselves “under the shadow of a monstrous threat” (Kakutani, 2005). At the climax, the hero has a final confrontation with the monster, deftly wielding his sword and slaying the Jabberwock.
“One, two! One, two! And through and through, The vorpal blade went snicker-snack! He left it dead, and with its head, He went galumphing back.”
In reality, we are still quite far away from the worst-case AI scenarios, especially in light of human adaptability, ingenuity, and resilience. To achieve sentience or mindedness of a human, a machine would have to excel in and leverage all forms of human intelligence simultaneously (Gardner, 1983).
It’s time to put on our battle armor, wield our swords, and address the risks of AI head-on with creative determination—let’s do what humans do best, to imagine the future we want for ourselves and put the pieces in place to achieve it. When we put aside our terror, we’ll find the beast is not quite as powerful as we imagined. If we can overcome the data monster, then we can certainly triumph over the worst of the automation and supermachine monsters. Let’s take the red pill and get started today.
The views expressed in this piece belong to the author and do not reflect the official policy or position of the National Defense University, the Department of Defense or the U.S. Government.
Dearden, L. (2017, October 7). How technology is allowing police to predict where and when crime will happen. Independent. Retrieved from https://www.independent.co.uk/news/uk/home-news/police-big-data-technology-predict-crime-hotspot-mapping-rusi-report-research-minority-report-a7963706.html
Gardner, H. (1983). Multiple intelligences: Challenging the standard view of intelligence. Retrieved from http://www.pz.harvard.edu/projects/multiple-intelligences
Gonzalez, G. (2018, October 10). How Amazon accidentally invented a sexist hiring algorithm. Inc. Retrieved from https://www.inc.com/guadalupe-gonzalez/amazon-artificial-intelligence-ai-hiring-tool-hr.html
Goodman, M. (2015). Future crimes: Inside the digital underground and the battle for our connected world. New York: Anchor Books.
Kakutani, M. (2005, April 15). The plot thins, or are no stories new? The New York Times. Retrieved from https://www.nytimes.com/2005/04/15/books/the-plot-thins-or-are-no-stories-new.html
Scharre, P. (2018). Warfare enters the robotics era. Retrieved from https://davemarash.com/2018/10/25/paul-scharre-center-for-a-new-american-security-warfare-enters-the-robotics-era/
Schneider, J. (2018). Digitally-enabled warfare: The capability-vulnerability paradox. Center for a New American Security. Retrieved from https://www.cnas.org/publications/reports/digitally-enabled-warfare-the-capability-vulnerability-paradox
Singer, P. W., & Cole, A. (2016). Ghost fleet: A novel of the next world war. New York: Mariner Books.
Winick, E. (2018, January 25). Every study we could find on what automation will do to jobs, in one chart. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/610005/every-study-we-could-find-on-what-automation-will-do-to-jobs-in-one-chart/