
Why AI stories are more about humans than about machines

20th Century Studios © 2023
Madeleine Yuna Voyles stars as the robot child Alphie in “The Creator,” a film arriving Sept. 29.

Humans are at war with machines. In the near future, an artificial intelligence defense system detonates a nuclear warhead in Los Angeles. It deploys a formidable army of robots, some of which resemble people. Yet humans still have a shot at victory. So a supersoldier is dispatched on a mission to find the youth who will one day turn the tide in the war.

No, it’s not another movie in “The Terminator” series.

In “The Creator,” opening Sept. 29, the hunter is a human named Joshua (John David Washington). He discovers that the humanoid he’s been sent to retrieve looks like a young Asian child (Madeleine Yuna Voyles). It even has a teddy bear. As Joshua bonds with the robot, he wonders whether machines are really the bad guys.

Why We Wrote This

Representations of artificial intelligence in popular culture help push society to think more about technology’s role – and which human values it reflects.

“All sorts of things start to happen as you start to write that script where you start to think, ‘Are they real? And how would you know?’” writer and director Gareth Edwards told the Monitor during a virtual Q&A session for journalists. “‘What if you didn’t like what they were doing – could you turn them off? What if they didn’t want to be turned off?’”

Popular culture has profoundly influenced how we think and talk about artificial intelligence. Since ChatGPT’s giant leap forward, AI has often been cast as the villain. AI supercomputers go rogue in Gal Gadot’s Netflix thriller “Heart of Stone” and the latest “Mission: Impossible” movie. For dramatic effect, AI is often embodied in robots. They’re not only sentient, but also the killer who’s in the house – quite literally, in the case of “M3GAN,” the murderous high-tech doll. The message: Be kind to your Alexa, or it may set the Roomba on you.

Geoffrey Short/Universal Pictures
In the 2022 horror movie “M3GAN,” a lifelike doll develops a mind of her own.

But the more thoughtful AI stories are really more about humans than about machines. The scenarios about good AI versus evil AI push society to consider ethical frameworks for the technology: How can it represent and embody our best and highest values?

“Sometimes we are so excited about the technology that we forget why we build the technology,” says Francesca Rossi, president of the Association for the Advancement of Artificial Intelligence (AAAI). “We want our humanity to progress in the right direction through the use of technology.”

Before ChatGPT was a twinkle in the eye of a search engine, Arthur C. Clarke, Philip K. Dick, and William Gibson were writing about the ethics of AI. Isaac Asimov’s stories posited the Three Laws of Robotics: (1) A robot may not injure a human. (2) A robot must obey human commands, unless they conflict with the first law. (3) A robot must protect its own existence, so long as doing so does not conflict with the first or second laws.

At first, the laws sound good. A closer examination reveals that they’re a literary device with loopholes that the author could exploit for “whodunit” murder mysteries. But in an era in which the Australian military has developed combat AI robodogs – reminiscent of the machine K-9s in the “Black Mirror” episode “Metalhead” – Mr. Asimov’s framing seems freshly relevant.

Courtesy of Jeff Vintar
“The real issue is the ethics of the people behind the robots. Do we want robots that can kill? Apparently we do, because we’re making them right now.” – Jeff Vintar, a screenwriter for the 2004 blockbuster “I, Robot”

“The real issue is the ethics of the people behind the robots,” says Jeff Vintar, a screenwriter for the 2004 blockbuster “I, Robot,” named after a collection of Mr. Asimov’s short stories. “Do we want robots that can kill? Apparently we do, because we’re making them right now.”

AI and human aims

If AI should be aligned with human goals, the question is, which ones? HAL 9000, the onboard computer in the 1968 film “2001: A Space Odyssey,” illustrates the dilemma of conflicting values. An astronaut returns to the spaceship and asks HAL to open the pod bay doors. The computer refuses. It places a higher priority on the success of the mission than on the life of the astronaut.

The 2002 movie “Minority Report” is a more earthbound example of competing values. In the story by Mr. Dick, police can predict criminal acts in advance. The result is a tension between safety and privacy. In real life, police are now using AI technology to identify potential future crime by analyzing data about previous arrests, specific locations, and events. Critics claim the algorithms are racially biased.

“This does seem to be coming true, and ‘predictive policing’ doesn’t seem to be so great in that movie,” says Lee Barron, author of “AI and Popular Culture.” “[Mr. Dick] is a particularly prescient writer.”

Perhaps some of the time. The sci-fi author’s book “Do Androids Dream of Electric Sheep?” – later adapted as the movie “Blade Runner” – imagined AI robots that are indistinguishable from humans. But it also predicted that we’d have flying cars by now.

“We’re not good at futurism,” writer Fredrik deBoer, who has written about AI for Persuasion, an online magazine, says in a Zoom call. “Future forecasting is really hard for us.”

Mr. deBoer cautions that humankind is prone to overhyping the impact of new technologies, citing the Human Genome Project as an example. He wonders if AI will ultimately prove to be less revolutionary than imagined.

20th Century Fox
Will Smith stars as a Chicago police detective trying to solve a murder with a nonhuman suspect in the 2004 film “I, Robot,” named after a collection of stories by Isaac Asimov.

The arrival of ChatGPT certainly startled and awed the world with its astonishing grasp of language and communicative abilities. It has amplified debates over whether AI will become sentient – or at least evolve into such a convincing simulacrum of consciousness that we will imagine it to be a living entity with a soul. Could we fall hopelessly in love with sultry-voiced AI entities on our phones, as Joaquin Phoenix does in “Her”?

Pop culture may have conditioned us to fear that AI will destroy humanity if it becomes sentient. That prevalent notion amounts to fearmongering, says Ian Watson, co-writer of the Steven Spielberg movie “A.I. Artificial Intelligence.” Nonbiological machines are heuristic algorithms, the sci-fi author says in a phone interview. It’s possible that self-aware machines may never exist, he adds. In his Pinocchio-like screenplay, which he originally wrote for Stanley Kubrick, a robot boy named David wants to become human. At the end of the movie, David discovers that’s impossible.

Daniel H. Wilson, author of the bestselling 2011 novel “Robopocalypse,” thinks that AI could someday pass the Turing test – that is, appear to think like a human. But he says there hasn’t been the requisite big breakthrough in mathematics and algorithms to make so-called artificial general intelligence possible. By contrast, ChatGPT is known as generative AI. It lacks the ability to understand context. The technology is a predictive algorithm that scrapes the web to calculate the most likely response to queries. He finds that worrisome.

“Generative AI is creating humanlike intelligence by regurgitating billions of data points taken mostly from people on the internet,” says Mr. Wilson, a former robotics engineer. “Can you imagine a worse mirror to hold up to humanity than all of our moments from the internet?”

The role the public plays

Some computer scientists are working to create healthier AI inputs. A 2023 college textbook titled “Computing and Technology Ethics: Engaging Through Science Fiction” includes reprints of short sci-fi stories that prompt students to contemplate ethical dilemmas in computer programming.

“Once you’re inside a story, thinking from another point of view, issues of motivation [and] issues of social effects are much clearer,” says Judy Goldsmith, a professor of computer science at the University of Kentucky and one of five co-authors of the textbook. The book helps students think beyond the value of utilitarianism, she adds.

Ms. Rossi from AAAI has a copy of that textbook on her desk. Her favorite sci-fi allegory is Pixar’s “WALL-E.” In the 2008 movie, obese humans aboard an intergalactic ship have become wholly beholden to AI. They’ve forfeited meaningful connections with others because they’re constantly staring at screens.

“‘WALL-E’ is one that really brings up this concept of passively accepting the technology because it makes our life easier,” says Ms. Rossi, who is also the AI ethics global leader at IBM. “In order to keep AI safe and take care of the ethics issues, companies have to do their part. The regulators have to do their part. But every user has to use it responsibly and with awareness.”
