"2001: A Space Odyssey" turns 50: Why HAL endures

Even after five decades of technological advancement, the murderous artificial intelligence in Stanley Kubrick's philosophical sci-fi film remains the definitive metaphor for technology's dark side.

By Eoin O'Carroll, Staff writer, and Molly Driscoll, Staff editor

"I'm sorry, Dave. I'm afraid I can't do that."

With those nine words, HAL 9000, the sentient computer controlling the Jupiter-bound Discovery One, did more than just reveal his murderous intentions. He intoned a mantra for the digital age.

In the 50 years since the US premiere of Stanley Kubrick's "2001: A Space Odyssey," virtually everyone who has used a computer has experienced countless HAL moments: "an unexpected error has occurred," goes the standard digital non-apology. The machine whose sole purpose is to execute instructions has chosen, for reasons that are as obscure as they are unalterable, to do the opposite.

There's something about HAL's bland implacability that makes him such an enduring symbol of modernity gone awry, and such a fitting vessel for our collective anxiety about an eventual evolutionary showdown against our own creations.

"HAL is the perfect villain, essentially...," says John Trafton, a lecturer in film studies at Seattle University who has taught a course on Stanley Kubrick through the Seattle International Film Festival. "He's absolutely nothing except for a glowing eye.... Essentially we're just projecting our own fears and emotions onto HAL."

HAL's actual screen time is scant, beginning an hour into the nearly three-hour film and ending less than an hour later. And yet, during that interlude, his personality eclipses those of the film's humans, whom Roger Ebert described in his 1968 review as "lifelike but without emotion, like figures in a wax museum."

While the film's human characters joylessly follow their regimens of meals, meetings, exercise routines, and birthday greetings, we see HAL, whose name stands for "Heuristically programmed ALgorithmic computer," expressing petulance, indecisiveness, apprehension, and, at the end, remorse and dread.

It's this blending of human emotionality with mathematical inflexibility that some experts find troubling. Human biases have a way of creeping into code for mass-produced products, giving us automatic soap dispensers that ignore dark skin, digital cameras that confuse East Asian eyes with blinking, surname input fields that reject apostrophes and hyphens, and no shortage of other small indignities that try to nudge us, however futilely, into the drab social homogeneity of Kubrick's imagined future.

"One of the things that makes HAL a really enduring character is he faces us with that kind of archetypal technological problem, which is that it's a mirror of our own biases and predilections and things that we are maybe not conscious of," says Alan Lazer, who teaches courses including "The Films of Stanley Kubrick" at the University of Alabama in Tuscaloosa.

Moral machines?

Machine learning, a programming method in which software can progressively improve itself through pattern recognition, is being used in more walks of life. For many Americans, artificial intelligence is shaping how our communities are policed, how we choose a college and whether we get admitted, and whether we can get a job and whether we keep it.
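To make that definition concrete, here is a minimal sketch of the loop the term describes: a toy classifier that starts out knowing nothing and improves with each labeled example it sees. The data, learning rate, and structure are invented for illustration; this is a sketch of the idea, not any particular production system.

    # A toy "machine learning" loop: a perceptron that improves itself
    # by nudging its weights each time it misclassifies an example.
    # All data and parameters here are invented for illustration.
    import random

    random.seed(0)
    # Labeled examples: points (x1, x2) in the unit square; the hidden
    # pattern to learn is "label +1 if x2 > x1, else -1."
    data = [((x1, x2), 1 if x2 > x1 else -1)
            for x1, x2 in ((random.random(), random.random()) for _ in range(200))]

    w1 = w2 = b = 0.0  # weights and bias start out knowing nothing

    def predict(x1, x2):
        return 1 if w1 * x1 + w2 * x2 + b > 0 else -1

    for epoch in range(10):
        mistakes = 0
        for (x1, x2), label in data:
            if predict(x1, x2) != label:  # wrong? adjust toward the example
                w1 += 0.1 * label * x1
                w2 += 0.1 * label * x2
                b += 0.1 * label
                mistakes += 1
        print(f"pass {epoch}: {mistakes} mistakes")  # typically shrinks across passes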

Catherine Stinson, a postdoctoral fellow at the University of Western Ontario who specializes in philosophy of science, cautions that the software engineers who are writing the algorithms governing more and more socially sensitive institutions lack training in ethics.

"Everybody thinks that they are an expert in ethics. We all think that we can tell right from wrong, that if presented with a situation we'll just know what to do," says Dr. Stinson. "It's hard for people to realize that there are actually experts in this and there is space for expertise."

In an op-ed in The Globe and Mail published last week, Dr. Stinson echoed Mary Shelley's warning in "Frankenstein," a novel that turned 200 this year, of what happens when scientists attempt to exempt themselves from the moral outcomes of their creations.

She points out that MIT and Stanford are launching ethics courses for their computer science majors and that the University of Toronto has long had such a program in place.

Other groups of computer scientists are trying to crowdsource their algorithms' ethics, such as MIT's Moral Machine project, which will help determine whose lives (women, children, doctors, athletes, business executives, large people, jaywalkers, dogs) should be prioritized in the risk-management algorithms for self-driving cars.

But those who crowdsource their ethics are ignoring the work of professional moral theorists. Stinson notes that many computer scientists have an implicit orientation to utilitarianism, an ethical theory that aims to maximize happiness for the greatest number by adding up each action's costs and benefits.
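The arithmetic behind that theory is, at bottom, a sum. Here is a deliberately toy sketch of a utilitarian cost-benefit tally; the actions and numbers are invented purely to show the shape of the calculation.

    # Toy utilitarian calculus: score each candidate action by summing its
    # benefits and subtracting its costs, then pick the highest net total.
    # Actions and values are invented for illustration only.
    actions = {
        "option_a": {"benefits": [5.0], "costs": [2.0, 1.5]},
        "option_b": {"benefits": [4.0, 1.0], "costs": [0.5]},
        "option_c": {"benefits": [3.0], "costs": [3.5]},
    }

    def net_utility(action):
        return sum(action["benefits"]) - sum(action["costs"])

    for name, action in actions.items():
        print(f"{name}: net utility {net_utility(action):+.1f}")

    # The utilitarian choice is simply whichever action maximizes the sum.
    best = max(actions, key=lambda name: net_utility(actions[name]))
    print("chosen:", best)  # option_b, at +4.5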

Utilitarianism enjoys support in American philosophy departments, but that support is far from unanimous. Critics charge that such an approach denies basic social and familial attachments and that it permits inhumane treatment in the pursuit of the greatest good.

Ordinary people tend to hold a mix of utilitarian and non-utilitarian views. For instance, most survey participants say that self-driving cars should be programmed to minimize fatalities. But when asked what kind of self-driving car they'd be willing to buy, most people say they would want one that prioritizes the lives of the vehicle's occupants over all else.
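That inconsistency is easy to see once the two preferences are written out as decision rules. In the hypothetical sketch below (the crash options and probabilities are invented, not any real vehicle's logic), a minimize-fatalities rule and an occupants-first rule face the same scenario and disagree.

    # Two toy decision rules for the same hypothetical crash scenario.
    # Expected-fatality numbers are invented for illustration.
    options = {
        "stay_course": {"occupants": 0.1, "pedestrians": 0.9},
        "swerve": {"occupants": 0.6, "pedestrians": 0.0},
    }

    def minimize_total(opts):
        # Survey answer: fewest expected deaths overall, whoever they are.
        return min(opts, key=lambda o: opts[o]["occupants"] + opts[o]["pedestrians"])

    def occupants_first(opts):
        # Buyer's answer: fewest expected occupant deaths, full stop.
        return min(opts, key=lambda o: opts[o]["occupants"])

    print("minimize-fatalities rule picks:", minimize_total(options))  # swerve
    print("occupants-first rule picks:", occupants_first(options))     # stay_course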

Either way, there's something undeniably creepy about dealing with an autonomous machine that reduces your personal worth and dignity to code. "We can't use our human wiles on them," says Stinson.

The disquiet HAL evokes, says Matthew Flisfeder, a professor in the University of Winnipeg's department of rhetoric, writing, and communications, is the same unease we feel when our social choices are determined by the impersonal forces of the market.

"There's this constant goal," says Dr. Flisfeder, "to try to be efficient and objective and rational, and when we see that presented back to us in the form of the dryness of a machine like HAL, we start to realize the instrumentality in that and how it's actually very dehumanizing."

Predicting technology's triumph over humanity was not, however, Kubrick's aim. HAL is ultimately defeated, in one of cinema's most poignant death scenes, and Dave moves on to the film's, and humanity's, next chapter.

"Essentially you have a film of this fear of artificial intelligence making humans obsolete," says Trafton, the Seattle University lecturer. "Yet what does the movie end with? It ends with a Star Child. It ends with human beings recycling back."