AI in the real world: Tech leaders consider practical issues.
The practical ethics of AI may have less to do with the Terminator, and more to do with terminated workers.
Facebook CEO Mark Zuckerberg appears at the Facebook F8 conference in San Francisco, California, on April 12, 2016. Zuckerberg and other Silicon Valley leaders have convened a new panel to consider the practical future of artificial intelligence. (Stephen Lam/Reuters/File)
The discussion of artificial intelligence has been flooded with concerns about “singularity” and futuristic robot takeovers. But how will AI impact our lives five years from now, compared with 50?
That’s the focus of Stanford University’s One Hundred Year Study on Artificial Intelligence, or AI100. The study, which is led by a panel of 17 tech leaders, aims to predict the impact of AI on day-to-day life, everywhere from the home to the workplace.
Barbara Grosz, an AI expert at Harvard and chair of the committee, said in a statement that the panel “felt it was important not to have those single-focus isolated topics,” but rather to examine AI in everyday settings, “because that’s where you really see the impact happening.”
Researchers kicked off the study with a report titled “Artificial Intelligence and Life in 2030,” which considers how advances like delivery drones and autonomous vehicles might integrate into American society. The panel, which includes executives from Google, Facebook, Microsoft and IBM, plans to amend the report with updates every five years.
For most people, the report suggests, self-driving cars will be the technology that brings AI to mainstream audiences.
“Autonomous cars are getting close to being ready for public consumption, and we made the point in the report that for many people, autonomous cars will be their first experience with AI,” Peter Stone, a computer scientist at the University of Texas at Austin and co-author of the Stanford report, said in a press release. “The way that first experience is delivered could have a very strong influence on the way the public perceives AI for years to come.”
Stone and colleagues hope that their study will dispel misconceptions about the fledgling technology. They argue that AI won’t automatically replace human workers; rather, it will supplement the workforce and create new jobs in tech maintenance. And just because an artificial intelligence can drive your car doesn’t mean it can walk your dog or fold your laundry.
“I think the biggest misconception, and the one I hope that the report can get through clearly, is that there is not a single artificial intelligence that can just be sprinkled on any application to make it smarter,” Stone said.
The group has also considered regulation. Given the diversity of AI technologies and their wide-ranging applications, panelists argue that a one-size-fits-all policy simply wouldn’t work. Instead, they advocate increased public and private spending on the industry and recommend bringing AI expertise into all levels of government. The group is also working to create a framework for self-policing.
“We’re not saying that there should be no regulation,” Stone told The New York Times. “We’re saying that there is a right way and a wrong way.”
But there are other issues, some even trickier than regulation, which the study has not yet considered. AI applications in warfare and “singularity,” the notion that artificial intelligences could surpass human intellect and suddenly trigger runaway technological growth, did not fall within the scope of the report, panelists said. Nor did it focus heavily on the moral status of artificially intelligent agents themselves.
No matter how “intelligent” they become, AIs are still based on human-developed algorithms. That means that human biases can be infused into a technology that would otherwise think independently. A number of photo apps and facial recognition programs, for example, have been found to misidentify nonwhite people.
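To make that mechanism concrete, consider the sketch below, a toy illustration rather than anything from the report: all of the numbers and the simple “recognizer” are invented for this example. It shows how a training set dominated by one group can, with no explicitly discriminatory rule anywhere in the code, produce a system that fails far more often for an underrepresented group.

```python
# Illustrative only: a toy "recognizer" trained on skewed data.
# All numbers are invented; this is not from the AI100 report.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, center):
    """Simulate 5-dimensional feature vectors for one demographic group."""
    return rng.normal(loc=center, scale=1.0, size=(n, 5))

# Group A dominates the training data; group B is underrepresented.
train_a = make_group(900, center=0.0)
train_b = make_group(100, center=2.0)

# A naive matcher: accept an input if it lies close to the average of the
# training data. Because group A dominates, that average is pulled toward
# group A's features.
template = np.vstack([train_a, train_b]).mean(axis=0)

def miss_rate(samples, threshold=3.0):
    """Fraction of genuine samples the matcher fails to accept."""
    dists = np.linalg.norm(samples - template, axis=1)
    return float(np.mean(dists > threshold))

print("miss rate, group A:", miss_rate(make_group(1000, 0.0)))  # low
print("miss rate, group B:", miss_rate(make_group(1000, 2.0)))  # much higher
```

Nothing in the model singles anyone out; the unequal error rates fall out of who was, and wasn’t, well represented in the data its human developers chose, which is the kind of accountability gap Crawford points to.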
“If we look at how systems can be discriminatory now, we will be much better placed to design fairer artificial intelligence,” Kate Crawford, a principal researcher at Microsoft and co-chairwoman of a White House symposium on society and AI, wrote in a New York Times op-ed. “But that requires far more accountability from the tech community. Governments and public institutions can do their part as well: As they invest in predictive technologies, they need to commit to fairness and due process.”
As it turns out, there are already groups dedicated to tackling these ethical concerns. LinkedIn founder Reid Hoffman has collaborated with the Massachusetts Institute of Technology Media Lab both to explore the socioeconomic effects of AI and to design new tech with society in mind.
“The key thing that I would point out is computer scientists have not been good at interacting with the social scientists and the philosophers,” Joichi Ito, the director of the MIT Media Lab, told The New York Times. “What we want to do is support and reinforce the social scientists who are doing research which will play a role in setting policies.”