
What will artificial intelligence look like in 15 years?

As the conversation about artificial intelligence grows louder, public perception of its eventual integration into everyday life has shifted from general fears to more specific questions about implementation.

Shizuo Kambayashi/AP
FILE - In this July 17, 2016, file photo, shoppers talk to SoftBank Corp.'s companion robot Pepper, equipped with a "heart" designed to not only recognize human emotions but react with simulations of anger, joy and irritation, at a store in Tokyo.

Whether they are assisting your doctor in surgery, driving your car, analyzing crime patterns, or cleaning and securing your home, artificial intelligence (AI) will play a big role in urban living by 2030. But to maximize the benefits of an AI-wired city tomorrow, experts and the public need to have a frank conversation today, according to the first report from Stanford University's One Hundred Year Study on Artificial Intelligence, which was released last week.

"As a society, we are now听at a crucial juncture in determining how to deploy AI-based technologies in ways that听promote, not hinder, democratic values such as freedom, equality, and transparency," , which analyzes the role AI will play in the typical North American city in 2030, focusing on eight domains: transportation, home robots, healthcare, education, entertainment, low-resource communities, public safety and security, and employment and the workplace.

"Policies should be evaluated听as to whether they foster democratic values and equitable sharing of AI鈥檚听benefits, or concentrate power and benefits in the hands of a fortunate few."

The report outlines not only the ways AI could potentially be used throughout everyday life, but also how public opinion surrounding the implementation of AI has changed and will continue to change.

"In each domain there is a high potential for artificial intelligence technologies to improve the quality of life in the typical north American city by the year 2030, but in each case there are barriers to overcome, and [in] some more than others," Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts, tells 海角大神.

With autonomous cars imminently on the horizon, but still getting into accidents, and new Federal Aviation Administration (FAA) rules over drone use, we have already encountered examples of both the barriers and the solutions: some are technological, like meeting safety standards, but others are social.

"There are definitely people who bring up fears that are spurred on by science fiction literature and movies, but then we also get a lot of interest in what we are doing," Dr. Stone tells the Monitor. "AI tends to be very polarizing: some people tend to be very excited about it, others are very fearful, and sometimes the same people have both of those different attitudes."

Often, doubts about AI's integration into society take on a dystopian tone. However, more mundane topics like job loss and inequality have come to dominate the conversation, Stone says.

"Throughout history technological advances have affected the workplace," Stone tells the Monitor. "In the perceivable future, most jobs will not be replaced by AI technologies, but will be augmented or changed. The healthcare advances are not going to replace doctors, but they may change the skills that the doctors need or how doctors spend their time."

The question of exacerbating existing inequalities is more difficult. However, the report recommends starting a discussion now on how the additional wealth created by AI can be spread equitably and fairly, a point often overlooked amid worries about robots displacing human workers. AI methods can help plan equitable food distribution, for example, or spread health and safety information.

"Care must also be taken to prevent AI systems from reproducing discriminatory behavior, such as machine learning that identifies people through illegal racial indicators, or through highly-correlated surrogate factors, such as zip codes," the panel notes. "But if deployed with great care, greater reliance on AI may well result in a reduction in discrimination overall, since AI programs are inherently more easily audited than humans."

But AI's potential to help all communities, not just wealthier ones, also needs to be part of the conversation, both to boost trust in new technologies and to build the relationships that will ensure they are implemented in the most helpful way possible down the road. Building that trust has been challenging while AI debates remain theoretical, but that is quickly changing.

Stone says there are different pathways to trust. There are technological solutions that would prove a machine's reliability, such as a driver's-license-style test that AI systems must pass before being deployed for public use. However, Stone believes that something as simple as increased exposure to AI will ultimately be most successful.

"People need to have first hand experience with them," Stone tells the Monitor. "I trust the various applications on my computer not because someone certified them, but because I have used them hundreds of times and I have seen the same behavior over and over again. Once people start seeing autonomous cars on the road and get experience with them stopping at the right time and starting at the right time, that will add to the trust."听
