Why Google AlphaGo's victory over a human isn't just about Go
The software won a closely watched showdown against human Go champion Lee Sedol on Tuesday, four games to one. How Google plans to use the machine learning technology that powers AlphaGo.
Google DeepMind CEO Demis Hassabis, center, receives a Go board autographed by South Korean professional Go player Lee Sedol, right, as Korea Baduk (Go) Association Chairman Hong Seok-hyun, left, applauds in Seoul, South Korea, on Tuesday. Google's AlphaGo software beat Mr. Lee four games to one, a feat that artificial intelligence researchers previously believed could be decades in the future.
Lee Jin-man/AP
On Tuesday, Google's artificial intelligence program AlphaGo completed a feat many thought would still be decades in the future: it beat human Go champion Lee Sedol in the complex, centuries-old game of strategy, winning four games to one.
The computer's victory, which came at the end of a week-long showdown closely watched in Mr. Lee's native South Korea and in China, marked a major milestone for artificial intelligence. Lee conceded at the end of a nearly five-hour final game, having won the previous game on Sunday.
"When it comes to psychological factors and strong concentration power, humans cannot be a match," Lee said. But he added, "I don't necessarily think AlphaGo is superior to me. I believe there is still more a human being can do to play against artificial intelligence."
Beating a human at Go, which originated in ancient China, was long considered one of the most challenging tasks for artificial intelligence software, which has steadily made inroads into other games, including chess, Jeopardy!, and several Atari games that Google's software learned to play by analyzing the raw pixels onscreen.
The complex strategy game begins with one player using black stones while the other has white. The players take turns placing the stones on a 19x19 grid, each attempting to capture more territory on the board than the other.
Unlike chess or checkers pieces, Go stones aren't moved around on the board. Players capture an opponent's stones by surrounding them with their own. That gives players an average of 200 possible moves for any particular position, compared with an average of 20 in chess, according to Google DeepMind CEO Demis Hassabis.
That means Go has more possible positions than there are atoms in the universe, he wrote in a blog post in January after AlphaGo beat the European Go champion five games to zero.
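Simple arithmetic makes that scale concrete. Here is a minimal Python check, using standard back-of-the-envelope figures (each of the board's 361 points can be empty, black, or white, and the observable universe holds roughly 10^80 atoms):

```python
from math import log10

# Each of the 361 intersections on a 19x19 board can be empty, black,
# or white, so 3**361 is an upper bound on board configurations; the
# number of *legal* positions is smaller but still around 10**170.
positions_upper_bound = 3 ** 361
atoms_in_universe = 10 ** 80  # standard order-of-magnitude estimate

print(f"Board configurations: ~10^{int(log10(positions_upper_bound))}")
print(positions_upper_bound > atoms_in_universe)  # True, by ~90 orders of magnitude
```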
The AlphaGo software draws on Google's expertise in machine learning, a subset of AI that allows a computer to "learn" to complete particular tasks and has been used for a range of applications, including identifying images, translating speech, and responding to emails.
The technique Google used to crack the game builds on a traditional artificial intelligence method in which a computer figures out all the possible moves, then sorts through them to determine the best one. To do this, the computer must "see" all the way to the end of the game, forming a search tree of possibilities to calculate whether an individual move will help it win, The Christian Science Monitor's Eva Botkin-Kowacki reported.
The Google researchers combined that search with two deep neural networks that narrow down the game's vast possibilities. One, known as the policy network, limits the search to the moves most likely to lead to a win, while the second, a "value network," evaluates how promising each resulting position is. The combined approach looks only far enough ahead to determine the best immediate move, rather than playing every line out to the end.
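That division of labor can be sketched in a few lines of Python. The `policy_net` and `value_net` functions below are random stand-ins, not DeepMind's actual models, and the real system embeds both networks in a full tree search rather than the single-step lookahead shown here:

```python
import random

def legal_moves(position):
    # Placeholder: real Go move generation (and the board itself) is
    # omitted; ten dummy moves stand in for the ~200 real options.
    return range(10)

def policy_net(position):
    # Stand-in for the trained policy network, which would assign a
    # probability to each legal move; here, random scores.
    return {move: random.random() for move in legal_moves(position)}

def value_net(position):
    # Stand-in for the trained value network, which would estimate the
    # chance of winning from a position; here, a random number in [0, 1).
    return random.random()

def apply_move(position, move):
    # Positions are represented as tuples of moves for this sketch.
    return position + (move,)

def choose_move(position, breadth=3):
    # Breadth reduction: keep only the policy network's top suggestions.
    priors = policy_net(position)
    candidates = sorted(priors, key=priors.get, reverse=True)[:breadth]
    # Depth reduction: score each resulting position with the value
    # network instead of searching to the end of the game.
    return max(candidates, key=lambda m: value_net(apply_move(position, m)))

print(choose_move(()))  # picks one of the dummy moves
```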
Mirroring the approach Google uses to "teach" its self-driving cars the rules of the road, AlphaGo was then put through a training regimen that combined studying moves made by human Go experts with playing millions of games against itself, using the information from the two neural networks.
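The self-play idea itself is simple to outline. Below is a schematic Python sketch with placeholder `play_game` and `update` helpers; AlphaGo's real pipeline first trained on expert games and then refined its networks through reinforcement learning at a vastly larger scale:

```python
import random

def play_game(black, white):
    # Placeholder game: 150 random moves and a random winner stand in
    # for a full game played out by the policy/value networks.
    moves = [random.randrange(361) for _ in range(150)]
    winner = random.choice(["black", "white"])
    return moves, winner

def update(agent, moves, winner):
    # Placeholder for a reinforcement-learning update that would nudge
    # the network weights toward the moves of the winning side.
    agent["games_seen"] += 1

def train_by_self_play(agent, num_games):
    # The same agent plays both sides, learning from its own results.
    for _ in range(num_games):
        moves, winner = play_game(agent, agent)
        update(agent, moves, winner)
    return agent

agent = train_by_self_play({"games_seen": 0}, num_games=1000)
print(agent["games_seen"])  # 1000
```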
That yielded what David Silver, the lead author of a paper the company's researchers published in the journal Nature, has described as a "much more humanlike" approach to the complex game.
But the company says the purpose of AlphaGo isn't simply to defeat some of the world's best players, but to use the technology to expand the capabilities of artificial intelligence into other complex tasks.
While artificial intelligence technologies have also faced criticism, with physicist Stephen Hawking, entrepreneur Elon Musk, and others decrying the technology's potential to be weaponized, researchers say computers' ability to "learn" could have a number of applications.
In the short term, Google says it hopes to advance products such as smartphone assistants, while eventually using machine learning in applications such as healthcare and robotics.
"Because the methods we've used are general-purpose, our hope is that one day they could be extended to help us address some of society's toughest and most pressing problems, from climate modeling to complex disease analysis," Dr. Hassabis wrote in the blog post.
Researchers at Texas Tech, for example, have been putting machine learning to work since 2012, while Facebook recently revealed it was using machine learning tools to develop highly detailed population maps that could be used to see how people connect to the Internet.
IBM's competing Watson platform, which draws more extensively on cloud computing technology, is being used in a partnership with Memorial Sloan Kettering Cancer Center to assist in diagnosing cancer cases in two hospitals in Thailand and India, according to tech news reports.
Watson, which uses what IBM calls "cognitive computing," an approach that relies more on predictive analysis than Google's AlphaGo does, is learning to recognize anomalies in medical images and flag them for a physician, while also learning to suggest possible treatments.
A more predictive approach could also be helpful in other areas, such as forecasting recidivism: whether people released from prison will commit additional crimes. Researchers who have studied the technology say such a tool could be useful for judges making sentencing decisions, for example.
Google's Go match may also reveal areas where researchers can improve machine learning technology.
Despite AlphaGo's success in beating Lee, who has been making a living playing Go since he was 12, the computer wasn't infallible.
After he defeated the software in Sunday's game, Lee noted that it didn't handle surprise moves well. It also played less capably with the black stones, which must claim a larger amount of territory to win than an opponent playing white. In the final game, Lee chose black, the Associated Press reports.
Other players pointed to the role of human fatigue in playing the game, a factor that didn't affect Google's computer, which could play hundreds of practice matches to sharpen its skills.
"It does not seem like a good thing for us professional Go players," Chinese world champion Ke Jie told the AP on Tuesday, "but the match played a very good role in promoting Go."