How Google's Neural Network Hopes To Beat A 'Go' World Champion

South Korean Go champion Lee Sedol (right) poses with Google DeepMind head Demis Hassabis. On Wednesday, Sedol will begin a five-game match against a computer.
Jung Yeon-Je AFP/Getty Images

In South Korea on Wednesday, a human champion of the ancient game of "Go" will square off against a computer programmed by Google DeepMind, an AI company owned by the search giant. If the machine can beat the man over the five-game match, researchers say it will be a milestone for artificial intelligence.

Here are the key things to know about the match and what it will mean for the future, both of humanity and our robot overlords.


1. A computer won at chess 20 years ago. Go is tougher.

IBM grabbed the headlines when its Deep Blue supercomputer bested world champion Garry Kasparov in 1997.

But chess is a computer's game. It has strict rules and a limited number of moves each turn. Deep Blue gained the upper hand by crunching a huge volume of possible moves to see which ones would lead to a win.
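Deep Blue's actual search piled on alpha-beta pruning, opening books and custom chips, but the core idea, exhaustive lookahead, fits in a few lines. Here's a minimal Python sketch of that style of search on a deliberately tiny stand-in game (Nim: take one or two stones per turn, whoever takes the last stone wins), not chess:

```python
def best_outcome(stones):
    """Outcome for the player to move: +1 forced win, -1 forced loss."""
    if stones == 0:
        return -1  # the opponent just took the last stone and won
    # Try every legal move; the opponent's best reply is our worst case.
    return max(-best_outcome(stones - take)
               for take in (1, 2) if take <= stones)

def best_move(stones):
    """Choose the take that leaves the opponent worst off."""
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: -best_outcome(stones - take))

for n in range(1, 8):
    result = "win" if best_outcome(n) == 1 else "loss"
    print(f"{n} stones: forced {result}, best take: {best_move(n)}")
```

Chess is vastly bigger, which is why Deep Blue needed specialized hardware, but its search scaled up along these same lines.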

Go is a very different kind of game. Players use stones to fence off territory and capture each other's pieces. It has fewer rules and more choices each turn. In fact, "there are more possible 'Go' positions than there are atoms in the Universe," says Demis Hassabis, the head of Google DeepMind.
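His claim survives a back-of-the-envelope check. Each of the 361 points on the 19-by-19 board can be black, white or empty, so a crude upper bound on positions is 3^361; many of those arrangements are illegal, but the legal count is still around 10^170:

```python
import math

# Crude upper bound: 3 states for each of the 361 board points.
board_points = 19 * 19
digits = board_points * math.log10(3)
print(f"3^{board_points} is roughly 10^{digits:.0f}")  # ~10^172

# Atoms in the observable universe are commonly put near 10^80.
```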

Computers hate choices. With far too many branches to crunch through Deep Blue-style, Go has long been a nightmare for rule-bound computers.


2. This program taught itself how to play.

The Google program, known as "AlphaGo," actually learned the game without much human help. It started by studying a database of about 100,000 human matches, and then continued by playing against itself millions of times.

As it went, it reprogrammed itself and improved. This kind of self-teaching program is built on a neural network, a design based on theories of how the human brain works.
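A single "study" step can be sketched schematically. In this hypothetical miniature (nothing like DeepMind's actual training code), a network with random weights is nudged so that it assigns higher probability to the move a human expert played:

```python
import numpy as np

rng = np.random.default_rng(0)
BOARD = 19 * 19  # the 361 points of a Go board

weights = rng.normal(scale=0.01, size=(BOARD, BOARD))
position = rng.choice([-1.0, 0.0, 1.0], size=BOARD)  # ours/theirs/empty
expert_move = 72  # hypothetical move index from a human-game database

def move_probabilities(position, weights):
    scores = position @ weights
    exp = np.exp(scores - scores.max())  # softmax over the 361 moves
    return exp / exp.sum()

for _ in range(200):
    probs = move_probabilities(position, weights)
    grad = probs.copy()
    grad[expert_move] -= 1.0  # cross-entropy gradient: predicted - target
    weights -= 0.01 * np.outer(position, grad)  # small step downhill

final_prob = float(move_probabilities(position, weights)[expert_move])
print("P(expert move):", round(final_prob, 3))
```

Repeat that across millions of positions, then pit the network against copies of itself, and the weights keep improving without a human writing new rules.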

AlphaGo consists of two neural networks: The first tries to figure out the best move to play each turn, and the second evaluates who is winning the match overall.
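Here's the same toy setup extended to show that division of labor, with random weights standing in for the deep networks DeepMind actually trained:

```python
import numpy as np

rng = np.random.default_rng(1)
BOARD = 19 * 19

W_policy = rng.normal(scale=0.01, size=(BOARD, BOARD))  # "which move?"
w_value = rng.normal(scale=0.01, size=BOARD)            # "who's winning?"

def policy(position):
    """A probability for each of the 361 possible moves."""
    scores = position @ W_policy
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def value(position):
    """A single judgment of the whole position, between -1 and +1."""
    return float(np.tanh(position @ w_value))

position = rng.choice([-1.0, 0.0, 1.0], size=BOARD)
print("most promising move:", int(policy(position).argmax()))
print("estimated outcome:", round(value(position), 3))
```

In the real system the two networks don't pick moves by themselves; they steer a Monte Carlo tree search, narrowing the flood of choices to the handful worth exploring deeply.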

It's far more powerful than any Go-playing computer program to date.

3. The machine is not guaranteed to win.

In October, AlphaGo beat a European champion of the game, Fan Hui. But Fan is ranked far below the program's current opponent, Lee Sedol, who is considered among the best Go players in the world. Sedol may still be able to beat AlphaGo.

Nevertheless, the overall approach is clearly working, and soon AlphaGo, or another similar program, will likely overtake the world's best players.

4. This program will not lead to a dystopian future in which humanity is enslaved by killer robots. At least not for a few more years.

The deep-learning approach is making great strides. It's getting particularly good at recognizing images (and more creepily, human faces).

But skull-crushing mechanical suzerains? Probably not. For one thing, physical robots still suck. Seriously. They're just terrible.

And Google has a rosier purpose in mind anyway. It hopes programs such as AlphaGo can improve language translation and health care tools. Such a program might even someday be used to build a sophisticated virtual assistant. "I've concluded that the winner here, no matter what happens, is humanity," Eric Schmidt, the chairman of Google's parent company, Alphabet, said in a pre-match news conference.

Regardless of what you think about AI, it seems likely this sort of program will change the way we live and work in the years ahead.

Copyright 2016 NPR. To see more, visit http://www.npr.org/.