Computers have been able to reliably beat the best human players at checkers for decades, and at chess more recently, but Go is one game that has historically stymied artificial intelligence. Until now. Researchers at Google DeepMind have created a Go-playing program that, for the first time, beat a professional player at this ancient and deceptively complex board game.
Go, believed to have originated in China more than 2,500 years ago, is played with black and white stones on a 19x19 grid. Players take turns placing stones anywhere on the board, attempting to surround and capture their opponent's pieces. It's the epitome of "easy to learn, hard to master": There are simply so many possible moves (far more than in chess) that even supercomputers can't search far enough ahead to find winning strategies.
"The search space of Go is too enormous, too vast for brute force approaches to have any chance of succeeding," said Google DeepMind's David Silver in a video explaining the research. "As a result, AI researchers have to turn to more interesting approaches than brute force search, which are perhaps more human-like."
Their program, AlphaGo, uses "neural networks" trained on millions of positions from expert games to drastically narrow the options it has to sift through. The result is an AI player even stronger than the researchers expected: It beat European champion Fan Hui in October, and has a date with the world champion in March.
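The article doesn't describe AlphaGo's internals, but the core idea of using a learned model to cut down the number of moves worth examining can be sketched in a few lines of Python. This is a toy illustration, not DeepMind's code: `policy_scores` is a random stub standing in for the trained network, and all the names here are invented for the example.

```python
import random

BOARD_SIZE = 19  # standard Go board

def policy_scores(position, legal_moves):
    """Stand-in for a trained policy network. In AlphaGo this is a deep
    neural network trained on millions of expert moves; here it returns
    random scores so the sketch runs without any trained model."""
    return {move: random.random() for move in legal_moves}

def candidate_moves(position, legal_moves, top_k=10):
    """Rather than searching all ~250 legal moves, keep only the top_k
    moves the policy rates most promising -- this is the pruning step."""
    scores = policy_scores(position, legal_moves)
    return sorted(legal_moves, key=scores.get, reverse=True)[:top_k]

# Toy usage: an empty board, where every intersection is a legal move.
position = None  # placeholder; a real engine would track the stones
legal = [(row, col) for row in range(BOARD_SIZE) for col in range(BOARD_SIZE)]
shortlist = candidate_moves(position, legal)
print(f"search narrowed from {len(legal)} moves to {len(shortlist)}")
```

In the published system, a shortlist like this feeds a Monte Carlo tree search, with a second "value network" estimating who is ahead so the search doesn't have to play every line out to the end of the game.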
Its toughest opponent, however, may not be a human: Facebook CEO Mark Zuckerberg announced Wednesday that Facebook is working on its own Go-playing program. No word on when they'll face each other.