Finally, a machine has gone where no machine has gone before.
For the first time, researchers have developed a computer program that can beat a professional human player at the venerable and fiendishly subtle game of Go.
The feat is being hailed as a breakthrough, not just because it topples a long-standing challenge to artificial intelligence, but also because of the human-like way that the winning computer program achieved its mastery of the game.
Scientists say the development opens up new possibilities for the burgeoning field of deep learning, a branch of computer science whose foundations were partly laid by Canadian researchers. In the future, the approach could revolutionize tasks ranging from smartphone assistance and medical diagnostics to more general and challenging ones involving computers that can perceive and plan.
"Because the methods we used are general-purpose, our hope is that one day they can be used to help address some of society's toughest and most pressing problems," said Demis Hassabis, vice-president of engineering at Google DeepMind, the British-based research centre that led the work.
Together with colleagues at Google's headquarters in Mountain View, Calif., Dr. Hassabis and his team developed a system, dubbed AlphaGo, that learned how to win by studying how human experts play the game and by playing against itself.
Their approach, whose technical details were published on Wednesday in the journal Nature, is markedly different from that taken by computer scientists at IBM who designed the chess-playing computer Deep Blue two decades ago. Deep Blue relied on a brute-force method of calculation that could rapidly assess a myriad of possible chess positions for many moves ahead. In 1997, it defeated Garry Kasparov, at that time the world's top chess player, becoming the first computer to do so.
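To illustrate the brute-force idea, here is a minimal sketch of the kind of exhaustive lookahead search Deep Blue relied on. It is a toy, not IBM's actual code: the game tree is given as nested lists whose leaves are position scores from the first player's point of view.

```python
# A minimal sketch of brute-force minimax search, the general idea behind
# Deep Blue's approach (not IBM's actual implementation). Leaves of the
# nested-list tree are static scores of the resulting positions.

def minimax(node, maximizing=True):
    if not isinstance(node, list):      # leaf: a static evaluation
        return node
    child_scores = [minimax(child, not maximizing) for child in node]
    return max(child_scores) if maximizing else min(child_scores)

# Two moves of lookahead: the first player picks the branch whose
# worst-case reply is best.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree))  # -> 3: branch [3, 12] guarantees at least 3
```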
But experts have long regarded Go as a much tougher nut for computers to crack.
Invented about 2,500 years ago in China, "Go is probably the most complex game ever devised by humans," Dr. Hassabis said, speaking from London during a conference call with reporters.
A Korean man and woman play Go sometime between 1910 and 1920. (Library of Congress)
Played on a 19-by-19 grid with black and white stones alternately placed on the intersection points, Go is governed by only a few basic rules, which makes it deceptively simple to learn. Yet in that simplicity lies an extraordinarily rich set of possibilities, vastly greater than those that arise in chess and therefore much more daunting for a computer to calculate.
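A rough back-of-envelope calculation gives a feel for the scale. The figures below are commonly cited estimates, not numbers from the article: roughly 250 legal moves per turn in Go over a game of about 150 moves, versus about 35 moves per turn over 80 in chess.

```python
# Back-of-envelope numbers behind "vastly greater than chess".
from math import log10

# Every one of the 361 points can be empty, black or white.
print(f"Go board configurations <= 3^361  ~ 10^{361 * log10(3):.0f}")

# Game-tree size ~ (branching factor) ** (typical game length in moves).
print(f"Go game tree   ~ 250^150 ~ 10^{150 * log10(250):.0f}")
print(f"chess game tree ~ 35^80  ~ 10^{80 * log10(35):.0f}")
```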
A second way that Go differs from chess is that it is often very difficult to tell which player has the stronger position. That makes it virtually impossible for a conventional computer program to pick out which possible moves it should evaluate in detail from among those it can safely ignore. While there are Go programs that can beat human amateurs, until now computers have proved no match for an expert player's intuition.
AlphaGo has cleared this hurdle with the help of a unique architecture that includes two deep neural networks. Each network consists of millions of interconnected nodes where the connections can vary in strength as the program learns from experience, thereby mimicking the behaviour of groups of neurons in the human brain.
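The node-and-connection idea can be illustrated with a toy network a few lines long. The sizes and weights below are arbitrary stand-ins; AlphaGo's networks are vastly larger, and their connection strengths are learned from data rather than drawn at random.

```python
import numpy as np

# Toy two-layer network: each "node" sums its weighted inputs and applies
# a nonlinearity. Learning means adjusting the connection strengths W1, W2.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # connections: 4 inputs -> 8 hidden nodes
W2 = rng.normal(size=(8, 2))   # connections: 8 hidden -> 2 output nodes

def forward(x):
    hidden = np.maximum(0, x @ W1)   # a node "fires" if its weighted sum > 0
    return hidden @ W2               # raw output scores

print(forward(np.ones(4)))
```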
One network, which researchers call the "policy network," is tasked with narrowing the program's search for a next move at each point in the game. The second network, called the "value network," is used to evaluate those options by estimating who will win in each case rather than by trying to work out every possible sequence of moves to its conclusion.
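A schematic of that division of labour, with toy stand-ins for the two networks. The real system combines both networks inside a sophisticated tree search, which is omitted here.

```python
import random

# Toy stand-ins for the two networks described above -- not AlphaGo's
# actual models, just placeholders showing how the pieces fit together.

def policy_net(position, moves):
    """Stand-in policy network: a probability for each candidate move."""
    return {m: 1.0 / len(moves) for m in moves}   # toy: uniform guess

def value_net(position):
    """Stand-in value network: estimated chance of winning from here."""
    return random.random()                         # toy: random guess

def play(position, move):
    """Toy board update: just record the move."""
    return position + (move,)

def choose_move(position, moves, top_k=3):
    # 1. The policy network narrows the search to a few promising moves...
    probs = policy_net(position, moves)
    candidates = sorted(moves, key=probs.get, reverse=True)[:top_k]
    # 2. ...and the value network scores each one, instead of reading the
    #    game out to its conclusion.
    return max(candidates, key=lambda m: value_net(play(position, m)))

print(choose_move(position=(), moves=list(range(10))))
```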
Researchers trained the program by feeding it information on 30 million positions from a database of games played by human Go experts. The policy network learned to predict expert moves with 57-per-cent accuracy, a big improvement compared with previous efforts. The system's acquired expertise was then reinforced by playing against itself and tested against several other Go programs.
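The supervised stage can be sketched as follows, on a tiny made-up dataset standing in for the 30 million expert positions. The real pipeline, described in the Nature paper, used deep convolutional networks; this is the same learning principle at toy scale.

```python
import numpy as np

# Sketch of the supervised stage: learn to predict the expert's move from
# the position. The data here is random and tiny; AlphaGo trained on
# 30 million real positions.
rng = np.random.default_rng(1)
n_features, n_moves = 16, 4                 # toy encoding, 4 legal moves
X = rng.normal(size=(200, n_features))      # fake "positions"
y = rng.integers(0, n_moves, size=200)      # fake "expert moves"
W = np.zeros((n_features, n_moves))

for step in range(500):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)       # softmax over candidate moves
    p[np.arange(len(y)), y] -= 1            # gradient of cross-entropy loss
    W -= 0.01 * (X.T @ p) / len(y)          # one gradient-descent step

accuracy = (np.argmax(X @ W, axis=1) == y).mean()
print(f"move-prediction accuracy on the toy data: {accuracy:.0%}")
```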
Finally, last October, the system played a five-game match against European Go champion Fan Hui. AlphaGo won all five games. A match against Lee Sedol of South Korea, the world's top-ranked Go player, is set for March.
Neural networks and the related concept of deep learning have become increasingly prominent in artificial-intelligence circles, but the fundamentals go back many years and owe much to the pioneering work of Geoff Hinton, a computer science professor at the University of Toronto.
Ilya Sutskever, who earned his PhD under Dr. Hinton and later moved to Google, is among those who played a supporting role in the development of AlphaGo.
The program's demonstrated prowess "is really a big deal," said Dr. Sutskever, who is now a research director for OpenAI, a fledgling not-for-profit company based in Silicon Valley that specializes in advancing artificial-intelligence technologies.
Noting that many research teams have been grappling for years with how to design a winning Go system, he said AlphaGo is different from previous efforts because it "takes machine learning and makes the best possible use of it."
Yoshua Bengio, who specializes in machine learning at the University of Montreal and co-directs a program in neural computation for the Canadian Institute for Advanced Research, called AlphaGo's achievement "very significant and unexpected."
"It seemed like a problem that would take many more years to crack," he said.
Dr. Bengio added that other groups would be studying the program and likely applying its principles in new directions.
But across the full spectrum of tasks that human brains routinely perform, computers will continue to fall short for the foreseeable future. "We are in very exciting times with rapid and measurable progress, but still very far from human-level artificial intelligence," Dr. Bengio said.