
Former Jeopardy! champion contestants during the final day of sparring sessions against Watson at the IBM T.J. Watson Research Center in Yorktown Heights, N.Y. (Jeopardy! panel - NOVA "Smartest Machine on Earth")

Rapid developments in artificial intelligence are redefining the limits of what computers can do.

But the birth of this ultra-modern technological movement actually occurred at an academic conference at Dartmouth College in the mid-1950s. Organized by American computer scientist John McCarthy, the month-long event brought pioneers of the field together to lay the groundwork for what would be known from then on as artificial intelligence.

Computer science experts say AI has the potential to help doctors make faster medical diagnoses, improve national security, cut the prevalence of financial fraud and make numerous other advances.

"Every few years AI turns heads by doing something that previously could only be done by humans," said Cory Butz, computer science professor at the University of Regina.

The path that has led to these possibilities, however, has been filled with relentless frustrations and seemingly insurmountable obstacles. In fact, the challenges that lie ahead for AI have some members of the field convinced the goal of creating machines that can behave and adapt like humans will never be realized.

Although computers have been around for less than a century, references to intelligent machines date back thousands of years, to when ancient scholars speculated about the possibility of inanimate objects having intelligent capabilities.

But the actual possibility of humans being able to create intelligent machines didn't become a reality until the mid-20th century, when the first computers were developed.

Researchers began discussing the idea of intelligent machines in earnest in the 1940s, and AI was established as a research discipline at the famed Dartmouth conference of 1956.

What AI looked like in the first few decades following that seminal conference is starkly different from the direction it is heading today.

Back then, computer scientists believed the key to developing machines with human-like intelligence centred on reasoning. The basic notion was that computers could accomplish tasks or reach goals by using deduction and reason to move each step of the way. In a chess match, for instance, a computer would analyze the outcome of each possible move in order to choose the one that would help it beat its competitor.

Programs used algorithms, or a step-by-step set of instructions for solving a problem, to allow computers to deduce which decisions to make.
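
To make the idea concrete, here is a minimal sketch of that kind of exhaustive deduction, written in Python and applied to a toy take-away game rather than chess (a real chess program would need far more machinery). The game, its rules and the function names are invented purely for illustration:

    # A toy "deduce every outcome" search, in the spirit of early reason-based AI.
    # Rules of the invented game: players alternate removing 1-3 sticks from a pile;
    # whoever takes the last stick wins.

    def minimax(sticks, my_turn):
        """Return +1 if the player being evaluated can force a win, else -1."""
        if sticks == 0:
            # The previous player took the last stick and won.
            return -1 if my_turn else +1
        outcomes = [minimax(sticks - take, not my_turn)
                    for take in (1, 2, 3) if take <= sticks]
        # Our turn: pick the best outcome. Opponent's turn: assume their best (our worst).
        return max(outcomes) if my_turn else min(outcomes)

    def best_move(sticks):
        """Choose the move whose deduced outcome is best for the current player."""
        return max((take for take in (1, 2, 3) if take <= sticks),
                   key=lambda take: minimax(sticks - take, my_turn=False))

    print(best_move(7))  # prints 3: taking 3 sticks leaves the opponent in a losing position

The program never "learns" anything; every conclusion it reaches was already implicit in the rules its programmer wrote down, which is exactly the limitation described below.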

For many years, this reason-based approach to AI dominated the field, with many experts convinced it would be a matter of years before they had created a machine that could meet or surpass human intelligence. Their optimism was met with a major infusion of government funding.

But, as it turns out, things were not that simple.

One of the main problems - which would prove to be a severely limiting factor - was that reason-based approaches to AI required complex programs based on vast amounts of data that had to be entered by hand. Not only was this labour-intensive, but it meant programs could only be as good as the humans who wrote them. Under this approach, there was no way computers could actually learn, or make inferences and deductions, beyond what they were specifically programmed to accomplish.

"It didn't work, I think, because a lot of the knowledge, the way we make decisions, isn't something we can explain in full details," said Yoshua Bengio, professor in the department of computer science and operations research and Canada Research Chair in statistical learning algorithms at the University of Montreal.

Prof. Bengio noted that under reason-based approaches to AI, machines couldn't comprehend natural language because language is so complex that humans have been unable to write programs that adequately describe it to computers.

These challenges resulted in serious setbacks to AI, causing funds to dry up and research priorities to be directed elsewhere.

Computer scientists who felt the future of AI depended on taking research in an entirely different direction began pursuing what is now known as machine learning. Although there are many sub-fields of AI, machine learning has become the dominant focus of research in recent years as experts made important breakthroughs that renewed hope in the mission to create intelligent machines.

Machine learning also uses algorithms, but instead of reason and deduction, computers are able to generalize and "learn" by sifting through data.

That type of learning is geared toward narrow applications. For instance, computers at banks can analyze massive amounts of data in order to determine which customers are risky loan applicants. Search engines sort through millions of web pages to generate results, which become more refined and increasingly accurate as the machines learn which pages generate the most clicks.
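
As a rough illustration of the loan example, the sketch below trains a simple statistical model on a handful of made-up past applicants and then scores a new one. The numbers, the feature choices and the use of the scikit-learn library are assumptions for the sake of the example, not a description of any bank's actual system:

    # Learning a loan-risk pattern from (invented) past data instead of hand-written rules.
    # Features per applicant: [annual income in $1,000s, existing debt in $1,000s].
    from sklearn.linear_model import LogisticRegression

    past_applicants = [[85, 5], [30, 40], [60, 10], [25, 35], [95, 20], [20, 50]]
    defaulted       = [0,       1,        0,        1,        0,        1]  # 1 = defaulted

    model = LogisticRegression()
    model.fit(past_applicants, defaulted)  # the model generalizes from these examples

    new_applicant = [[40, 30]]             # income $40,000, debt $30,000
    risk = model.predict_proba(new_applicant)[0][1]
    print(f"Estimated default risk: {risk:.0%}")

Unlike the hand-coded programs of earlier decades, nothing in this script spells out what makes an applicant risky; the pattern is inferred from the examples, and it can keep improving as more data is added.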

Geoffrey Hinton, a computer science professor at the University of Toronto and leading AI expert, said one of the reasons machine learning holds so much promise is that today's computers have more power and memory, enabling researchers to input almost limitless amounts of data the machines can use to "learn" from.

"In the old days there was a limit to how much [machines]could learn," Prof. Hinton said. "Now that limit is disappearing."

Artificial intelligence milestones

Although references to intelligent mechanical devices date back to ancient times, artificial intelligence as we know it only began to emerge in the last several decades. A few major developments provide a glimpse at the milestones that have helped shape the field:

  • 1950: Computer scientist Alan Turing creates a test to see how well computers can behave like humans. In the Turing Test, as it is now known, human judges have typed conversations with a human and a machine. If the judge can't distinguish which is human and which is machine, the machine passes the test. The premise behind the test is that if a computer is able to fool someone into thinking it is human, then it could be considered "intelligent."
  • 1956: The Dartmouth Summer Research Conference on Artificial Intelligence, a month-long event, takes place, where pioneers in the field led by computer scientist John McCarthy are credited with establishing AI as a research discipline.
  • 1997: A computer named Deep Blue, developed by IBM, beats world champion Garry Kasparov in a chess match. Mr. Kasparov accused IBM of cheating and demanded a rematch, but the company refused. Even so, the event is regarded as a landmark for intelligent machines.
  • 2011: Watson, an artificial intelligence computer system developed by IBM, beats two human champions on the game show Jeopardy! It is considered a landmark achievement because Watson was able to answer questions posed in human language, which had long been a major obstacle for AI researchers.
