You're speaking my language
Montreal, a city where languages mingle and sometimes clash, is the setting for an all-out effort to create machines that can communicate on human terms. /By Ivan Semeniuk

“We want to make AI accessible to everybody,” says Layla El Asri of Microsoft’s Montreal lab
LM CHABOT/l’Éloi
Anyone looking to study the subtleties of human communication would be hard-pressed to find a better place to do it than Montreal. Here, multiple languages brought together by history and circumstance share a fluid co-existence that must be experienced to be appreciated. In Montreal, perhaps more than in any other city in North America, communication is all about context.
For Layla El Asri, that message holds the key to the next big revolution in artificial intelligence. A specialist in the field of dialogue systems, El Asri, 29, is a research manager at Microsoft Corp.'s newly launched Montreal research lab and is representative of the growing wave of young scientists flocking to the city from around the world. Working in a spacious 14th-floor facility in the city's downtown, she and her colleagues are grappling with the challenge of designing computer systems that can interact with humans the way humans interact with one another.
"Our ultimate goal is to have machines that can have a dialogue about anything," El Asri says.
Until recently, such an aim would have been dismissed as a pipe dream. What has changed is the rise of deep learning, an AI technique that allows computer algorithms, through training, to discover what features matter in the world in which they're trying to function. Hampered at first by a lack of data to train systems on, proponents of deep learning began scoring big gains in visual search, speech recognition and strategy games starting around 2012. That, in turn, has put Montreal—one of the world's foremost centres for deep learning—at the heart of the current AI boom.
At Microsoft, El Asri is looking to extend the recent advances in deep learning into the domain of language, a big focus of Montreal's burgeoning AI research hub. While the challenge is daunting, the commercial incentives are huge. A computer that can comprehend language at a deeper level than today's state-of-the-art systems would represent a leap forward for all sorts of customer-facing applications. In addition to being able to maintain a coherent conversation with a human, any system that can understand language in a meaningful way can be used for summarizing documents or teasing valuable information out of reams of written and audio records.
"Businesses are drowning in data, and they are looking for efficient ways of leveraging that data," says Doina Precup, a professor of computer science at McGill University who also heads up the Montreal research lab for DeepMind, the AI arm of Google. Precup adds that language processing, which has lagged behind other areas that have benefited from deep learning, is now becoming a sector where "there's very good potential for big developments to happen."
For El Asri, the motivation is more than commercial. A machine that can communicate effectively with non-scientists would put AI at the service of all humanity. "We think AI is going to be helpful to people, and we want to make it accessible for everybody," El Asri says.
Born in Tours, France, El Asri grew up steeped in language through literature, philosophy and culture. But it was computer science that caused her to look at language in a different way. "When I figured out that you could test hypotheses with computers, that opened up a whole new realm of exploring language," she says.
Her timing could not have been better. At that point, the typical form of machine-human interaction was the classic voice-activated answering system, which had endless menus of options and little flexibility in dealing with a caller's individual needs. But thanks to AI, a sea change was under way, heralded by the 2011 release of Apple's then revolutionary talking personal assistant, Siri.
El Asri studied machine learning in graduate school and earned her PhD in dialogue systems while working for Orange, a French telecommunications company. In 2016, she joined Maluuba, a small Canadian firm that was hiring experts to work at the forefront of language processing. The company, launched in 2011 by students at the University of Waterloo, made a splash early on with a voice assistant for the Android platform that rivalled Siri. The firm moved into licensing its technology for consumer electronics that respond to voice commands, such as smartphones and smart TVs.
"That's when we started to see the limitations," says Mohamed Musbah, head of product for the Montreal Microsoft lab and one of Maluuba's first employees. The issue was that, beyond a highly constrained set of options, any talking system required an unwieldy set of rules to steer its interaction with a customer. Maluuba needed a more robust sort of talking machine that could "truly comprehend language," he says.
Maluuba sought out Yoshua Bengio, a French-born pioneer of deep learning and director of the Montreal Institute for Learning Algorithms (MILA). Based at the University of Montreal, Bengio has managed to train and seed an entire community of researchers and startups that have the expertise the AI world currently needs most. Maluuba's new research arm opened in Montreal to be closer to Bengio and MILA. When the lab launched in December 2015, it was billed as the largest in North America to focus on language and AI. El Asri, one of the new lab's first hires, was brought in as a research scientist on dialogue systems.
At that moment, the world's tech giants were looking to capitalize on Montreal's talent pool. A year after Maluuba opened its lab, Microsoft announced it was buying the company and doubling down on its research agenda. Now the Montreal team would be working not only on problems that might be applicable to Microsoft's business, but also on discovering new approaches to language processing through deep learning. For El Asri, the acquisition meant access to more computer power and bigger data—the raw materials driving the AI boom.
As an example of where that research is heading, El Asri cites the classic problem of a customer looking to book a trip with an automated travel agent. The agent has options to offer, including possible destinations and means of transportation, while the customer comes with constraints, such as a budget and a schedule. What happens next involves the skills computers have notoriously found difficult to master, because they require more than an exchange of information. The agent must anticipate, adapt and remember what has already been discussed to avoid making inappropriate or nonsensical suggestions.
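To see why that is hard, it helps to spell out the bookkeeping such an agent has to do. What follows is a deliberately simple, hand-coded sketch of tracking a customer's constraints and remembering past offers; the slot names, the travel options and the matching rule are all invented for this illustration, and a real dialogue system would learn this behaviour from data rather than follow fixed rules.

```python
# A toy dialogue-state tracker for the travel-booking example. Everything here
# (slots, options, matching rule) is invented for illustration; it is hand-coded,
# whereas a learned system would acquire this behaviour from training dialogues.

OPTIONS = [
    {"destination": "Quebec City", "transport": "train", "price": 180},
    {"destination": "Quebec City", "transport": "plane", "price": 420},
    {"destination": "Toronto",     "transport": "plane", "price": 390},
]

class DialogueState:
    def __init__(self):
        self.constraints = {}      # what the customer has asked for so far
        self.already_offered = []  # memory, so the agent does not repeat itself

    def update(self, slot, value):
        """Record a new constraint mentioned by the customer."""
        self.constraints[slot] = value

    def next_offer(self):
        """Suggest an option consistent with every constraint heard so far."""
        budget = self.constraints.get("budget", float("inf"))
        for option in OPTIONS:
            if option in self.already_offered or option["price"] > budget:
                continue
            if all(option.get(slot) == value
                   for slot, value in self.constraints.items() if slot != "budget"):
                self.already_offered.append(option)
                return option
        return None   # nothing fits: the agent should ask the customer to relax a constraint

state = DialogueState()
state.update("destination", "Quebec City")
state.update("budget", 250)
print(state.next_offer())   # the $180 train to Quebec City
print(state.next_offer())   # None: the plane busts the budget, and the train was already offered
```

Even this toy version shows why remembering context matters; the part deep learning is meant to supply is doing the same thing for open-ended language rather than a fixed menu of slots.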
The computer must also deal with statements that are easily decoded by human speakers but are fatally ambiguous to a machine, such as, "When my children and I have packed our bags, will there be a place to lock them up?" Human speakers know the storage locker isn't for the kids. But even the most sophisticated machines lack this basic common sense.
This challenge is one reason the lab is expanding its capacity in an area known as reinforcement learning, which lets an AI system explore alternative ways of solving a problem and rewards it when its performance improves. A reinforcement learning exercise recently earned the Maluuba-Microsoft team the distinction of creating the first AI to beat Ms. Pac-Man. The classic Atari game is regarded as a challenge because it requires balancing different and sometimes competing demands—devouring fruits in a maze while dodging a horde of ghosts. The problem has parallels in language, in which a speaker must similarly decide where to take a conversation based on various objectives. El Asri's colleague Harm van Seijen was able to train an algorithm to the point where it could play Ms. Pac-Man indefinitely, rolling over the scoreboard.
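Stripped to its essentials, the mechanism behind that feat is a loop of trying actions, collecting rewards and updating value estimates. The snippet below is a minimal tabular Q-learning sketch on an invented five-state corridor; it shows the basic reward-driven update, not the far richer system the team built for Ms. Pac-Man.

```python
import random

# Minimal tabular Q-learning on an invented corridor: states 0..4, with a reward
# of 1 for reaching state 4. Only the learning mechanism is real; the environment
# and numbers are made up for illustration.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

def greedy(s):
    # Pick the highest-valued action, breaking ties at random.
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(500):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # The Q-learning update: move the estimate toward reward plus discounted future value.
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

print([greedy(s) for s in range(GOAL)])  # learned policy: [1, 1, 1, 1], i.e. always head toward the goal
```

What made the Ms. Pac-Man result notable was scaling this kind of reward-driven learning to a game with many competing objectives at once.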
"We were expecting it to do better than the state of the art," says El Asri, "but the fact that it beat the game entirely—that was a surprise." In honour of the feat, the lab's managers had an arcade version of Ms. Pac-Man installed in the company's office.
In January, Microsoft announced that Geoffrey Gordon, an expert in reinforcement learning, would be leaving Carnegie Mellon University to become the research director at the Montreal lab. In addition to boosting the lab's activities, Gordon will be available to take up a faculty position just as MILA is preparing to expand its number of graduates to meet the growing demand for AI talent. The unrestricted collaboration between campuses and local companies is one of the defining features of the Montreal AI community. It's also a sign the field is focusing on addressing tough problems for which an advance made by one team benefits the entire community.
El Asri believes it will likely take five to 10 years for AI to crack the language challenge and transform the way humans and machines interact. But the current wave of growth suggests the next steps toward that transformation are already being taken in Montreal. For those who worry the result will be humans being replaced by talking and, to some degree, thinking machines, El Asri offers a reassuring perspective.
"When you have your hands in it, you realize we're really far from that," she says. "I think AI is about helping people and allowing them to be more productive by giving them more access to knowledge. I don't really see a dark side to it right now."
Learn, baby, learn: A brief guide to the new AI lingo
- Machine learning A branch of artificial intelligence that allows a computer to improve its performance based on experience without direct intervention from a programmer. Machine learning typically depends on large data sets that are used to “train” a system (such as millions of examples of handwritten digits in order to learn how to correctly read the amount on a cheque).
- Deep learning A form of machine learning in which layers of interconnected nodes that loosely mimic the structure of the animal brain pass signals to one another and can automatically learn to identify the relevant features in the input data that will lead the system to a correct answer.
- Reinforcement learning An approach that endows learning systems with the ability to explore different possibilities. It is essentially an automated form of trial and error that can lead a system to discover better ways to solve a problem.
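To make the first of those definitions concrete, here is a toy sketch of a program improving from examples alone; the data, the single artificial "node" and the numbers are invented for illustration, and real systems train far larger networks on millions of examples.

```python
import random

# A toy illustration of machine learning: a single artificial neuron learns to
# separate two kinds of points purely from labelled examples. The rule behind the
# labels (roughly "is x + y bigger than 1?") is never written into the learner;
# it emerges from the data. All numbers are invented for illustration.

random.seed(0)
examples = []
for _ in range(200):
    x, y = random.random(), random.random()
    examples.append(((x, y), 1 if x + y > 1 else 0))   # the hidden rule that generates labels

weights, bias, rate = [0.0, 0.0], 0.0, 0.1

def predict(point):
    return 1 if weights[0] * point[0] + weights[1] * point[1] + bias > 0 else 0

for epoch in range(20):
    mistakes = 0
    for point, label in examples:
        error = label - predict(point)             # -1, 0 or +1
        if error:
            mistakes += 1
            weights[0] += rate * error * point[0]  # nudge the weights toward the right answer
            weights[1] += rate * error * point[1]
            bias += rate * error
    print(f"pass {epoch + 1}: {mistakes} mistakes")  # the count shrinks as the system learns
```

The deep-learning and reinforcement-learning entries above stack more machinery on this same basic idea of improving from experience.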

Brendan Frey is using deep learning to unlock the secrets of the human genome—and hopefully save lives
Spencer Blackwood
It's all in the genes
Brendan Frey went from teaching computers how to recognize cat videos to teaching them how to decode the human genome. His goal: to develop personalized medications to treat disease. /By Jason McBride
As any high school biology student knows, genes come in pairs. And when Brendan Frey recounts the origins of Deep Genomics—the Toronto AI startup he co-founded—he likewise tells a pair of stories, one intimate and tragic, the other immense and world-changing. The way these two stories dovetailed, however, completely changed his life—and will most likely change yours too.
In 2001, after about a century of effort, humanity finally unravelled the mystery of our genetic code. In February of that year, the International Human Genome Sequencing Consortium published the first draft of the human genome in Nature, with sequencing of its three billion base pairs more than 90% complete. (The full sequence was published just over two years later.) The director of the National Human Genome Research Institute described the genome as at once a history book, a shop manual and a medical textbook that would "give health-care providers immense new powers to treat, prevent and cure diseases." The only problem was that no one really knew how to read this book. The genome had been sequenced, but it was simply too complex for the human brain.
That became abundantly—and unbearably—clear when, the next year, Frey's then wife became pregnant. The couple was told there seemed to be a genetic problem with the fetus, but the genetic counsellor they saw could tell them little beyond that. Once the baby was born, the counsellor said, the problem could be nothing, or it could be catastrophic; from the limited information available to them then, it was impossible to tell. Hearts heavy, Frey and his wife decided to terminate the pregnancy. The whole event was traumatic enough, but what really nagged at Frey was, what was the point of having the genome sequenced if that information couldn't be used to save lives?
The 49-year-old Frey is affable, athletic and lantern-jawed, with his snow-white hair and beard closely clipped. On the day he told me these stories in the cramped boardroom at Deep Genomics, on the third floor of the MaRS Discovery District, he was wearing a grey hoodie and jeans, looking more like he'd just stepped off a hockey rink than out of a lab. But Frey has long been a leading light in the cutting-edge subset of AI known as deep learning. Like most of the superstars of Toronto's AI scene, Frey studied with Geoffrey Hinton, the pioneering University of Toronto scientist described by The New York Times Magazine as "the primogenitor of the contemporary field" of AI. Hinton believed he could teach computers to behave exactly like human brains—that is, they could take millions, even billions, of representations or images and use that data to forge connections, make decisions and create meaning. And, over time, computers could "learn" from their experiences, training themselves to accurately interpret and respond to data. Frey began working with Hinton on deep learning in the 1990s; years later, that work would extend to feeding computers YouTube's near-infinite catalogue of cat videos (to name just one famous, game-changing example) so they could learn to recognize and categorize images of cats.
But Frey could see that with computers becoming faster and data sets larger, deep learning's potential had barely been tapped. He decided to apply his AI knowledge to biology, a not-quite-hard science he'd always considered "flaky" and "cartoonish." With the genome sequenced, Frey realized, he had access to an enormous data set—three billion characters—with which he could use pattern-recognition techniques in myriad productive, even revolutionary, ways. "What am I doing fiddling around, detecting cats on YouTube?" he remembers thinking. "It was important, but there was an opportunity in front of me to profoundly change humanity." Hinton, himself an evangelist for AI's use in health care, did not discourage him. "Brendan is very smart and very adventurous," Hinton says. "He started applying deep learning to genomics well before it was popular." Deep Genomics is one of the startups that helped establish the Vector Institute, Hinton's AI research incubator.
In 2014, Frey developed a computer tool called SPANR (SPlicing-based ANalysis of vaRiants) that uses deep learning to analyze parts of the genome and rank how likely it is that genetic variants in those areas will lead to disorders like autism. That same year, he co-founded Deep Genomics as a genetic testing company. But simply detecting problems was too limiting; Frey wanted to fix them. So, in March 2017, Deep Genomics became a therapeutics company, developing drugs to treat genetic diseases. "Every drug is, in a sense, a research project," Frey says. "It's all about innovation and exploration."
It helped that the market for such drugs is thousands of times larger, according to Frey, than that of genetic detection. To access that market, Deep Genomics added 12 new hires (the company is still lean, with only 34 employees). One of the world's top toxicologists is now on its advisory board. Deep Genomics operates a so-called wet lab at the Toronto outpost of Johnson & Johnson's JLabs research laboratory (also located at MaRS). In September 2017, Silicon Valley venture capital firm Khosla Ventures injected $13 million (U.S.) into the company. "Because of the quality of their science and engineering team, and the deep integration of their AI technology into their preclinical drug development pipeline," Vinod Khosla said at the time, "we are confident that a very large potential exists here" to discover new therapies.
That investment is now bankrolling a research project Frey calls Saturn, which aims to use Deep Genomics' AI platform to search 69 billion different compounds across a wide variety of tissues and diseases, and identify those that can be used to manipulate cell biology. "Think of it as control knobs that will allow us to alter cellular chemistry in any way we want," Frey says. The company will then create and test 1,000 compounds that can manipulate cell biology, with the hope of showing that the platform can generate a significant number of possible therapies. Finally, in two years, Deep Genomics will select three of those compounds and proceed, in collaboration with other pharmaceutical companies, to conduct toxicology studies in mice and non-human primates, followed by clinical trials. They'll start, Frey says, with the "easiest" diseases—metabolic disorders and neurodegenerative diseases for which it's simple to measure whether a drug works or not. In late March, the company announced it would invest $10 million toward this effort and added an expert in the field, Dr. Arthur Levin, to its scientific advisory board.
In Frey's estimation, what distinguishes Deep Genomics from other pharma companies is that AI has always been its core business; it's not just a division bolted onto an existing firm with roots in, say, chemistry. That same focus applies to Deep Genomics' mission and growth strategy. "There are a lot of companies that are trying to enter the space of informatics and medicine," Frey says, "but they have tried to grow too rapidly or have done it top-down. 'Let's bring in some Google money, and let's make the CEO this guy from Google, even though he's never worked in medicine before.…' It doesn't work, because you're going to have folks who have spent the last 15 years working hard to understand medicine reporting to someone who's been doing Internet purchasing of women's lingerie or something." When Frey talks about being a CEO himself, he seems somewhat surprised by where he has landed. "I enjoy being a CEO," he says, smiling. "It's a wonderful challenge. I've always liked doing things my own way."
His eyes light up even more when he talks about the future of personalized medicine that Deep Genomics is making possible. Hinton had told me that neural nets—the building blocks of deep learning, essentially—would be able to see patterns in millions of medical profiles that would never be noticed in the tiny number of patients seen by any individual doctor. Frey took that computational, predictive ability to its logical conclusion: The availability of such vast amounts of data will, in turn, allow doctors to tailor therapies to individual patients. Within the next 10 to 15 years, he estimates, a patient will be able to have a genetic test and, if it reveals a disease, doctors will be in a position to immediately produce a drug tailored to that person's genetics to treat it—one to one. "There are some knobs that need to be turned," Frey says, "but I think it will happen. That's how it'll look."
The Geoffrey Hinton effect: The Toronto-based guru has mentored some of the top AI minds (including Frey)
- Ilya Sutskever: Director of OpenAI, the $1-billion non-profit he co-founded with Elon Musk
- Yann LeCun: Director of AI research at Facebook
- Ruslan Salakhutdinov: Director of AI research at Apple
- Yoshua Bengio: Head of the Montreal Institute for Learning Algorithms and co-director of CIFAR’s Learning in Machines and Brains
- Raquel Urtasun: Co-founder of the Vector Institute and head of Uber Advanced Technologies Group Toronto
- Tomi Poutanen: Founder of TD Bank’s Layer 6 AI and other startups, including one that helps power Yahoo’s search function

Google’s DeepMind wanted to work with Rich Sutton so badly that it set up a lab in his adopted home of Edmonton
Adrien Veczan
Game theory
Rich Sutton is a pioneer of reinforcement learning—teaching computers to acquire something approaching human intuition. The applications go far beyond winning at poker. /By Omar Mouallem
As a PhD student and researcher at the University of Massachusetts, Richard Sutton spent the better part of two decades giving academic credibility to reinforcement learning theory. That is, AI that would teach itself based on sets of rewards—not unlike Pavlov's dog—and, in his words, attempt to "model how the mind works." A subfield of machine learning that's largely inspired by behavioural psychology, reinforcement learning was deemed obscure by computer scientists from the moment the term came into use in the 1950s, but Sutton published foundational work proving it had purpose.
Then, in 1999, a year after he wrote the textbook on the subject, he was diagnosed with melanoma. The cancer spread to his brain. Sutton soon gave what was billed as his final lecture, on building self-maintaining AI—"the greatest single challenge facing AI today," according to the class description—and prepared to die.
He left his job at New Jersey's AT&T Shannon Laboratory and went in for surgery that gave him a slim probability of survival. But he survived. The brain tumour was gone. In time, Sutton became cancer-free. The problem then, however, was that he was virtually unemployable. Though he'd become the leader in his growing field of expertise, few universities deserving of his mind would take on his medical insurance, until an invitation came from 4,000 kilometres away.
The Alberta government had just pledged long-term funding to the University of Alberta's bid to create a centre for machine learning. The school wanted Sutton to be the lead researcher at the Alberta Machine Intelligence Institute (AMII) and head its AI laboratory of 60 graduate students. By then, Sutton was eager to leave the U.S. to protest the Iraq war. A politically attuned Illinoisian, Sutton had never felt more disillusioned by his country. He accepted the job and quickly transformed the University of Alberta into the world's academic leader for machine learning. Last year, he forged a partnership with the world's commercial leader, DeepMind, convincing owner Google to open its second research office in Edmonton, of all places. (Its first is in London.)
"DeepMind is here because Rich didn't want to leave," says Jonathan Schaeffer, a co-founder of AMII and the university's dean of sciences. "The company opened the bank to bring people like Rich to London, but if you can't move the mountain, you move to the mountain."
With 27 staff, compared to London's 600, DeepMind Alberta is more like a molehill. Helmed by Sutton, Michael Bowling and Patrick Pilarski, the lab—located inside an Edmonton mall—is heavily research-focused, reporting its achievements to London in six-month cycles. "The work we're doing is understanding the process with which we form models and understand the world," says Sutton, whose brain operation left him with a limp and a paralyzed right hand. He gestures slowly and talks gently, sounding, as he looks, like an aging folk singer, with a long, frizzy beard and ponytail. "What it means to understand the world is what it means to confront the data stream of experience," he continues. "You're sending bits out into the world, and bits are coming back to you through your sensors. You have to counteract, observe it and collect statistics on it, and try to find some way to organize it so you can predict it."
He apologizes for sounding abstract. "But it's a very abstract task to understand the mind."
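One small, concrete version of that idea is temporal-difference prediction, a family of methods Sutton helped develop: a program watches a stream of experience and keeps adjusting its predictions toward what actually happens next. The random-walk environment below is a standard textbook toy, and the code is only a sketch of the principle.

```python
import random

# Learning to predict from a stream of experience: a five-state random walk that
# starts in the middle, steps left or right at random, and pays 1 only if it
# exits on the right. TD(0) learns, for each state, a prediction of that eventual
# payoff. The environment is a toy example, not anything DeepMind runs.

random.seed(0)
N = 5                       # non-terminal states 0..4; walking off either end terminates
V = [0.5] * N               # initial guesses for each state's value
alpha, gamma = 0.1, 1.0     # step size and discount

for episode in range(2000):
    s = N // 2
    while True:
        s_next = s + random.choice([-1, 1])
        if s_next < 0 or s_next >= N:              # walked off an end
            reward = 1.0 if s_next >= N else 0.0
            V[s] += alpha * (reward - V[s])        # final update for the episode
            break
        # TD(0): shift this state's prediction toward the next state's prediction.
        V[s] += alpha * (gamma * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V])   # close to [0.17, 0.33, 0.5, 0.67, 0.83]
```

The numbers the program ends up with are the kind of "organized statistics" Sutton is describing: compact predictions distilled from a raw stream of experience.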
What Sutton is essentially describing is a single machine that practises and learns whatever it's told to, without preprogramming, much like humans do. Most of today's AI programs are what you might call "idiot savants"—designed to play chess or detect credit card fraud, but inept at anything else. "Humans are general-purpose problem solvers," explains Schaeffer. "You and I could play chess, maybe not to a great level, but then we can go drive a car, we can do math problems, we can read a book."
In order to work this way, AI needs neural networks, simulated brains that use pattern recognition, data classification and other forms of general "thinking" to improve upon previous decisions. Add to this a consideration of the best long-term rewards, and you've got "deep" reinforcement learning, which DeepMind has been able to accomplish better than perhaps any other company.
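As a rough sketch of what the "deep" part adds, the snippet below swaps the value table from the earlier corridor example for a small neural network trained only on rewards. Everything here, from the environment to the network size and learning rates, is invented for illustration; real deep reinforcement learning systems such as DeepMind's add experience replay, target networks and much larger networks fed with raw pixels.

```python
import numpy as np

# "Deep" reinforcement learning in miniature: a tiny neural network, rather than
# a lookup table, estimates the long-term value of each action in a five-state
# corridor, and its weights are adjusted from rewards alone. All details are
# invented for illustration and omit the machinery production systems rely on.

rng = np.random.default_rng(0)
N_STATES, GOAL, ACTIONS = 5, 4, (-1, +1)
HIDDEN = 16
W1 = rng.normal(0, 0.5, (HIDDEN, N_STATES)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.5, (2, HIDDEN));        b2 = np.zeros(2)
alpha, gamma, epsilon = 0.05, 0.9, 0.3       # learning rate, discount, exploration rate

def one_hot(s):
    x = np.zeros(N_STATES); x[s] = 1.0
    return x

def q_values(x):
    h = np.maximum(0.0, W1 @ x + b1)          # one hidden layer of ReLU units
    return W2 @ h + b2, h

for episode in range(2000):
    s = 0
    for step in range(200):                   # cap episode length
        x = one_hot(s)
        q, h = q_values(x)
        a_idx = rng.integers(2) if rng.random() < epsilon else int(np.argmax(q))
        s_next = min(max(s + ACTIONS[a_idx], 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        target = reward if s_next == GOAL else reward + gamma * float(np.max(q_values(one_hot(s_next))[0]))
        # Semi-gradient update: nudge only the chosen action's output toward the target.
        td_error = target - q[a_idx]
        grad_out = np.zeros(2); grad_out[a_idx] = td_error
        grad_h = (W2.T @ grad_out) * (h > 0)  # backpropagate through the hidden layer
        W2 += alpha * np.outer(grad_out, h); b2 += alpha * grad_out
        W1 += alpha * np.outer(grad_h, x);   b1 += alpha * grad_h
        if s_next == GOAL:
            break
        s = s_next

# The learned policy should point right (+1) from every non-goal state.
print([ACTIONS[int(np.argmax(q_values(one_hot(s))[0]))] for s in range(GOAL)])
```

The payoff of using a network instead of a table is generality: the same reward-driven update can then handle inputs, like raw game screens, far too large for any table to enumerate.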
Founded in 2010 and purchased by Google in 2014 for a reported $500 million, DeepMind wants to do no less than "solve intelligence." It carries special status within Google, operating independently with ample room for discovery research. It has published over 150 peer-reviewed papers, including four in Nature—twice earning the cover story. And Elon Musk is among its earliest investors, if only to keep an eye on the technology, lest it become too bright and turn into what Musk calls a Terminator scenario.
The link between the university and DeepMind stretches back to the company's first days. In 2010, DeepMind's young co-founder, Demis Hassabis, a child chess prodigy turned video game designer turned neuroscientist, was somewhat star-struck to find himself seated across from Sutton, a "founding father of reinforcement learning," at a pizzeria during an AI conference in Switzerland. Hassabis and his fellow co-founders, Shane Legg and Mustafa Suleyman, consulted Sutton about the idea of a company to build neuroscience-inspired, general-purpose AI.
It was an idea most of their peers thought naive at best, but not Sutton. "It sounded like a good, hard road to travel," he recalls. "I didn't hold my breath, but I encouraged them." Actually, Sutton became DeepMind's first adviser. And over time, DeepMind would reciprocate with donations to the University of Alberta and support for an endowed research chair.
What attracted DeepMind to Alberta, however, wasn't just the university's output of world-class machine-learning scientists, but their mutual love of games. AMII researchers have built algorithms that beat checkers champions and, more incredibly, professional players of Texas hold'em poker, a game of intuition and incomplete knowledge. DeepMind created a program that mastered 49 classic Atari games with little more than raw pixel and score inputs, and brute force. For instance, in the game Space Invaders, it began with nonsensical moves and died instantly, though it improved incrementally. Overnight, it had played so many times that it could predict where the eight-bit aliens would shift and could destroy them with almost prophetic precision.
As Sutton explains, much of the world behaves like a game. Animals exhibit game-like behaviour when hunting and feeding. Markets work in those same game-like ways. What is parking but an often frustrating activity that pits the driver against the space and objects around him? Decisions amount to either success or failure, survival or death, but games break these problems into manageable, low-stakes chunks that can be quickly repeated hundreds of thousands of times.
The power of game experimentation became apparent when DeepMind's AI mastered the ancient Chinese game Go. Considered to be the world's hardest game, Go, like poker, can rely more on instinct than logic, because it has more potential board positions than there are atoms in the universe. Each game is so unique that even the best players sometimes describe their own moves as having "felt right," so to beat Go's best is to replicate human intuition.
In 2016, some 200 million people watched DeepMind's AlphaGo beat world champion Lee Sedol 4–1, using moves never before recorded, moves that have since been studied by professionals. It was an achievement experts thought was a decade away. Hassabis compared it to the moon landing for AI exploration. The project's lead programmer, David Silver, trained under Sutton.
"Rich thinks more clearly about AI than anyone else," says Silver, one of a dozen University of Alberta graduates at DeepMind's London office. "He has been challenging the status quo for decades with views that were, for many years, viewed as esoteric. But now his viewpoint has become, arguably, the mainstream of AI research."
A few months after opening DeepMind Alberta, the company launched a lab in Montreal headed by McGill University's Doina Precup, and the two remain DeepMind's only international research labs. "We have as good a chance as any company to make major contributions to understanding how the mind works," says Sutton. With support from the $125-million Pan-Canadian Artificial Intelligence Strategy and other long-term sources, he's not going anywhere. In fact, he's doubling down on the Great White North. Concerned about America's direction under President Trump, Sutton renounced his U.S. citizenship last year. "I came to Canada for the three Ps: the people, the politics and the position."
Human versus machine: Four AI programs that beat the world's best game players
Checkers (1994)
Chinook (University of Alberta) vs. Marion Tinsley
Chinook's moves were all programmed by its creators rather than learned, so it wasn't AI in the modern, machine-learning sense.
Chess (1997)
Deep Blue (IBM) vs. Garry Kasparov
In a typical position, a player can choose from about 40 possible moves, each with about 40 possible responses. Deep Blue considered 100 million possibilities per second while deciding its move.
Go (2016)
AlphaGo (Google DeepMind) vs. Lee Sedol
AlphaGo was taught the basics, then played millions of games against itself to perfect its play. DeepMind is now applying similar techniques to the study of misfolded proteins, which are linked to diseases like cancer, Parkinson's and Alzheimer's.
Texas Hold'Em (2017)
Libratus (Carnegie Mellon University) vs. Daniel McAulay, Dong Kim, Jason Les and Jimmy Chou
Libratus started learning poker—a game of "imperfect information," since players keep their cards hidden—from scratch. Other applications for this AI include financial trading, political negotiations and auctions.
