Professor Pascal Poupart is a senior researcher at Borealis AI, an RBC research institute, and a professor of computer science at the University of Waterloo.
My parents thought I was crazy when I announced my intention to study artificial intelligence.
Back then, in 1998, AI was considered a fringe field within the academic community, even though we knew it had the potential to transform lives. Today, AI is among the hottest fields in academia, attracting the focus of every major university and the courtship of the world's biggest companies as they try to apply machine learning to just about every challenge in the marketplace. Thanks to this commercial interest and the academic pursuit of AI, we now have personal assistants in our smartphones, intelligent cybersecurity to protect our credit cards, and even recommendation engines in our favourite movie and music streaming services.
And we can expect a lot more. Global spending on AI could exceed US$57-billion by 2021, according to recent forecasts.
If that's to be money well spent, we will need unprecedented collaboration between industry and academia to ensure the AI billions go to the right causes.
Can that be done? Some academics argue that pure AI research must remain untainted by the profit motive of business, while others say one of the key roles of postsecondary education is to make research relevant to humanity, which can be done best through products and services that people use.
The reality is we need each other, and AI may be that rare academic field that connects the ivory tower with the marketplace.
Over the past decade, the share of newly minted U.S. computer-science PhDs taking industry jobs has risen to 57 per cent from 38 per cent, according to data from the National Science Foundation. Business clearly needs the academic expertise to stay on top of the latest innovations, while academics are seeing the benefits to their research that come with the scale, resources and market discipline of business. Consider the research into natural language processing and computer vision that led to Apple's Siri assistant and Tesla's Autopilot system. Those advances would not have been possible without the large datasets and access to massive computing power that those companies have. Most campus labs are simply too small to create such breakthroughs.
Around the world, we're seeing new alliances of industry and academia, which, if properly managed, could lead to enormous gains for the public good. Apple has said it will drop some of its traditional privacy rules around research to allow its AI scientists to publish their work, thus helping others advance the core science – something every academic values.
This collaboration is an important positive force in a global AI arms race that could lead to very different models of academic-led innovation. To play at that level, we need to see more Canadian businesses stepping up with AI investments and with a new approach to open and collaborative research. They can learn from DeepMind, the London-based AI company that is now part of Google and was a pioneer in the field by publishing its research and sending its scientists to conferences to share their work. Between 2012 and 2017, the AI research teams at Google published 258 scientific papers.
As DeepMind has shown, this approach is not only an effective way to develop ideas; it's essential to attracting and developing the talent that's driving the rapid evolution of AI.
In Canada, we're seeing this open model in startups like Maluuba, the Waterloo, Ont., firm that specializes in language understanding and was acquired by Microsoft last year. Can larger companies embrace this open model, too? Most Canadian companies don't have the budgets of Google or the large datasets of Microsoft, but they do have the advantage of Canadian collaboration.
The best AI applications are coming from hybrid approaches, where science meets product development – something Canadians have been good at in decades past. Academics can help make that happen, using our collective sense of pride and responsibility to ensure the next step is the right step in a journey we began decades ago.
We need to influence the way AI is deployed to make sure the technology isn't mishandled. Researchers must also work to ensure that AI moves beyond the "black box" model and becomes explainable. The public needs to know the why and how of our algorithms, not just the what of their outcomes.
As scientists, we're interested in pushing the boundaries of AI and machine learning. And one of the best ways to achieve that will be to work with those who are developing products and services, if we're to ensure AI is a tool for human progress.