Opinion

Mike Ananny is an assistant professor of Communication and Journalism at the USC Annenberg School for Communication and Journalism. Taylor Owen is an assistant professor of Digital Media and Global Affairs at the University of British Columbia.

On Thursday, Prime Minister Justin Trudeau announced the government's pan-Canadian artificial intelligence strategy.

This initiative, which includes a partnership with a consortium of technology companies to create a non-profit hub for artificial intelligence called the Vector Institute, aims to put Canada at the centre of an emerging gold rush of innovation.

There is little doubt that AI is transforming the economic and social fabric of society. It influences stock markets, social media, elections, policing, health care, insurance, credit scores, transit, and even drone warfare. AI may make goods and services cheaper and markets more efficient, and it may discover new patterns that optimize much of life. From deciding which movies get made to deciding which voters are valuable, virtually no area of life is untouched by the promise of efficiency and optimization.


Yet while significant research and policy investments have created these technologies, the short history of their development and deployment also reveals serious ethical problems in their use. Any investment in the engineering of AI must therefore be coupled with substantial research into how it will be governed. This means asking two key questions.

First, what kind of assumptions do AI systems make?

Technologies are not neutral. They contain the biases, preferences and incentives of their makers. When technologists gather to analyze data, they leave a trail of assumptions about which data they think is relevant, what patterns are significant, which harms should be avoided and which benefits should be prioritized. Some systems are so complex that not even their designers fully understand how they work when deployed "in the wild."

For example, Google cannot explain why certain search results appear over others, Facebook cannot give a detailed account of why your newsfeed looks different from one day to the next, and Netflix is unable to explain exactly why you received one movie recommendation over another.

While the opacity of movie choices may seem innocuous, these same AI systems can have serious ethical consequences. When a self-driving car chooses the life of a driver over that of a pedestrian; when skin colour or religious affiliation influences criminal-sentencing algorithms; when insurance companies set rates using an algorithm's guess about your genetic make-up; or when people and behaviours are flagged as 'abnormal' by algorithms, AI is making an ethical judgment.

This leads to a second question: how should we hold AI accountable?

The data and algorithms driving AI are largely hidden from public view. They are proprietary and protected by corporate law, classified by governments as essential to national security, and often not fully understood even by the technologists who make them. This matters because the ethical frameworks embedded in our governance institutions place human agency at their foundation. It therefore makes little sense to talk about holding computer code accountable. Instead, we should see AI as a people-machine hybrid, a combination of human choices and automated decisions.

Who or what can be held accountable in this cyborg mix? Is it the individual engineers who design the code, the companies that employ them and deploy the technology, the police force that arrests someone based on an algorithmic suggestion, or the government that uses it to make policy? An unwanted movie recommendation is nothing like an unjust criminal sentence. It makes little sense to talk about holding these systems accountable in the same way when such different kinds of error, injustice, consequence and freedom are at stake.

This reveals a troubling disconnect between the rapid development of AI technologies and the static nature of our governance institutions. It is difficult to imagine how governments will regulate the social implications of an AI that adapts in real time, based on flows of data that technologists don't foresee or understand. It is equally challenging for governments to design safeguards that anticipate human-machine action and that can trace consequences across multiple systems, data sets and institutions.

We have a long history of holding human actors accountable to Canadian values, but we are largely ignorant about how to manage the emerging ungoverned space of machines and people acting in ways we don't understand and cannot predict.

We welcome the government's investment in the development of AI technology, and expect it will put Canadian companies, people and technologies at the forefront of AI. But we also urgently need substantial investment in the ethics and governance of how artificial intelligence will be used.
