Ted Hewitt is president of the Social Sciences and Humanities Research Council.

Discussions and debates about artificial intelligence abound today, and they have only deepened the extreme divide between developers and tech companies applauding advances, and naysayers warning of humanity’s potential destruction by superintelligent machines.

There is, of course, a large swath of middle ground – one that already affects us practically, day to day. The two crashes of Boeing 737 Max 8 airplanes, attributed to a failure of complex automated systems, and the ensuing grounding of the fleet have raised questions among passengers about the type of aircraft on which they fly. Accidents involving autonomous vehicles have led to parental concern over letting children ride to school in a driverless car or bus. Data-driven medical diagnosis and drug prescribing are on the horizon, too – but can you trust health-care AI as much as your family doctor?


These are vital questions about how AI’s many applications are being sold to us. For AI to succeed, consumers need to buy into the legitimacy of complex systems contained in opaque black boxes that companies guard zealously, citing proprietary reasons. And we don’t know enough about what will make consumers come around.

New technologies such as home appliances and smartphones have been sold to us, largely successfully, on the value proposition that they’d save us time and effort. To a large extent, society has embraced this logic. But AI differs significantly from earlier technologies because, in many cases, humans are not in control; the machines are left to do the “thinking.” Algorithms are influencing decisions in every aspect of our lives: shopping, banking, hiring, ways of working, dating, policing, education, health care, transportation and more. These developments are being presented as extending far beyond economizing on time or labour: They are about economizing on thinking itself, by reducing human input in decision-making.

The industry must better understand the public appetite for and expectations of AI, and define the boundaries of its social licence accordingly. Companies and governments cannot grant that licence to themselves; the evolution, diffusion and adoption of a technology or application must rest on sufficient legitimacy, accountability, acceptance and trust, along with the informed consent of those most affected. The public backlashes against energy companies such as Shell in Africa and BP in the Gulf of Mexico, and against GMO crop producers in Europe, are sobering examples of what happens when social licence is presumed rather than earned, and then lost.

Research on social licence has predominantly been done in mining, forestry, energy and other natural-resource industries. It’s now time to grapple with these issues in the wide-reaching realm of AI. Some studies in the social sciences and humanities are already exploring privacy issues around the data gathered by AI applications. Others are looking at ethical and regulatory frameworks. Still others have flagged instances where developers’ inherent biases are programmed into the machines: voice-recognition systems that privilege native speakers of a language or fail to properly decode female voices, or photo-tagging software that miscategorizes some humans as animals.

Other reports have highlighted how the automated placement of advertising and other information on websites can steer users into fake-news traps. Similarly, recommended links to additional content on video-streaming sites can pull unsuspecting viewers into the abyss of extremist groups. In both cases, research shows that this relativizes the truth, exaggerates threats, falsely identifies enemies of the people and effectively puts individuals and democracy at risk.

AI is advancing at a brisk pace, and ethical frameworks such as the Montreal Declaration are being developed to guide its responsible development. There is urgency now for sound research on how humans react to and adopt machine-driven environments. What is required to establish public trust in, and the legitimacy of, AI applications and solutions? What is an acceptable level of risk? What criteria influence which decisions we’re willing to delegate to machines? What level of human oversight are we comfortable with, and in which situations? How much of a black box’s workings do we need to understand before we trust it?

Canadian deep-learning pioneers Yoshua Bengio and Geoffrey Hinton were among those recently recognized with the prestigious Turing Award, reflecting Canada’s broader leadership in AI scientific research. Now it’s time for Canada to step up its interdisciplinary research on the social acceptability of AI, and the dimensions of the social licence needed to inform responsible and ethical design. After all, AI – or any effective tech, really – is as much about its impact on society, and the relationship between technology and humanity, as it is about the science itself.
