Kathryn Hume is VP product & strategy at integrate.ai.

Daniel Moore is chief risk officer at Scotiabank.

Michael Zerbs is group head and chief technology officer at Scotiabank.

Artificial-intelligence technologies will transform life and work. It’s up to us to make sure they do it right.

The data-driven algorithmic systems we refer to as AI are already on the job. They make increasingly accurate suggestions for products “you may also like.” They are automating cognitive tasks previously reserved for humans, such as diagnosing cancer or uncovering evidence inside masses of documents.

But the promise of AI comes with risks, too.

Systems based on human data can also include human failings and prejudices. Algorithms powered by that data are not objective oracles, but mathematical tools that may pick up, refract and amplify the biases that exist in society.

Developers and users of AI are working to define ethical standards to guard against unintended consequences. As consensus builds on what responsible AI will look like, it is also becoming clear that this effort can pay back in other ways. The push for ethical AI requires executive leadership to improve communication between technical, business and control functions, and to clarify the organization’s social and ethical values.

How did we get here? Because of how AI systems work. Unlike computer programs that follow a logical path set by a developer, these systems take the data and outcomes defined by the builder and then, largely on their own, find the best way to achieve those outcomes. That puts great weight on two factors: carefully defining the desired outcome so that it aligns with our values, and cleansing the data of inherent biases.

This is more difficult than it sounds. First, there may be a staggeringly large amount of data involved, since the power of AI lies in working with a quantity of data and variables beyond the capabilities of a human brain. Second, much of the data is generated by human behaviour, and humans can behave with bias. Even subtle and unconscious bias can produce data that steers systems in directions their designers would never choose.

The devil is in the data and even respected pioneers in AI can be caught out. Recently, Amazon made headlines for a machine-learning-powered recruitment tool that had to be scrapped when it was found to be biased against hiring women – a bias introduced not by the AI itself, but through inherent bias in the data used to train it.

As the conversation builds around responsible AI systems, Scotiabank and Kingston’s Smith School of Business co-hosted a conference on Ethics and AI in early November, where leading thinkers from the research and commercial worlds of AI found an emerging consensus.

At Smith, home to the only graduate program in North America focused on the management of AI in organizations, scholars are exploring how ethical enterprises should make the best use of these powerful new tools. Recent research by Stephanie Kelley, Yuri Levin and David Saunders, presented at the conference, suggests rapid technical advances in AI have driven significant innovation at the risk of outpacing existing ethical guidelines and rules.

The researchers highlight ways financial-services organizations can use principles of fairness, accountability and transparency to guide them through the ethical implications of AI.

Canadian banks are ideally placed to lead by example. Scotiabank has drafted objectives for interactive AI systems, starting with the point that those systems must be truly useful: They need to improve outcomes for customers and society as well as the bank. They should be monitored for unacceptable outcomes and accountable for any mistakes, misuse or unfair results. Safety and the protection of data privacy are also paramount. And, as the technology develops, objectives should adapt without losing sight of core values.

Integrate.ai has developed a framework for Responsible AI that helps executive leadership work with technology teams to put ethics into practice in AI systems using consumer data. Along with partners such as Scotiabank, the startup aims to hasten the adoption of AI across Canada by strengthening the trust Canadian consumers have in the businesses that shape their everyday lives.

Done right, AI systems can not only avoid negative outcomes but also help organizations move in a positive direction. One example is a proprietary job-applicant screening tool developed by Scotiabank that helps recruiters overcome unconscious barriers and hire the best talent.

Focusing on fairness means doing the right thing. It is also good for business. Discovering that a minority population is not well represented in a data set may also point to an underserved market with a need for unique products and services. Similarly, AI can unlock better outcomes for customers with more sophisticated evaluations of creditworthiness and financing needs. But customers know best what is right for them: Ethical AI must put them first.

To achieve that end, organizations must unite in common purpose people with diverse experiences as well as a range of specialized knowledge, including mathematics, data science, risk, social sciences, ethics and law. And while practitioners may develop the systems, executive leadership is responsible for articulating the values that guide them.

As the researchers at Smith point out, the stability of the financial system and the strength of AI innovation in Canada make it an ideal location for AI ethics research – and action. As Canadians, we are proud that our researchers and innovators can compete with any in the world for discoveries and applications in the field. We’re also proud of Canadian values of social equity and inclusion. Responsible co-operation by government, academic researchers, businesses and individuals could vault Canada into global leadership in this vital, fast-developing field.

The rewards from AI will be enormous. But they should not come at the expense of our values.
