As Canada continues to advance its Artificial Intelligence (AI) agenda and develop regulation around it, it is essential in these early days that all Canadians have the opportunity to share their perspectives on AI and how they feel it should be developed responsibly, with an aim to minimize harmful bias and ensure it is representative of everyone it impacts. Because it impacts all of us.
Recognizing there are both opportunities and challenges that come with using AI in society, TELUS has launched a series of public consultations to give Canadians the chance to share their ideas, questions and concerns about this revolutionary technology. In fact, we urge them to do so by visiting telus.com/responsibleAI.
We commend the investments and work done to date to develop oversight of these tools in Canada; however, this is just a start. As AI is not one monolithic technology, it is important for us to start to have more nuanced discussions about the impacts of these technologies on all individuals and communities across Canada.
Regarded as a leader in AI with our long-standing and globally recognized research institutes, Canada has made significant investments to explore the innovative application of AI across all sectors. To support the responsible adoption of these technologies, Canadian governments and regulators are developing legislation and rules around AI in an effort to balance the technology’s evolution and its social impacts. Guardrails are valuable to industry and citizens alike, ensuring innovation can happen while risks are minimized. Thoughtful regulation is welcomed by the technology industry.
To that end, TELUS and the Responsible AI Institute have signed the voluntary Code of Conduct for Generative AI Systems, which is a complement to the proposed Artificial Intelligence and Data Act (AIDA). We are encouraged that the government has taken feedback from industry, academia and civil society and recently proposed amendments to AIDA; however, we want to see more, and hear from a more diverse group of people. As the drafting of this legislation evolves, feedback from Canadians must continue to inform actions that will help to maximize AI’s benefits and minimize harms. There is room to further engage the public and industry to ensure that all impacted parties – everyone – can weigh in.
Herein lies the opportunity. Soliciting input from Canadians and incorporating their concerns and ideas will shape the discussion around AI by informing legislators and industry of differing views and opinions, thereby broadening current perceptions of the technology's potential impacts. By welcoming various viewpoints, the technology industry and regulators better reflect the incredible diversity of our society in their work and are better positioned to build the responsible data systems and processes we need as a society to thrive. Ethically, this is the right thing to do – but it also makes good business sense. By developing products and services that people want and trust with their data, everyone benefits.
Recent studies from Canada’s Public Awareness Working Group and the Canadian Institute for Advanced Research (CIFAR) have indicated that while there is strong public interest in the advancement of AI, significant areas of concern remain. Addressing these concerns requires working together – as industry, AI experts, government, ethicists, academia and, most importantly, the public – to co-design what the future world looks like. Otherwise, we will lose out on opportunities to co-create a future that can benefit everyone.
We must seek to hear from a diverse set of voices from across Canada, including Indigenous Peoples, youth, LGBTQ2+ and other equity-deserving groups. The aim should be the co-design of responsible AI best practices that continue to inform the principles outlined in the code of conduct as well as future regulations from AIDA. Hearing from a diverse group of people will help mitigate bias and potentially unfair impacts in certain use cases involving AI, and ensure that social good remains entrenched in the evolution of these technologies.
The Responsible AI Institute, a global, member-driven non-profit dedicated to enabling successful responsible AI, has led efforts in Canada and internationally to offer policymakers, practitioners and regulators practical tools and guidance, including a leading certification program, all with the goal of benefitting society. A proud member of the RAI Institute, TELUS has long put social good at the heart of its business model, leveraging its technological expertise to benefit as many people as possible. Together with the Responsible AI Institute, we are exploring ways to include all Canadians in co-creating a better future.
Consultative work in collaboration with academia and expert partners can foster important discussions around human-centric technology. We are just getting started. Developing responsible AI guardrails that are meaningful and effective is within our grasp, so everyone should feel welcome to share their perspective. We’re listening.
To learn more about the responsible AI consultations or to participate, please visit www.telus.com/responsibleAI.
Pamela Snively is the Chief Data & Trust Officer at TELUS. TELUS is the first company in the world to achieve ISO 31700-1 Privacy by Design Certification for its Data for Good program, an insights platform that gives leading public researchers access to high-quality, strongly de-identified data. Pam encourages consumers to more fully understand what responsible private-sector organizations do to protect their privacy, and encourages organizations to join her in her mission to earn and elevate consumer trust in our digital ecosystem.
Ashley Casovan is the Executive Director of the Responsible AI Institute. Ashley has had a longstanding commitment to advance the responsible and safe use of data and technology. She is committed to ensuring innovative technologies are built for the betterment of society.
Advertising feature provided by TELUS. The Globe and Mail’s editorial department was not involved.