Valérie Kindarji is a PhD candidate at the University of Toronto.

Wendy H. Wong is a professor of political science and principal’s research chair at the University of British Columbia.

The recent release of AI-powered chatbots such as ChatGPT, the new Bing and Google’s Bard has fuelled a whirlwind of possibility and panic. These large language models (LLMs) imitate and augment human interaction, whether that means answering questions, filling out forms or summarizing vast amounts of literature. Microsoft and Google tell us it’s the next frontier of search, and many of our colleagues see it as a death knell for the college essay.

Disruptive innovation has changed human lives before. The printing press drove up linguistic literacy rates, upended entrenched political and social structures, and opened a world of knowledge to the masses. Today, AI and pervasive data collection are radically changing our lives and creating a need for a new kind of literacy: digital literacy.

Digital literacy is a skillset and conceptual framework that helps us function more meaningfully in our tech-driven world. Digital literacy gives us tools to search, evaluate and manage the volume of information to which we are exposed. It helps us use algorithmically powered technologies and understand, in general terms, how they produce answers. It helps us understand the machine and the data behind the AI.

It’s time we started taking digital literacy seriously, embedding these skills in our social practices and bringing these ideas into legislation. To date, public focus has been on AI technologies themselves. Unfortunately, the technologies being unleashed are moving faster than government can respond with punitive regulations. Many government policies, including Canadian regulations, have focused on reining in the tech, such as imposing requirements on content or finding fault in corporate practices. Recent public discussions on LLMs have fixated on their accuracy or dangers.

While it’s important to scrutinize corporate products, we should also be invested in helping citizens adjust to the new realities AI brings. One way to incorporate disruptive technologies is to provide citizens with the knowledge and tools they need to cope with these innovations in their daily lives. That’s why we should be advocating for widespread investment in digital literacy programs. That we live in a technologically infused world is not going to change. Digital literacy can help us start seeing AI-related technologies for what they are: massive data pooling, sorting and crunching systems that are prediction machines.

Digital literacy is particularly important in democracies, political systems that rely on citizen knowledge, participation and choices to govern. Some countries are ahead of the curve. For instance, digital literacy is part of the core school curriculum in Finland and Estonia. Students learn to code from a young age and take media and disinformation courses. However, most other countries, including Canada, are lagging. Existing digital literacy programs are patchy, and education policy may not fall under a single jurisdiction. Moreover, the burden currently falls on organizations that offer supplemental education, such as public libraries or community-based programs, to deliver digital literacy training. A lack of funding and knowledge-sharing across programs complicates course delivery and access.

The importance of digital literacy extends beyond our day-to-day interactions with the online information environment. LLMs pose a serious risk to democracy because they disrupt our ability to access high-quality information, a critical pillar of democratic participation. Basic rights such as freedom of expression and assembly are hampered when our information is distorted. We need to be discerning consumers of information in order to make decisions to the best of our abilities and participate politically.

Digital literacy is a long-term investment. It is about helping citizens navigate their lives. AI-driven technologies are only going to get more accurate, less detectable and more widespread. Observers tend to poke fun when AI makes mistakes – and the mistakes can be amusing – but ridicule isn’t the best way to push back on algorithms.

We need to understand how LLMs (and other AI technologies) generate their answers in order to make use of these powerful tools. We tend to fall prey to automation bias, downgrading human decision-making in favour of the machine. But perhaps that’s because we don’t often think about how the machine works to produce answers. How does a tool like ChatGPT gather and deliver information to me? How can I use the chatbot to spark my creativity instead of making it speak for me? What are the limitations of this tool? How are algorithmic choices biasing the output of these tools?

Distorted perceptions of reality affect our trust in our institutions, and our trust in each other – especially those with whom we disagree. Already, social media’s algorithms are filtering us into opposing camps that increasingly don’t speak to one another. But there is reason to be hopeful. Just recently in the podcast Hard Fork, Sam Altman, the CEO of OpenAI (which created ChatGPT), said, “And given how strongly I believe [AI] is going to change many, maybe the great majority of aspects of society, people need to be included early.”

Mr. Altman is right – we do need to be included. On what and whose terms, with what background knowledge, and with which tech developers remain open questions. Digital literacy is an important part of the answer, and it is currently deprioritized. It may be the stealth weapon to combat misinformation and make us more active, confident consumers of AI technologies.