
Over the course of a few short months, artificial-intelligence tools for working with text, photography and video have moved from the realm of science fiction to reality. Like many organizations in industries across the country, The Globe is devoting significant energy to thinking through how these new tools will change the way we work – particularly the promise of AI as a journalistic tool, but also the threat it may pose to truth, trust and transparency, which are the core tenets of The Globe and Mail’s journalism.

Recently, we sent a memo to our newsroom outlining our initial guidelines on how to approach the use of AI tools in our daily work. Below is a copy of that note, lightly edited to provide context and clarity to Globe readers.

If you’re a Globe and Mail reader and have thoughts or feedback on how we should be thinking about AI in our newsroom, please reach out to our Standards Editor, Sandra Martin.


Hi all,

Since ChatGPT was released to the world late last year, the advances in AI tools for working with text, images and video have continued at a blistering pace. Artificial intelligence will bring big changes to the way we work, think and collaborate as a newsroom. But we also have many questions about how these tools will influence the journalism we do.

As a first step in navigating these waters, we would like to lay out a few guiding principles to help with decision-making in the newsroom. This is a starting point, and will no doubt change as technology (and our thinking about it) evolves. Hopefully it will provide guardrails for your daily work, but please feel free to get in touch with any questions, feedback or specific use cases.

In general, our Code of Conduct provides some good guidance:

  • It is unacceptable to represent another person’s [or machine’s] work as your own.
  • Information from another publication must be checked and credited before it is used.
  • Illustrative and conceptual photography must be labelled “photo illustration.”

So, treat ChatGPT and similar products as you would a tool like Wikipedia or a tip from a new source: be skeptical and always independently verify any facts presented. Ninety per cent accuracy might work for winning an argument over coffee, but would be a reputation killer for any journalist.

There is also a flood of new tools coming online every week – some with questionable ethics and opaque terms of use. If you would like to experiment with something beyond what is mentioned below, please reach out first. Also, any use of AI to work with confidential, proprietary or personal information must first be discussed with and approved by your manager.

Here are some more specific guidelines:

Research and reporting

AI language tools can be a great starting point for things like research, brainstorming and interview preparation. But any response from these tools should be treated with skepticism: it is unverified information, and AI is not concerned with truth in the same way we are as journalists.

Writing

AI tools like ChatGPT should not be used to condense, summarize or produce writing for publication. Doing so could put our reputation and the confidentiality of our reporting at risk. These tools may be used when writing about the topic of AI or adjacent technologies, as long as we are upfront with the reader (as Ian Brown did in this recent piece and the audience team did in this TikTok).

ChatGPT and other tools like to impersonate the “human” voice. Let’s guard against any ambiguity on the source of AI text.

Display writing and social media

AI tools like ChatGPT can be used to brainstorm ideas for headlines and social posts. However, any result should be vetted by an editor for accuracy before publication. Unedited drafts or unpublished stories should not be put through any AI tool before publication.

Editing

AI tools should not be used to edit stories, as errors may be introduced in the process.

Photography

AI image tools like Midjourney and DALL-E are not to be used for news photography. We will continue to follow our Code of Conduct on this matter: only basic image editing is permitted. Doing otherwise would risk our reputation and could call into question the veracity of other images we publish.

Feature illustrations and photo illustrations

AI tools may be used as part of the conception of feature illustrations, either as research or as part of a composite image. AI-generated visuals should not attempt to reproduce the likeness of a public person or brand. These works should always be credited as “photo illustration” or “illustration.” If an image is produced entirely by AI, it should be credited as “AI-generated image” or “AI-generated illustration.” (We should not assume readers are conversant in all of the AI tools available, so general labelling is best.) Finally, any work we create using AI must abide by that tool’s terms of use.

Video

Similarly, AI video tools should not be used to create news video, except in cases where AI or similar technology is the subject of the video. The label “AI-generated video” should appear on screen at all times whenever an AI tool is used as part of a feature or news video.

Code and data

AI tools can be used for prototyping, testing and debugging code, but we should not publish the final output without vetting and oversight. Any output must be independently verified before publication. Assume that the AI will occasionally get things wrong, just as a human might. AI tools should not be used to process confidential, proprietary or personal information. If you have any questions about a tool or process, please check with your manager before diving in.
