For decades, artificial intelligence was the stuff of science fiction. And those stories tended to depict one of two futures.

Dystopia was the first. This was the world of the Terminator film franchise, in which a military AI named Skynet achieves self-awareness and launches a devastating nuclear strike on the human population.

Then there was the utopia. This was the world of Star Trek, in which a sentient android named Data served as an invaluable member of a starship crew and, in his spare time, looked after a cat named Spot and painted.

As science fiction becomes science fact, the near-term reality of artificial intelligence is probably going to be a lot more mundane than we once thought. Instead of either Terminator or Star Trek, we’re more likely this year to see AI that’s a better version of Clippy, the annoying guide to Microsoft Office.

AI is a rapidly advancing technology, yes, but at its silicon heart it is a tool made by humans – one that should be carefully regulated, but that is unlikely either to doom us or to save us.

Amid all the hype surrounding AI, it is difficult to be pragmatic. The technology was apparently the talk of the World Economic Forum held in Davos, Switzerland, this week: “You know the conversations – it’s AI, AI and more AI,” one tech executive told Reuters.

To hear it from some boosters – who often have an economic stake in the technology – investing in AI is a matter of life and death. “We believe any deceleration of AI will cost lives,” influential Silicon Valley magnate Marc Andreessen recently wrote in his Techno-Optimist Manifesto. “Deaths that were preventable by the AI that was prevented from existing is a form of murder.”

Such a declaration may make one wonder: Is the economic and intellectual heft of Silicon Valley being directed to solve humanity’s most burning health and social problems? Will hunger be ended and every conflict settled? Well, for the most part … no.

The biggest focus of attention in AI so far has been generative AI – tools for creating text, images and video. And this area, which drew great public attention when OpenAI’s ChatGPT debuted a little over a year ago, is likely to get much better this year. Industry analysts predict generative AI will get to the point that any user can easily generate short video clips from a simple prompt. This will almost certainly cause a rise in disinformation, an immediate threat that needs to be guarded against.

But it will also almost certainly lead to more advanced customer-support chatbots – a development not so much dangerous as tedious for anyone trying to resolve a problem.

There could be applications of generative AI that would be transformative to people’s lives. The technology holds promise for generating new molecular combinations that could yield drug discoveries. And even a high-quality text generator could prove valuable for doctors by allowing them to spend more time with patients and less time on the drudgery of paperwork. On the flip side, some white-collar jobs, such as paralegal work, could end up disappearing as AI evolves.

But in the framework of viewing AI as a tool, it is worth asking: who is making the tool, and who has access to it? The biggest advances in AI are being driven by tech behemoths such as Microsoft (in partnership with OpenAI), Google and Meta. These are not companies that are economically incentivized either to reduce the planet to a series of burning craters or to create a life of AI-assisted leisure for every human being on Earth.

The likely future is less dramatic: companies entering a new market and seeing an opportunity to extend the dominance they already enjoy in other technology sectors, such as social media and search. As this space has argued before, this is an area that should concern regulators. Will there be robust competition among productivity-enhancing AI tools, and will a wide swath of businesses be able to access them? The answer to that question is one that policy-makers, not AI, will determine.

All tools, including AI, have two things in common. They are made by humans, and the effect that they have depends on the hands wielding them.
