Opinion

During the summer of 2020, Wayfair Inc. found itself in the throes of a crisis.

Conspiracy theorists linked to QAnon, seemingly emboldened by the chaos and confusion of the pandemic, tried to tarnish the reputation of the online furniture and home goods retailer. The trolls used Twitter, Instagram and Reddit to spread the false claim that Wayfair was involved in child sex trafficking through the sale of industrial-grade cabinets.

The company denied the allegations, but the lies continued to circulate online, underscoring how easy it is for malicious actors to create reputational risks for companies in the digital era.

Wayfair experienced what is known as a disinformation campaign – a deliberate attempt to disseminate false information to inflict harm. The motive to mislead is what distinguishes disinformation from misinformation, even though both involve the spread of fabrications, experts say.

Although disinformation is traditionally associated with political activity, such as interference in democratic elections by malevolent foreign powers, it also poses a growing risk to businesses of all sizes. That’s because artificial intelligence is putting a new twist on fraud, generating deepfake photos, audio clips and videos so convincing they can spark consumer boycotts and tank a company’s stock price.

“The technology is evolved enough now that even hobbyists these days can train an AI model on public people,” said Dustin Heywood, the Canadian head of IBM X-Force, the technology company’s global threat intelligence arm.

Chief executive officers are prime targets for deepfakes, Mr. Heywood said, because maintaining a public profile is part of the job. For instance, CEOs regularly speak on earnings calls, at shareholder meetings and in television interviews. That means audio and video clips of them are regularly posted online.

“They’re constantly in public,” he said. “You can take an AI model, train it on all of their public speaking and make fake videos, fake voice.”

Plenty of companies have already fallen victim to this type of AI trickery. In early 2019, for example, a video posted online appeared to show a self-driving Tesla crashing into a robot at a tech convention. It went viral but was later revealed to be a fake created by foreign fraudsters attempting to manipulate Tesla’s stock price, according to published reports.

Mark Zuckerberg, the chairman and chief executive officer of Meta Platforms Inc., the parent company of Facebook and Instagram, was also the subject of a fake online video that purported to show him thanking U.S. legislators for their “inaction” on antitrust issues. (OK, I’ll admit, that one did make me laugh.)

The top executive of a British energy company, meanwhile, was targeted by an AI-generated voice scam that resulted in a fraudulent transfer of money, according to The Wall Street Journal.

Mr. Heywood argues that 2024 is shaping up to be a year of deception because of rising geopolitical tensions, the forthcoming U.S. presidential election and other high-profile events such as the Olympic Games in Paris.

At the same time, the global economic slowdown is prompting businesses to clamp down on costs. But cutting corners on IT security creates an easy opening for cybercriminals.

One study by MIT researchers found that false information spreads more quickly than the truth on social media, which means a disinformation campaign can inflict serious damage on a company’s brand in a matter of minutes.

For all those reasons, IBM is collaborating with the University of Ottawa to train executives and directors on how best to respond to tech-based attacks. Clients of the program include companies in industries such as aviation and automotive.

The university’s Professional Development Institute also has an Information Integrity Lab that is researching visual disinformation such as photos and videos.

“It’s not even lip-sync any more. The mouth moves perfectly in sync with the articulation,” said Serge Blais, the institute’s executive director.

While the institute can assess the truthfulness of a video, the analysis takes hours. Real-time detection, though, remains the ultimate goal.

“To develop AI tools that can detect AI-enabled visual disinformation is like hitting a bullet with a bullet,” Mr. Blais said.

In addition to teaching companies how to fend off threats to their reputations, the institute is helping businesses assess risks related to their supply chains.

“With all the sanctions regimes in effect, it is really more and more important that you understand the integrity of your supply chain,” said Jennifer Irish, the director of the Information Integrity Lab.

Now is the time for corporate leaders to think laterally about the risks posed by AI. That also means inquiring about controls that prevent the spread of false information and stop AI models from dispensing advice on illegal activities, such as bomb making, or on how to cause physical harm, Mr. Heywood said.

Investors will undoubtedly use the upcoming annual meeting season to ask questions about what steps are being taken to protect corporate brands.

After all, disinformation campaigns are cheap to mount and generate lucrative rewards for fraudsters. The lack of regulation means they are poised to flourish.

Companies are particularly vulnerable to disinformation attacks when conducting an initial public offering, merger, acquisition, rebranding or reorganization, according to an analysis by professional services firm PwC.

Executives and directors would do well to heed Ms. Irish’s advice: “Once your credibility is harmed, it becomes extraordinarily difficult to recuperate from reputational risk.”
