
One of the great mysteries of the recent past is: How does Barack Obama feel about what Donald Trump is doing to his country? Is he raging inside that famously cool head? Has despair made him take up smoking again? It was enlightening – and refreshing, I’ll admit – to see a video of Barack Obama expressing his true thoughts: “Simply, President Trump is a total and complete [expletive deleted].”

I saw Mr. Obama say those words, and yet he didn’t say them. Confused? You should be, and it only gets worse from here. The Obama video was actually a deepfake, a manipulated video that transposes sound or image to make it look as if a person were doing or saying something he or she hadn’t. The technology is nearly seamless now; as it improves, imagine how it will be used by a citizenry that already cannot be bothered to check the most basic meme-quotes before sharing them. Your dumbbell, Facebook-loving cousin just got a new weapon.

In fact, the Obama video was intended as a warning. It was produced by Oscar-winning filmmaker Jordan Peele, in conjunction with Buzzfeed News. Mr. Peele had already honed his Obama impersonation over the years on Key & Peele; all he needed for the eerie simulation was help from Buzzfeed’s technicians, who used Adobe After Effects and the AI program FakeApp to create the video.


At least Mr. Peele was using the technology for good. He has Mr. Obama “say”: “It may sound basic, but how we move forward in the Age of Information is going to be the difference between whether we survive or whether we become some kind of [expletive deleted] dystopia.”

I would have said “an even greater dystopia,” but hey, that’s just a quibble. Mr. Obama also featured in the first deepfake video to hit the public consciousness, when researchers from the University of Washington last year unveiled a clip they’d created using deep-learning software (apparently he’s a good subject because there’s so much high-quality audio and video of him speaking, which makes it easier for the AI program to learn).

Perhaps not surprisingly, the greatest outlet for deepfakers’ creativity has been porn videos – with a little time and a free bit of software, an industrious horndog can put a celebrity’s face on a readily available writhing body. Voila, a mythological hybrid that nicely encapsulates the early 21st century: Part porn, part profit, zero reality.

The privacy aspect has led the New York state legislature to introduce a bill to ban deepfakes. But the more dangerous potential lies, to paraphrase Shakespeare, not in the stars but in ourselves. Human psychology is already predisposed to believing things that are comforting or tribe-affirming, rather than true. The readiness of voters to believe counterfeit “facts” (if they’re amusingly packaged) and propaganda (if shared by friends) has been widely documented. Just to give one example: A study from researchers at MIT earlier this year revealed that lies and rumours travel much more quickly on Twitter, and have a wider reach, than truth does. False news stories were 70 per cent more likely to be retweeted than ones based on fact. “False news is more novel, and people are more likely to share novel information,” said Professor Sinan Aral, one of the study’s co-authors.

If we’re terrible at separating truth from falsehood when it comes to things we read, how much worse will it be when it comes to things we actually see? Or should I say “see”? If a politician appears to say something inflammatory or shocking, is anyone going to bother checking to see if the footage is fake before sharing it?

What if, for example, video appears of Donald Trump giving the thumbs up to a murderous dictator like Kim Jong-un, or praising that dictator by saying, “he speaks and his people sit up at attention. I want my people to do the same.” Oh wait, President Trump actually did say that. My bad. But you see the problem: How do you fact-check reality that is too surreal to be believed?

Recently, a Belgian political party created a deepfake ad featuring Mr. Trump criticizing the Paris climate accord in an attempt to get people to sign a climate-change petition. “We thought, ‘okay Trump is known for his withdrawal of the climate change agreement, but also he’s known for discussions on fake news,’ ” a spokesman for the party, Jan Cornillie, told the Poynter journalism website.


Mr. Cornillie told Poynter that faux-Trump was fashioned in a deliberately shoddy way to let viewers know the image was fake, but as Poynter noted, a number of commenters on social media believed it was real. I feel for them; sorting truth from disinformation has already become a full-time job. Add deepfakes to that, and you’re in the realm of unpaid overtime.

This is just the beginning. The software behind deepfakes is progressing to the point where the clever human eye and ear – so good at spotting artificial simulations of speech and action – will soon be outwitted. (Especially for those who don’t mind being outwitted.) A new iteration of the technology, developed by researchers at Stanford and other universities, improves head position, facial expression and eye movement to make the recreations even more lifelike.

One of the Stanford researchers, Michael Zollhofer, writes in a blog post that “our aim is to demonstrate the capabilities of modern computer vision and graphics technology, and convey it in an approachable and fun way.” The software could be used in the film industry, he notes, for post-production or dubbing.

At the same time, Prof. Zollhofer recognizes the potential for the technology to be misused in “the generation of made-up video content that could potentially be used to defame people or to spread so-called ‘fake-news.’ ” It’s crucial for people to start thinking critically about the way they consume video content, he writes.

This will require the effort of critical thinking, a skill that students need to be taught. It will require all of us to inquire outside our bubbles. I was kind of hoping what Mr. Obama “said” about his successor was true. That, of course, is the biggest problem of all.


Illustration by Hanna Barczyk
