It’s an advertising match made in hell. Alongside Facebook pages set up to mock the suicide of bullied B.C. teen Amanda Todd are ads for Target Corp., Ticketmaster and Four Seasons Hotels.
Those companies have not deliberately chosen to sponsor the content, of course. But the appearance of these ads highlights a problem with advertising on social media: It is not always brand-safe, and it gives advertisers relatively little control over the content that appears next to a company’s messages in a constantly updating environment.
In some ways, it’s a conversation that is as old as advertising itself. Companies have always grappled with placing ads during newscasts, for example, where a commercial might air after an upsetting story. However, because advertisers in traditional media generally bought time or space tied to a particular television show or newspaper section, they could better judge the risks.
Online, advertising is increasingly bought not for a particular page but for the person looking at it. An ad will follow a consumer who is shopping for a car, for example, or a mother with school-aged children, and will pop up wherever that consumer goes. The rise of advertising exchanges, where inventory can be bought blind at a discount, has accelerated the problem.
Ad verification software, such as Project Sunblock or Comscore’s Validated Campaign Essentials, has helped by blocking questionable content for advertisers on the Web. But because Facebook Inc. is its own ad platform, its ability to provide similar blocking is still a work in progress.
“There have been a lot of conversations in the industry about how we can better control that, and protect brands,” said Karel Wegert, managing director of digital solutions at Media Experts, a media planning and buying firm.
Mr. Wegert points to the recent introduction of “Premium” ads on Facebook as a step in the right direction. These ads appear directly in the “news feed” on a user’s homepage, rather than beside content on other pages and groups – a more expensive but much less risky environment for a brand.
Facebook is also testing its own ad exchange, which buyers say they are hoping will include more safeguards.
The rest of the Web has hardly solved the problem – even when advertisers specify keywords to avoid in search advertising, or use verification software and other digital safeguards, some content occasionally slips through.
“These risks aren’t unique to digital, but they’ve accelerated, and happened at a greater breadth and pace. It requires us to be more nimble,” said Sasha Grujicic, senior vice-president of group strategy at Aegis Media Canada.
While the advertising industry understands that ads are targeted to a user rather than to a page, consumers might not be aware of that. The risk for advertisers is that users will mistakenly assume a brand is somehow affiliated with an offensive page.
The “Amanda Todd Reporting Team,” an anti-bullying group, has been communicating with advertisers to let them know they’ve seen ads on these pages, and asking them to pressure Facebook to take the pages down.
“We were deeply concerned to learn about this, as this type of page does not align with our values and we do not endorse it in any way,” said Megan Hooper, a spokesperson with Toronto-Dominion Bank, one of the companies contacted by the group.
The anti-bullying group also got in touch with Clorox, whose brand image has likewise been exploited by groups making cruel jokes about an earlier suicide attempt in which the teenager swallowed bleach.
“Clorox is deeply saddened by the death of Amanda Todd and this entirely tragic situation. We are very disturbed about the troll pages that have been popping up and have expressed our concern directly to our Facebook team,” said David Kellis, a spokesperson for The Clorox Company.
“Every day we’ve been flagging all the questionable content that we’ve been seeing related to this issue and have reported it to Facebook. … We certainly would like them taken down. And in the event our ads appear next to any questionable content like this we would immediately request that our ads be pulled.”
Facebook has guidelines against bullying on the site, but simply cannot prescreen the roughly 2.5 billion pieces of content posted every day.
“Pages that are abusive have no place on our service, however, and we have been working to quickly remove all pages and content that violate our terms,” a Facebook spokesperson said.