Facebook Inc. is waking up to the true cost of fixing its image problems.
When the company reported a disappointing second quarter in July – with rising expenses and slowing growth – investors reacted swiftly, cutting the social media giant’s stock market value by US$119-billion in a single day. Facebook executives have warned that their plans to overhaul the platform in response to the recent wave of controversies will cost billions.
Those costs include hiring thousands of new safety and security workers, investing in artificial intelligence and machine learning, changing its news feed and launching a glitzy publicity campaign to show the world that it is a changed company.
But problems such as fake news, hate speech and the manipulation of posts by hostile foreign governments are not easy to solve, even for a company with Facebook’s massive resources. And recent events have highlighted just how far the company still has to go.
Last week, Facebook said it had discovered a new set of “inauthentic” pages and accounts trying to sow political discord ahead of the November U.S. midterm elections. Among other things, the people behind those pages tried to stir up tensions by organizing a counterprotest against a planned political rally in Washington.
Facebook has admitted that its platform was used by Russian trolls looking to interfere in U.S. elections, including the 2016 presidential race; that the data of millions of users were improperly shared; and that it was not doing enough to stop inflammatory content. It is facing federal investigations in the United States by the Department of Justice, the Federal Bureau of Investigation and the Federal Trade Commission. Its founder and chief executive officer, Mark Zuckerberg, has made numerous public appearances and offered testimony before U.S. Congress to show contrition and talk about his plans for reform.
Yet many analysts say the changes Facebook is making fall far short of what’s needed to regain public trust. If so, that could spell more bad news for Facebook – and serve as a brutal awakening for those investors banking on a quick turnaround of the company’s fortunes.
Exactly how close Facebook is to wiping out fake news, hate speech and political interference remains an open question among experts in cybersecurity and privacy.
On controversial political ads, Facebook now requires U.S. political advertisers to verify their identities. On fake news, the company says it’s aiming to reduce the number of users who see such content in their news feeds. As for privacy issues, such as revelations that political consultancy Cambridge Analytica improperly accessed personal details on Facebook users through an app, the company now requires app developers to undergo audits.
While experts largely applaud the efforts that Facebook has made, many say the changes don’t go far enough.
Observers complain that Facebook’s new ad transparency tools aren’t user-friendly. The company offers a searchable database for political ads, but critics say it’s difficult to use. Some analysts say the company needs to create a searchable database of all ads, not just political ones, and should make it easy for researchers, journalists and the public to analyze the data.
Even as Facebook has released some details about its content moderation efforts, such as how much problem content it removes, critics say it has not released important details, such as how often it incorrectly flags content.
While Facebook has made it easier for users to manage their privacy controls, the company still allows advertisers to target users based on a host of personal details and also continues to track users as they browse the web off of Facebook.
That practice prompted Mozilla, the non-profit that runs the Firefox web browser, to suspend advertising on Facebook. Earlier this year, Mozilla launched a browser extension that blocks Facebook from tracking users off the platform, but the organization says it would rather see Facebook take such action itself.
“To my mind, if Facebook really wants to earn back trust, that is the big step that it needs to take,” said Marshall Erwin, Mozilla’s director of trust and security.
In some cases, Facebook’s changes have proved ineffective. After the company banned advertisers from discriminating against some users – such as by excluding older workers from seeing certain job ads – a government investigation in Washington State found many such ads were still able to get through Facebook’s filters.
“There’s so much pressure at times on the platforms to remove all the problematic content that sometimes they actually move too fast, and content moderation algorithms or policies get rolled out before they’re really ready,” said Natasha Duarte, a policy analyst at the Washington-based Center for Democracy and Technology.
What’s more, it’s difficult to determine just how much problem content there is on Facebook, making it hard to know whether the company is succeeding in combating the issue. The company employs 7,500 content moderators to help sift through billions of pieces of content uploaded every day.
Facebook has provided rough estimates for the prevalence of some problem content. The company estimated that 22 to 27 of every 10,000 posts contained graphic violence, and six to seven of every 10,000 contained nudity or sexual content. But it has not yet found a reliable way to measure the prevalence of hate speech, terrorism propaganda or spam.
“That is really indicative of why it’s going to be really hard for Facebook to regain the public trust,” said Miranda Bogen, a senior policy analyst at Washington-based technology policy consultancy Upturn. “In a lot of cases they’re saying: ‘Trust us, we’re taking care of it.’ But the public has no way to verify that.”
To solve issues such as political interference and hate speech, Facebook has pledged to invest in artificial intelligence and machine learning. But the technology is fraught with limitations.
The algorithms work best in English and tend to struggle with linguistic nuances. Computers are typically trained to recognize common offensive patterns of speech, but by altering their language even slightly, users can often easily avoid detection.
Automation “performs really well in a context where no one is trying to game it,” said Laura Norén, director of research at Obsidian Security, a California-based cybersecurity firm. “But if you actually have someone trying to game these algorithms, they are very fragile to that.”
Facebook has said its automated tools catch the vast majority of spam, terrorism propaganda and nudity before users complain. But it still relies on user reports to flag most hate speech. “It’s impossible to get around the need to hire a lot of humans to do this work,” Ms. Duarte said.
Outside of automation, there are changes Facebook could make that may not cost much, but could have significant consequences for the company’s bottom line.
Hiring content moderators means less money to develop new products. Scrapping technologies that track users across the web could make Facebook ads less lucrative.
Already, Facebook has said its new political advertising tools cost more to run than the company earns from such ads.
The alternative is regulation, which seems increasingly likely in the United States. Last week, Senator Mark Warner, the senior Democrat on the Senate intelligence committee, released a plan to regulate tech companies.
Social media companies “realize that privacy legislation in the United States is inevitable,” Ms. Duarte said.
Experts say regulation may turn out to be the best hope for restoring public trust in social media, in part because it will apply broadly across the internet, not just to Facebook. Or perhaps there will be another controversy that sends companies scrambling to create a new round of changes.
“I think we’ve hit all the harms that are potentially going to present themselves to internet users,” Mozilla’s Mr. Erwin said. “Now the question is: Can we use that as enough of a motivation to really get that business done?”