
Dozens of cardboard cutouts of Facebook CEO Mark Zuckerberg are seen during a protest outside the U.S. Capitol in Washington, U.S., April 10, 2018. REUTERS/Aaron P. Bernstein


Facebook continued to defend its policy of allowing Holocaust denial to be published on its platform in North America, saying it considered such content to be misinformation, not hate speech.

During a press event at Facebook’s Menlo Park, Calif., headquarters last week, Tessa Lyons, a product manager and member of the company’s misinformation team, told journalists that the social media giant would ban posts in countries where denying the Holocaust is illegal, such as Germany. But in other countries, including the United States, Holocaust denial would be “treated as a form of misinformation” and would be allowed to remain on the platform, though Facebook would reduce how many people saw the posts.

“Now, saying the Holocaust didn’t happen is different than [posting] an actual, specific attack against a protected group of people,” Ms. Lyons added. “If that were to be the case, then that would be against our policies and would be removed.”


The world’s largest social media network has come under intense fire for its stance on Holocaust deniers. On Wednesday, chief executive Mark Zuckerberg gave a controversial podcast interview to technology website Recode in which he defended the rights of Holocaust deniers to post on Facebook, implying they were merely misinformed about history. “I just don’t think that it is the right thing to say: ‘We’re going to take someone off the platform if they get things wrong, even multiple times,’” he said.

Related: The long road to fixing Facebook

Facebook Canada contracts independent fact checkers to combat ‘fake news’

Opinion: The new rules for the internet - and why deleting Facebook isn’t enough

The company has pledged to crack down on abuses of its platform amid widespread outcry over Facebook’s failure to recognize foreign interference in the 2016 U.S. presidential election and criticism of its lax attitude toward data privacy.

But for the world’s largest social network, policing a user base of 2.2 billion people that posts millions of times a day in 50 languages has meant trying to thread the needle between promoting free expression and protecting its users from harm. And as events of last week made clear, the company is still grappling with how to strike the right balance.

Earlier in the week, Facebook officials had appeared before Congress, where they struggled to defend their decision not to ban InfoWars, the right-wing website that had accused survivors of the Parkland, Fla., school shooting of being “crisis actors.”


The company also announced new policies to remove fake news that could incite imminent harm, amid evidence that such posts had led to ethnic violence in countries such as Sri Lanka and Myanmar.

But officials provided scant details about how they would determine which posts posed a risk, beyond a plan to form partnerships with local civil-society groups that might offer recommendations. “The same piece of content shared in Sri Lanka and shared in Eastern Canada, where I’m from, aren’t going to have the same impact,” Ms. Lyons said.

At an event last week to give international journalists an update on how the company was combating fake news and political interference on its platform, Facebook officials said they would also continue to allow posts attacking same-sex marriage, though not those that specifically targeted individual people.


Monika Bickert, the head of global policy management at Facebook, Juniper Downs, global head of public policy and government relations at YouTube, left, and Nick Pickles, the senior strategist at Twitter, testify before the House judiciary committee on July 17, 2018 in Washington. The committee heard testimony regarding possible political bias in the company's content filtering systems.

Alex Wroblewski/Getty Images

Public-policy director Ross Kirschner gave the example of a post showing a rainbow flag with the words “sin is sin,” which he said would be allowed. “You can criticize, condemn religions, countries, [sexual] orientations,” he said. “What you cannot do is attack people.”

Many of the changes did not go over well. One of the journalists asked what constitutes the sort of trustworthy news source that Facebook had pledged to promote on its platform. Was trust a universal concept or a personal one? “I think this is generally an area where there’s not one single answer for that,” said John Hegeman, the head of Facebook’s news feed.

Another journalist wanted to know whether it would just be easier for Facebook to let governments develop their own legal definitions of things such as fake news and misinformation, which social-media companies would then be forced to follow. But by then the answer seemed obvious. “There’s an enormous number of different things people mean when they talk about fake news,” Mr. Hegeman offered. “I think it would just be very, very hard to do something legislatively that would define exactly each of those categories.”