(Sean Kilpatrick/The Canadian Press)

Can social media ever be made safe from sex predators?

A solid system for defending against online predators requires both oversight by trained employees and intelligent software that not only searches for improper communication but also analyzes patterns of behaviour, experts said.

The better software typically starts as a filter, blocking the exchange of abusive language and personal contact information such as e-mail addresses, phone numbers and Skype login names. But instead of looking at just one set of messages, it will examine whether a user has asked for contact information from dozens of people or tried to develop multiple deeper and potentially sexual relationships, a process known as grooming.
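The two-stage approach described above can be sketched in a few lines of code. The patterns, thresholds and class names here are invented for illustration; real filters use far larger rule sets and more sophisticated models.

```python
import re
from collections import defaultdict

# Illustrative patterns only; production filters are far more extensive.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b(?:\d[\s\-.]?){7,11}\d\b")
CONTACT_REQUEST_RE = re.compile(r"\b(email|e-mail|phone|skype)\b", re.IGNORECASE)

class ContactFilter:
    """Blocks the exchange of contact information and tracks how many
    distinct people each user has asked for contact details, since
    asking many users is a grooming signal even when each individual
    message looks innocuous."""

    def __init__(self):
        self.requests = defaultdict(set)  # sender -> recipients asked

    def check(self, sender, recipient, message):
        if EMAIL_RE.search(message) or PHONE_RE.search(message):
            return "blocked"              # refuse or strip the message
        if CONTACT_REQUEST_RE.search(message):
            self.requests[sender].add(recipient)
            if len(self.requests[sender]) >= 5:  # invented threshold
                return "flagged"          # escalate to moderators
        return "ok"
```

A single request for an e-mail address passes; the same request repeated to a fifth different user gets the account flagged.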

Companies can set the software to take many defensive steps automatically, including temporarily silencing those who are breaking rules or banning them permanently. As a result, many threats are eliminated without human intervention and moderators at the company are notified later.

Sites that operate with such software should still have one professional on safety patrol for every 2,000 users online at the same time, according to Metaverse Mod Squad, a Sacramento-based moderating service. At that level the human side of the task entails “months and months of boredom followed by a few minutes of your hair on fire,” said Metaverse Vice President Rich Weil.

Metaverse uses hundreds of employees and contractors to monitor websites for clients including virtual world Second Life, Time Warner’s Warner Brothers and the PBS public television service.

Metaverse Chief Executive Amy Pritchard said that in five years her staff intercepted something terrifying only once, about a month ago, when a man on a discussion board for a major media company was asking for the e-mail address of a young site user.

Software recognized that the same person had been making similar requests of others and flagged the account for Metaverse moderators. They called the media company, which then alerted authorities. Other sites aimed at kids agree that such crises are rarities.

Sites aimed at those under 13 are very different from those with large teen audiences.

Under a 1998 U.S. law known as COPPA, for the Children’s Online Privacy Protection Act, sites directed at those 12 and under must have verified parental consent before collecting data on children. Some sites go much further: Disney’s Club Penguin offers a choice of viewing either filtered chat that avoids blacklisted words or chat that contains only words the company has pre-approved.
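The difference between those two Club Penguin modes is the difference between blacklist and whitelist filtering. A sketch, using made-up word lists rather than any company's actual lists:

```python
# Blacklist mode: reject a message only if it contains a banned word.
# Whitelist mode: accept a message only if every word is pre-approved.
BLACKLIST = {"badword"}                       # illustrative
WHITELIST = {"hi", "hello", "lets", "play"}   # illustrative

def blacklist_ok(message):
    return not any(w in BLACKLIST for w in message.lower().split())

def whitelist_ok(message):
    return all(w in WHITELIST for w in message.lower().split())
```

Whitelisting is far stricter: a harmless but unapproved word like a friend's name is enough to block a message, which is why it suits the youngest audiences.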

Filters and moderators are essential for a clean experience, said Claire Quinn, safety chief at a smaller site aimed at kids and young teens, WeeWorld. But the programs and people cost money and can depress ad rates.

“You might lose some of your naughty users, and if you lose traffic you might lose some of your revenue,” Ms. Quinn said. “You have to be prepared to take a hit.”

There is no legal or technical reason that companies with large teen audiences, like Facebook, or mainly teen users, such as Habbo, can’t do the same thing as Disney and WeeWorld.

From a business perspective, however, there are powerful reasons not to be so restrictive, starting with teen expectations of more freedom of expression as they age. If they don’t find it on one site, they will somewhere else.

The looser the filters, the more the need for the most sophisticated monitoring tools, like those employed at Facebook and those offered by independent companies such as the U.K.’s Crisp Thinking, which works for Lego, Electronic Arts, and Sony Corp’s online entertainment unit, among others.

In addition to blocking forbidden words and strings of digits that could represent phone numbers, Crisp assigns warning scores to chats based on multiple categories of information, including the use of profanity, personally identifying information and signs of grooming. A high number of “unrequited” messages, ones that get no reply, also factors in, because it correlates with spamming or attempts to groom in bulk. The scoring also draws on analysis of the actual chats of convicted pedophiles.

The highest scores generate colour-coded “tickets,” with those marked red requiring the quickest response from moderators.
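A weighted-category score feeding colour-coded tickets can be sketched as follows; the weights and thresholds here are invented for illustration, not Crisp's proprietary model.

```python
# Hypothetical per-category weights; real models are proprietary
# and informed by analysis of convicted offenders' chat logs.
WEIGHTS = {"profanity": 1.0, "personal_info": 2.5,
           "grooming": 4.0, "unrequited": 1.5}

def chat_score(signals):
    """signals: dict mapping category name -> count of hits in a chat."""
    return sum(WEIGHTS.get(cat, 0.0) * n for cat, n in signals.items())

def ticket_colour(score):
    # Invented thresholds: red tickets demand the fastest response.
    if score >= 10:
        return "red"
    if score >= 5:
        return "amber"
    return "green"
```

A chat with two grooming signals and one piece of personal information would score 10.5 under these weights and raise a red ticket, while a lone profanity stays green.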
