Derek Ruths is a professor of computer science at McGill University, where he advances the responsible use of large datasets to measure and model human behaviour. In his research, he’s worked extensively on and collaborated with social-media platforms including Facebook, Twitter and Reddit.

The blue checkmark of Twitter’s verified accounts remains one of social media’s most coveted baubles. The symbol, which appears next to a user’s name to signify that an account genuinely belongs to the person, company or organization it claims to represent, is hard to get: earning one involves a vetting and verification process.

So it was strange when, on July 15, a long list of verified Twitter accounts – including those of Bill Gates, Barack Obama, Kanye West, Jeff Bezos and Kim Kardashian – tweeted out messages asking for donations sent through bitcoin, the hard-to-trace cryptocurrency, with promises to double and then return those gifts.

This was a security breach in which Twitter’s vaunted checkmarks effectively doubled as targets, highlighting the accounts most valuable to exploit, with their millions upon millions of trusting followers.

The company has been understandably cagey about how this hack happened, but there’s enough information to conclude that the only way to seize so many of these accounts at once would have been through a backdoor, such as one of the internal “god-mode” tools that Twitter employees use as master controls to administer, alter and manage the platform. It seems clear that Twitter’s god-mode tools were too accessible, and that too few checks were required to make substantive changes to users’ accounts.

While many have since raised the alarm about the power of these god-mode tools, the reality is that they are necessary for any online platform to function – whether that’s social media (Twitter), retail (Shopify) or gaming (Candy Crush). The ability for certain key employees to review, edit, remove, ban, change passwords on and reinstate a user’s account is essential for any online platform to remain healthy, comply with laws and manage risks.

What is more alarming is how poorly these tools are secured and tech companies’ generally sluggish efforts to protect them better. It’s not technically hard to solve this problem; it would simply require rerouting precious development time, implementing stricter moderation protocols and staffing additional customer-support managers. And banks have effectively had god-mode mechanisms in place for decades, with relatively solid track records when it comes to security. So why won’t Twitter, Facebook and other platforms just adopt these best practices?
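To make the bank-style safeguard concrete, here is a minimal sketch (purely illustrative, not Twitter’s or any bank’s actual tooling) of “two-person control”: a sensitive admin action only executes once a second, distinct employee approves it, and every step is written to an audit log. The class and method names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AdminAction:
    description: str            # e.g. "reset password for @user"
    requested_by: str
    approvals: set = field(default_factory=set)
    executed: bool = False

class AdminConsole:
    REQUIRED_APPROVALS = 2      # the requester plus one independent reviewer

    def __init__(self):
        self.audit_log = []

    def request(self, employee: str, description: str) -> AdminAction:
        # The requester implicitly approves their own action.
        action = AdminAction(description, requested_by=employee,
                             approvals={employee})
        self.audit_log.append(f"{employee} requested: {description}")
        return action

    def approve(self, employee: str, action: AdminAction) -> bool:
        # A set of approvers means self-approving twice adds nothing.
        action.approvals.add(employee)
        self.audit_log.append(f"{employee} approved: {action.description}")
        if len(action.approvals) >= self.REQUIRED_APPROVALS:
            action.executed = True
            self.audit_log.append(f"executed: {action.description}")
        return action.executed

console = AdminConsole()
action = console.request("alice", "reset password for @example")
console.approve("alice", action)   # self-approval: still only one approver
print(action.executed)             # False
console.approve("bob", action)     # a second, distinct employee signs off
print(action.executed)             # True
```

The point of the sketch is the structural one made above: nothing about the mechanism is technically hard, and the audit log means any misuse leaves a trail that security teams, or regulators, can review after the fact.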

The answer has a lot to do with the tech industry’s culture. In 2004, the freshly founded Facebook adopted the motto “Move fast and break things,” effectively a call to release as many features as quickly as possible, consequences be damned. The approach devalued the integrity of the product in favour of what the user could see and touch, like a house that looks great on a first walk-through until you notice the holes behind the furniture and the dangerously tangled electrical wiring. But Facebook’s success turned that motto into the mantra of “brogrammer” culture at other tech companies.

A decade later, Facebook changed its motto to the less catchy “Move fast, with stable infra” (short for infrastructure), but that didn’t change companies’ development culture. Online and gaming platforms have repeatedly demonstrated a systemic inability to respond to values that revolve around the integrity and long-term impact of their products, including the processes and practices around god-mode tools.

To address these kinds of gaps – from Twitter’s privacy breach to reports about how Facebook can affect voter attitudes – online platforms need to make changes. But the sector has had years to prove that it could enact those changes itself, and time and time again, companies have proven – with few exceptions – that it’s just not disciplined enough to create this change from within. Executives and developers at platform companies have been too distracted by the appeal of shiny new features and product differentiation to prioritize real user protection. Indeed, this is what the industry incentivizes; companies that do divert resources to developing bank-grade god-mode tools or focusing on long-term social issues such as privacy, media literacy or bias will typically be seen as taking their eyes off the prize, and lose market share.

External regulations, similar to the ones imposed on banks, seem to be the only way to make user protection an enduring high-priority agenda item for policy and platform makers. There’s no clear regulatory solution for this yet, but Canadian governments have already demonstrated a willingness to create pressure on the tech sector and apply innovative approaches. If tech companies aren’t going to check the box of being human-centred themselves, then governments will need to do it for them.
