
OpenAI CEO Sam Altman, left, appears onstage with Microsoft CEO Satya Nadella at OpenAI's first developer conference, on Nov. 6, in San Francisco. Barbara Ortutay/The Associated Press

Connor Spelliscy is executive director of the Decentralization Research Center.

When most of the board members of the Microsoft-backed OpenAI were ousted and Sam Altman resumed his role as chief executive officer of the world’s best-known artificial-intelligence company, Big Tech showed its hand and tightened its grip on the AI industry. The authority of OpenAI’s altruistic non-profit board proved to be nothing more than “governance theatre.”

If you’ve ever seen a referee try to manage a soccer game of four-year-olds, you’ve experienced governance theatre. The referee’s authority extends only as far as the players’ respect for the rules, which isn’t far with that age group.

OpenAI’s former board found itself in a similar position last month. The board was responsible for setting OpenAI on a path to “ensure that artificial general intelligence benefits all of humanity,” and it appears to have dismissed Mr. Altman, the head of its for-profit subsidiary, at least partly because it believed his actions were jeopardizing that mission.

In the end, the scenario at OpenAI mirrored the chaos of the aforementioned youth soccer game. Although the board was legally empowered to dismiss the CEO, its authority turned out to be, like the referee’s, mostly an illusion. Influential stakeholders had enough leverage to outmanoeuvre the board, and Mr. Altman was reinstated just days later, now with a new board much less likely to repeat the same mistake.

The key influential stakeholder here was likely Microsoft, which had invested US$13-billion in OpenAI. Without the support of a rich backer such as Microsoft, OpenAI could not afford the immense computational resources required to train and run its models. Follow the money through OpenAI’s governance structure and you get a clearer picture of who is in charge.

You can see governance theatre in action across a range of activities, from children’s soccer games to Olympic judging to Rogers board meetings. OpenAI’s governance theatre seized headlines these past few weeks because, unlike in those other contexts, there is a real concern that a governance failure could result in human extinction.

AI tools are already changing the way we learn, work and communicate, and they are poised to further revolutionize our daily lives. As these tools grow rapidly in sophistication from an already impressive base, it is increasingly understood that the systems being built by OpenAI and its competitors could change the trajectory of humanity. Geoffrey Hinton, one of Canada’s godfathers of AI, has taken to warning that, as a result of AI, “it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”

OpenAI was meant to be different. Its unique non-profit structure and board stand in stark contrast to those of traditional Big Tech companies. OpenAI’s board was composed of members with no personal ownership stake in the for-profit subsidiary, many of them skewing more academic than corporate, whereas traditional Big Tech boards are stacked with prominent former policy-makers, executives and venture capitalists, all of whom have a financial interest in the company’s success.

When OpenAI’s prospects were on the line, however, its idealistic board was dismantled and Microsoft’s leverage was made plain. OpenAI now appears positioned to amplify the quickly growing power of Big Tech, instead of introducing a new model of organization emphasizing equitable governance and prioritizing safety.

We’ll have to wait for the dust to settle before we know what role Mr. Altman will play at OpenAI, but he could be well-positioned to find a place among the tech oligarchs of Silicon Valley, some of whom wield complete control over their Big Tech companies. For proof, look no further than today’s digital town squares: X, the former Twitter, is privately owned and controlled by Elon Musk, while Meta, despite being a public company, is controlled by Mark Zuckerberg through his special class of supervoting shares.

The trend of centralized power in Big Tech extends beyond control of individual companies to control of entire markets. Meta, X and Microsoft, among other giants, hold dominant positions in crucial markets where competition is scarce, and many of them are expanding their influence in the AI market, both by investing in startups and by developing their own AI tools in-house. This market control has already drawn antitrust action against many Big Tech companies, and recent sabre-rattling from regulators suggests more is to come.

One example? Microsoft received a 49-per-cent stake in OpenAI’s for-profit subsidiary and valuable intellectual-property rights in return for a multibillion-dollar investment. After the board shuffle, Microsoft also secured a non-voting seat on OpenAI’s board, which appears to be part of the reason competition regulators in Britain and the United States have begun examining the relationship between the two organizations.

In short, only a handful of companies – sometimes controlled by a single person – control the technologies that are likely to shape our future.

Oligarchy to plurality

If these technologies are going to change our future, how do we get a say?

Effective regulation would be a good start, but we’ll need to first elect policy-makers willing to invest time in understanding emerging technologies without politicizing them.

Policy-makers have the unenviable task of working on a variety of complex topics, from climate policy to national defence, which makes it tough for them to keep up with ever-changing emerging technologies such as AI. Having spent several years of my career building and running advocacy organizations for emerging technologies, I have seen first-hand that the level of knowledge among policy-makers, even those thought to be a party’s expert on a particular subject, is often alarmingly low.

In such situations, a natural defence is to slow-play regulation or politicize the issue, which gives policy-makers an excuse to delay. This shifts the discussion away from the merits of a technology and results in bad policy or, more often than not, no policy at all, in which case regulators are left to take a best guess at how the technology should fit into existing regulatory regimes that are usually dated and ill-suited to the task. For a recent example, look at blockchain: it has been in the news for a decade, yet Canada and the United States have passed no material legislation on the subject, creating space for bad actors while chilling valuable innovation.

We’re not advocating for any particular legislation; we’re advocating for legislation guided by a breadth of policy-makers who understand emerging technologies and can contribute substantively to their regulation. If we continue to elect policy-makers who are unwilling to learn about these technologies, their inaction will repeat the mistakes that allowed so much power to accrue to the last generation of tech giants. If you’re not going to learn, don’t run for office. The stakes are too high.

Regulation shouldn’t be limited to industry-specific oversight. We also need to update and enforce antitrust laws for the tech industry. While antitrust laws in North America were once an important tool for encouraging competition and fighting centralization, notably used to break up Standard Oil and AT&T, these old laws have proven ineffective when applied to modern tech companies. The ineffectiveness of Canada’s antitrust laws was evident this year when our Competition Bureau failed to halt the merger between Rogers and Shaw, and salt was rubbed into the wound when the bureau was ordered to pay Rogers $13-million for even attempting to stop it.

Another path to mitigating the risk of tech oligarchs is, ironically, a variation on OpenAI’s strategy of organizing differently from a standard corporation. OpenAI’s unique structure has clearly not gone according to plan, but had the company organized under a more battle-tested structure, such as a co-operative, control would rest with a significantly broader set of stakeholders rather than a handful of insiders.

If OpenAI were a workers’ co-op, its employees would have had the power to decide the fate of their leader. The employees, who understood the nuances of Mr. Altman’s contributions and had a direct stake in the organization’s trajectory, could have made a more democratic and better-informed decision. This is particularly relevant here given that more than 95 per cent of OpenAI’s 750-plus employees signed a letter threatening to leave the company unless Mr. Altman was reinstated.

Even better, if OpenAI were a consumers’ co-op, its millions of users, ranging from individual hobbyists to large corporations, would each own a portion of OpenAI and collectively control it. This would give a far greater share of the people affected by OpenAI’s technology a say in its direction. Mr. Altman himself has suggested he’d like the advanced form of artificial intelligence known as artificial general intelligence to be equally owned and controlled by every person in the world. Structuring AI companies as consumers’ co-ops would be a step in that direction.

While co-ops at this scale have traditionally struggled with issues such as organizational transparency, effective governance systems and member accountability, new tools are emerging to address those weaknesses. In a paper I recently co-authored for the Harvard Kennedy School’s Belfer Center, we explored how another emerging technology, decentralized tooling, enables co-ops to overcome their traditional limitations and compete effectively with Big Tech companies.

There is no quick fix for untangling decades of Big Tech centralization. But if we want an end to governance theatre, and if we want a broader, more diverse cross-section of society to have a stake and a say in the technologies that are reshaping our lives, we need to start making changes now.
