OpenAI CEO Sam Altman has been kicked out of the company. We explore how corporate coups take place and how you can avoid one in your own organization.

⚔ Coups follow a similar pattern.

This weekend OpenAI (creator of ChatGPT) was roiled in drama as its CEO, chairman, and other key figures were ousted.

We don't have all the facts yet, but it seems that the co-founder, chief scientist Ilya Sutskever, led the coup.

Why does this happen? (And how can you avoid it?)

1. 🌤 Lack Of Cohesive Vision 🌤

When the direction of a company is at odds with expectations, people tend to freak out.

In this case, OpenAI was plagued with debates on AI safety. The now former-CEO Sam Altman was heavily focused on commercializing AI. Others in the organization wanted to focus on the non-profit side.

This split in vision means a split in the company. Soon, people are rowing in different directions. Distrust starts to form. Each side accuses the other of being selfish. Eventually a righteous mission emerges to kick the other out. "We must save the company!" The organization is at war.

2. ⚡ Emotional Contagion ⚡

Coups feed on emotion, predominantly fear. Rather than move into healthy conflict resolution, leaders in the grips of fear spread it amongst each other. They feed off the negativity, whipping it up until making drastic moves "makes sense".

Today the co-founder who led the coup expressed remorse and regret, saying he wished he could reunite the company. Regret is a byproduct of making decisions based on emotion.

Ironically, maybe if they asked ChatGPT 🤖 for some advice, they could have taken some of the emotion out of the equation.

Today Sam Altman and employees from the commercial side of the business joined Microsoft. Essentially, Microsoft acquired half of OpenAI without lifting a finger. And all because a few leaders couldn't get into a room, communicate, and sort out their emotions.

[Image: Ousted CEO Sam Altman]

3. ⚙️ Systemic Failure ️⚙️

How you set something up matters. Results are the output of the system that produces them. And OpenAI created a structure that couldn't last.

The way the company was structured created different, competing profit centers with conflicting goals. The conflict was embedded in the system itself.

We should invest in #HumanIntelligence as much as #ArtificialIntelligence and stop making predictable mistakes. As we've written before, crisis is the accumulation of small problems that are ignored again and again until they lead to a collapse.

What will remain of OpenAI will become clear in the next few weeks and months. But it's already clear that we humans have yet to sort out our own BS.

OpenAI is not unique. Every nation, every organization risks the same consequences when it cannot align behind a common vision or control its fear.

How many times do we need to play out this experiment before we learn its lessons?