![](https://static.wixstatic.com/media/96f9e3_50f7bc5bf9cb4fcdb1d894cf03cf0f21~mv2.png/v1/fill/w_980,h_980,al_c,q_90,usm_0.66_1.00_0.01,enc_auto/96f9e3_50f7bc5bf9cb4fcdb1d894cf03cf0f21~mv2.png)
As we report elsewhere, Sam Altman has been returned to the OpenAI hot seat, carried back by the sworn allegiance of 730 of his co-workers (not sure I'd like to be among the other 40 who *didn't* sign the letter demanding his return) and forced through by Microsoft's determination to secure the value of its $13Bn investment.
So what happened? And what does it mean for the rest of us?
It looks like there were two defining moments that drove the Board's initial decision to oust him. One was a significant breakthrough in the AI technology that OpenAI is developing. Its stated goal, in its charter, is to advance towards Artificial General Intelligence (AGI) - the moment that machines become as intelligent as humans. On the Thursday night before he was sacked, Altman was at a conference telling the world that this breakthrough had had a big impact on him.
“Four times now in the history of OpenAI—the most recent time was just in the last couple of weeks—I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward,” Altman said during a panel discussion at the Asia-Pacific Economic Cooperation (APEC) conference.
It's likely we'll find out what this 'frontier of discovery' is sometime in 2024 - but if it's that remarkable, expect AGI to be another leap closer.
The second defining moment appears to be that this discovery prompted some researchers within OpenAI to write to the Board, worried that Altman saw it as a commercial opportunity rather than a massive risk to the future of humanity. That letter, it seems, is what led them to pull the trigger.
It's not necessary to know every convolution that has brought OpenAI to its current guise. It was originally a not-for-profit set up to create AI that could compete with Google; Altman, Musk and Greg Brockman (who has served as both CTO and President) worried that if the technology was developed in private hands alone, the rest of humanity would have a problem. Elon Musk was one of the co-founders until he stropped off with his cash when they would no longer do things his way.
However, by 2019 Altman and Brockman insisted that they needed a for-profit arm to pay for the massive computing power required to keep advancing the technology. This is roughly where Musk broke with them, later vowing to set up his own AI company, now called xAI.
All along, the Board has been charged with ensuring the safety of the technology the company is creating; the charter requires them to do all they can to mitigate the risks to humanity of what OpenAI is doing. This, clearly, is what they thought they were doing when they fired Altman just over a week ago. They must have acted because they believed the researchers' warning that Altman, emboldened by this breakthrough, had himself become a danger to the rest of us.
So, now that everything's back as it was (apart from a few irritating Board members having been ousted instead), what does it mean for the technology - and that little thing called humanity?
In general, it isn't surprising that a number of researchers were uncomfortable with the direction the technology was taking. The charter's insistence on risk-aversion rather than the grasping of opportunities made a clash like this pretty much inevitable at some stage: no one knows what this technology is going to do, so the three Board members who sacked Altman had to rely on their best guesses. And they were bound to the idea that they should protect first and ask questions later.
Whatever this new technology is, we're heading towards AGI. OpenAI is not perfect by any means - nor is Altman - but they are at least attempting to do things the right way, in public. Whilst Altman now has a pretty bulletproof job, he does inspire a belief that he's leading a company that at least tries to weigh protection as heavily as progress.
As 2024 rolls by, I think we'll see more eye-popping developments from OpenAI. Each of them will be hotly debated as to whether it is the breakthrough Altman saw these last few weeks - and whether the decelerationist researchers or the accelerationist Altman were right about their direction all along.