
OpenAI fired Altman for silence on superhuman AI

While it seems that Sam Altman is returning to his post at the helm of OpenAI, things are far from ordinary at the AI giant, with one cited reason for the reinstated chief executive’s initial firing relating to a reportedly revolutionary AI breakthrough.

Daniel Croft
Thu, 23 Nov 2023

According to sources close to the matter who spoke with Reuters, a handful of researchers at OpenAI had written a letter to the board warning of an artificial intelligence (AI) breakthrough that could endanger the world as we know it.

The AI in question is called Q* (pronounced Q Star), a project within OpenAI’s search for what is known as artificial general intelligence (AGI), or superintelligence, which the company describes as AI smarter than human beings.

Q* had reportedly been making serious progress, with the model performing tasks that could revolutionise AI.


Current AI models, like OpenAI’s GPT-4, are fantastic at writing because they predict what text should come next, but as a result, their answers vary and are not always correct.

Researchers believe that an AI tool capable of solving and properly understanding mathematical problems, where there is only one correct answer, would represent a major breakthrough in the development of superintelligence.

This is exactly what Q* reportedly began to do. While the model was only solving equations at a school level, and doing so in an extremely resource-heavy way, researchers believed the development to be significant.

It’s worth noting that this is not like a calculator that can solve certain equations as you enter them; ChatGPT can already do that, thanks to the massive library of data it has access to. A model like Q*, by contrast, is reportedly able to learn and properly understand the process.

The reason the OpenAI board was displeased with Altman is that news of these developments had been kept quiet.

OpenAI staff were only made aware on Wednesday last week (15 November), when company executive Mira Murati informed them, a company spokesperson told Reuters.

The spokesperson also said that a letter flagging the potential dangers presented by the technology had been sent to the board.

Altman has previously expressed his excitement for AGI and has worked hard to push OpenAI closer to its discovery, alongside changing the nature of work with ChatGPT. To do this, Altman relied heavily on the mass of computing resources provided by the company’s biggest backer, and briefly his new employer, Microsoft.

Speaking at the Asia-Pacific Economic Cooperation (APEC) summit last week, Altman said he believed OpenAI was close to discovering AGI.

“Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honour of a lifetime,” he said.

The next day, Altman was terminated from OpenAI.

Altman’s push for the rapid exploration of superintelligence is concerning. For years, there has been discussion surrounding fears of AI and the threat it could pose to humans. Now, however, the technology is no longer a work of science fiction, and reported breakthroughs like those detailed above come with a whiff of Cyberdyne Systems’ Skynet (for the non-nerds in the room, the AI that powered the Terminator).

There have already been warnings that AI could lead to humans being wiped out, with the US-based Center for AI Safety (CAIS) saying back in May that the development of the technology could lead to human extinction.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said CAIS in its one-sentence statement.

Agencies around the world have begun to introduce regulations for the development of AI; the US government, for instance, issued an executive order on the technology’s regulation last month. Altman himself had been calling for governments to step in and regulate AI development since May, a major contrast with his reported lack of candour regarding AGI.

Speaking before the US Congress, Altman said that while generative AI is an incredibly powerful tool, government regulation would be necessary to curb the dangers it creates.

“I think if this technology goes wrong, it can go quite wrong, and we want to be quite vocal about that; we want to work with the government to prevent that from happening,” said Altman.

“We try to be very clear about what the downside case is and the work that we have to do to mitigate that.”

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications, including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
