
Happy birthday ChatGPT: Looking back on the OpenAI chatbot’s first year of life

ChatGPT has only just turned one, but the OpenAI chatbot has packed a great deal into its first year of life.

Daniel Croft
Fri, 01 Dec 2023
The tool was popular from the very beginning, attracting over a million users within five days of its launch on 30 November 2022. Since then, ChatGPT has grown to 1.7 billion users, become more advanced with the launch of GPT-4 Turbo, and changed the very nature of how people all over the world work, becoming a staple tool across a wide range of industries.

As Oscar Wilde once said, “Imitation is the sincerest form of flattery that mediocrity can pay to greatness”, and by that measure, OpenAI’s ChatGPT is rich in flattery, with rival generative AI chatbots spawning throughout the year from major tech companies, including Google’s Bard and X’s (formerly Twitter) Grok.

Despite the competition, however, it’s hard to beat the original. ChatGPT has carved out a healthy 60 per cent of the market, dominating its rivals.


“In the year since OpenAI’s ChatGPT was publicly released, generative AI has taken the world by storm, proving its ability to enhance many different aspects of business operations,” said Jaime Moles, head of technical marketing at cyber security firm ExtraHop.

“As such, new customer data from ExtraHop found that the majority of enterprises have quickly adopted these technologies. ChatGPT has overwhelmingly taken the lead, showing clear dominance over other services like GitHub Copilot, Google Bard, and Azure OpenAI.”

That being said, these rivals are gaining traction, meaning OpenAI will be forced to keep innovating to stay on top, as it has done with GPT-4 Turbo.

“Interestingly, while only 12 per cent of customer devices are connecting to GitHub Copilot, our data indicates that it accounts for 30 per cent of total outbound traffic to generative AI services,” added Moles.

“As reference, nearly 80 per cent of customers are connecting to OpenAI, but ChatGPT is only accounting for two-thirds of the outbound traffic. The smaller base of Copilot users are clearly uploading far more data than their OpenAI counterparts and this may indicate that Copilot usage is on the up – especially among a far smaller user base.”
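Moles’ point can be made concrete with a rough back-of-the-envelope calculation. The sketch below simply re-uses the percentages quoted above and assumes they are measured over the same customer population, so it is illustrative only.

```python
# Rough per-device traffic comparison built from the ExtraHop figures quoted above.
# Assumes the device shares and traffic shares cover the same customer population.
copilot_devices, copilot_traffic = 0.12, 0.30   # 12% of devices, 30% of outbound traffic
openai_devices, openai_traffic = 0.80, 2 / 3    # ~80% of devices, ~two-thirds of traffic

copilot_per_device = copilot_traffic / copilot_devices  # 2.5
openai_per_device = openai_traffic / openai_devices     # ~0.83

ratio = copilot_per_device / openai_per_device          # ~3.0
print(f"Copilot users generate roughly {ratio:.1f}x more traffic per device")
```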

New features in GPT-4 Turbo include a much more up-to-date knowledge base, with training data no longer cut off at 2021. The model is also three times cheaper than GPT-4 and can generate images, to name just a few of the changes.
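For developers, the new model is reached through the same OpenAI API as its predecessors. A minimal sketch using OpenAI’s official Python library (version 1.x) might look like the following; it assumes an OPENAI_API_KEY environment variable is set and uses gpt-4-1106-preview, the GPT-4 Turbo preview identifier at launch.

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Ask the GPT-4 Turbo preview model (gpt-4-1106-preview at launch) a simple question.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "user", "content": "Summarise ChatGPT's first year in one sentence."}
    ],
)

print(response.choices[0].message.content)
```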

At the same time, OpenAI and ChatGPT have sparked discussion about the impact AI tools could have on human workers, as well as a wider debate over the ethical use and development of AI, with governments worldwide introducing guidelines and legislation.

Most recently, the UK’s National Cyber Security Centre (NCSC) released new global guidelines for the development of AI, with the help of the US Cybersecurity and Infrastructure Security Agency (CISA).

With 17 signatories including Australia, the new global guidelines break down AI security into four stages – secure design, secure development, secure deployment, and secure operation and maintenance – each reflective of a different part of an AI tool’s life cycle.

“AI systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed, deployed, and operated in a secure and responsible way,” said the NCSC.

“AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats.

“When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.

“For each section, we suggest considerations and mitigations that will help reduce the overall risk to an organisational AI system development process.”

Prior to the NCSC’s guidelines, the US issued an executive order for AI regulation, which it says will “advance and govern” the technology’s use and development.

“Harnessing AI for good and realising its myriad benefits requires mitigating its substantial risks,” said President Joe Biden in a White House release.

“This endeavour demands a society-wide effort that includes government, the private sector, academia, and civil society.”

Even the father of ChatGPT, OpenAI chief executive officer Sam Altman, has called for AI to be regulated, asking the US Congress to step in.

“I think if this technology goes wrong, it can go quite wrong, and we want to be quite vocal about that; we want to work with the government to prevent that from happening,” said Altman.

“We try to be very clear about what the downside case is and the work that we have to do to mitigate that.”

Outside of his calls for regulation, Altman was at the centre of significant drama at OpenAI this month, bookending ChatGPT’s first year.

The CEO was fired on 17 November without a specific reason being given, sparking threats of a mass exodus as staff looked to follow him to Microsoft, which had moved to hire him.

However, just as quickly as the drama started, Altman was reinstated as OpenAI CEO, and the board itself was reshuffled.

While Altman has not disclosed the reasoning behind his dismissal, it is believed he remained silent on developments in superhuman AI, technology that could surpass the intelligence of humans.

So as we wish ChatGPT a happy birthday for the first time and applaud how far it has come, we can be confident it will continue to shape the future in the years ahead, even if it’s hard to say exactly how.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications, including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
