UK cyber watchdog releases international guidelines for AI development

Australia, the US and 15 more countries have shown their support for new global guidelines for AI development released by the UK’s National Cyber Security Centre (NCSC).

Daniel Croft
Tue, 28 Nov 2023

The new guidelines, developed by the NCSC and the US Cybersecurity and Infrastructure Security Agency (CISA), take a “secure-by-default” approach to the entire development and release cycle of artificial intelligence (AI) tools.

The guidelines have been broken down into four stages: secure design, secure development, secure deployment, and secure operation and maintenance, each reflecting a different part of an AI tool’s life cycle.

“AI systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way,” said the NCSC.


“AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats.

“When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.

“For each section, we suggest considerations and mitigations that will help reduce the overall risk to an organisational AI system development process.”

The UK has sought to play a leadership role in the AI regulatory space and has garnered international support with the release of the first-of-their-kind international guidelines.

Seventeen countries, including Australia, Japan, Israel, Canada, Germany, and the US, have become signatories to the new guidelines, reflecting the international recognition of the dangers AI could create, alongside the massive potential it has to change the world for the better.

“As nations and organisations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability, and secure practices,” said CISA director Jen Easterly.

“The domestic and international unity in advancing secure-by-design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution.”

Discussions regarding AI regulation have intensified over the last year, with government leaders, industry experts, and others calling for intervention.

OpenAI CEO Sam Altman called on US Congress back in May to regulate AI development, stressing the risks the technology could pose in creating misinformation.

However, recent drama at the ChatGPT developer has sparked alarm, with reports suggesting that dangerous AI with greater intelligence than humans could be on the verge of discovery.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
