
Government to mandate AI safeguards for ‘high-risk’ AI platforms

The Australian government has released its interim response to last year’s AI consultation paper, seeking a careful balance between driving innovative new solutions and empowering workers, and mitigating the dangers and risks associated with the technology.

Daniel Croft
Wed, 17 Jan 2024

The Department of Industry, Science and Resources opened the consultation in June 2023, receiving over 500 submissions before it closed in August.

Now, the government has collected the submissions and formed an interim response, saying that while the technology has the potential to massively improve the lives of Australians and stimulate the economy, there is still a lack of trust in artificial intelligence (AI).

“The potential for AI systems and applications to help improve wellbeing, quality of life and grow our economy is well known. It’s been estimated that adopting AI and automation could add an additional $170 billion to $600 billion a year to Australia’s GDP by 2030,” said the response.


“While AI is forecast to grow our economy, there is low public trust that AI systems are being designed, developed, deployed and used safely and responsibly. This acts as a handbrake on business adoption and public acceptance.

“Surveys have shown that only one-third of Australians agree Australia has adequate guardrails to make the design, development and deployment of AI safe.”

As a result of its findings, the Australian government has announced that it will apply mandatory safeguards to high-risk AI tools, while low-risk or no-risk AI tools may be used without restriction, maximising the extent to which they can help Australians in day-to-day operations.

“In considering the right regulatory approach to implementing safety guardrails, the government’s underlying aim will be to help ensure that the development and deployment of AI systems in Australia in legitimate, but high-risk settings, is safe and can be relied upon while ensuring the use of AI in low-risk settings can continue to flourish largely unimpeded.”

The government added that its first step in developing these safeguards is to identify the risks that AI tools present, the mandatory safeguards that would appropriately address those risks, and the best ways to implement them.

On top of this, the government is working with industry professionals to “develop a voluntary AI Safety Standard, implementing risk-based guardrails for industry”.

It has also said it is establishing an expert advisory body to oversee the development of future AI guardrails, and that it will develop a voluntary labelling and watermarking scheme under which creators would mark AI-generated material.

Alongside the strict crackdown on AI risks, the government hopes to optimise the technology within Australian society and maximise the benefits it presents.

According to the submissions, the government has said that greater investment in AI is crucial to bringing Australia to the forefront of the technology and maximising the nation’s output.

In line with this, the government has dedicated $75.7 million in funding to AI initiatives, which it said complements the massive multibillion-dollar investments made by the private sector – $4.4 billion since 2013, with $1.9 billion in 2022 and $1.8 billion in 2021 alone.

“Building on these important investments, the Australian government will continue to consider opportunities to support the adoption and development of AI and other automation technologies in Australia, including the need for an AI Investment Plan. This complements efforts to ensure that Australia has in place the necessary guardrails to build trust and confidence in the use of AI.”

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications, including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
