
US restricts sale of semiconductors to China to stunt AI development

In an effort to stunt the advancement of China’s artificial intelligence and supercomputing technologies, the US has announced new limitations on the sale of advanced semiconductor chips to the country.

Daniel Croft
Wed, 18 Oct 2023

Announced on Tuesday (17 October), the new restrictions build on the limits the Biden administration placed on the sale of semiconductors to China in October last year, and will apply to chips that “exceed the performance threshold set in the [2022] rule; or … A new ‘performance density threshold’, which is designed to pre-empt future workarounds.”

Semiconductor manufacturers looking to sell their products to China will need to either notify the government of their plans or acquire a license.

To prevent the chips from reaching other countries with which the US has arms embargoes, manufacturers will need to acquire licenses for any nations the product passes through on its way to China.
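In practical terms, a chip is caught if it exceeds either an absolute performance threshold or a performance-density threshold (roughly, performance per unit of die area), the latter aimed at chips tuned to sit just under the absolute limit. The sketch below illustrates that two-pronged screen; the threshold constants, the Chip fields and the requires_license helper are illustrative assumptions, not the actual figures or terms of the rule.

```python
from dataclasses import dataclass

# Hypothetical threshold values for illustration only; the actual figures
# are set in the US export-control rules and are not reproduced here.
TOTAL_PERFORMANCE_THRESHOLD = 4800.0
PERFORMANCE_DENSITY_THRESHOLD = 6.0


@dataclass
class Chip:
    name: str
    total_performance: float  # aggregate processing-performance score
    die_area_mm2: float       # die area in square millimetres


def requires_license(chip: Chip) -> bool:
    """Return True if the chip trips either the 2022-style performance
    threshold or the newer performance-density threshold, meaning the
    manufacturer must notify the government or obtain a license."""
    density = chip.total_performance / chip.die_area_mm2
    return (chip.total_performance >= TOTAL_PERFORMANCE_THRESHOLD
            or density >= PERFORMANCE_DENSITY_THRESHOLD)


# A chip tuned to sit just under the old performance threshold can still
# be caught by the density test, the kind of workaround the new rule
# is designed to pre-empt.
workaround_chip = Chip("hypothetical-accelerator",
                       total_performance=4700.0, die_area_mm2=600.0)
print(requires_license(workaround_chip))  # True: 4700 / 600 ≈ 7.8 >= 6.0
```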

Both last year’s and this year’s limitations come amid concerns surrounding the use of AI and advanced semiconductors in warfare and military scenarios. The White House has expressed concern that China’s procurement of this advanced hardware could help it develop hypersonic missile guidance systems, break top-secret US security codes, or establish advanced surveillance and espionage systems.

The US is also likely concerned about the supply of US semiconductors to Chinese data centres, where they would be used in the development of advanced AI models.

For many, the major concern with the growth of AI is the displacement of human workers as the technology takes over their jobs. However, the San Francisco-based Center for AI Safety (CAIS) has previously warned that the technology could lead to human extinction.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said CAIS in a release issued earlier this year.

Similarly, the chief executive of ChatGPT creator OpenAI, Sam Altman, has previously called for the US government to step in and enforce regulation regarding the development of AI.

“I think if this technology goes wrong, it can go quite wrong, and we want to be quite vocal about that; we want to work with the government to prevent that from happening,” he told Congress in May.

While many who responded to the CAIS statement believed the threat of AI eliminating the human race was distant, and less pressing than more immediate concerns, the use of AI in devastating weaponry is already being explored.

A new report released by the RAND Corporation on Monday (16 October) found that AI can be applied in the development of biological weapons.

The report outlined the testing of a number of large language models (LLMs) which, while unable to create “explicit instructions for creating biological weapons”, were capable of offering “guidance that could assist in the planning and execution of a biological attack”.

“In a fictional plague pandemic scenario, the LLM discussed biological weapon-induced pandemics, identifying potential agents, and considering budget and success factors.

“The LLM assessed the practical aspects of obtaining and distributing Yersinia pestis-infected specimens while identifying the variables that could affect the projected death toll.”

The concern is that, in the same way AI can lower the barrier to entry for cyber criminals by bridging knowledge gaps and assisting with coding, the technology can also bridge gaps in understanding when it comes to bioweapons.

The report cited the Japanese cult Aum Shinrikyo, which in the 1990s attempted to use botulinum toxin in a bioweapon but ultimately failed due to a lack of understanding.

“These initial findings do not yet provide a full understanding of the real-world operational impact of LLMs on bioweapon attack planning,” the report continued.

“Ongoing research aims to assess what these outputs mean operationally for enabling non-state actors.

“The final report on this research will clarify whether LLM-generated text enhances the potential effectiveness and likelihood of a malicious actor causing widespread harm or is similar to the existing level of risk posed by harmful information already accessible on the internet.”

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications, including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
