Hop onboard the AI train

By Liam Garman
21 October 2021 | 1 minute read

Machine learning will not only underpin the future of cyber warfare, but will play an even larger role in target selection and influence the command-and-control continuum. Australia must fully grasp the technology sharing agreements under the AUKUS deal to build a stronger domestic AI capability.

In early October, the US Army conducted its fourth iteration of tests for the ‘Scarlet Dragon’ AI platform. The concept of the platform is simple: it scans satellite imagery and uses machine learning to identify potential targets, which are then relayed to a human operator who oversees fire control.

Scarlet Dragon represents a significant leap in demonstrating the growing interconnectedness of the military domains, enabled by cyber and AI activity. Throughout the most recent iteration of Scarlet Dragon testing, the platform analysed 7,200 square kilometres of satellite radar imagery, which was then notionally relayed to F-35s, F-15s, and F-18s for fire support.


Colonel Melissa Solsbury, chief data officer for the US Army’s XVIII Airborne Corps, explained that such operations refine the platform’s machine learning capabilities, according to the US-based Army Times.

“Since our first event we have been able to reduce the speed of moving data from sensor to shooter by nearly 50 per cent,” COL Solsbury told the media outlet.

Indeed, the Pentagon’s machine learning-enabled target identification process has been supported by the likes of Google, Microsoft and Amazon. Know those annoying image-selection puzzles you complete to log into a website or verify an account?


Enter Project Maven.

Project Maven provides the target identification algorithms that allow the platform to distinguish friend from foe, and military vehicle from civilian car. In fact, Google supported the Pentagon’s Project Maven until a mass employee petition in 2018 prompted the company to withdraw its support for the project.

Speaking to Gizmodo, a Google employee explained how Project Maven has been used to support military target identification.

“We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data,” a company spokesperson said.

“The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”
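The human-in-the-loop workflow Google describes – a model flags candidate images, and a person makes every final decision – can be sketched in a few lines. The snippet below is purely illustrative and assumes nothing about Maven’s or Scarlet Dragon’s actual code: the `Detection` type, its field names, and the 0.5 review threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One candidate object found in an image (all fields hypothetical)."""
    image_id: str
    label: str         # e.g. "vehicle"
    confidence: float  # model score in [0, 1]

def triage(detections, review_threshold=0.5):
    """Split detections into those flagged for human review and those discarded.

    Nothing is actioned automatically: every flagged detection still
    requires a human decision, mirroring the human-in-the-loop design
    described in the quote above.
    """
    flagged = [d for d in detections if d.confidence >= review_threshold]
    discarded = [d for d in detections if d.confidence < review_threshold]
    return flagged, discarded

# Only the higher-confidence candidates reach the human analyst.
candidates = [
    Detection("frame-001", "vehicle", 0.91),
    Detection("frame-001", "building", 0.34),
    Detection("frame-002", "vehicle", 0.67),
]
flagged, discarded = triage(candidates)
```

The point of the sketch is the division of labour: the model narrows thousands of frames down to a reviewable shortlist, and the shoot/no-shoot judgement never leaves human hands.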

Put simply, AI provides a flow of more timely and accurate information into a commander’s command-and-control continuum, thus enhancing their decision-making. In even simpler terms, it makes your OODA (observe, orient, decide, act) loop shorter than an adversary’s.

Despite the objections of ethicists who argue against the application of AI in warfare, it is both inevitable and not a new phenomenon.

In 1988, the USS Vincennes downed an Iranian passenger jet while operating the semi-autonomous Aegis weapons system – but subsequent investigations found that it was human error, not the autonomous system, that caused the shoot-down. As with machine error, human error can be catastrophic.

While many have spent years studying the Clausewitzian maxims that war “is wrapped in a fog of greater or lesser uncertainty” and that “most [intelligence reports] are uncertain”, such lessons are diminishing in the face of AI-enabled warfare.

Huon Curtis, writing in ASPI’s The Strategist this week, argued that Australia will benefit from the new AI technology sharing arrangements under the recently inked AUKUS agreement.

“Prime Minister Scott Morrison has added science and technology to the portfolio of the Defence Industry Minister, Melissa Price, signalling closer alignment of the research sector with the defence organisation,” Curtis argued.

“Britain’s new AI strategy puts defence front and centre after a 2017 industrial strategy and a £1 billion AI sector deal in 2018. By the end of this year, the UK Ministry of Defence intends to publish its own strategy on how it will adopt and use AI.”

In his submission, Curtis rightly observes that “technology is increasingly seen as geopolitical”. This is a key detail that many in the West continue to forget – whether it’s the repressive International Traffic in Arms Regulations (ITAR) that drove a boom in the commercial space and cyber sectors, enabling companies across borders to develop unimaginable asymmetric weapons, or the unwillingness of Australia and other allied middle powers to invest in a public-private DARPA equivalent.

But Curtis explains that Australia is uniquely positioned to support the allied push for AI.

“If the US positioning of research as a national asset is anything to go by, Australian universities are likely to be repositioned as an element of critical infrastructure. Australian AI research ranks highly in terms of per capita output and quality, in particular the frequency with which research papers are cited. But we have a weak venture capital system and longstanding issues in commercialising our research,” he notes.

Nevertheless, relying on our universities to spearhead a push for AI is not without risk. Curtis cites former chief defence scientist Robert Clark and Peter Jennings, executive director at ASPI, to note that “the current largely open approach of Australian research universities to their international links is significantly exposed”.

Australia must get on the front foot and fully utilise the information-sharing capabilities of the AUKUS deal. While our population is small, Australia must exploit its unique advantages to build a deterrence capability against its enemies. After all, why not? Our enemies are doing it.

According to the US-based Cyber Readiness Institute’s Kiersten Todt, analysts are starting to believe that the Microsoft Exchange hack earlier in the year, which compromised some 250,000 devices, was used as an information gathering operation to support overseas AI enhancement.

“Stealing information from small- and medium-size businesses out in the American heartland doesn't immediately suggest espionage. Instead, officials believe the Chinese gather this information to help them construct the informational mosaic they need to build world-class AI,” Dina Temple-Raston wrote in NPR.

It’s time for Australia to step in line with the world’s most advanced economies and fully grasp the cyber and AI capabilities offered to us under the AUKUS deal, supported by a public-private partnership in the form of an Australian DARPA.

Yes, a cacophony of military ethicists will disagree, but we can no longer let the Hollywoodification of AI cloud the fact that machine learning-driven dynamic targeting cycles will underpin the future of warfare.


Liam Garman

Liam began his career as a speech writer at New South Wales Parliament before working for world-leading campaigns and research agencies in Sydney and Auckland. Throughout his career, Liam has managed and executed a range of international media and communications campaigns spanning politics, business, industrial relations and infrastructure. He’s since shifted his attention to researching and writing extensively on geopolitics and defence, specifically in North Africa, the Middle East and Asia. He holds a Bachelor of Commerce from the University of Sydney and is undertaking a Master’s in Strategy and Security at UNSW Canberra.
