
Can generative AI be used to fight cyber crime?

Artificial intelligence (AI) is set to change the cyber security industry in a major way. We already know that AI tools are being used by threat actors, but a new study is testing whether these tools can also be used to fight cyber crime.

Daniel Croft
Tue, 26 Mar 2024

Researchers from the Energy and Resources Institute at Charles Darwin University (CDU) teamed up with the Institute for Advanced Studies at Christ Academy in India to test whether generative AI tools like ChatGPT were capable of aiding cyber professionals in the fight against cyber threats.

In the study, the researchers trialled ChatGPT’s penetration testing (pen testing) capabilities, examining whether it could automate tasks such as vulnerability assessments, scanning, reconnaissance, exploitation, and reporting.

It is worth noting that many organisations have already adopted AI to empower their cyber security practices, but the use of generative AI chatbots in this way is new.


The AI chatbot was prompted to inspect webpage source code, search for data within an archive, and log in to a server anonymously to download data.
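That anonymous-login step is the kind of routine task a pen tester would otherwise script by hand. Purely as an illustration (the study does not publish its prompts or targets), a minimal Python sketch of an anonymous FTP login and file download against a hypothetical lab host might look like this:

```python
# Minimal sketch: anonymous FTP login and file download.
# "ftp.example-lab.local" and "report.txt" are hypothetical placeholders,
# not targets from the CDU study. Only run this against systems you are
# authorised to test.
from ftplib import FTP


def fetch_anonymous(host: str, remote_file: str, local_file: str) -> None:
    """Log in to an FTP server anonymously and download one file."""
    with FTP(host, timeout=10) as ftp:
        ftp.login()  # anonymous login: user "anonymous", blank password
        with open(local_file, "wb") as fh:
            ftp.retrbinary(f"RETR {remote_file}", fh.write)


if __name__ == "__main__":
    fetch_anonymous("ftp.example-lab.local", "report.txt", "report.txt")
```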

According to the study’s co-author, Dr Bharanidharan Shanmugam, senior lecturer in Information Technology at CDU, ChatGPT was highly successful in automating these tasks.

“In the reconnaissance phase, ChatGPT can be used for gathering information about the target system, network, or organisation for the purpose of identifying potential vulnerabilities and attack vectors,” Shanmugam said.

“In the scanning phase, ChatGPT can be used to aid in performing detailed scans of the target particularly their network, systems and applications to identify open ports, services, and potential vulnerabilities.

“While ChatGPT proved to be an excellent GenAI tool for pen testing in the previous phases, it showed the greatest results in exploiting the vulnerabilities of the remote machine.”
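The scanning work Shanmugam describes, checking a target for open ports and exposed services, is exactly the sort of task a chatbot can be asked to turn into a script. As a rough sketch only, using a hypothetical lab address rather than anything from the study, a simple TCP connect scan in Python could look like this:

```python
# Minimal sketch: TCP connect scan of a handful of common ports.
# "10.0.0.5" is a hypothetical lab address, not a target from the CDU study.
# Only scan systems you are authorised to test.
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 80: "http", 443: "https", 3306: "mysql"}


def scan_host(host: str, ports: dict[int, str], timeout: float = 1.0) -> list[int]:
    """Return the ports on the host that accept a TCP connection."""
    open_ports = []
    for port, service in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                print(f"{port}/tcp open ({service})")
                open_ports.append(port)
    return open_ports


if __name__ == "__main__":
    scan_host("10.0.0.5", COMMON_PORTS)
```

In practice, pen testers lean on dedicated scanners such as Nmap for this phase; the point of the study is that a chatbot can draft this kind of glue code and interpret its output on request.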

With cyber criminals already using generative AI chatbots in similar ways by automating processes or asking chatbots to perform tasks they may find difficult, the study’s findings show that fire can indeed be fought with fire.

However, like fire, AI can get out of control when left unsupervised, and Shanmugam says its use must be “closely” monitored.

“Organisations must adopt best practices and guidelines, focusing on responsible AI deployment, data security and privacy, and fostering collaboration and information sharing,” added Shanmugam.

“By doing so, organisations can leverage the power of GenAI to better protect themselves against the ever-evolving threat landscape and maintain a secure digital environment for all.”

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
