
Industry responds to the White House’s AI executive order

The Biden administration’s newly announced “Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence” is being regarded by many in the industry as a good first step towards safer AI adoption.

Daniel Croft
Tue, 31 Oct 2023

That said, industry experts and academics caution that more needs to be done to properly mitigate the risks the technology creates.

Here is what industry leaders are saying.

Drew Bagley
Vice-president and counsel, privacy and cyber policy, CrowdStrike


CrowdStrike is encouraged to see a push for common artificial intelligence (AI) principles in the new executive order. AI has long been transformative for modern technology, but recent developments have lowered the barrier to entry for innovators and adversaries alike. This is especially true in cyber security, where defenders must rely on AI to detect and prevent cyber attacks at scale in an era dominated by malware-free attacks and zero-day exploits.

Adversaries continue expressing interest in leveraging large language models (LLMs) to move more quickly and scale their operations. Beyond cyber attacks, in the lead-up to the 2024 election cycle, misinformation campaigns driven by AI are of particular concern for our industry and something we’re watching closely.

Ultimately, it’s critical that AI be leveraged in a responsible way. The natural language interface of today’s LLMs has the potential to make cyber security roles and responsibilities more broadly accessible, helping to close the cyber security skills gap and improve response times so defenders can stay ahead of adversaries – boosting proactive security across organisations and agencies. This is why investing in responsible AI innovation is more critical than ever.

Dan Schiappa
Chief product officer, Arctic Wolf

Because of AI’s growing impact on technologies, industries, and even society as a whole, it’s incredibly important that our current administration put a continued emphasis on security. While I applaud the government’s desire to ensure AI is safe, it’s also imperative that regulation is balanced with the speed of innovation.

If we slow down AI innovation significantly, foreign companies could innovate faster than us, and we risk falling behind in the AI race.

While these rules are necessary, they may only keep well-intentioned people in check; they will ultimately have no impact on threat actors, who will not follow them. In the meantime, we’ll need to rely on the private cyber security sector to protect us from these malicious threats.

Michael Leach
Compliance manager, Forcepoint

The executive order on AI announced today provides some of the necessary first steps towards creating a national legislative foundation and structure to better manage the responsible development and use of AI by both commercial and government entities, with the understanding that it is just the beginning.

The new executive order provides valuable insight into the areas the US government views as critical in the development and use of AI, and into what the cyber security industry should focus on when developing, releasing and using AI: standardised safety and security testing, the detection and repair of network and software security vulnerabilities, identifying and labelling AI-generated content and, last but not least, protecting individuals’ privacy by safeguarding their personal data when using AI.

The emphasis the executive order places on safeguarding personal data when using AI is just another example of the importance the government has placed on protecting Americans’ privacy amid the advent of new technologies like AI.

Since the introduction of global privacy laws like the EU GDPR, we have seen numerous US state-level privacy laws come into effect across the nation to protect Americans’ privacy, and many of these existing laws have recently adopted additional requirements when using AI in relation to personal data.

The various US state privacy laws that incorporate requirements for using AI and personal data together (e.g., training, customising, data collection, processing, etc.) generally require the following: the right for individual consumers to opt out of profiling and automated decision-making, data protection assessments for certain targeted advertising and profiling use cases, and limits on the retention, sharing and use of sensitive personal information when using AI.

The new executive order will hopefully lead to the establishment of more cohesive privacy and AI laws that help overcome the fractured framework of the numerous current state privacy laws and their newly added AI requirements.

The establishment of consistent national AI and privacy laws will allow US companies and the government to rapidly develop, test, release and adopt new AI technologies and become more competitive globally while putting in place the necessary guardrails for the safe and reliable use of AI.

Hitesh Sheth
President and chief executive, Vectra AI

President Biden’s new executive order on artificial intelligence is a positive step towards more concrete regulation to curb AI’s risks and harness its benefits; however, it will be important for governments around the world to strike the right balance between regulation and innovation. On the positive side, the White House is smart to align with existing National Institute of Standards and Technology (NIST) standards for AI red-teaming – stress-testing systems to uncover defensive gaps and potential problems.

With a continually evolving threat landscape, it is essential for organisations to embrace a more holistic, proactive security paradigm, and NIST’s standards around red-teaming support this approach.

As the US government works with international partners to implement AI standards around the world, it will be important for these regulations to strike a balance between advocating for transparency and promoting continued innovation – rather than simply creating artificial guardrails.

There’s no doubt that AI advancements and adoption have reached a state where regulation is required – however, governments must be careful not to halt the groundbreaking innovation taking place, which can transform how we live for the better.

Tyler Farrar
Chief information security officer, Exabeam

I’m pleased to see that the executive order places a strong emphasis on enhancing security measures for AI systems. The requirement for sharing safety test results and conducting rigorous red team testing for sizable AI companies will help boost cyber defences.

Setting standards for content authentication and privacy-preserving techniques will also help overwhelmed security analysts as AI’s integration into security operations and other critical systems takes hold.

The executive order’s commitment to protecting consumers and establishing an advanced cyber security program will provide security analysts with new AI-powered tools and resources to help identify and address vulnerabilities in critical software.

It promises a more secure digital environment and alleviates alert burdens on analysts, helping the industry focus on addressing more pressing issues. As new AI technology matures, we are still learning how it will best integrate into workflows; this executive order is an important start in establishing much-needed guidelines.

Steve Moore
Vice-president and chief security strategist, Exabeam

This EO has a gap; it completely misses the protection of data (on which these models will be trained). Also, EOs are notorious for accomplishing almost nothing (remember “log everything”?).

Trustworthy AI also means trustworthy quality, security, and integrity of the data. So on what will the AI be trained, and how is that protected? We’ve never gotten this right, but we want to toss AI on top.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications, including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
