How Will AI Change Cyber Operations?
The impact of artificial intelligence (AI) on cyber operations is complex and multifaceted. The U.S. government itself holds seemingly contradictory views: some officials believe AI will empower cyber defense, shifting the advantage from attackers to defenders, while others warn that AI could enable powerful offensive cyber operations. This article examines how AI is reshaping cyber operations, the opportunities and risks it presents, and the policy considerations that must be addressed.
The Evolving Cyber Landscape
The rapid advancements in AI, particularly in areas like generative AI and large language models (LLMs), are poised to significantly alter the dynamics of cyber operations. Both attackers and defenders are already leveraging AI-powered tools and techniques to enhance their capabilities. This evolution affects multiple aspects of cybersecurity:
- New Tools and Techniques: AI empowers both offensive and defensive cyber operations with novel tools and techniques. Attackers can use AI for tasks like automated vulnerability discovery and crafting sophisticated phishing campaigns. Defenders, on the other hand, can leverage AI for enhanced threat detection, anomaly identification, and automated incident response.
- Evolving Attack Surface: AI-assisted coding, while promising, can inadvertently introduce vulnerabilities into software, expanding the attack surface. AI systems and their applications also become new targets for cyberattacks, adding another layer of complexity to the security landscape.
- Shifting Target Exploitability: Instead of simply strengthening offense or defense, AI is likely to reshape the distribution of exploitable targets in cyberspace. Certain vulnerabilities might become easier to discover and exploit, while others might become more difficult to target.
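To make the defensive side of this list concrete, the anomaly identification mentioned above can be reduced to a minimal statistical sketch: flag values that deviate sharply from a baseline. Deployed systems use far richer models, and the login counts and threshold below are purely illustrative.

```python
import statistics

def flag_anomalies(counts, threshold=3.0):
    """Flag indices whose value sits more than `threshold` standard deviations
    above the mean -- a crude stand-in for statistical anomaly detection."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return [i for i, c in enumerate(counts) if (c - mean) / stdev > threshold]

# Hypothetical hourly failed-login counts; hour 5 contains a burst.
failed_logins = [3, 4, 2, 5, 3, 120, 4, 3, 2, 4, 3, 5]
print(flag_anomalies(failed_logins))  # → [5]
```

The value of AI here is not the flagging itself but learning what "baseline" means across millions of noisy, correlated signals, a task this simple z-score cannot handle.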
Beyond Offense vs. Defense
The traditional framing of AI's impact on cyber as a simple "offense versus defense" balance is inadequate. The reality is far more nuanced. AI's influence is mediated by geopolitical and economic factors that shape how individuals, companies, and governments adopt and utilize AI. Preexisting constraints, such as national regulations on vulnerability disclosure, also play a significant role.
For example, a nation's decision to stockpile or disclose discovered vulnerabilities will influence how AI-driven vulnerability discovery affects its offensive capabilities. Similarly, the speed with which organizations can patch discovered vulnerabilities will determine whether AI-assisted vulnerability discovery benefits attackers or defenders in a given context.
Sharper Swords, Tougher Shields
The U.S. government is actively exploring how AI can be used to both augment its cyber capabilities and bolster its defenses, as well as how to secure increasingly sophisticated AI systems. Vulnerability discovery is a key area of focus. AI-powered fuzzing techniques, which involve feeding random or mutated inputs to a program to identify vulnerabilities, can significantly accelerate the discovery process. LLMs can enhance fuzzing by generating valid inputs at scale, potentially automating the exploration of entire code repositories.
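The mechanics that LLMs would accelerate can be seen in a toy mutation-based fuzzer. Everything here is invented for illustration: the "parser" stands in for real software with a latent length-handling bug, and a real fuzzer would add coverage feedback and corpus management.

```python
import random

BUF_SIZE = 10  # the toy parser's notional fixed buffer

def parse_record(data: bytes) -> str:
    """Toy target: stands in for a real parser with a latent bug."""
    if len(data) > BUF_SIZE:
        raise BufferError("input overruns fixed-size buffer")
    return data.decode("latin-1")

def mutate(seed: bytes) -> bytes:
    """Apply a few random byte flips, insertions, or deletions to a seed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        op = random.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            i = random.randrange(len(data))
            data[i] ^= 1 << random.randrange(8)
        elif op == "insert":
            data.insert(random.randrange(len(data) + 1), random.randrange(256))
        elif op == "delete" and data:
            del data[random.randrange(len(data))]
    return bytes(data)

def fuzz(target, seeds, iterations=20_000):
    """Feed mutated inputs to the target, collecting any that raise an exception."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(random.choice(seeds))
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

random.seed(0)
found = fuzz(parse_record, [b"\xff\x00record"])
print(f"crashing inputs found: {len(found)}")
```

Where an LLM helps is the `seeds` list: generating structurally valid starting inputs for complex formats, so mutation explores deep program states instead of bouncing off input validation.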
However, the ultimate impact of this increased vulnerability discovery depends on factors beyond AI's capabilities. The speed of exploitation versus patching, shaped by national vulnerability disclosure policies and organizational patching practices, determines whether the advantage goes to offense or defense.
Marginal Effects and Threat Actors
Analyzing the marginal effects of AI on various phases of cyber operations provides valuable insights. For example, generative AI can enhance social engineering and spearphishing attacks by creating highly convincing text, voice, and image content. While the click-through rates of AI-generated phishing emails may currently be slightly lower than those crafted by humans, the efficiency gains are substantial. This "quality versus efficiency tradeoff" may be particularly appealing to opportunistic cybercriminals seeking to maximize their reach.
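The tradeoff can be made concrete with back-of-the-envelope arithmetic: a modest drop in click-through rate is easily outweighed by a large increase in volume. The rates and volumes below are hypothetical, not measured figures.

```python
def expected_clicks(emails_sent: int, click_rate: float) -> float:
    """Expected number of recipients who click, treating clicks as independent."""
    return emails_sent * click_rate

# Illustrative numbers only -- not measured rates.
manual = expected_clicks(emails_sent=100, click_rate=0.12)        # hand-crafted, small batch
automated = expected_clicks(emails_sent=10_000, click_rate=0.09)  # generated, mass volume

print(f"manual campaign:    {manual:.0f} expected clicks")    # 12
print(f"automated campaign: {automated:.0f} expected clicks")  # 900
```

Under these assumptions the lower-quality campaign yields dozens of times more victims, which is exactly the calculus that favors opportunistic criminals over actors who need one specific target.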
However, not all threat actors will benefit equally from these advancements. State-sponsored groups with specific targets and a focus on covert operations might not find the same value in scaling up phishing operations with slightly lower success rates. Similarly, while LLMs can generate malicious code, their impact on offensive capabilities might be marginal compared to existing techniques and tools available to hackers.
Instead, the broader use of LLMs in software development might inadvertently increase the attack surface by introducing insecure code, offering more opportunities for attackers using existing techniques.
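One well-known class of insecure pattern that can slip into generated code is SQL string interpolation. The sketch below, with an invented table and data, contrasts it with the parameterized form; neither snippet is attributed to any particular model's output.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_insecure(name: str):
    # Pattern that can appear in generated code: user input interpolated
    # directly into the SQL string, which makes the query injectable.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input as a literal value.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_insecure(payload))  # injection succeeds: every row comes back
print(find_user_safe(payload))      # []: no user has that literal name
```

The point is not that this bug is new; it is that code generated at scale can reproduce old bugs at scale, and attackers already have mature tooling to find them.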
Incorporating AI into U.S. Cyber Strategy
As AI becomes increasingly integrated into cyber operations, U.S. national cyber strategy must adapt to address both the opportunities and risks presented by this evolving landscape. Moving beyond the simplistic "offense versus defense" dichotomy is crucial. Policymakers should focus on the mediating factors that influence how AI is developed, used, and applied, aiming to shape these factors in ways that align with U.S. strategic interests.
This requires prioritizing the most impactful and likely AI-enabled cyber threats, leveraging AI to reduce the attack surface, and developing effective responses to evolving threats. Collaboration between government, industry, and academia is essential to foster innovation, promote responsible AI development, and build a more secure cyber ecosystem. Policy should address incentives for secure software development practices, encourage vulnerability disclosure and patching, and promote international cooperation on AI and cybersecurity norms. By adopting a holistic and forward-looking approach, the U.S. can navigate the complexities of the AI-driven cyber landscape and maintain a strong cyber posture in the face of evolving threats.