How the US Army is readying for a cyberspace fight against enemy AI hackers

A tabletop scenario followed enemy AI agents attacking US Army security in a hypothetical Indo-Pacific conflict.

US Army photo by Sgt. Elijah Magaña

  • The US Army and industry tested how enemy AI could attack in cyberspace.
  • The simulated enemy AI exposed vulnerabilities and adapted during multiple waves of attacks.
  • Questions remain on the role of AI agents and what can be learned from industry.

The attacks came faster than a human adversary.

The communications and data networks essential to the US Army's operations across the Asia-Pacific region were probed by a new kind of adversary: an enemy AI trying to confuse and ensnare soldiers.

That's what Army leaders, guided by top US AI companies, saw in a new series of tabletop exercises as they prepare for a new era of AI-augmented cyber operations, and how to effectively defend themselves.

It's the latest example of how the Army is embracing artificial intelligence at all levels of warfighting — and the latest acknowledgement that the challenges of future warfare may be too fast for humans to tackle alone.

The Army and various partners held their second artificial intelligence tabletop exercise earlier this week, after the inaugural one last September. The first iteration brought together around 15 CEOs of major AI firms to propose solutions to real-world problems, like using AI capabilities in conflict environments where communications and networks are denied by the enemy, speeding up supply chain management, and handling behind-the-scenes paperwork so that civilians and personnel can focus on other tasks.

This time, the exercise homed in specifically on AI-enabled cyber defense for the Army, preparing for "an Indo-Pacific crisis and a hypothetical September 2027," Brandon Pugh, the Army's principal cyber advisor, told reporters, with "the premise that an adversary was leveraging AI not to just launch a single decisive cyber blow, but to really launch salvo after salvo attacks that continuously adapted to the Army's defensive posture and did so arguably faster than a human defender could keep up with."

Army leaders, like Secretary Dan Driscoll, have previously noted the increasing importance of the service defending against enemy attacks on networks, data, and software, calling it as vital as defending physical assets and terrain.

The exercise involved 14 companies, including Google, OpenAI, and Microsoft.

US Army photo by Cpl. Giselle Gonzalez

Fourteen companies came to the table this time around, with C-suite representatives from Google, OpenAI, Microsoft, Amazon Web Services, Palo Alto Networks, and others. Army and US Department of Defense officials were also present. In addressing the scenario, "the focus was really how do we defend better using artificial intelligence, frontier models," and the use of AI agents, Gen. Chris Eubank, head of Army Cyber Command, said.

Various ideas and solutions came up, with recurring ones focused on AI agents' capabilities in deception tactics: using AI to detect an adversary inside US systems, learn from their behavior, and force them to spend time and resources on obstacles. The exercise also surfaced what Army leaders said were previously unknown vulnerabilities in Army systems.
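The article doesn't describe any implementation, but the deception idea above — lure an intruder, log their behavior, and slow them down — can be sketched in miniature. Everything here (the class name, the fake records, the growing "tarpit" delays) is purely illustrative and assumed, not anything the Army or its partners disclosed:

```python
import time
from collections import Counter

class DecoyService:
    """Toy sketch of a cyber-deception decoy: serves plausible-looking fake
    records, logs every query so defenders can profile an intruder's
    behavior, and adds growing delays to waste the attacker's time."""

    def __init__(self, delay_step=0.01):
        self.access_log = []           # (timestamp, query) pairs for analysts
        self.query_counts = Counter()  # which fake assets draw attention
        self.delay_step = delay_step

    def query(self, name):
        self.access_log.append((time.time(), name))
        self.query_counts[name] += 1
        # Tarpit: each repeat query of the same asset waits a bit longer.
        time.sleep(self.delay_step * self.query_counts[name])
        # Always return a plausible fake record, never real data.
        return {"asset": name, "status": "active",
                "credentials": "REDACTED-DECOY"}

    def top_targets(self, n=3):
        """What the intruder probed most -- a crude behavioral fingerprint."""
        return self.query_counts.most_common(n)
```

In this sketch the defender learns from the attacker (via `top_targets`) while the attacker learns nothing real — a rough analogue of the "make them take up time and resources" goal described in the exercise.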

The simulated enemy's AI system was analyzing Army defenses in real time, seeing what triggered human intervention and slowed down responses, learning from each iteration. It showed that, in a potential conflict, an enemy could use artificial intelligence to attack cybersecurity in waves of attacks while continuously adapting to US defenses.

The tabletop also raised the question of risk acceptance in the use of AI. "At what stage are machines, [AI] agents, allowed to accept risk versus a human accepting risk?" Eubank said. What could AI be best used for in the cybersecurity space? And is there a possibility that AI agents could perform certain functions on their own?

US military officials and experts have questioned the broader role of AI and whether it can and should operate on its own in certain capacities amid concerns that the speed of decision-making in a future war with an AI-equipped opponent may be too fast for humans. Right now, Army leadership is encouraging the use of artificial intelligence for a variety of tasks, from paperwork to coding, and requires a human in the loop for all tasks.

After the tabletop exercise, the service expects that it will examine the role of AI in cybersecurity more closely and how much leeway it should be given.

"If we believe the end state is, we're going to use AI to augment humans, we're going to be way behind," Eubank said. "We have to get to a place where we're not just augmenting humans. Where does AI have autonomy to do things in the cyberspace defense environment?"

Read the original article on Business Insider