AI Tools Are Helping Mediocre North Korean Hackers Steal Millions


The advent of AI hacking tools has raised fears of a near future in which anyone can use automated tools to dig up exploitable vulnerabilities in any piece of software, like a kind of digital intrusion superpower. Here in the present, however, AI seems to be playing a more mundane, if still concerning, role in hackers’ toolkit: It’s helping mediocre hackers level up and carry out broad, effective malware campaigns. That includes one group of relatively unskilled North Korean cybercriminals who’ve been discovered using AI to carry out virtually every part of an operation that hacked thousands of victims to steal their cryptocurrency.

On Wednesday, cybersecurity firm Expel revealed what it describes as a North Korean state-sponsored cybercrime operation that installed credential-stealing malware on more than 2,000 computers, specifically targeting the machines of developers working on small cryptocurrency launches, NFT creation, and Web3 projects. By using the AI tools of US-based companies, including those of OpenAI, Cursor, and Anima, the hacker group—which Expel calls HexagonalRodent—“vibe coded” almost every part of its intrusion campaign, from writing its malware to building the fake websites of companies used in its phishing schemes. That AI-enabled hacking allowed the group to steal as much as $12 million in cryptocurrency from victims in three months.

What’s most striking about the HexagonalRodent hacking campaign isn’t its sophistication, says Marcus Hutchins, the security researcher who discovered the group, but rather how AI tools allowed an apparently unsophisticated group to carry out a profitable theft spree in the service of the North Korean state.

“These operators don't have the skills to write code. They don't have the skills to set up infrastructure. AI is actually enabling them to do things that they otherwise just would not be able to do,” says Hutchins, who became well-known in the cybersecurity community after disabling the WannaCry ransomware worm created by North Korean hackers.

Emoji-Littered, AI-Written Code

HexagonalRodent’s hacking operation focused on tricking crypto developers with fraudulent job offers at tech firms, going so far as to build full websites for the fake companies recruiting the victims, often with AI web design tools. Eventually, the victim was told they’d have to download and complete a coding assignment as a test—which the hackers had infected with malware that infiltrated their machine and stole credentials, including some that could grant access to the keys controlling their crypto wallets.

Those parts of the hacking operation appear to have been well-honed and effective, but the hackers were also clumsy enough to leave parts of their own infrastructure unsecured, leaking the prompts they used to write their malware with tools that included OpenAI’s ChatGPT and Cursor. They also exposed a database where they tracked victim wallets, which allowed Expel to estimate the total amount of cryptocurrency the hackers may have stolen. (While those wallets added up to $12 million in total contents, Hutchins says the company couldn’t confirm for each target whether the entire sum had already been drained from the wallets or if the hackers still needed to obtain keys to the victim wallets in some cases, given some may have been protected with hardware security tokens.)

Hutchins also analyzed samples of the hackers’ malware and found other clues that it was largely—perhaps entirely—created with AI. It was thoroughly annotated with English-language comments—hardly typical of North Korean coding habits, even though some of the malware’s command-and-control servers tied it to known North Korean hacking operations. The malware’s code was also littered with emojis, which Hutchins points out can, in some cases, serve as a clue that software was written by a large language model, given that programmers writing on a PC keyboard rather than a phone rarely take the time to insert emojis. “It's a pretty well-documented sign of AI-written code,” Hutchins says.

The AI-written code Hutchins analyzed ought to have been detectable with typical “endpoint detection and response” security tools used in most companies and government agencies, Hutchins says, given that it followed standard patterns of behavior for malware. But Hutchins says HexagonalRodent’s decision to focus on individual victims in its hacking campaign meant many didn’t have those security tools installed. “They found a niche where you actually can get away with completely AI-generated malware,” says Hutchins.

Hutchins argues that the HexagonalRodent campaign shows how AI may be an especially useful tool for North Korea, which can easily recruit unskilled IT workers to join its hacker ranks—or more commonly, to infiltrate tech companies while posing as citizens of other countries—but has a far more limited number of capable hackers, given the average North Korean’s lack of access to the internet or even computers. “They have hundreds of people being sent over the border to work in IT operations, and only a few of them really know what they're doing,” Hutchins says. “But then they're able to use generative AI to get a leg up and actually run fairly successful hacking campaigns.”

In fact, rather than reduce the number of people involved in the hacking campaign through automation, Hutchins says he’s been able to observe North Korean operations grow in size over time. Expel estimates that as many as 31 individual hackers were involved in HexagonalRodent. “They just keep adding more and more operators,” Hutchins says. “Because they can just hand them access to an AI model, and they can now do things which they would have previously needed a development team to support.”

A Hermit Kingdom, Embracing AI

The HexagonalRodent activity observed by Hutchins makes up only a small part of North Korea’s sweeping hacking and cybercriminal activity, which can involve vast cryptocurrency theft, ransomware, espionage, fraud, and infiltrating Western organizations through its IT worker schemes. Security researchers have likened North Korea’s cyber operations to a “state-sanctioned crime syndicate,” which ultimately works to fund the nation’s nuclear weaponry, build the country’s infrastructure, and evade international sanctions.

Increasingly, and perhaps unsurprisingly, these state-backed programs have been adding generative AI to their hacking and fraud workflows to improve their overall efficiency. Within North Korea, these efforts have reportedly been supported by the creation of Research Center 227, an organization sitting under the military’s Reconnaissance General Bureau that will be devoted in part to developing AI-focused hacking tooling. But day-to-day, North Korea’s cyber operators have repeatedly been caught using commercial, off-the-shelf AI tools.

“North Korea is using AI as a force multiplier, and it is helping with every aspect—building resumes, building websites, building exploits, testing vulnerabilities—and they're doing it at speed and scale,” says Michael “Barni” Barnhart, a researcher at security firm DTEX, who has tracked the country’s hacking operations for years. North Korean cyber operators have been experimenting with and widely using AI for years, Barnhart says. “AI is helping them move faster so that they can weaponize exploits and even help build those exploits,” he explains. “You get little pieces of the puzzle from each of the groups, and then it kind of forms a whole picture of how they're using AI.”

For instance, members of North Korea’s IT worker programs have been using AI assistants and face-changing deepfakes to answer questions and change their appearance during fraudulent job interviews. Security researchers at Microsoft have spotted suspected North Korean operations using AI to create false IDs, research work tools, polish their English for social engineering, and research known security vulnerabilities. Some North Korean actors have also used the technology to create web infrastructure at scale, making their operations harder to detect, according to Microsoft’s research.

Both OpenAI and Anthropic have also spotted North Korean cyber operators using their platforms over the last 12 months. In February last year, OpenAI said it had banned suspected North Korean accounts that it detected using ChatGPT at multiple stages of fraudulent IT worker schemes, including during interviews to generate answers to technical questions and for writing code once someone had gained employment at a company.

Meanwhile, Anthropic said in its August threat intelligence report that it had seen North Korean IT workers who “appear unable to perform basic technical tasks or professional communication without AI assistance.” The company also said it detected North Korean hackers intending to use Claude to “enhance” some of the same malware strains Expel found in use, and to develop skills tests containing malware; Anthropic says it banned the hackers from using its tools after detecting that malicious activity.

OpenAI tells WIRED that its tools did not give the hackers any “novel capabilities,” but acknowledged that the “value” of its tools to the hackers “appears to be speed and scale.” OpenAI did not say whether it had banned any accounts in relation to Expel’s findings. Cursor tells WIRED that it had blocked the HexagonalRodent hackers from using its tools, adding that the company is “investigating further and [is] in communication with other model providers on the incident.”

Anima, one of the AI web design firms whose tools were used in the hacking campaign, tells WIRED that it was working with Expel to identify and block the hackers from using its software. “This is misuse of Anima’s coding agent by bad actors, and we’re addressing it head-on,” the company’s CEO, Avishay Cohen, wrote.

Hutchins argues that it’s this practical use of AI to enable hacking operations that should be the cybersecurity industry’s focus, not the notion of some future vulnerability-discovery AI.

“We're thinking we need to build defenses for the hypothetical Skynet that’s going to blast through all of our networks,” says Hutchins. “Meanwhile, you have a nation-state threat who is able to spin up their operations using AI without doing anything novel. There is real threat activity happening as a result of AI. But it's not the stuff that people are wasting their breath on.”