{"id":49462,"date":"2026-04-23T02:31:25","date_gmt":"2026-04-23T02:31:25","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/ai-tools-are-helping-mediocre-north-korean-hackers-steal-millions\/"},"modified":"2026-04-23T02:31:25","modified_gmt":"2026-04-23T02:31:25","slug":"ai-tools-are-helping-mediocre-north-korean-hackers-steal-millions","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/ai-tools-are-helping-mediocre-north-korean-hackers-steal-millions\/","title":{"rendered":"AI Tools Are Helping Mediocre North Korean Hackers Steal Millions"},"content":{"rendered":"<p>Save StorySave this storySave StorySave this story<\/p>\n<p>The advent of AI hacking tools has raised fears of a near future in which anyone can use automated tools to dig up exploitable vulnerabilities in any piece of software, like a kind of digital intrusion superpower. Here in the present, however, AI seems to be playing a more mundane, if still concerning, role in hackers\u2019 toolkit: It\u2019s helping mediocre hackers level up and carry out broad, effective malware campaigns. That includes one group of relatively unskilled North Korean cybercriminals who\u2019ve been discovered using AI to carry out virtually every part of an operation that hacked thousands of victims to steal their cryptocurrency.<\/p>\n<p>On Wednesday, cybersecurity firm Expel revealed what it describes as a North Korean state-sponsored cybercrime operation that installed credential-stealing malware on more than 2,000 computers, specifically targeting the machines of developers working on small cryptocurrency launches, NFT creation, and Web3 projects. By using the AI tools of US-based companies, including those of OpenAI, Cursor, and Anima, the hacker group\u2014which Expel calls HexagonalRodent\u2014\u201cvibe coded\u201d almost every part of its intrusion campaign, from writing their malware to building the fake websites of companies used in its phishing schemes. 
That AI-enabled hacking allowed the group to steal as much as $12 million in cryptocurrency from victims in three months.<\/p>\n<p>What\u2019s most striking about the HexagonalRodent hacking campaign isn\u2019t its sophistication, says Marcus Hutchins, the security researcher who discovered the group, but rather how AI tools allowed an apparently unsophisticated group to carry out a profitable theft spree in the service of the North Korean state.<\/p>\n<p>\u201cThese operators don&#039;t have the skills to write code. They don&#039;t have the skills to set up infrastructure. AI is actually enabling them to do things that they otherwise just would not be able to do,\u201d says Hutchins, who became well-known in the cybersecurity community after disabling the WannaCry ransomware worm created by North Korean hackers.<\/p>\n<h2>Emoji-Littered, AI-Written Code<\/h2>\n<p>HexagonalRodent\u2019s hacking operation focused on tricking crypto developers with fraudulent job offers at tech firms, going so far as to create full websites for the fake companies recruiting the victims, often created with AI web design tools. Eventually, the victim was told they\u2019d have to download and complete a coding assignment as a test\u2014which the hackers had infected with malware that infiltrated their machine and stole credentials, including those that in some cases could grant access to the keys that controlled their crypto wallets.<\/p>\n<p>Those parts of the hacking operation appear to have been well-honed and effective, but the hackers were also clumsy enough to leave parts of their own infrastructure unsecured, leaking the prompts they used to write their malware with tools that included OpenAI\u2019s ChatGPT and Cursor. They also exposed a database where they tracked victim wallets, which allowed Expel to estimate the total amount of cryptocurrency the hackers may have stolen. 
(While those wallets held $12 million in total, Hutchins says the company couldn\u2019t confirm in every case whether the full sum had already been drained or whether the hackers still needed to obtain the victims\u2019 keys, since some wallets may have been protected with hardware security tokens.)<\/p>\n<p>Hutchins also analyzed samples of the hackers\u2019 malware and found other clues that it was largely\u2014perhaps entirely\u2014created with AI. It was thoroughly annotated with English-language comments\u2014hardly the typical coding habit of North Koreans, despite the fact that some command-and-control servers for the malware tied the group to known North Korean hacking operations. The malware\u2019s code was also littered with emojis, which Hutchins points out can, in some cases, serve as a clue that software was written by a large language model, given that programmers typing on a PC keyboard rather than a phone rarely take the time to insert emojis. \u201cIt&#039;s a pretty well-documented sign of AI-written code,\u201d Hutchins says.<\/p>\n<p>The AI-written code Hutchins analyzed ought to have been detectable with the typical \u201cendpoint detection and response\u201d security tools used in most companies and government agencies, Hutchins says, given that it followed standard patterns of behavior for malware. But HexagonalRodent\u2019s decision to focus on individual victims meant many targets didn\u2019t have those security tools installed. 
\u201cThey found a niche where you actually can get away with completely AI-generated malware,\u201d says Hutchins.<\/p>\n<p>Hutchins argues that the HexagonalRodent campaign shows how AI may be an especially useful tool for North Korea, which can easily recruit unskilled IT workers to join its hacker ranks\u2014or more commonly, to infiltrate tech companies while posing as citizens of other countries\u2014but has a far more limited number of capable hackers, given the average North Korean\u2019s lack of access to the internet or even computers. \u201cThey have hundreds of people being sent over the border to work in IT operations, and only a few of them really know what they&#039;re doing,\u201d Hutchins says. \u201cBut then they&#039;re able to use generative AI to get a leg up and actually run fairly successful hacking campaigns.\u201d<\/p>\n<p>In fact, rather than reduce the number of people involved in the hacking campaign through automation, Hutchins says he\u2019s been able to observe North Korean operations grow in size over time. Expel estimates that as many as 31 individual hackers were involved in HexagonalRodent. \u201cThey just keep adding more and more operators,\u201d Hutchins says. \u201cBecause they can just hand them access to an AI model, and they can now do things which they would have previously needed a development team to support.\u201d<\/p>\n<h2>A Hermit Kingdom, Embracing AI<\/h2>\n<p>The HexagonalRodent activity observed by Hutchins makes up only a small part of North Korea\u2019s sweeping hacking and cybercriminal activity, which can involve vast cryptocurrency theft, ransomware, espionage, fraud, and infiltrating Western organizations through its IT worker schemes. 
Security researchers have likened North Korea&#039;s cyber operations to a \u201cstate-sanctioned crime syndicate\u201d that ultimately works to fund the nation\u2019s nuclear weapons program, build the country\u2019s infrastructure, and evade international sanctions.<\/p>\n<p>Increasingly, and perhaps unsurprisingly, these state-backed programs have been adding generative AI to their hacking and fraud workflows to improve their overall efficiency. Within North Korea, these efforts have reportedly been supported by the creation of Research Center 227, an organization under the military\u2019s Reconnaissance General Bureau that will focus in part on developing AI-powered hacking tools. But day-to-day, North Korea\u2019s cyber operators have repeatedly been caught using commercial, off-the-shelf AI tools.<\/p>\n<p>\u201cNorth Korea is using AI as a force multiplier, and it is helping with every aspect\u2014building resumes, building websites, building exploits, testing vulnerabilities\u2014and they&#039;re doing it at speed and scale,\u201d says Michael \u201cBarni\u201d Barnhart, a researcher at security firm DTEX, who has tracked the country\u2019s hacking operations for years. North Korean cyber operators have been experimenting with and widely using AI for years, Barnhart says. \u201cAI is helping them move faster so that they can weaponize exploits and even help build those exploits,\u201d he explains. \u201cYou get little pieces of the puzzle from each of the groups, and then it kind of forms a whole picture of how they&#039;re using AI.\u201d<\/p>\n<p>For instance, members of North Korea\u2019s IT worker programs have been using AI assistants and face-changing deepfakes to answer questions and change their appearance during fraudulent job interviews. 
Security researchers at Microsoft have spotted suspected North Korean operations using AI to create false IDs, research work tools, polish their English for social engineering, and research known security vulnerabilities. Some North Korean actors have also used the technology to create web infrastructure at scale, making their operations harder to detect, according to Microsoft\u2019s research.<\/p>\n<p>Both OpenAI and Anthropic have also spotted North Korean cyber operators using their platforms over the last 12 months. In February last year, OpenAI said it had banned suspected North Korean accounts that it detected using ChatGPT at multiple stages of fraudulent IT worker schemes, including during interviews to generate answers to technical questions and for writing code once someone had gained employment at a company.<\/p>\n<p>Meanwhile, Anthropic said in its August threat intelligence report that it had seen North Korean IT workers who \u201cappear unable to perform basic technical tasks or professional communication without AI assistance.\u201d The company also said it detected North Korean hackers intending to use Claude to \u201cenhance\u201d some of the same malware strains Expel found in use, and to develop skills tests containing malware. Anthropic wrote that, after detecting the malicious use of Claude, it banned the hackers from using its tools.<\/p>\n<p>OpenAI tells WIRED that its tools did not give the hackers any \u201cnovel capabilities,\u201d but acknowledged that the \u201cvalue\u201d of its tools to the hackers \u201cappears to be speed and scale.\u201d OpenAI did not say if it had banned any accounts in relation to Expel\u2019s findings. 
Cursor tells WIRED that it has blocked the HexagonalRodent hackers from using its tools, adding that the company is \u201cinvestigating further and [is] in communication with other model providers on the incident.\u201d<\/p>\n<p>Anima, one of the AI web design firms whose tools were used in the hacking campaign, tells WIRED that it is working with Expel to identify and block the hackers from using its software. \u201cThis is misuse of Anima\u2019s coding agent by bad actors, and we\u2019re addressing it head-on,\u201d the company\u2019s CEO, Avishay Cohen, wrote.<\/p>\n<p>Hutchins argues that it\u2019s this practical use of AI to enable hacking operations that should be the cybersecurity industry\u2019s focus, not the notion of some future vulnerability-discovery AI.<\/p>\n<p>\u201cWe&#039;re thinking we need to build defenses for the hypothetical Skynet that\u2019s going to blast through all of our networks,\u201d says Hutchins. \u201cMeanwhile, you have a nation-state threat who is able to spin up their operations using AI without doing anything novel. There is real threat activity happening as a result of AI. But it&#039;s not the stuff that people are wasting their breath on.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The advent of AI hacking tools has raised fears of a near future in which anyone can use automated tools to dig up exploitable vulnerabilities in any piece of software, like a kind of digital intrusion superpower. 
Here in the present, however, AI seems to be playing a [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":49463,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-49462","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/49462","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=49462"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/49462\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/49463"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=49462"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=49462"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=49462"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}