{"id":50067,"date":"2026-05-06T17:11:10","date_gmt":"2026-05-06T17:11:10","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/hackers-hate-ai-slop-even-more-than-you-do\/"},"modified":"2026-05-06T17:11:10","modified_gmt":"2026-05-06T17:11:10","slug":"hackers-hate-ai-slop-even-more-than-you-do","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/hackers-hate-ai-slop-even-more-than-you-do\/","title":{"rendered":"Hackers Hate AI Slop Even More Than You Do"},"content":{"rendered":"<p>The complaint sounds familiar. \u201cI\u2019m disappointed that you are working to incorporate AI garbage into the site,\u201d one annoyed person, posting anonymously, said in an online message. \u201cNo-one is asking for this\u2014we want you to improve the site, stop charging for new features.\u201d<\/p>\n<p>Only, this is not a regular internet user moaning about AI being forced into their favorite app. Instead, they are complaining about a cybercrime forum\u2019s plans to introduce more generative AI. Like millions of others, scammers, grifters, and low-level hackers are getting annoyed about AI encroaching into their lives and the rise of low-quality AI slop being posted in their online communities.<\/p>\n<p>\u201cPeople don\u2019t like it,\u201d says Ben Collier, a security researcher and senior lecturer at the University of Edinburgh. 
As part of a recent study into how low-level cybercriminals are using AI, Collier and fellow researchers spotted increasing pushback against the use of generative AI in underground cybercrime forums and hacking groups.<\/p>\n<p>During the generative AI boom and hype cycles of the past couple of years, some people posting on hacking forums have moved from being positive about how AI can help hacking to greater skepticism about the technology, according to the study, which also involved researchers from the University of Cambridge and the University of Strathclyde.<\/p>\n<p>The researchers analyzed 97,895 AI-related conversations on cybercrime forums from the launch of ChatGPT in 2022 until the end of last year. They found complaints about people dumping \u201cbullet-pointed explainers\u201d of basic cybersecurity concepts, gripes about the number of low-quality posts, and concerns about Google\u2019s AI search overviews driving down the number of visitors to the forums.<\/p>\n<p>For decades, cybercrime message boards and marketplaces, often Russian in origin, have allowed scammers to do business together. They are places where stolen data can be traded, hacking jobs are advertised, and fraudsters shitpost about their rivals. While scammers often try to scam each other, the forums also have a sense of community. For example, users build up reputations for being reliable, and forum owners hold writing competitions.<\/p>\n<p>\u201cThese are essentially social spaces. They really hate other people using [AI] on the forums,\u201d Collier says. He says the social dynamic of the groups can be messed up by potential cybercriminals trying to gain a better reputation by posting AI-generated hacking explainers. 
\u201cI think a lot of them are a bit ambivalent about AI because it undermines their claim to be a skilled person.\u201d<\/p>\n<p>Posts reviewed by WIRED on Hack Forums, a self-styled space for those interested in talking about hacking and sharing techniques, show an irritation caused by people creating posts with AI. \u201cI see a lot of members using AI for making their threads\/posts and it pisses me off since they don\u2019t even take the time to write a simple sentence or two,\u201d one poster wrote. Another put it more bluntly: \u201cStop posting AI shit.\u201d<\/p>\n<p>In several instances, Collier says, users of multiple forums appear to be irritated by AI posts as they want to make friends. \u201cIf I wanted to talk to an AI chatbot, there are many websites for me to do so \u2026 I come here for human interaction,\u201d one post cited in the research says.<\/p>\n<p>Since ChatGPT emerged toward the end of 2022, there has been significant interest in AI-hacking capabilities and how the technology can transform online crime. Both sophisticated hackers and those less capable have been trying to use AI in their attacks. While some organized fraudsters have boosted their operations with ever-more realistic AI face-swapping technology and social engineering messages translated using AI, a lot of attention has been on generative AI\u2019s capabilities to write malicious code and discover vulnerabilities.<\/p>\n<p>\u201cMore sophisticated threat actors are aware of the shortfalls of commercial models that have guardrails, and they know ways to jailbreak those prompts,\u201d says Ian Gray, vice president of intelligence at the security company Flashpoint, referring to the safety mechanisms put in place by OpenAI, Anthropic, and Google. 
\u201cThey\u2019re also cautious of AI-generated projects in forums or marketplaces\u2014there are weaknesses and vulnerabilities, sometimes exposing the underlying infrastructure,\u201d Gray says.<\/p>\n<p>Flashpoint has seen hackers recently talking about the potential capabilities of Claude Mythos Preview, Anthropic\u2019s latest frontier AI model, which has thrown some in the cybersecurity industry into a panic. Some cybercriminals have also disparaged others for allegedly using AI in their hacking operations\u2014\u201call they can do is use AI,\u201d one group said, according to Flashpoint\u2019s analysis.<\/p>\n<p>Collier says that so far, among the lower-level cybercriminals that his study tracked\u2014not sophisticated or nation-state-backed hackers\u2014there have been no obvious signs of \u201creal disruption\u201d caused by AI. \u201cIt has not significantly reduced the skill barrier to entry, nor has it led to serious disruptions to established business models or practices,\u201d the study says. \u201cInstead, its main impact has been on already highly automated areas such as SEO fraud, social media bots, and some forms of romance scam.\u201d<\/p>\n<p>Despite the frosty reception AI has received on cybercrime forums, others see potential. Some posters on Hack Forums have said they would perhaps welcome an AI assistant that would \u201chelp\u201d them structure their posts and improve grammar, but they draw the line at an AI that can fully post for them. \u201cAn AI generator for posts would turn this into a clanker forum of AI&#039;s talking to each other,\u201d one person wrote.<\/p>\n<p>Meanwhile, Flashpoint researchers have spotted hackers discussing the idea of building an \u201cAI-enhanced\u201d cybercrime market, which was touted as a way to help people buy stolen data and online accounts more quickly. Not everyone was on board. 
As one person wrote, \u201cIT\u2019S A STUPID FUCKING IDEA TO PUT AI INTO YOUR MARKET.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The complaint sounds familiar. \u201cI\u2019m disappointed that you are working to incorporate AI garbage into the site,\u201d one annoyed person, posting anonymously, said in an online message. \u201cNo-one is asking for this\u2014we want you to improve the site, stop charging for new features.\u201d Only, this is not a [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":50068,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-50067","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/50067","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=50067"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/50067\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/50068"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=50067"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=50067"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=50067"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}