{"id":45257,"date":"2026-02-12T16:41:28","date_gmt":"2026-02-12T16:41:28","guid":{"rendered":"https:\/\/agooka.com\/news\/technologies\/over-100000-prompts-used-in-attempt-to-steal-geminis-reasoning-logic\/"},"modified":"2026-02-12T16:41:28","modified_gmt":"2026-02-12T16:41:28","slug":"over-100000-prompts-used-in-attempt-to-steal-geminis-reasoning-logic","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/technologies\/over-100000-prompts-used-in-attempt-to-steal-geminis-reasoning-logic\/","title":{"rendered":"Over 100,000 prompts used in attempt to steal Gemini\u2019s reasoning logic"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/dataconomy.com\/wp-content\/uploads\/2026\/02\/1120725.jpg\" alt=\"Over 100,000 prompts used in attempt to steal Gemini\u2019s reasoning logic\" title=\"Over 100,000 prompts used in attempt to steal Gemini\u2019s reasoning logic\"\/><\/p>\n<p>Google reports that its Gemini AI chatbot faced a large-scale cloning attempt involving over 100,000 prompts from commercially motivated actors seeking to extract the model\u2019s reasoning algorithms.<\/p>\n<p>In a Thursday report, Google described the activity as \u201cdistillation attacks\u201d: repeated questioning intended to reveal Gemini\u2019s patterns, logic, and inner workings, a process it labels \u201cmodel extraction.\u201d<\/p>\n<p>The company says attackers aim to use the extracted information to build or strengthen their own AI systems, and it believes most perpetrators are private companies or researchers pursuing a competitive edge.<\/p>\n<p>A Google spokesperson told NBC News the attacks originated worldwide, though no further details about the suspects were provided.<\/p>\n<p>John Hultquist, chief analyst of Google\u2019s Threat Intelligence Group, warned that Gemini\u2019s experience serves as a \u201ccanary in the coal mine,\u201d suggesting similar incidents will likely affect smaller firms with custom AI tools.<\/p>\n<p>Google classifies distillation 
attacks as intellectual property theft. Tech firms have invested billions of dollars developing large language models and treat the internal mechanisms of their flagship models as highly valuable proprietary assets.<\/p>\n<p>Google says it has deployed monitoring tools to detect anomalous prompting patterns and has blocked sources identified as making repeated extraction attempts, but major LLMs remain inherently vulnerable to such probing because they are openly accessible on the internet.<\/p>\n<p>OpenAI previously accused China-based DeepSeek of conducting distillation attacks to improve its own models, showing that such campaigns are not limited to a single vendor. Google indicated many of the prompts were crafted to extract the algorithms that enable Gemini to \u201creason,\u201d i.e., decide how to process information.<\/p>\n<p>Hultquist warned that as companies train custom LLMs on sensitive data, distillation could expose proprietary knowledge, citing a hypothetical scenario involving a model trained on a century of confidential trading strategies.<\/p>\n<p>Kevin Collier, an NBC News reporter covering cybersecurity, privacy, and technology policy, authored the original article.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Google reports that its Gemini AI chatbot faced a large-scale cloning attempt involving over 100,000 prompts from commercially motivated actors seeking to extract the model\u2019s reasoning algorithms. 
In a Thursday report, Google described the activity as \u201cdistillation attacks\u201d: repeated questioning intended to reveal Gemini\u2019s patterns, logic, and inner workings, a process it labels \u201cmodel extraction.\u201d [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":45258,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37],"tags":[],"class_list":{"0":"post-45257","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-technologies"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/45257","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=45257"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/45257\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/45258"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=45257"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=45257"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=45257"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}