
Google reports that its Gemini AI chatbot faced a large-scale cloning attempt involving over 100,000 prompts from commercially motivated actors seeking to extract the model’s reasoning algorithms.
In a report published Thursday, Google described the activity as “distillation attacks”: repeated questioning intended to reveal Gemini’s patterns, logic, and inner workings, a process it labels “model extraction.”
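In outline, a distillation attack works by harvesting many prompt–response pairs from a target model and training a cheaper imitation on them. The toy sketch below illustrates only that general pattern; the `teacher` function, the word-counting “student,” and all names are illustrative assumptions, not Gemini’s interface or any real attack code.

```python
# Toy illustration of "distillation": query a target model repeatedly,
# collect its outputs, and train a crude imitation on them.
# `teacher` is a stand-in for the target model's API (an assumption).

def teacher(prompt: str) -> str:
    """Stand-in target model: a trivial sentiment classifier."""
    return "positive" if "good" in prompt or "great" in prompt else "negative"

def distill(prompts):
    """Harvest (prompt, response) pairs -- the core of the attack."""
    return [(p, teacher(p)) for p in prompts]

def train_student(dataset):
    """'Train' a trivial student: record which label each word appears with."""
    word_labels = {}
    for prompt, label in dataset:
        for word in prompt.split():
            word_labels.setdefault(word, []).append(label)
    return word_labels

def student_predict(model, prompt: str) -> str:
    """Predict by majority vote over labels seen with the prompt's words."""
    votes = [label for w in prompt.split() for label in model.get(w, [])]
    return max(set(votes), key=votes.count) if votes else "negative"

queries = ["good movie", "great food", "bad service", "awful noise"]
student = train_student(distill(queries))
```

The point of the sketch is the data flow, not the model: every query to the teacher leaks a labeled training example, which is why high prompt volumes are the telltale signal.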
The company says attackers aim to use the extracted information to build or strengthen their own AI systems and believes most perpetrators are private companies or researchers pursuing a competitive edge.
A Google spokesperson told NBC News the attacks originated worldwide, though no further suspect details were provided.
John Hultquist, chief analyst at Google’s Threat Intelligence Group, warned that Gemini’s experience serves as a “canary in the coal mine,” suggesting similar incidents will likely affect smaller firms with custom AI tools.
Google classifies distillation attacks as intellectual property theft. Tech firms have invested billions developing large language models and treat the internal mechanisms of flagship models as highly valuable proprietary assets.
Despite detection and blocking mechanisms, major LLMs remain inherently vulnerable because they are openly accessible on the internet.
OpenAI previously accused China‑based DeepSeek of conducting distillation attacks to improve its own models. Google indicated many prompts were crafted to extract the algorithms that enable Gemini to “reason,” i.e., decide how to process information.
Hultquist warned that as companies train custom LLMs on sensitive data, distillation could expose proprietary knowledge, citing a hypothetical scenario involving a model trained on a century of confidential trading strategies.
The article was reported by Kevin Collier, an NBC News reporter covering cybersecurity, privacy, and technology policy. Google said it has deployed monitoring tools to detect anomalous prompting patterns and has moved to block sources identified as conducting repeated extraction attempts.