Google's flagship AI is getting peppered with questions, not all of them innocent. In a new report, the company says its Gemini chatbot has been targeted by "commercially motivated" actors trying to reverse-engineer it—by asking it, over and over, exactly how it works. The tactic, known as model extraction, involves bombarding a chatbot with prompts to glean the logic and patterns behind its responses, potentially to build competing AI systems, per NBC News. According to Infosecurity Magazine, a related technique known as knowledge distillation is used to copy one model's behavior into another "to accelerate AI model development quickly and at a significantly lower cost."
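To see why this works, here is a minimal sketch of the idea behind an extraction-by-distillation attack. Everything in it is a toy assumption: the "teacher" is a hidden linear rule standing in for a chatbot's proprietary logic, and the "student" is a simple perceptron. It illustrates only the principle—query a black-box model many times, harvest its answers, and train a copy on them—not anything about Gemini's actual internals or safeguards.

```python
import random

random.seed(0)

# Hypothetical "teacher": a black-box model the attacker can only query.
# A hidden linear decision rule stands in for the proprietary model logic.
def teacher(x):
    w_secret = (2.0, -1.0)  # unknown to the attacker
    return 1 if w_secret[0] * x[0] + w_secret[1] * x[1] > 0 else 0

# Step 1: extraction phase -- bombard the teacher with queries
# and harvest its answers as a training set.
queries = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(5000)]
harvested = [(x, teacher(x)) for x in queries]

# Step 2: distillation -- train a student (perceptron) to mimic
# the teacher using only the harvested question/answer pairs.
w = [0.0, 0.0]
for _ in range(20):
    for x, y in harvested:
        pred = 1 if w[0] * x[0] + w[1] * x[1] > 0 else 0
        if pred != y:  # perceptron update on each mistake
            sign = 1 if y == 1 else -1
            w[0] += sign * x[0]
            w[1] += sign * x[1]

def student(x):
    return 1 if w[0] * x[0] + w[1] * x[1] > 0 else 0

# Step 3: measure how closely the student reproduces the teacher
# on fresh inputs it has never queried.
test = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(1000)]
agreement = sum(student(x) == teacher(x) for x in test) / len(test)
print(f"student/teacher agreement: {agreement:.0%}")
```

The attacker never sees the teacher's weights, only its answers, yet the student ends up agreeing with it almost everywhere—which is why providers treat high-volume automated querying as a theft vector rather than ordinary use.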
Google, which views this as theft of intellectual property, believes the attempts largely stem from private firms and researchers around the globe, though it declined to name suspects. In one such "distillation attack," Gemini was prompted more than 100,000 times. John Hultquist, chief analyst at Google's Threat Intelligence Group, describes the company as "the canary in the coal mine," warning that smaller companies' models are likely to face similar attacks soon, if they aren't facing them already. Major chatbots remain attractive targets because they're publicly accessible by design, despite safeguards meant to detect and block this kind of probing. Last year, OpenAI accused Chinese rival DeepSeek of similar activity aimed at ChatGPT.