Google Warns: Hackers Are Already Weaponizing Everyday AI
A new report from Google’s security office delivers a clear message to businesses and governments: the era of AI-powered hacking isn't coming; it's here. Based on observations from Google’s Threat Intelligence Group, the assessment details how adversaries are actively adapting commercial and open-source AI tools for attacks.
The research, led by Google Cloud CISO Phil Venables, outlines a three-stage pattern: distillation, experimentation, and integration. In distillation, threat actors extract useful functions from large AI models, bypassing safety features to generate phishing emails, social engineering scripts, and even malware code. This creates a dangerous asymmetry, where safety investments in original models are nullified once outputs are used to train malicious, derivative ones.
Experimentation is already widespread. State-linked groups from nations including China, Russia, Iran, and North Korea are testing AI to improve reconnaissance, craft convincing lures, and find software vulnerabilities. While these tools haven't invented wholly new attacks, they are making existing methods faster, cheaper, and more scalable—a trend corroborated by other threat intelligence teams, including Microsoft's.
The most serious phase is integration, where AI becomes a permanent part of attackers' toolkits. The report warns that AI agents capable of autonomous action could soon execute attack chains with little human oversight, with corporate AI agents themselves becoming new targets for manipulation.
Google stresses that defenders are not powerless. AI also amplifies security teams, enabling faster threat detection and analysis. The company points to its own Sec-Gemini model as an example. Defenders hold an advantage by operating in controlled environments with proprietary data, but this edge requires deliberate investment and updated security architectures built for AI.
The advice for organizations is immediate: assume phishing and hacking attempts will be more sophisticated and personalized. Security must be built into AI systems from the start, not added later. With regulatory frameworks still playing catch-up, the report underscores that waiting to act is no longer an option. The activity Google documents is happening now, marking a pivotal transition in the security landscape that demands a matched response.
Original source: Webpronews