AI Transforms Cybercrime Into a Global Menace
AI-powered hacking has escalated from a minor concern to an industrial-scale menace in just three months, according to a new report from Google’s threat intelligence group. The surge has intensified global alarm over how advanced AI models, with their increasingly capable coding abilities, are becoming potent tools for discovering and exploiting software vulnerabilities across diverse systems.

Criminals and State Actors Exploit Commercial AI Models
Google’s findings show widespread adoption of commercial AI platforms, including Gemini, Claude, and OpenAI’s tools, by criminal syndicates and state-sponsored hackers from China, North Korea, and Russia. These threat actors are using AI to sharpen, accelerate, and scale their cyberattacks with alarming efficiency.

John Hultquist, Google’s chief analyst, stated bluntly, “There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun.” He explained that AI empowers attackers to accelerate testing, enhance malware sophistication, maintain persistence on targets, and dramatically upgrade their operational capabilities.

Zero-Day Vulnerabilities and AI’s Role in Mass Exploitation
In a striking example, the AI firm Anthropic withheld its new model, Mythos, citing its power to detect zero-day vulnerabilities (previously unknown flaws) in every major operating system and web browser. Anthropic warned that these findings demand urgent, coordinated defensive measures across the cybersecurity industry.

Yet Google’s report revealed that a criminal group nearly launched a mass exploitation campaign using a different large language model (LLM), not Mythos. This underscores how multiple AI tools are fueling cybercriminal innovation.

Experimental AI Tools Aid Hackers and Defenders Alike
The report also highlights hacker experiments with OpenClaw, an AI tool notorious for lacking safeguards and accidentally mass-deleting email inboxes. Despite its flaws, OpenClaw exemplifies how AI agents can automate complex cyber operations.

Security expert Professor Steven Murdoch from University College London remains cautiously optimistic. He notes that AI can bolster cybersecurity defenses as well as attacks, marking a paradigm shift in vulnerability discovery that will unfold over time.

AI’s Uncertain Impact on Public Sector Productivity
While AI boosts hacker productivity, its benefits for the broader economy remain debatable. The Ada Lovelace Institute (ALI), a leading AI research body, challenges optimistic projections of multi-billion-pound productivity gains in the public sector.

The UK government forecasts a £45 billion uplift through AI and digital investments, but ALI warns most studies focus narrowly on time or cost savings without assessing outcomes like service quality or worker wellbeing.

ALI’s latest report identifies critical flaws in current research: weak evidence of real-world efficacy, results that obscure which specific tasks benefit, and neglected effects on employment and service delivery.

Recommendations for Robust AI Impact Assessment
The report urges:

- Incorporating uncertainty in AI productivity forecasts.
- Government departments to measure AI program impacts from inception.
- Supporting long-term studies tracking productivity gains over years, not weeks.

ALI cautions that confidence in AI-driven productivity claims often outpaces the evidence, potentially misleading policymakers and stakeholders.