State-Backed Hackers Using AI Tool Gemini to Power Cyberattacks, Google Warns

By Aayush

Government-backed hacking groups are increasingly using artificial intelligence tools to plan and execute sophisticated online attacks, according to a new Google assessment. The company’s threat intelligence team found that its AI model, Gemini, has been misused to support multiple stages of cyber operations.

State actors among users

The report identified activity linked to threat groups operating from China, Iran, North Korea and Russia. These actors are not using AI to launch attacks directly; instead, they use it as a support tool to work more efficiently and at greater scale.

Investigators found that hackers relied on Gemini for tasks such as:

  • Writing phishing emails and scam content
  • Translating messages for international targeting
  • Generating and refining malicious code
  • Testing for software vulnerabilities
  • Troubleshooting technical issues during operations

In several cases, attackers framed their queries as hypothetical or fictional scenarios to obtain detailed technical guidance.

From planning to post-breach operations

Google’s analysis highlights that AI assistance was used throughout the attack lifecycle — from initial reconnaissance to activities after a system was compromised.

One incident involved a hijacked account belonging to a cybersecurity professional. The attacker used it to request automated vulnerability analysis and targeted testing strategies against U.S.-based systems.

Other examples include:

  • Chinese groups seeking code corrections and technical advice on intrusion methods
  • Iranian actors using the tool to create customized components for social engineering campaigns
  • Automated development of new features for existing malware

Rise of automated social engineering

The report also noted growing use of AI to enhance psychological manipulation techniques. Attackers used Gemini to design persuasive messages and to support ClickFix campaigns — schemes that trick victims into running harmful commands through search ads or fake troubleshooting prompts.

These methods allow criminals to quickly capture sensitive data without immediately alerting the target.

No direct threat to regular users — but risks growing

Google said the misuse does not mean Gemini itself is unsafe for everyday users. However, the findings underscore how AI is lowering the technical barriers for cybercrime and enabling attackers to operate more efficiently.

Security experts warn that beyond financial fraud and data theft, such misuse could also lead to intellectual property loss and deeper compromises of critical systems.

The company said it continues to monitor abuse patterns and refine safeguards, as the rapid evolution of AI tools reshapes both cybersecurity defenses and the tactics used by threat actors.
