HashJack Attack Exploits AI Browsers to Steal Data via URL Manipulation

By Aayush
When you purchase through links on our site, we may earn an affiliate commission.

A new cybersecurity study from Cato Networks has uncovered a previously unseen cyberattack technique called HashJack. This method hides malicious instructions after the “#” symbol in a normal website URL. Because browsers ignore anything after the hash when loading a page, the site appears legitimate—yet AI-powered browsers may still read those hidden instructions as commands.

Since the attack happens inside the browser itself, it slips past traditional defenses: antivirus software doesn’t detect it, and website servers don’t see anything suspicious. HashJack is a type of prompt injection, where text the user never typed gets interpreted as instructions for an AI assistant.

How browser prompt attacks work

There are two broad prompt-injection paths:

  • Direct injection: hidden instructions are fed directly into a chatbot at the moment the user sends a prompt.
  • Indirect injection: the harmful commands are embedded inside content the AI is supposed to analyze—like a webpage, PDF, review, or comment.

This new attack targets AI-powered browsers, which try to anticipate user intent and automatically interpret page content. Because these browsers rely heavily on large language models, they are especially vulnerable to indirect attacks.

Cato Networks says HashJack is the first method to turn legitimate websites into weapons that manipulate AI assistants. Popular tools—including Microsoft Edge’s Copilot, Google Chrome’s Gemini integration, and Perplexity AI’s Comet browser—can all be affected.

How HashJack works

The attacker simply adds a “#” at the end of a normal URL, followed by hidden commands. For example:

https://legitimate-website.com/page#secret-instructions-go-here

The page loads normally. But once the user activates the browser’s AI assistant—whether to summarise the page, answer questions, or perform a task—the assistant reads the text after the “#” and treats it as instructions. That can lead to:

  • theft of personal or account data
  • phishing attempts
  • misinformation
  • triggering malicious downloads
  • directing users to harmful links

Because people tend to trust their AI helper, these attacks can be far more convincing than traditional phishing.

Cato’s research group, Cato CTRL, found that more “agent-like” AI browsers, such as Comet, can even be tricked into sending data directly to a hacker-controlled server. More passive assistants may still mislead the user with false guidance or malicious links.

Vendor responses

Cato disclosed the issue to Google and Microsoft in August, and to Perplexity AI in July.

  • Microsoft and Perplexity applied fixes to their browsers.
  • Google, however, said it does not plan to fix the issue, arguing that the behavior is expected and classifying the problem as “low severity.”

HashJack highlights a growing challenge: as AI-powered browsers become more common, the attack surface expands in unexpected ways—and long-trusted browser behaviors can suddenly become security risks.

Share This Article
Follow:
Aayush is a B.Tech graduate and the talented administrator behind AllTechNerd. . A Tech Enthusiast. Who writes mostly about Technology, Blogging and Digital Marketing.Professional skilled in Search Engine Optimization (SEO), WordPress, Google Webmaster Tools, Google Analytics
Leave a Comment