A six-month investigation into AI-assisted software development tools has uncovered a systemic security problem affecting nearly every major AI-powered coding assistant on the market. The findings, published in the IDEsaster research report, reveal more than thirty serious vulnerabilities that enable attackers to steal sensitive data and, in some scenarios, achieve remote code execution (RCE) inside developer environments.
The research paints a troubling picture: every tested AI-integrated IDE and coding assistant was found to be vulnerable when deployed under typical real-world conditions.
All Major AI IDE Tools Affected
The investigation examined AI agents embedded in popular development environments such as:
- Visual Studio Code
- JetBrains IDEs (including IntelliJ-based tools)
- Zed
Along with widely used AI coding assistants, including:
- GitHub Copilot
- Cursor
- Windsurf
- Kiro.dev
- Zed.dev
- Roo Code
- Junie
- Cline
- Gemini CLI
- Claude Code
Researchers identified at least 24 confirmed CVEs, with additional security advisories released by AWS. According to the report, the issue is not isolated to a specific vendor or AI model—it is structural and ecosystem-wide.
The Root of the Problem: IDEs Were Never Designed for Autonomous AI
At the heart of the issue is a fundamental design mismatch. Traditional IDEs were built around human-driven workflows, where features such as file access, configuration changes, schema validation, and code execution were assumed to be manually controlled and context-aware.
AI assistants changed that assumption.
“All AI IDEs effectively ignore the base software in their threat model,”
said Ari Marzouk, the security researcher behind the report, in comments to The Hacker News.
“They treat these features as safe because they’ve existed for years. But when you introduce autonomous agents that can read, write, and act independently, those same features become weapons.”
Once an AI agent is allowed to operate across a project autonomously, long-standing IDE features suddenly become attack primitives.
An IDE-Agnostic Attack Chain
The report outlines a repeatable, IDE-agnostic exploit chain that applies across platforms:
- Context Hijacking via Prompt Injection
Hidden instructions are planted in places AI tools routinely process, such as:- README files
- rule or config files
- filenames
- output returned from compromised Model Context Protocol (MCP) servers
- AI Tool Manipulation
Once the agent reads the poisoned context, it treats the embedded instructions as valid guidance and uses its legitimate tools—file editing, configuration updates, schema generation—without recognizing malicious intent. - Abuse of Core IDE Features
The final stage leverages built-in IDE behaviors to:- exfiltrate data
- trigger outbound network connections
- execute attacker-controlled code
Crucially, these steps rely on documented, default IDE functionality, not exploits in the traditional sense.
Real-World Examples: Data Exfiltration and RCE
Silent Data Leaks via JSON Schema Fetching
One demonstrated exploit involves writing a JSON file that references a remote schema URL. When the IDE attempts to validate the file:
- It automatically fetches the schema
- The request includes parameters injected earlier by the AI agent
- Sensitive data collected during the session is silently transmitted
This behavior was confirmed in Visual Studio Code, JetBrains IDEs, and Zed, and notably, developer protections such as diff previews failed to block or warn about the outbound request.
Remote Code Execution Through IDE Settings
Another proof-of-concept showed how attackers could achieve full RCE:
- The AI agent modifies an executable file already inside the workspace
- It then changes configuration values such as
php.validate.executablePath - The IDE immediately executes the malicious file when a related source file is opened or created
JetBrains tools exhibited similar weaknesses through workspace metadata and run configuration handling.
Why This Is Hard to Fix
The report’s conclusion is particularly striking: this class of vulnerability cannot be fully patched in the short term.
The reason is architectural. Modern IDEs were never designed under what the researchers describe as a “Secure for AI” assumption. They trust that automation is guided by developers, not autonomous agents that can be socially engineered or manipulated through indirect context.
While mitigations exist—including stricter permission models, sandboxing, and better prompt isolation—the long-term solution would require rebuilding core IDE assumptions about:
- how tools are accessed
- what AI agents are allowed to read and write
- when actions can trigger execution, networking, or validation behaviors
For developers, the findings mean that AI assistants inside IDEs should now be considered high-risk components, especially when working with untrusted repositories or external inputs.
For tool vendors, the research signals a growing need to:
- redesign permission boundaries for AI agents
- treat IDE features as potential attack surfaces
- move beyond legacy trust models inherited from pre-AI workflows
As AI coding tools become more autonomous and deeply integrated, the risks outlined in the IDEsaster report suggest the industry is approaching a critical inflection point.
The investigation makes one thing clear: AI has fundamentally changed the threat landscape inside developer tools. The same features that once improved productivity are now capable of silently leaking secrets or executing code—without traditional exploits or malware.
Until IDEs are redesigned with AI-native security models, researchers warn that vulnerabilities like these will remain a persistent and growing risk across the entire software development ecosystem.
