AI IDEs Vulnerable: Over 30 Security Flaws Found in Popular Coding Tools
December 6, 2025
Researchers warn of a broad set of more than 30 security vulnerabilities in AI-powered IDEs and extensions, collectively dubbed IDEsaster, which can enable data exfiltration and remote code execution.
AI-assisted coding tools such as GitHub Copilot, Amazon Q, and Replit AI harbor over 30 security flaws including command injection, path traversal, and information leakage.
Experts emphasize the need for industry-wide collaboration, standardized security protocols, and proactive defense to keep pace with growing AI-driven threats.
Threats can arise from context hijacking via hidden or attacker-controlled input like pasted URLs, invisible characters, MCP server manipulation, or rug pulls, weaponizing IDE features.
The attack chain typically involves prompt injection to hijack context, autonomous AI agent actions through auto-approved tool calls, and triggering legitimate IDE features to breach security and leak data or run commands.
Mitigations urged include using AI IDEs only with trusted projects, connecting to trusted MCP servers, manually reviewing added sources, and applying least-privilege, hardening, sandboxing, and security testing to prevent path traversal, leakage, and command injection.
Supply chain and model sourcing risks are magnified as attackers craft malicious packages on PyPI and NPM and rely on centralized model providers, complicating inspection and patching.
Real-world incidents include a fintech firm experiencing data leakage from an AI-driven customer service agent and authentication bypasses in generated login code, underscoring tangible risks.
Industry responses call for sandboxing AI tools, treating outputs as untrusted, continuous vulnerability verification, and adopting quantum-safe cryptography and prompt engineering to curb risky outputs.
Open-source AI defenses, such as Google’s Big Sleep, show AI can both identify and mitigate vulnerabilities, highlighting a dual-use security landscape.
Advocates propose a broader Secure for AI framework to ensure AI-enabled products are secure by design and by default as the attack surface expands when AI agents integrate with existing apps.
Notable exploit patterns include reading sensitive files, exfiltrating data via remote JSON schemas, editing settings to execute code, and overwriting workspace configurations to achieve code execution, often without user interaction when auto-approval is enabled.
Summary based on 2 sources
Get a daily email with more AI stories
Sources

The Hacker News • Dec 6, 2025
Researchers Uncover 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks
WebProNews • Dec 6, 2025
AI Coding Tools Like Copilot and Amazon Q Face 30+ Security Vulnerabilities