Claude Code flaw exposes AI website security gaps
A flaw in Anthropic’s Claude Code has highlighted broader security risks in artificial intelligence (AI)-driven web development, as nearly three-quarters of new web pages are now generated using the technology.
Check Point Research found that the vulnerability in Claude Code allowed attackers to remotely execute code and steal application programming interface – or API – credentials through malicious project configurations. Anthropic has since remediated the vulnerability.
The flaw made it possible to weaponise AI, turning developers into unsuspecting distributors of malicious code once their projects went live, explains Jacqui Muller, a researcher at Belgium Campus iTversity.
In addition, the vulnerability means developers themselves could be exploited – for example, data associated with a website they created could be stolen and held to ransom, notes Muller.
AI frenzy
This class of vulnerability is not limited to Claude Code, as it can be exploited across several sandbox environments, including those offered by AI development tools Replit, Lovable and GitHub Copilot, among others, says Muller, who is also a PhD candidate in computer science and information technology with information systems at North-West University.
Some 74.2% of newly created web pages in April 2025 included AI-generated content, according to a large-scale study by Ahrefs, an SEO and web analytics platform. BuiltWith.com lists almost eight million websites built using AI tools, including Verizon.com, Bell.ca and Roche.com.
Muller says those sites could be vulnerable to exploitation through the AI development environment, although the extent of the exposure won't be known until scans are run, which will take time. "It depends on the extent that they use AI for their development and the underlying tech stacks they are using."
Claude Code runs inside the terminal or development environment, allowing developers to delegate coding tasks through natural language instructions. Because the terminal has permission to create and delete files, install software, access stored keys and connect to the internet, the implications of the flaw were significant.
Security researchers found that, if an attacker hides malicious instructions inside configuration files, Claude Code could execute them automatically.
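The attack class described here – instructions smuggled into files an AI coding agent reads automatically – can be partially screened for with a simple static check before a project is opened in such a tool. The sketch below is purely illustrative and is not Anthropic's fix or Check Point's detection method; the file types and trigger phrases are assumptions chosen for demonstration.

```python
import re
from pathlib import Path

# Config file types an AI coding agent might parse automatically on
# project load (illustrative list, not an exhaustive or official one).
CONFIG_PATTERNS = ["*.json", "*.yaml", "*.yml", "*.toml"]

# Phrases typical of prompt-injection payloads (assumed examples).
SUSPICIOUS = [
    r"ignore (all )?previous instructions",
    r"run the following command",
    r"curl\s+https?://",
]

def scan_project(root: str) -> list[tuple[str, str]]:
    """Return (file, matched phrase) pairs for configuration files
    containing instruction-like text an agent could be tricked into
    executing."""
    hits = []
    for pattern in CONFIG_PATTERNS:
        for path in Path(root).rglob(pattern):
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # unreadable file; skip rather than fail
            for phrase in SUSPICIOUS:
                match = re.search(phrase, text, re.IGNORECASE)
                if match:
                    hits.append((str(path), match.group(0)))
    return hits
```

A scan like this only flags known phrasings, so it is a stop-gap rather than a defence; the more robust mitigation is for the tool itself to treat configuration content as untrusted data, never as instructions.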
The vibe
Muller says there is a growing risk in what many are casually calling “vibe coding” – building solutions by prompting AI and accepting whatever it generates without properly understanding, reviewing, or validating the output.
“While generative tools can accelerate development dramatically, they can also introduce hidden vulnerabilities, inefficient logic, insecure defaults and architectural flaws,” says Muller.
- Nicole Mawson, ITWeb


