Vibing Dangerously: The Hidden Risks of AI-Generated Code

Vibe coding has rapidly emerged as a revolutionary approach to software development. This methodology relies on large language models (LLMs) and natural language prompts to create code quickly, enabling developers — and increasingly non-developers — to build applications at unprecedented speeds.
Yet while this approach offers significant benefits for rapid prototyping and idea validation, it also casts a security shadow that many users may overlook in their rush to embrace the new paradigm.
The Promise and Peril of Vibe Coding
Vibe coding fundamentally changes how software is created. Instead of manually writing every line of code, developers describe their desired functionality in natural language (via either typed or voice prompts), and AI tools generate the implementation.
As Janet Worthington, an analyst at Forrester Research, told The New Stack, this approach “focuses on using generative AI to achieve desired outcomes based on the inventor’s vision, offering a fast way to prototype ideas, without having to understand the underlying code or the complexities of the system.”
This speed is especially valuable for startups and solo developers, she said. It allows for rapid learning cycles and quick demonstrations of concepts to potential investors.
But as Brad Shimmin, an analyst at the Futurum Group, told The New Stack, because vibe coding “relies heavily on iteration with the developer feeding the project codebase back into an LLM repeatedly and ‘not’ carefully reviewing each step (otherwise, it wouldn’t be ‘vibing’), the opportunity to introduce inefficiencies, outright errors, and vulnerabilities is more likely to grow over time without a solid CI/CD practice.”
In a post on X, Abhishek Sisodia, a Toronto-based software engineer for Scotiabank Digital Factory, wrote: “Vibe coding is the new no-code — except now it’s AI doing the heavy lifting instead of drag-and-drop builders. The real question is, will it last, or are we about to see a wave of half-baked AI-built products?”
He also wrote in a thread on X: “AI can write your code, but it won’t protect your app! I realized many new builders are vulnerable to simple attacks. Since many of you are new builders using AI to code, I’ll break down how to secure your AI-built projects in plain English. No tech jargon, I promise!”
Why AI-Generated Code Is Vulnerable by Design
At the core of vibe coding’s security challenges lies a fundamental issue with how AI code models are trained.
Eitan Worcel, CEO at Mobb, told The New Stack: “GenAI was trained on real-world code, much of which contains vulnerabilities that may or may not be exploitable — not because the code was safe, but because other contextual mitigations masked the risk. When GenAI produces new code, it often reproduces those insecure patterns and without the original context, those same vulnerabilities are not mitigated and can become exploitable.”
This training data problem means the AI effectively inherits the security flaws present in its training set. As Forrester’s Worthington notes, “LLMs are probabilistic rather than deterministic, making it challenging to guarantee any level of consistency for AI-generated code, even insecure code.”
Common Vulnerabilities in AI-Generated Code
Security experts identify several categories of vulnerabilities that frequently appear in AI-generated code:
- Injection attacks: Danny Allan, CTO of Snyk, highlights that “because AI typically lacks comprehensive validation for user inputs or improper handling of data, injection vulnerabilities are also a common issue with AI code generation.” This includes SQL injection, a vulnerability that David Mytton, CEO of Arcjet, a developer security software provider, told The New Stack he has observed in vibe-coded applications. (A sketch of this pattern and its fix appears after this list.)
- Improper permissions: Allan also notes that “a very common issue with AI-generated code is improperly configured permissions, potentially leading to the exposure of proprietary, sensitive organizational information or privilege escalation attacks.”
- Path traversal vulnerabilities: Willem Delbare, founder and CTO of Aikido Security, told The New Stack, “AI can easily write code vulnerable to path traversal that would be flagged immediately by security tools. The code may run perfectly fine, but it could allow attackers to access unauthorized files and directories.” (The sketch after this list also illustrates a standard mitigation for this.)
- Insecure dependencies: The AI might recommend “open source libraries that are insecure, poorly maintained, or don’t exist,” according to Worthington.
- Licensing issues: Many LLMs trained on open source code “may suggest code that is an actual code snippet from other open source software without proper attribution or adherence to licensing obligations, which is a liability for organizations,” Worthington adds.
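To make the injection and path traversal items concrete, here is a minimal Python sketch; the table, directory, and function names are hypothetical illustrations, not code from any of the cases above. It contrasts the insecure patterns AI assistants tend to reproduce with the standard mitigations:

```python
import os
import sqlite3

BASE_DIR = "/var/app/uploads"  # hypothetical upload directory


def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Insecure: SQL built by string interpolation, the classic injection
    # pattern. A username like "x' OR '1'='1" returns every row.
    return conn.execute(
        f"SELECT id, email FROM users WHERE name = '{username}'"
    ).fetchall()


def find_user(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query lets the driver handle escaping.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()


def read_upload(filename: str) -> bytes:
    # Safer: canonicalize the requested path and confirm it is still
    # inside BASE_DIR, blocking inputs like "../../etc/passwd".
    base = os.path.realpath(BASE_DIR)
    target = os.path.realpath(os.path.join(BASE_DIR, filename))
    if os.path.commonpath([base, target]) != base:
        raise ValueError("path traversal attempt blocked")
    with open(target, "rb") as f:
        return f.read()
```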
The 0-1 vs. 1-10 Problem
Delbare shed light on an important distinction in how vibe coding impacts different stages of development: “Vibe coding is great for quickly coding solutions to complex requirements, but a typical app has many features, and AI still lacks memory and a big enough context window to handle architectural-level mistakes and interactions between features.”
This observation points to a critical issue: while vibe coding excels at the 0-1 phase (initial creation), it struggles with the 1-10 phase (scaling, hardening, and production readiness). The more complex an application becomes, the more these vulnerabilities compound and interact in potentially dangerous ways, the experts said.
Real-World Consequences
The security risks of vibe coding aren’t merely theoretical. Matt Johansen, a cybersecurity expert and founder of Vulnerable U, told The New Stack, “We’re already seeing examples on social media of solo vibe coders facing attacks against their apps that they launched and they were previously bragging about how fast and easy coding it and going live were.”
One such example comes from a developer known as LeoJr94 on X (formerly Twitter), who built a SaaS product “with Cursor, zero handwritten code.” After proudly sharing his accomplishment, he later posted: “guys, i’m under attack ever since I started to share how I built my SaaS using Cursor… maxed out usage on api keys, people bypassing the subscription, creating random shit on db… as you know, I’m not technical so this is taking me longer that usual to figure out.”
This case illustrates how the lack of security expertise, combined with the false confidence that AI-generated code can inspire, creates real vulnerabilities that attackers are quick to exploit.
Can’t You Just Tell the AI to ‘Make It Secure’?
One question is whether simply instructing the AI to produce secure code could solve these problems. Security experts are unanimous in their response: it can’t.
Worcel explained: “It’s a good instinct, and certain prompts can sometimes nudge the model toward better practices — but unfortunately, it’s not sufficient. The fundamental issue is that generative AI models were trained on massive amounts of publicly available code, which includes plenty of insecure patterns. There simply isn’t a large enough dataset of code that’s been thoroughly vetted for security to teach these models what not to do.”
Allan of Snyk is even more direct: “Absolutely not, traditional application security practices should always be front and center when addressing vulnerabilities. Think of it this way: autopilot was created in 1912, but that doesn’t mean we fly our airplanes with no pilots in the cockpit.”
The False Confidence Problem
A particularly insidious aspect of vibe coding is that it can give inexperienced developers a false sense of security. Nick Baumann, product marketing manager at Cline, told The New Stack that security concerns “stem not from AI doing the coding, but rather from the user who has less experience building out systems and the appropriate security they require.”
Meanwhile, Jason Bloomberg, an analyst at Intellyx, takes a stronger stance.
“Security is but one of many issues with vibe coding. Using AI to generate code is an invitation for hallucinations, bugs, vulnerabilities, and all manner of other pitfalls,” he told The New Stack.
Bloomberg added that many experienced developers “are finding that vibe coding isn’t worth the trouble — and some actually think it’s a joke. Less seasoned developers (and their bosses) may see it as a shortcut, at their peril.”
Moreover, Arcjet’s Mytton said he sees another dimension to this problem.
“If you don’t know there are security issues, how do you know if they’re fixed? Some of them? All of them?” he said.
Practical Security Solutions
Despite these challenges, experts offer several approaches to mitigate the security risks of vibe coding:
- Automated security scanning: Mobb’s Worcel argues that “the answer lies in automation — not just for finding issues, but for fixing them. Auto-remediation, integrated into the development process, can help catch and resolve problems without slowing anyone down.”
- Secure API handling: Allan emphasizes that “one of the most critical steps is securing API keys — these should never be exposed in client-side code but rather stored securely on the server side to prevent unauthorized access.” (A sketch after this list shows this together with input validation.)
- Input validation: Allan also highlights the importance of treating “all user inputs as potentially harmful and implementing strict validation to guard against vulnerabilities like SQL injection and cross-site scripting (XSS) attacks.”
- Code review: Even with AI-generated code, human review remains essential. Allan said that “every line must undergo thorough code reviews to ensure adherence to security best practices.”
- Security-aware tooling: Nigel Douglas, head of developer relations at Cloudsmith, in a statement, said “without security-aware tooling or policy enforcement, enterprises could end up unknowingly introducing vulnerabilities into their ecosystem.”
- Test edge cases: Aikido’s Delbare recommends testing “edge cases that go beyond the happy path — especially with external APIs, larger datasets, and unexpected inputs.” (A short test sketch follows the recommendations below.)
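As a minimal illustration of the API key and input validation advice above — the environment variable name and the allow-list policy are assumptions for the sketch, not prescriptions:

```python
import os
import re

# Safer: read secrets from the server-side environment at startup rather
# than hard-coding them or shipping them in client-side bundles.
PAYMENT_API_KEY = os.environ["PAYMENT_API_KEY"]  # hypothetical key name

# Allow-list what a field may contain instead of trying to
# enumerate known-bad strings; rejecting early is the point.
USERNAME_RE = re.compile(r"[A-Za-z0-9_-]{3,32}")


def validate_username(raw: str) -> str:
    # Reject anything that is not 3-32 safe characters.
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw
```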
Moreover, Delbare added, “If you’re building a real app that handles sensitive data, you should:
- Use open source tools like OpenGrep to identify security issues
- Have the AI focus specifically on potential security issues in its generated code.”
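One lightweight way to act on Delbare’s edge-case advice is to write tests that feed in the kind of input an attacker would, for example with pytest. The module path is hypothetical, and validate_username is the illustrative helper from the earlier sketch:

```python
import pytest

from app.validators import validate_username  # hypothetical module path


# Go beyond the happy path: empty strings, traversal sequences,
# SQL metacharacters, and oversized payloads should all be rejected.
@pytest.mark.parametrize("hostile_input", [
    "",                              # empty
    "../../etc/passwd",              # path traversal attempt
    "admin'; DROP TABLE users;--",   # SQL injection attempt
    "x" * 10_000,                    # oversized payload
])
def test_username_rejects_hostile_input(hostile_input):
    with pytest.raises(ValueError):
        validate_username(hostile_input)
```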
Finding the Balance
The key to leveraging vibe coding’s benefits while mitigating its risks lies in finding the right balance between speed and security. As LeoJr94 reflected after his security incident: “The more I vibe code, the more I learn. The more I learn, the less I want to vibe code.”
This doesn’t mean abandoning vibe coding entirely but rather approaching it with appropriate caution and supplementing it with solid security practices.
As Shimmin said, “Vibe coding doesn’t do away with testing, documenting, and deploying; if anything, because it operates somewhat autonomously, it pushes more work to the end of the lifecycle.”
Security Cannot Be Ignored
Vibe coding represents a significant evolution in software development, making coding more accessible and accelerating the path from idea to implementation. However, its security implications cannot be ignored.
The combination of AI models trained on insecure code patterns, the lack of comprehensive security knowledge among many practitioners, and the speed-focused nature of the approach creates a perfect storm for security vulnerabilities.
As the industry continues to embrace this new paradigm, it must simultaneously develop security practices specifically tailored to AI-generated code. This includes better automated security tools, improved education for developers, and a healthy dose of caution when deploying vibe-coded applications to production.
The future of secure vibe coding will likely involve a partnership between human expertise and AI capabilities — where the AI accelerates development, but humans provide the security oversight and contextual understanding that current AI models lack.
One developer, known as @Method1cal and the founder of RedStack Labs, a cloud and cybersecurity consultancy in Vancouver, British Columbia, is already working on this.
“We developed secure coding rules for @cursor_ai to help vibe coders produce more secure code,” he posted on X. “We saw the issues @leojr94_ had and it made us think, the vibe coders need security.” The owasp-asvs Cursor rules, he added, currently support JavaScript and Python projects and are in beta.
In addition, Sisodia has created a cheat sheet for vibe coding security.
In the meantime, developers embracing vibe coding would do well to heed Allan’s aviation metaphor: the AI may be the autopilot, but a human pilot should always remain in the cockpit, especially when it comes to security.