

If you have been leaning into vibe coding, using AI-assisted code generation with tools like Cursor, Copilot, or any other agentic IDE, here is something you might not want to hear: your company's existing codebase, including every vulnerability you have not fixed, is now effectively training material, the context the model draws on every time it generates code.
This is not abstract AI FUD. At Mobb, we are firmly in the vibe-code pro camp and use these tools extensively. But it is essential to do it securely. Cursor's own docs confirm that when generating new code, it uses your existing code patterns (source). If those patterns include insecure code, the LLM does not just replicate them; it multiplies them, repeating the same vulnerability across your codebase at machine speed. I demonstrated this happening with a real example in this video.
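To make the mechanism concrete, here is a minimal Python sketch (the schema and helper names are hypothetical) of the kind of pattern an assistant picks up as house style. If the string-built query is what dominates your codebase, that is what gets reproduced; the parameterized version is what you want it to learn instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The existing, insecure pattern: string-built SQL is injectable.
def find_user_unsafe(name: str):
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

# The pattern you want the assistant to learn: parameterized queries.
def find_user_safe(name: str):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # [(1, 'alice')] -- injection leaks every row
print(find_user_safe("' OR '1'='1"))    # [] -- input treated as a literal string
```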
How Does AI Code Duplication Turn Bugs Into Multipliers?
Once you understand that your AI coding assistant learns from your existing codebase, the next step is obvious: if insecure code exists in that codebase, it will spread. And it is already happening.
GitClear's research found that duplicate code has grown 10x in the past two years, a rise most plausibly attributable to AI coding assistants (source). Duplicate insecure code does not just mean the same bug in more places. It means more maintenance, more potential entry points, and a multiplied blast radius when something goes wrong. Every missed fix becomes a security debt multiplier.
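Here is what that multiplied blast radius looks like in practice: a minimal sketch, with hypothetical module names, of one path traversal flaw duplicated into two services. Patching one copy does nothing for the other.

```python
import os

UPLOAD_DIR = "/var/app/uploads"

# reports_service.py: the copy that got remediated
def report_path(filename: str) -> str:
    path = os.path.realpath(os.path.join(UPLOAD_DIR, filename))
    # the fix: reject anything that resolves outside the upload dir
    if not path.startswith(UPLOAD_DIR + os.sep):
        raise ValueError("path escapes upload directory")
    return path

# exports_service.py: the duplicated copy that never got the fix
def export_path(filename: str) -> str:
    # "../../etc/passwd" still walks out of the directory here
    return os.path.join(UPLOAD_DIR, filename)

print(export_path("../../etc/passwd"))   # /var/app/uploads/../../etc/passwd
# report_path("../../etc/passwd") raises ValueError: the fixed copy refuses it
```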
The industry sees the same trend coming. Gartner predicts: "By 2027, at least 30% of application security exposures will result from usage of vibe coding practices" (source). That is not fringe risk. That is a third of all exposures.
Is Your Backlog Still Static Debt or Now an Active Risk?
In the pre–vibe coding world, a security backlog was a list of known issues you might get around to fixing before the next audit. Bad, but relatively contained. Now those unresolved vulnerabilities are actively shaping the code your team writes tomorrow. Your backlog is no longer static debt; it is an active risk that influences every new line of code.
Think about it:
- Every known but unfixed injection flaw, insecure deserialization, or broken access control teaches the AI “this is how we do it here” (see the sketch after this list).
- Every fix you do not make is a bug you are effectively inviting into multiple files, services, and repos.
- Every sprint of delayed remediation gives those patterns more time to spread.
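As a concrete illustration of that first bullet, here is a hedged sketch (the function names are invented) of an insecure deserialization pattern next to its remediated form. Every surviving copy of the unsafe version is another example the assistant can learn from.

```python
import json
import pickle

# The unfixed pattern: pickle.loads on untrusted bytes can execute
# arbitrary code during deserialization. Each surviving copy teaches
# the assistant that this is the house style for loading sessions.
def load_session_unsafe(blob: bytes):
    return pickle.loads(blob)

# The remediated pattern you want it to learn instead: a data-only
# format that cannot run code on load.
def load_session_safe(blob: bytes):
    return json.loads(blob)
```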
Leaving vulnerabilities unfixed is no longer a passive risk; it has gone from static debt to a self-replicating security problem.
Has Inaction Become Amplification?
Once a vulnerability becomes part of your AI's learned patterns, ignoring it is not inaction; it is amplification. The cost of not fixing issues has always been high, but now every deferred fix directly fuels the creation of more vulnerable code at machine speed.
This should be a wake-up call for companies and developers to treat the backlog as an urgent, high-priority threat, not a rainy-day project. That backlog is no longer a graveyard of “we will get to it later” tickets. It is the seedbed for your next security incident.
Fix it now, before your AI makes it the standard, in 60 seconds or less.
That’s the Mobb difference.