=== Quality of code and security issues ===
Vibe coding has raised concerns about understanding and accountability. Developers may use AI-generated code without comprehending its functionality, leading to undetected bugs, errors, or security vulnerabilities. While this approach may be suitable for prototyping or "throwaway weekend projects" as Karpathy originally envisioned, some experts consider it to pose risks in professional settings, where a deep understanding of the code is crucial for debugging, maintenance, and security.
Ars Technica cites Simon Willison, who stated: "Vibe coding your way to a production codebase is clearly risky. Most of the work we do as software engineers involves evolving existing systems, where the quality and understandability of the underlying code is crucial." In October 2025, Veracode released a study showing that over the previous three years LLMs had become dramatically better at generating functional code, but that the security of the generated code had generally not improved. Moreover, larger models were no better than small ones at generating secure code. OpenAI's reasoning models showed a small increase in security, but other reasoning models did not, and the gain was far smaller than the improvement in generated functionality. In December 2025, computer security researcher Etizaz Mohsin discovered a security flaw in the Orchids vibe coding platform, which he demonstrated to a BBC News reporter in February 2026. A December 2025 analysis by CodeRabbit of 470 open-source GitHub pull requests found that code co-authored by generative AI contained approximately 1.7 times more "major" issues than human-written code. AI co-authored code showed elevated rates of logic errors, including incorrect dependencies and flawed control flow, as well as misconfigurations (75% more common) and security vulnerabilities (2.74 times as frequent). The study also reported elevated rates of code readability issues, including formatting errors and naming inconsistencies.
=== Code maintainability and technical debt ===
Vibe coding can make code harder to maintain in the long term and lead to
technical debt. In early 2025, GitClear published the results of a longitudinal analysis of 211 million lines of code changes from 2020 to 2024. They found that the volume of
code refactoring dropped from 25% of changed lines in 2021 to under 10% by 2024,
code duplication increased approximately four times in volume, copy-pasted code exceeded moved code for the first time in two decades, and code churn (prematurely merged code getting rewritten shortly after merging) nearly doubled. In July 2025, METR, an organization that evaluates
frontier models, ran a
randomized controlled trial measuring the effect on developer productivity of the generative AI programming tools available in early 2025. They found that experienced open-source developers were 19% slower when using AI coding tools, despite predicting they would be 24% faster and still believing afterward that they had been 20% faster. In addition, since the developer did not write the code, they may struggle to understand its syntax and concepts. One research paper argued that vibe coding has a negative impact on the
open-source software ecosystem. The authors say that increased vibe coding reduces user engagement with open-source maintainers, which has hidden costs for said maintainers. Speaking with
The Register about their paper, the authors argued: "Vibe coding raises productivity by lowering the cost of using and building on existing code, but it also weakens the user engagement through which many maintainers earn returns. When OSS is monetized only through direct user engagement, greater adoption of vibe coding lowers entry and sharing, reduces the availability and quality of OSS, and reduces welfare despite higher productivity." They added that revenue is not the only thing that may be affected by this trend, as open-source software maintainers traditionally also derive intangible benefits from their work, such as community recognition, reputation, and job prospects. Maya Posch, explaining the paper's claims on
Hackaday, elaborated on the argument. She pointed out that the mechanism by which vibe coding undermines open-source projects is the homogenization of software development: language models gravitate toward large, established libraries that appear frequently in their training data, removing the organic selection process for libraries and tooling and making it harder for newer open-source tools to get noticed. She also noted that language models will not submit useful bug reports to maintainers, nor will they be aware of potential issues.

== See also ==