Who is responsible for AI-generated code: a review of the Veracode 2025 report
News | 04.08.2025
Artificial intelligence (AI) is dramatically changing the landscape of software development. Tools based on large language models (LLMs) allow developers to generate code from simple text prompts, significantly speeding up workflows. But does this velocity introduce unacceptable risk? Veracode, a global leader in application security, conducted in-depth research to answer this critical question for today's technology leaders.
The results, published in the "2025 GenAI Code Security Report", are stark: on average, 45% of code created using LLMs contains security flaws. This statistic represents a significant new attack surface for organizations.
The core problem: AI learns from insecure examples
The primary reason for this high flaw rate is that large language models learn from vast datasets, including billions of lines of code from open-source repositories. A significant portion of this training data is not secure. The models replicate the patterns they have absorbed without the contextual awareness to distinguish between safe coding practices and potential security holes. A developer requests functionality, and the AI delivers, but the choice between a secure and an insecure implementation is left to the model's discretion.
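To make that gap concrete, here is a minimal Java sketch contrasting two implementations of the same request ("store a user's password") that a code-generating model could plausibly return. The class and method names are illustrative only and do not come from the Veracode report; both variants use only the Java standard library.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHashing {

    // Insecure variant: a fast, unsalted MD5 hash. This pattern appears in many
    // older open-source examples, which is exactly the kind of code LLMs absorb.
    static String insecureHash(String password) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    // Safer variant: a salted, iterated PBKDF2 hash from the standard library.
    // Functionally it answers the same prompt, but it resists offline cracking.
    static String saferHash(String password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        PBEKeySpec spec = new PBEKeySpec(password.toCharArray(), salt, 210_000, 256);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
        return Base64.getEncoder().encodeToString(salt) + ":"
                + Base64.getEncoder().encodeToString(hash);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("MD5 (insecure): " + insecureHash("example-password"));
        System.out.println("PBKDF2 (safer): " + saferHash("example-password"));
    }
}
```

Both methods satisfy the prompt "hash a password", which is the point: unless the prompt or a reviewer demands the secure option, the model is free to reproduce whichever pattern dominated its training data.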
Key findings for technology leaders
The research identified several important trends that should inform every CISO's and CTO's strategy for secure development in the AI era.
- Newer models are not necessarily more secure. While AI models are improving at writing syntactically correct code, their ability to ensure security remains consistently low. Even the latest and largest models show no significant progress in generating secure code, meaning leaders cannot simply trust that tool upgrades will mitigate this risk.
- Risk varies greatly by vulnerability type. The models perform reasonably well at preventing common vulnerabilities like SQL Injection. However, for context-dependent flaws like Cross-Site Scripting (XSS), the situation is catastrophic: only 12-13% of the generated code is secure. This is because protecting against such attacks requires an understanding of the entire application context, which is currently beyond the capabilities of LLMs (see the sketch after this list).
- The choice of programming language has an impact. The study showed that code in Java was significantly less secure (only a 28.5% success rate) than code generated for Python, C#, and JavaScript. This is likely due to Java's long history and the vast number of outdated, insecure code examples in the models' training data.
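The sketch below illustrates why XSS is so context-dependent: the same user-supplied string is harmless or dangerous depending on whether it is encoded for the HTML context it lands in. This is a minimal, self-contained Java illustration with hypothetical class and method names; production code would normally rely on a vetted encoder (such as the OWASP Java Encoder) or framework auto-escaping rather than a hand-rolled helper.

```java
public class CommentRenderer {

    // Insecure: user input is concatenated directly into HTML.
    // This is the pattern behind most reflected and stored XSS flaws.
    static String renderUnsafe(String userComment) {
        return "<div class=\"comment\">" + userComment + "</div>";
    }

    // Minimal HTML escaper, shown only for illustration.
    static String escapeHtml(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    // Safer: the value is encoded for the HTML element context before output.
    static String renderSafe(String userComment) {
        return "<div class=\"comment\">" + escapeHtml(userComment) + "</div>";
    }

    public static void main(String[] args) {
        String attack = "<script>alert('xss')</script>";
        System.out.println(renderUnsafe(attack)); // script would execute in a victim's browser
        System.out.println(renderSafe(attack));   // rendered as inert text
    }
}
```

Whether escaping is even the right defense depends on where the value ends up (HTML body, attribute, JavaScript, URL), which is precisely the whole-application context an LLM generating an isolated snippet does not have.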
Download the full report: GenAI Code Security Report
Strategic recommendations for CISOs and CTOs
AI is a powerful assistant, but it is not a substitute for expertise and robust governance. The Veracode report makes it clear that code generated by artificial intelligence requires mandatory verification as part of the software development lifecycle (SDLC).
- Establish clear AI usage policies. Do not allow the ad-hoc adoption of AI coding tools. Define which tools are approved and mandate that all AI-generated code be treated as untrusted, equivalent to code from an unvetted third-party library.
- Integrate automated security testing. Static application security testing (SAST) tools must be integrated into the development workflow. These tools can automatically detect vulnerabilities during the coding stage, before they reach production.
- Prioritize developer security training. In the age of AI, understanding the fundamental principles of secure development becomes even more critical. Your team must be equipped to spot and remediate the flaws that AI introduces.
Softprom is the official distributor of Veracode in Armenia, Austria, Azerbaijan, Bulgaria, Czech Republic, Georgia, Germany, Greece, Hungary, Kazakhstan, Moldova, Poland, Romania, Slovakia, Ukraine, and Uzbekistan. We provide access to advanced tools and expertise to protect your code. To implement a robust secure development process in the AI era, request a consultation with our experts today.