The State of DevSecOps: Why Application Security Is Broken and How to Fix It

🔑 Key Takeaways

  • The Growing Importance of DevOps in Modern Software Development — DevOps has become the dominant delivery methodology, with 78% of organizations using it for at least half of their applications.
  • How Many Vulnerabilities Do Applications Really Have? — 79% of applications in development carry 20 or more vulnerabilities, and 99%+ of production applications carry at least 4.
  • Why Legacy Application Security Testing Creates Development Bottlenecks — SAST and DAST were designed for sequential development models and create severe bottlenecks when embedded in continuous delivery pipelines.
  • The Staggering Cost of False Positives in DevSecOps — 80% of organizations report that at least half of their security alerts are false positives, so most investigation time produces no security value.
  • How Security Scans Disrupt Developer Productivity — 62% of developers stop coding every 2 to 3 days to fix flagged vulnerabilities, and 55% of organizations sometimes skip scans to meet deadlines.

The Growing Importance of DevOps in Modern Software Development

DevOps has become the dominant delivery methodology, with 78% of organizations using it for at least half of their applications. The COVID-19 pandemic accelerated this trend dramatically: 57% of organizations increased their DevOps budgets, with 35% increasing budgets by more than 10%. Development teams are under immense pressure—79% report increased pressure to shorten release cycles.

The pace of modern development is staggering. Eighty percent of organizations deploy code to production at least multiple times per week, and 88% utilize more than 500 APIs. This velocity creates an enormous challenge for security teams: every deployment represents a potential new attack surface, and every API endpoint is a potential entry point for adversaries.

Digital transformation has made every company a software company, and the organizations that can ship secure software fastest gain competitive advantage. But this speed requirement collides directly with security processes that were designed for a different era—creating the friction that defines the current state of DevSecOps.

How Many Vulnerabilities Do Applications Really Have?

The vulnerability numbers are sobering. Seventy-nine percent of respondents report that the average application in development contains 20 or more vulnerabilities, with 42% reporting 30 to 49 vulnerabilities per application. In production, virtually every application (99%+) carries at least 4 vulnerabilities, with 78% running with 4 to 25 known vulnerabilities.

The most dangerous vulnerability types identified in the report mirror the OWASP Top 10: SQL Injection leads the risk ranking, followed by Cross-Site Scripting (XSS), Broken Authentication, XML External Entities, and Command Injection. These aren’t exotic, newly discovered attack vectors—they’re well-understood vulnerabilities that organizations have struggled to eliminate for decades.
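The persistence of SQL injection is easy to demonstrate. Here is a minimal sketch (not from the report) using Python's built-in sqlite3 module, showing why a query built by string concatenation remains exploitable while the parameterized form is not:

```python
import sqlite3

# Illustrative example: injectable vs. parameterized queries.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: user input is spliced into the SQL text, so the payload
# rewrites the WHERE clause and matches every row in the table.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: the driver binds the value as data, never as SQL syntax.
parameterized = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)      # [('alice',)] — the payload matched a row it should not
print(parameterized)   # [] — no user is literally named "' OR '1'='1"
```

The fix has been known for decades, which is exactly the report's point: the problem is not discovering new defenses but applying well-understood ones consistently.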

Two-thirds of organizations have dedicated headcount for application security, split roughly evenly between security and development teams. Yet despite this investment, vulnerability counts remain stubbornly high. The problem isn’t headcount—it’s the inefficiency of the tools and processes those teams are forced to use.

Key Finding: 79% of applications in development have 20+ vulnerabilities, and 99%+ of production applications have at least 4. Legacy security testing approaches are clearly not eliminating vulnerabilities at the rate they’re being introduced.

Why Legacy Application Security Testing Creates Development Bottlenecks

Traditional security testing tools—Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST)—were designed for sequential development models. When embedded into continuous delivery pipelines, they create severe bottlenecks that undermine both development velocity and security outcomes.

Ninety-one percent of organizations report that vulnerability scans take 3 or more hours, with 35% reporting scan times of 8 hours or more. For organizations deploying multiple times per week, a single 5-hour scan (the median) can consume an entire development day. When these scans must run against every build or release candidate, the cumulative time loss is staggering.

Beyond scan duration, each security alert generated requires additional investigation. Seventy-three percent of respondents say each alert consumes at least one hour of AppSec team time for triage, correlation, risk rating, and documentation. In production, SecOps teams spend 3 or more hours per alert on triage, correlation, risk assessment, write-up, and retesting.

📊 Explore this analysis with interactive data visualizations

Try It Free →

The Staggering Cost of False Positives in DevSecOps

False positives are perhaps the most corrosive problem in application security. Eighty percent of organizations report that at least half of their security alerts are false positives, and 38% say that 75% or more are false positives. This means the majority of time spent investigating security alerts produces zero security value.

Consider the arithmetic: if an organization’s security tools generate 100 alerts per scan and 50-75% are false positives, then at the reported hour of triage per alert, security teams spend 50-75 hours investigating non-issues every scan cycle. Multiplied across weekly or daily scans and dozens of applications, false-positive investigation can consume thousands of engineering hours per month.
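That arithmetic can be sketched directly. The alert count, false-positive range, and per-alert triage hour come from the report; the scan cadence and portfolio size below are assumptions chosen for illustration:

```python
# Back-of-envelope model of the false-positive cost described above.
alerts_per_scan = 100
false_positive_rate = 0.60      # midpoint of the reported 50-75% range
hours_per_alert = 1             # report: at least 1 hour of triage per alert
scans_per_month = 20            # assumption: daily scans on workdays
applications = 10               # assumption: portfolio size

wasted_hours = (alerts_per_scan * false_positive_rate
                * hours_per_alert * scans_per_month * applications)
print(f"{wasted_hours:,.0f} engineering hours/month spent on non-issues")
```

Even with these modest assumptions, the model lands at 12,000 wasted hours per month, the equivalent of dozens of full-time engineers investigating alerts that pose no risk.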

The secondary damage of false positives is equally destructive. When security teams are overwhelmed by false alerts, they develop “alert fatigue”—a conditioned tendency to dismiss or deprioritize alerts. This psychological response means that real vulnerabilities increasingly slip through the noise, effectively making the security tools counterproductive. The NIST Cybersecurity Framework emphasizes the importance of accurate detection capabilities precisely because false positives undermine the entire security program.

How Security Scans Disrupt Developer Productivity

The impact on developer productivity extends far beyond waiting for scans to complete. Sixty-two percent of developers report stopping coding every 2 to 3 days to fix vulnerabilities identified by security tools, and 71% say each vulnerability consumes 4 or more hours of developer time.

Additionally, 78% of developers spend 3 to 5 or more hours per week verifying that their remediations actually resolved the flagged issues. This verification step exists because security tools often can’t confirm fixes without running another full scan—creating another multi-hour delay.

The cumulative effect drives a dangerous behavior: 55% of organizations sometimes skip security scans entirely to meet release deadlines, with 18% skipping them often. This creates a direct tradeoff between delivery velocity and security—exactly the tradeoff that DevSecOps was supposed to eliminate. When teams must choose between shipping on time and running security checks, the business pressure to ship almost always wins.

Why Vulnerability Remediation Takes 90+ Days

Remediation timelines reveal a systemic failure in how organizations manage security debt. Sixty-one percent of organizations take more than 90 days to remediate serious vulnerabilities, and 94% take more than 60 days to resolve just 50% of their vulnerability backlog. Perhaps most telling: the difference in remediation time between serious and non-serious vulnerabilities is minimal, suggesting that organizations lack effective prioritization mechanisms.

Industry performance varies significantly. Finance and banking lead with 58% of organizations remediating serious vulnerabilities within 90 days, likely driven by regulatory pressure from frameworks like PCI DSS. Healthcare follows at 57%. At the other end, media and entertainment (25%) and manufacturing (26%) show the longest remediation cycles.

Production vulnerability remediation is especially costly. Fifty percent of respondents say each production vulnerability requires 10 or more hours of unscheduled developer time—emergency work that disrupts planned development and introduces its own quality risks. The cost differential between finding and fixing a vulnerability in development versus production can be 10-100x, making the “shift left” principle not just a security imperative but an economic one. Learn more about security practices in our threat hunting intelligence guide.


The Application Attack Landscape: Probes, Exploits, and Consequences

The attack data confirms that poor security outcomes aren’t theoretical—they result in real, measurable damage. Sixty-four percent of organizations face 10,000 or more attack probes per application per month, with 11% experiencing 20,000 or more. These probes translate into successful attacks: only 5% of organizations avoided any successful exploitative attacks, while 61% experienced 3 or more successful attacks.

The consequences of successful attacks are severe and multi-dimensional. Seventy-two percent of attacked organizations suffered data exposure, compromising customer information, intellectual property, or internal communications. Sixty-seven percent experienced operational disruption, including service outages, degraded performance, or forced system shutdowns. Sixty-two percent reported brand degradation, damaging customer trust and market reputation.

These statistics make the business case for improved DevSecOps unmistakable. When 95% of organizations experience successful attacks, the question isn’t whether your applications will be exploited—it’s how quickly you’ll detect the breach and how effectively you’ll contain the damage.

AppSec Is Now a Boardroom Priority—But Is Leadership Getting Results?

Application security has achieved C-suite visibility. Fifty-six percent of organizations discuss AppSec at every quarterly board meeting, and 72% use it as a C-suite performance metric. Forty-two percent assign final DevSecOps investment decisions to the CISO, with 84% leaving these decisions to someone in the C-suite.

Yet despite this executive attention, outcomes remain poor. The disconnect between leadership visibility and security outcomes suggests that the problem isn’t awareness or budget—it’s approach. Organizations are investing in tools and headcount that perpetuate the friction-based model rather than fundamentally rethinking how security integrates with development.

Only 43% of respondents describe the relationship between development and security teams as integrated, collaborative, or coordinated. The remaining 57% characterize it as siloed, adversarial, or dysfunctional. Until these cultural barriers are addressed alongside tooling improvements, executive investment will continue to produce disappointing results.

The DevSecOps Staffing Crisis: Skills Gaps and Hiring Challenges

The human element compounds the tooling problem. While 67% of organizations have dedicated AppSec headcount, 45% need additional staff but cannot hire—27% because they can’t find qualified candidates and 18% because they lack budget. The cybersecurity skills shortage is particularly acute in application security, which requires a rare combination of security knowledge and development expertise.

Where AppSec headcount sits varies by organization: some embed security engineers within development teams, others maintain centralized security teams that serve multiple development groups. Neither model consistently outperforms the other, suggesting that organizational structure matters less than the quality of tools, processes, and collaboration between security and development.

The staffing crisis reinforces the need for tools that amplify the effectiveness of existing personnel rather than requiring additional headcount. If each security alert consumes an hour of investigation time and 50-75% are false positives, reducing false positive rates by even 30% could be equivalent to hiring multiple additional security engineers in terms of productive capacity. Explore how data engineering practices address similar challenges in our data engineering playbook guide.
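A rough capacity model illustrates that equivalence. The false-positive rate and per-alert hour come from the report; the monthly alert volume and engineer-hours figure are assumptions for illustration:

```python
# Estimate how much capacity a 30% false-positive reduction frees up.
monthly_alerts = 2000            # assumption: alert volume across the portfolio
false_positive_rate = 0.60       # midpoint of the reported 50-75% range
hours_per_alert = 1.0            # report: at least 1 hour per alert
engineer_hours_per_month = 160   # one full-time engineer

fp_hours = monthly_alerts * false_positive_rate * hours_per_alert
recovered = fp_hours * 0.30      # a 30% cut in false positives
print(f"{recovered:.0f} hours/month, about "
      f"{recovered / engineer_hours_per_month:.1f} engineers of capacity")
```

Under these assumptions, a 30% reduction in false positives recovers 360 hours per month, roughly two and a quarter engineers' worth of productive capacity, without a single new hire.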


Moving Beyond Legacy AppSec: Instrumentation and Continuous Observability

The report points toward a fundamental shift in how application security should work. Rather than periodically scanning code from the outside (SAST) or probing running applications from the outside (DAST), the future of DevSecOps lies in instrumentation-based security that monitors applications continuously from within.

Instrumentation-based approaches embed security sensors directly into the application runtime. These sensors observe actual code execution paths, data flows, and configuration in real time—providing accurate vulnerability detection without the false positives that plague external scanning approaches. Because instrumentation operates continuously during development and testing, vulnerabilities are identified the moment they’re introduced rather than hours or days later during a scheduled scan.
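A toy sketch conveys the idea. The `TaintSensor` class below is invented for illustration, and real IAST products hook the language runtime itself rather than requiring explicit calls, but the principle is the same: because the sensor sees the actual query at execution time, it can distinguish a genuinely injectable statement from a safely parameterized one, which is precisely what eliminates false positives:

```python
import sqlite3

# Toy sketch of an instrumentation-style sensor (all names invented
# for illustration; real IAST tools hook the runtime transparently).
class TaintSensor:
    """Flags SQL statements that embed untrusted input in the query text."""

    def __init__(self):
        self.findings = []
        self._tainted = set()

    def taint(self, value):
        """Mark a value as coming from an untrusted source (e.g. HTTP)."""
        self._tainted.add(value)
        return value

    def execute(self, cursor, sql, params=()):
        # Parameters bound separately are safe; a tainted value that
        # appears inside the SQL text itself indicates injection risk.
        for value in self._tainted:
            if value in sql:
                self.findings.append(sql)
        return cursor.execute(sql, params)

sensor = TaintSensor()
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

name = sensor.taint("alice")  # pretend this arrived in a request
cur = conn.cursor()

# Flagged: tainted input concatenated into the statement text.
sensor.execute(cur, "SELECT * FROM users WHERE name = '" + name + "'")
# Not flagged: the same value bound as a parameter.
sensor.execute(cur, "SELECT * FROM users WHERE name = ?", (name,))

print(len(sensor.findings))  # one finding, for the concatenated query only
```

A static scanner sees only source text and must guess which code paths are reachable; a runtime sensor like this reports only what actually executed, with the exact offending statement attached.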

The benefits are transformative: scan delays are eliminated because monitoring is continuous, false positive rates drop dramatically because instrumentation observes actual behavior rather than inferring risk from static code patterns, and developer productivity improves because vulnerabilities come with precise location and context information that enables rapid remediation.

Continuous observability across the software development lifecycle (SDLC) represents the maturation of DevSecOps from a process framework to a technical reality. When security monitoring is as continuous and automated as deployment monitoring, the friction between security and development finally disappears—enabling organizations to ship both fast and secure.


Frequently Asked Questions

What is the current state of DevSecOps in organizations?

The State of DevSecOps report reveals that 95% of organizations experienced successful application attacks, 79% of apps in development have 20+ vulnerabilities, 61% take over 90 days to fix serious vulnerabilities, and 55% skip security scans to meet deadlines.

How long do vulnerability scans take in most organizations?

91% of organizations report that vulnerability scans take 3 or more hours, with 35% reporting scans of 8+ hours. Additionally, 65% say scans take at least 5 hours, creating significant development bottlenecks.

What percentage of security alerts are false positives?

80% of organizations report that at least half of their security alerts are false positives, and 38% say that 75% or more are false positives. Each false positive consumes over an hour of AppSec team time to triage and investigate.

Why do organizations skip security scans?

55% of organizations sometimes skip security scans to meet release deadlines, with 18% doing so often. This happens because legacy security tools like SAST and DAST create development bottlenecks with long scan times, high false positive rates, and significant developer time requirements.

How can organizations improve their DevSecOps practices?

Organizations should move beyond legacy SAST/DAST tools toward instrumentation-based security that provides continuous monitoring from within applications. This approach eliminates scan delays, reduces false positives, enables real-time vulnerability detection, and integrates security seamlessly into CI/CD pipelines.
