How to measure the effectiveness of your application security program?
In this article, we will explore the key metrics essential for evaluating and enhancing an application security program. The primary focus areas will include:
- Learning which metrics should be gathered to determine the effectiveness of an application security program
- How to gather data for the metrics
Tuning the tools for effectiveness
Before we start working on program metrics, we need to ensure that security tools like DAST, SAST, and RASP are tuned for effectiveness. Tuning is the process of eliminating noise or false positives, managing alerts, and correlating with other tools to ensure accuracy. We can look at quantitative and qualitative metrics to understand the effectiveness of the tools. Some key quantitative metrics are:
- How many vulnerabilities are being opened from each tool
- How many of these opened vulnerabilities are true positives versus false positives
SAST tools generally ship with default rules and configurations that may not apply to your technology stack, programming language, or processes. To limit false positives from a SAST tool, for example, the AppSec team can perform a spot audit: take a sample of findings and triage them to determine which are false positives and whether the tool needs to be adjusted. This type of audit also quantifies the time and effort likely being wasted on noise.
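As a rough illustration, here is a minimal sketch of both quantitative metrics: counting vulnerabilities opened per tool and extrapolating a false positive rate from a spot audit sample. The field names, the triage verdicts, and the 30-minute wasted-effort figure are all assumptions for illustration.

```python
import random
from collections import Counter, defaultdict

# Hypothetical export: one record per finding, with the tool that raised it.
findings = [
    {"id": 101, "tool": "SAST"},
    {"id": 102, "tool": "SAST"},
    {"id": 201, "tool": "DAST"},
    # ... remaining findings from the last scan cycle
]

# Metric 1: how many vulnerabilities are being opened from each tool.
opened_per_tool = Counter(f["tool"] for f in findings)
print(opened_per_tool)

# Metric 2 (spot audit): triage a random sample and extrapolate the false
# positive rate per tool. Verdicts would come from a manual AppSec review.
sample = random.sample(findings, k=min(20, len(findings)))
verdicts = {f["id"]: "false_positive" for f in sample}  # placeholder triage results

fp_per_tool = defaultdict(lambda: [0, 0])  # tool -> [false positives, sampled]
for f in sample:
    fp_per_tool[f["tool"]][1] += 1
    if verdicts.get(f["id"]) == "false_positive":
        fp_per_tool[f["tool"]][0] += 1

for tool, (fps, sampled) in fp_per_tool.items():
    rate = fps / sampled
    # Assumes ~30 minutes of wasted triage effort per false positive.
    wasted_hours = rate * opened_per_tool[tool] * 0.5
    print(f"{tool}: ~{rate:.0%} false positives, ~{wasted_hours:.1f} hours of likely wasted effort")
```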
The other type of tuning is to verify that the tools are identifying all vulnerabilities impacting the application, i.e. to address false negatives. One approach is to compare the findings from a penetration test of an application against the output from DAST and IAST to understand whether any issues were missed and why (a sketch follows the list below). A similar approach can be taken for runtime protection tools like WAF or RASP, where we need to ensure:
- Legitimate traffic is not being blocked
- Illegitimate traffic is not getting through
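A minimal sketch of such a false-negative review, assuming findings from the penetration test and from DAST/IAST can be normalized to comparable keys; the (CWE, endpoint) normalization and the sample data are assumptions.

```python
# Findings normalized to (CWE, endpoint) pairs; illustrative data only.
pentest_findings = {
    ("CWE-79", "/search"),
    ("CWE-89", "/login"),
    ("CWE-352", "/profile/update"),
}
dast_iast_findings = {
    ("CWE-79", "/search"),
    ("CWE-89", "/login"),
}

# Anything found by the penetration test but not by the tools is a
# candidate false negative worth investigating (scan profile, auth, crawl coverage).
missed_by_tools = pentest_findings - dast_iast_findings
for cwe, endpoint in sorted(missed_by_tools):
    print(f"Potential false negative: {cwe} at {endpoint}")
```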
In the case of a WAF, we can test traffic flow by integrating the WAF in a pre-production environment. Analyse the dropped traffic and simulate attacks against the WAF to confirm that the bad traffic expected to be blocked is blocked, and that good traffic gets through (a sketch follows the list below). In addition to the above tuning of the tools, the organization should also have processes for:
- Newly discovered attacks
- Changes to the run-time protection tool rules
- Environmental changes (for example, a change in the web server technology behind the WAF may require a change in the patterns used to detect attacks)
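Here is a minimal sketch of simulating traffic against a WAF in pre-production. The staging URL, the payloads, and the assumption that the WAF answers blocked requests with HTTP 403 are all illustrative; adjust them to your environment.

```python
import requests

STAGING = "https://staging.example.com/search"  # hypothetical pre-prod endpoint behind the WAF

benign_requests = [{"q": "running shoes"}, {"q": "order status 12345"}]
attack_requests = [{"q": "' OR 1=1 --"}, {"q": "<script>alert(1)</script>"}]

def is_blocked(params):
    # Assumes the WAF responds with HTTP 403 when it blocks a request.
    resp = requests.get(STAGING, params=params, timeout=10)
    return resp.status_code == 403

# Legitimate traffic should not be blocked.
for params in benign_requests:
    if is_blocked(params):
        print(f"False positive: benign request blocked: {params}")

# Illegitimate traffic should not get through.
for params in attack_requests:
    if not is_blocked(params):
        print(f"False negative: attack not blocked: {params}")
```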
Tuning the processes for effectiveness
From a security standpoint, the application security team is primarily focused on processes related to security scanning tools, vulnerability management, penetration testing, and security education. We have looked at tuning the tools for quality output, but equally critical are the processes built around the tools.
Measuring the mean time to remediate (MTTR): A key metric for evaluating the effectiveness of your tools and processes is the time it takes for engineering teams to resolve identified vulnerabilities. This involves measuring the duration from when a security tool in the pipeline detects a vulnerability, through triage by the application security team and assignment to a development team, to the eventual deployment of the fix to production.
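A minimal MTTR sketch, assuming each vulnerability record carries the timestamp when the pipeline tool detected it and the timestamp when the fix reached production; the field names and data are illustrative.

```python
from datetime import datetime

vulnerabilities = [
    {"id": "VULN-1", "detected": "2024-03-01", "fixed_in_prod": "2024-03-12"},
    {"id": "VULN-2", "detected": "2024-03-05", "fixed_in_prod": "2024-03-28"},
    {"id": "VULN-3", "detected": "2024-03-10", "fixed_in_prod": None},  # still open
]

def days_to_fix(v):
    # Time from detection by the pipeline tool to deployment of the fix.
    detected = datetime.fromisoformat(v["detected"])
    fixed = datetime.fromisoformat(v["fixed_in_prod"])
    return (fixed - detected).days

closed = [v for v in vulnerabilities if v["fixed_in_prod"]]
mttr = sum(days_to_fix(v) for v in closed) / len(closed)
print(f"MTTR over {len(closed)} closed vulnerabilities: {mttr:.1f} days")
```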
Several common issues can lead to delays in MTTR, including:
- Lack of the appropriate points of contact within the engineering team
- Insufficient information to resolve the issue
- Inadequate prioritization of the open vulnerability
- A poorly optimized development pipeline that hinders fast code release
- Limited access to retesters for quick validation
We have looked at tool tuning and process optimization for measuring the effectiveness of the application security program. Next, we will look at KPIs.
Measuring effectiveness with KPIs
In application security, KPIs should focus on how effectively the program prevents vulnerabilities and how quickly it responds to new ones. The application security team should develop metrics aligned with goals that address the following:
- Reduction of business risk: Track how business risk decreases over time (monthly, quarterly, annually) by identifying open vulnerabilities, assigning their criticality, and assessing their impact on business risk (a weighted scoring sketch follows this list).
- Speed of vulnerability resolution: Measure how quickly the organization can resolve vulnerabilities from their initial identification to their remediation in production.
- Reintroduction of vulnerabilities: Monitor how often the same types of vulnerabilities, like SQL injection, reappear in an application. If the root cause isn’t addressed, these issues will continue to recur.
- Coverage of the application security program: The program's coverage can be measured by assessing the security tools in place, such as SAST, DAST, IAST, and WAF.
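To make the business-risk KPI concrete, here is a minimal sketch that weights open vulnerabilities by criticality and tracks the score per reporting period. The weights and the sample data are assumptions and should reflect your own risk model.

```python
# Illustrative criticality weights; tune these to your risk model.
CRITICALITY_WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

open_vulns_by_month = {
    "2024-01": ["critical", "high", "high", "medium", "low"],
    "2024-02": ["high", "medium", "medium", "low"],
    "2024-03": ["high", "low"],
}

for month, severities in open_vulns_by_month.items():
    score = sum(CRITICALITY_WEIGHTS[s] for s in severities)
    print(f"{month}: open risk score {score}")
# A downward trend month over month indicates the business-risk KPI is improving.
```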
The four KPIs mentioned can be summarized as follows:
- Open Vulnerabilities
- MTTR (Mean Time to Remediation)
- Reintroduction of Vulnerabilities
- Application Security Coverage
Once the KPIs are identified, we need to finalize targets for each KPI. For example, an organization can target a 14-day MTTR for critical vulnerabilities.
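Targets like this can be captured in a simple severity-to-threshold mapping. In the sketch below, only the 14-day critical target comes from the example above; the remaining figures are assumptions.

```python
# Remediation targets in days per severity; values other than "critical" are illustrative.
MTTR_TARGET_DAYS = {"critical": 14, "high": 30, "medium": 90, "low": 180}

def meets_target(severity: str, actual_mttr_days: float) -> bool:
    return actual_mttr_days <= MTTR_TARGET_DAYS[severity]

print(meets_target("critical", 11))  # True: within the 14-day target
print(meets_target("high", 45))      # False: exceeds the assumed 30-day target
```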
We will now take an example of how a KPI can be used to drive improvements.
Reintroduction of Vulnerabilities KPI
Consider the case of building a plan for addressing the "recurrence of vulnerabilities" KPI. In this case, the application security team is responsible for reducing the recurrence of previously closed vulnerabilities. This is challenging because tracking recurrence requires more in-depth processing than gathering general metrics from security tools.
Focus on one vulnerability class, say XSS, across the different test reports (DAST, SAST, and penetration test reports). Use the labels in the scanning tools and review the penetration testing reports to locate all known XSS vulnerabilities found in your applications over the past 12 months.
Assume that over the past several months, multiple XSS vulnerabilities have been identified in each application. For example, in one application a SAST tool identified an XSS vulnerability; upon further investigation, it was discovered that the same project had encountered multiple XSS issues throughout the year, all of which had been opened and closed. This pattern suggests that the development team was struggling to produce code secure against XSS attacks.
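A minimal recurrence sketch, assuming findings from DAST, SAST, and penetration test reports over the last 12 months have been normalized to (application, vulnerability class) pairs; the data and the recurrence threshold are illustrative.

```python
from collections import Counter

# Normalized findings: (application, vulnerability class); illustrative data only.
findings = [
    ("payments-app", "XSS"), ("payments-app", "XSS"), ("payments-app", "XSS"),
    ("payments-app", "SQLi"),
    ("orders-app", "XSS"),
]

recurrence = Counter(findings)
RECURRENCE_THRESHOLD = 3  # assumed threshold for flagging a systemic problem

for (app, vuln_class), count in recurrence.items():
    if count >= RECURRENCE_THRESHOLD:
        # Repeated open/close cycles of the same class point to a root-cause gap,
        # e.g. a candidate team for a focused XSS workshop.
        print(f"{app}: {vuln_class} recurred {count} times in the past 12 months")
```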
With this insight, a workshop can be organized to focus on XSS protection mechanisms, and the SAST vendor can be engaged to provide more detailed remediation guidance within the tool. This targeted approach can reduce recurring XSS vulnerabilities and help catch new ones before they reach production. This simple example illustrates how you can drive a similar KPI in your organization.
This article is based on my notes from the book "Application Security Program Handbook".