Prioritize AppSec Fixes with Exploitability Data

In this article, we cover the shortcomings of traditional vulnerability prioritization and introduce a data-driven approach leveraging exploitability intelligence. You will learn to integrate external exploitability scores and internal risk context to build a robust vulnerability management framework, ensuring critical fixes are addressed first, reducing your organization's attack surface effectively.

Zeynep Aydın


How to Prioritize AppSec Fixes with Exploitability Data


Most teams prioritize AppSec fixes based solely on static CVSS scores. But relying on CVSS alone often leads to alert fatigue and a growing backlog of vulnerabilities that are theoretically severe but rarely exploited in practice. This approach drains engineering resources without a proportional reduction in real-world risk.


TL;DR


  • Traditional CVSS scoring often misrepresents real-world exploitability, leading to inefficient security resource allocation.

  • Integrating dynamic exploitability data (e.g., EPSS, CISA KEV) is crucial for accurate risk assessment and prioritization.

  • Combine exploitability intelligence with business context to develop a practical, risk-based vulnerability scoring framework.

  • Automate data aggregation and scoring to streamline the appsec fix prioritization process, making it actionable.

  • Focus on addressing genuinely exploitable vulnerabilities first to reduce organizational risk and improve remediation efficiency.


The Problem: Drowning in Alerts, Missing the Real Threats


AppSec teams frequently struggle with an ever-growing list of reported vulnerabilities. Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Software Composition Analysis (SCA) tools generate hundreds, if not thousands, of findings. When prioritization hinges primarily on a vulnerability’s CVSS Base Score, engineering teams commonly report dedicating 30–40% of their remediation efforts to issues that are never actively exploited in the wild, diverting critical resources from truly impactful threats.


Consider a production service that handles sensitive customer data. A new high-severity vulnerability (CVSS 8.5) is reported by your DAST scanner. Without further context, this goes straight to the top of the remediation queue. However, if this vulnerability requires complex pre-authentication steps or relies on an obscure library not actively exploited, its real-world risk might be lower than a medium-severity (CVSS 6.0) bug that has known public exploits and is easily triggerable. The problem isn't the volume of vulnerabilities; it's the lack of an accurate, dynamic signal to differentiate actual threats from theoretical ones. This misallocation of effort leads to slower patch cycles for critical vulnerabilities, increased security debt, and a false sense of security.


How It Works


Effective prioritization moves beyond theoretical severity by integrating data on actual exploitability. This means understanding not just if a vulnerability could be exploited, but how likely it is to be exploited and whether it has been exploited in the wild.


Beyond CVSS: The Power of Exploitability Intelligence


CVSS provides a standardized way to rate vulnerability characteristics, but it's largely static and doesn't account for the evolving threat landscape. To gain a complete picture, we integrate dynamic exploitability intelligence:


  • Exploit Prediction Scoring System (EPSS): This open, data-driven framework provides a probability score (0–1) that a vulnerability will be exploited in the wild within the next 30 days. EPSS leverages machine learning models trained on CVE data, exploit activity, and vulnerability information. An EPSS score of 0.95 indicates an estimated 95% probability of exploitation activity in that window, a dynamic signal that is significantly more useful than a static CVSS score for predicting real-world risk. According to FIRST (link: https://www.first.org/epss/model), EPSS helps prioritize the vulnerabilities most likely to be exploited, cutting down noise.

  • CISA Known Exploited Vulnerabilities (KEV) Catalog: Maintained by the U.S. Cybersecurity and Infrastructure Security Agency, the KEV catalog lists vulnerabilities that have been confirmed to be actively exploited in the wild. If a CVE appears in the KEV catalog (link: https://www.cisa.gov/known-exploited-vulnerabilities-catalog), it moves from theoretical risk to a proven, immediate threat. This catalog provides critical, non-negotiable signals for prioritization.

  • Commercial Threat Intelligence Feeds: Various vendors offer subscription-based threat intelligence that includes details on exploit availability, attack campaigns, and dark web activity. While often costly, these feeds can provide deep, proprietary insights that complement public sources.
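The KEV check is easy to automate, since CISA publishes the catalog as a machine-readable JSON feed. Here is a minimal, stdlib-only sketch (the feed URL and the `vulnerabilities`/`cveID` field names reflect the published schema at the time of writing; verify them against cisa.gov, and swap in `requests` if you prefer it, as the rest of this article does):

```python
import json
import urllib.request

# Published location of CISA's machine-readable KEV feed (verify on cisa.gov).
KEV_FEED_URL = (
    "https://www.cisa.gov/sites/default/files/feeds/"
    "known_exploited_vulnerabilities.json"
)

def extract_kev_cve_ids(catalog: dict) -> set:
    """Pull the set of CVE IDs out of a parsed KEV catalog document.

    Each entry also carries metadata such as "dateAdded" and the
    remediation "dueDate", which you may want to keep for reporting.
    """
    return {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}

def load_kev_cve_ids(url: str = KEV_FEED_URL) -> set:
    """Download and parse the KEV feed, returning all listed CVE IDs."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return extract_kev_cve_ids(json.load(resp))
```

With the resulting set in hand, KEV membership becomes a constant-time `cve_id in kev_ids` lookup, which matters once you are checking thousands of findings.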


The interaction between these sources is crucial: CVSS provides a baseline, EPSS layers a predictive probability, and the CISA KEV catalog confirms active exploitation. A vulnerability with a high CVSS might have a low EPSS if no active exploits exist. Conversely, a medium CVSS bug in the KEV catalog demands immediate attention, regardless of its EPSS score, because it's already being exploited.


Crafting a Risk-Based Prioritization Framework


A robust prioritization framework combines exploitability data with internal business context to calculate a composite risk score. The core formula we often use is:


`RiskScore = ExploitabilityFactor × BusinessImpactMultiplier × SystemCriticalityMultiplier`


Let's break down the components:


  • Exploitability Factor: Derived from EPSS scores, KEV presence, and availability of public exploits. For instance, KEV presence could give a flat high multiplier, while EPSS provides a granular scale.

  • Business Impact Multiplier: This reflects the potential damage if the vulnerability is exploited.

    * High (3x): Affects critical customer data, financial systems, or core revenue-generating services.

    * Medium (2x): Affects internal tools, non-sensitive data, or secondary services.

    * Low (1x): Minimal business disruption, public-facing but non-critical assets.

  • System Criticality Multiplier: How essential is the affected system to your operations?

    * High (3x): Production services, identity management, core infrastructure.

    * Medium (2x): Staging environments, analytics platforms, non-core APIs.

    * Low (1x): Development environments, internal documentation sites.


This framework allows a low-CVSS, high-exploitability vulnerability in a critical system with high business impact to correctly rise above a high-CVSS, low-exploitability bug in a less critical, low-impact system.
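A quick worked example makes that inversion concrete. Using the multipliers above with illustrative exploitability values (0.05 for a finding with no known exploit activity, 0.90 for one with public exploits):

```python
def risk_score(exploitability: float, business_impact: int, criticality: int) -> float:
    """RiskScore = ExploitabilityFactor x BusinessImpactMultiplier x SystemCriticalityMultiplier."""
    return exploitability * business_impact * criticality

# High-CVSS finding, no known exploit activity, on a medium-impact internal service
theoretical = risk_score(exploitability=0.05, business_impact=2, criticality=2)

# Medium-CVSS bug with public exploits, on a critical, high-impact service
actively_exploited = risk_score(exploitability=0.90, business_impact=3, criticality=3)

print(f"{theoretical:.2f}")         # 0.20
print(f"{actively_exploited:.2f}")  # 8.10
```

The lower-severity but actively exploited bug scores roughly forty times higher, which is exactly the reordering the framework is designed to produce.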


Integrating Data Sources for Unified Scoring


To implement this, you need a way to aggregate data from various sources. This typically involves API calls to vulnerability scanners, the EPSS API, and parsing the CISA KEV feed.


Here’s a conceptual Python example for fetching EPSS data for a CVE:


import requests
from datetime import datetime

# Function to fetch EPSS score for a given CVE
def get_epss_score(cve_id: str) -> float | None:
    """
    Fetches the EPSS score for a specified CVE ID from the FIRST EPSS API.
    Returns the EPSS score as a float, or None if not found or an error occurs.
    """
    epss_api_url = f"https://api.first.org/data/v1/epss?cve={cve_id}"
    try:
        response = requests.get(epss_api_url, timeout=5)
        response.raise_for_status()  # Raise an exception for HTTP errors
        payload = response.json()
        # The API wraps results in a "data" list (one entry per CVE);
        # the "epss" field is returned as a string, so cast it.
        for entry in payload.get("data", []):
            if entry.get("cve") == cve_id:
                return float(entry["epss"])
        return None
    except requests.exceptions.RequestException as e:
        print(f"Error fetching EPSS for {cve_id}: {e}")
        return None

# Example usage (as of 2026)
if __name__ == "__main__":
    test_cve = "CVE-2023-46805" # A real CVE for illustration
    epss_score = get_epss_score(test_cve)

    if epss_score is not None:
        print(f"[{datetime.now().year}] EPSS score for {test_cve}: {epss_score:.4f}")
        if epss_score > 0.9:
            print(f"This CVE ({test_cve}) has a very high probability of being exploited in the next 30 days.")
        elif epss_score > 0.5:
            print(f"This CVE ({test_cve}) has a moderate probability of being exploited in the next 30 days.")
        else:
            print(f"This CVE ({test_cve}) has a low probability of being exploited in the next 30 days.")
    else:
        print(f"[{datetime.now().year}] Could not retrieve EPSS score for {test_cve}.")


Step-by-Step Implementation: Prioritizing Vulnerabilities


Let's walk through an example of how to implement this framework using Python to process vulnerability data, integrate exploitability intelligence, and generate a prioritized list.


Step 1: Gather Initial Vulnerability Data


First, we need raw vulnerability data. This could come from your SAST, DAST, or SCA tools. For this example, we'll use a mocked JSON structure representing findings from a scanner.


# vulnerability_data.json
[
  {
    "id": "VULN-001",
    "cve": "CVE-2023-46805",
    "name": "Ivanti Connect Secure and Policy Secure Gateway Authentication Bypass",
    "severity": "CRITICAL",
    "cvss_base_score": 9.8,
    "asset": "api-gateway-service",
    "asset_criticality": "HIGH",
    "business_impact": "HIGH"
  },
  {
    "id": "VULN-002",
    "cve": "CVE-2024-21338",
    "name": "Windows Kernel Elevation of Privilege Vulnerability",
    "severity": "HIGH",
    "cvss_base_score": 7.8,
    "asset": "mail-server",
    "asset_criticality": "HIGH",
    "business_impact": "HIGH"
  },
  {
    "id": "VULN-003",
    "cve": "CVE-2023-35630",
    "name": "Windows Internet Connection Sharing (ICS) Remote Code Execution Vulnerability",
    "severity": "HIGH",
    "cvss_base_score": 8.8,
    "asset": "ics-gateway-host",
    "asset_criticality": "MEDIUM",
    "business_impact": "MEDIUM"
  },
  {
    "id": "VULN-004",
    "cve": "CVE-2023-48795",
    "name": "SSH Transport Protocol Prefix Truncation Vulnerability (Terrapin)",
    "severity": "MEDIUM",
    "cvss_base_score": 5.9,
    "asset": "analytics-worker",
    "asset_criticality": "LOW",
    "business_impact": "LOW"
  }
]

Expected Output: A file named `vulnerability_data.json` containing the raw vulnerability findings.


Step 2: Fetch Exploitability Data (EPSS and KEV)


Now, we'll enhance our vulnerabilities with EPSS scores and check against the CISA KEV catalog.


# appsec_prioritizer.py
import json
from datetime import datetime

# Mock EPSS API response for demonstration purposes in 2026
# In a real scenario, this would be a live API call to FIRST.org
MOCKED_EPSS_SCORES = {
    "CVE-2023-46805": 0.97, # High exploitability
    "CVE-2024-21338": 0.85, # Moderate-high exploitability
    "CVE-2023-35630": 0.40, # Low-moderate exploitability
    "CVE-2023-48795": 0.05  # Very low exploitability
}

# Mock CISA KEV catalog (as of 2026)
MOCKED_CISA_KEV = [
    "CVE-2023-46805", # This one is known exploited
    "CVE-2024-21338"
]

def get_epss_score(cve_id: str) -> float | None:
    """
    Mocks fetching the EPSS score. In production, this calls the FIRST EPSS API.
    """
    return MOCKED_EPSS_SCORES.get(cve_id)

def is_in_cisa_kev(cve_id: str) -> bool:
    """
    Mocks checking if a CVE is in the CISA KEV catalog.
    In production, this would parse the official KEV JSON feed.
    """
    return cve_id in MOCKED_CISA_KEV

def load_vulnerabilities(filepath: str):
    with open(filepath, 'r') as f:
        return json.load(f)

def save_prioritized_vulnerabilities(filepath: str, vulnerabilities):
    with open(filepath, 'w') as f:
        json.dump(vulnerabilities, f, indent=2)

def prioritize_vulnerabilities(vulnerabilities):
    prioritized_list = []
    for vuln in vulnerabilities:
        cve = vuln.get("cve")
        epss_score = get_epss_score(cve) if cve else None
        is_kev = is_in_cisa_kev(cve) if cve else False

        # Define multipliers for business impact and system criticality
        impact_map = {"LOW": 1, "MEDIUM": 2, "HIGH": 3}
        criticality_map = {"LOW": 1, "MEDIUM": 2, "HIGH": 3}

        business_impact_multiplier = impact_map.get(vuln.get("business_impact", "LOW"), 1)
        system_criticality_multiplier = criticality_map.get(vuln.get("asset_criticality", "LOW"), 1)

        exploitability_factor = 0.0
        if is_kev:
            exploitability_factor = 1.0 # Highest priority if in KEV
        elif epss_score is not None:
            exploitability_factor = epss_score # Use EPSS score directly as a factor
        else:
            # Fallback if no specific exploitability data is found
            # Can be based on CVSS, but we're moving beyond that for primary signal
            if vuln.get("cvss_base_score", 0) >= 9.0:
                exploitability_factor = 0.7 # Placeholder for very high CVSS, no exploitability data
            elif vuln.get("cvss_base_score", 0) >= 7.0:
                exploitability_factor = 0.5
            elif vuln.get("cvss_base_score", 0) >= 4.0:
                exploitability_factor = 0.2
            else:
                exploitability_factor = 0.1

        # Calculate the final risk score
        vuln["epss_score"] = epss_score
        vuln["is_kev"] = is_kev
        vuln["risk_score"] = (
            exploitability_factor *
            business_impact_multiplier *
            system_criticality_multiplier
        )

        prioritized_list.append(vuln)

    # Sort in descending order of risk_score
    prioritized_list.sort(key=lambda x: x["risk_score"], reverse=True)
    return prioritized_list

if __name__ == "__main__":
    raw_vulnerabilities = load_vulnerabilities("vulnerability_data.json")
    print(f"[{datetime.now().year}] Loaded {len(raw_vulnerabilities)} raw vulnerabilities.")

    prioritized = prioritize_vulnerabilities(raw_vulnerabilities)
    save_prioritized_vulnerabilities("prioritized_vulnerabilities_2026.json", prioritized)

    print(f"[{datetime.now().year}] Prioritized vulnerabilities saved to prioritized_vulnerabilities_2026.json")
    for vuln in prioritized:
        print(f"  Risk Score: {vuln['risk_score']:.2f} | CVE: {vuln.get('cve', 'N/A')} | Asset: {vuln['asset']} | Exploitability (EPSS/KEV): {vuln['epss_score'] if vuln['epss_score'] is not None else 'N/A'}{' (KEV)' if vuln['is_kev'] else ''} | Name: {vuln['name']}")

To run this:

`$ python appsec_prioritizer.py`


Expected Output:

[2026] Loaded 4 raw vulnerabilities.
[2026] Prioritized vulnerabilities saved to prioritized_vulnerabilities_2026.json
  Risk Score: 9.00 | CVE: CVE-2023-46805 | Asset: api-gateway-service | Exploitability (EPSS/KEV): 0.97 (KEV) | Name: Ivanti Connect Secure and Policy Secure Gateway Authentication Bypass
  Risk Score: 9.00 | CVE: CVE-2024-21338 | Asset: mail-server | Exploitability (EPSS/KEV): 0.85 (KEV) | Name: Windows Kernel Elevation of Privilege Vulnerability
  Risk Score: 1.60 | CVE: CVE-2023-35630 | Asset: ics-gateway-host | Exploitability (EPSS/KEV): 0.4 | Name: Windows Internet Connection Sharing (ICS) Remote Code Execution Vulnerability
  Risk Score: 0.05 | CVE: CVE-2023-48795 | Asset: analytics-worker | Exploitability (EPSS/KEV): 0.05 | Name: SSH Transport Protocol Prefix Truncation Vulnerability (Terrapin)


Common mistake: Not handling CVEs without EPSS scores or KEV entries. The script includes a basic fallback based on CVSS if no exploitability data is available, but this should be refined based on organizational risk tolerance. Another common mistake is neglecting to cache EPSS/KEV data to avoid rate limits and improve performance. Data can be cached for a day or two as exploitability doesn't change by the minute.
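A file-based cache with a TTL is usually enough to stay under rate limits. This sketch (the file name and 24-hour TTL are arbitrary choices; EPSS data is refreshed roughly daily) can wrap any of the fetch functions above:

```python
import json
import time
from pathlib import Path

CACHE_TTL_SECONDS = 24 * 60 * 60  # daily refresh; exploitability shifts slowly

def read_cache(path: Path, ttl: float = CACHE_TTL_SECONDS):
    """Return the cached JSON payload if the file exists and is fresh, else None."""
    if path.exists() and (time.time() - path.stat().st_mtime) < ttl:
        return json.loads(path.read_text())
    return None

def write_cache(path: Path, payload) -> None:
    """Persist a payload (e.g., EPSS scores or the KEV CVE list) to disk."""
    path.write_text(json.dumps(payload))
```

Typical usage: try `read_cache` first and only hit the live API (then `write_cache`) on a miss, so a full pipeline run performs at most one upstream fetch per day per feed.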


Production Readiness


Deploying a data-driven prioritization system requires planning for its operational lifecycle.


Monitoring and Alerting


Establish comprehensive monitoring for your vulnerability management pipeline. Track key metrics such as:


  • Average time to remediation for high-exploitability vulnerabilities.

  • Number of vulnerabilities with EPSS > 0.9 in critical systems.

  • Coverage of vulnerability data with exploitability intelligence (e.g., percentage of CVEs with EPSS scores).

Alerting should trigger for any new vulnerability that is both in the CISA KEV catalog and present on a production system with high business impact. Similarly, a very high EPSS score (e.g., > 0.95) on a critical asset should generate an immediate alert for the owning engineering team.
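Those alert conditions reduce to a small predicate. The field names (`is_kev`, `epss_score`, `asset_criticality`, `business_impact`) assume the enriched records produced by the prioritizer script above; adapt them to your own schema:

```python
def needs_immediate_alert(vuln: dict) -> bool:
    """Page the owning team when a finding is KEV-listed on a high-impact
    system, or carries a very high EPSS score on a critical asset."""
    kev_on_high_impact = bool(vuln.get("is_kev")) and vuln.get("business_impact") == "HIGH"
    hot_epss_on_critical = (vuln.get("epss_score") or 0.0) > 0.95 and vuln.get("asset_criticality") == "HIGH"
    return kev_on_high_impact or hot_epss_on_critical
```

Running this over every freshly enriched record, rather than only at scan time, catches the case where an existing finding becomes alert-worthy because it was newly added to KEV.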


Cost Considerations


The primary costs will involve:


  • API access: While EPSS and the CISA KEV catalog are free, commercial threat intelligence feeds can be expensive. Evaluate whether the added value justifies the cost.

  • Infrastructure: Hosting the aggregation and scoring service, potentially a database for cached exploitability data.

  • Engineering effort: Initial setup, integration with existing tools, and ongoing maintenance.


The upfront investment is typically offset by significant long-term savings from reduced wasted effort and improved security posture. Teams commonly report a 20-30% reduction in "false positive" remediation efforts by adopting exploitability-driven prioritization.


Security Implications


The integrity of your prioritization heavily relies on the security of your data sources:


  • API Key Management: Securely manage API keys for external services using secrets management solutions like HashiCorp Vault or AWS Secrets Manager.

  • Data Integrity: Verify the authenticity of threat intelligence feeds with checksums or cryptographic signatures where available, to prevent data tampering.

  • Least Privilege: Ensure your integration services operate with the minimal necessary permissions to access vulnerability data and external APIs.
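In practice, the first point usually means the prioritization service never sees a hard-coded key: it reads the key from an environment variable that the secrets manager populates at deploy time. A minimal sketch (the variable name `THREAT_INTEL_API_KEY` is a placeholder, not a real vendor convention):

```python
import os

def get_threat_intel_api_key() -> str:
    """Read the commercial feed's API key from the environment, where a
    secrets manager (e.g., Vault Agent or AWS Secrets Manager) injects it.
    Failing fast at startup beats discovering a missing key mid-pipeline."""
    key = os.environ.get("THREAT_INTEL_API_KEY", "")
    if not key:
        raise RuntimeError("THREAT_INTEL_API_KEY is not set; refusing to start")
    return key
```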


Edge Cases and Failure Modes


  • Zero-Day Vulnerabilities: Newly disclosed zero-days might not have an EPSS score immediately or appear in the KEV catalog. Your framework must allow for manual override and immediate high-priority tagging for these situations.

  • Missing CVEs: Some vulnerabilities reported by internal scanners might not have a public CVE ID, making external exploitability data integration challenging. Develop internal risk ratings for such cases or integrate with private threat intelligence.

  • Stale Data: EPSS scores and KEV entries can change. Ensure your system regularly refreshes this data (e.g., daily or hourly) to maintain accuracy.

  • Rate Limiting: External APIs often have rate limits. Implement robust retry mechanisms with exponential backoff and consider caching strategies to avoid hitting these limits.
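A small retry helper along these lines (the parameter defaults are illustrative, not tuned to any particular API's limits) covers the rate-limiting case:

```python
import random
import time

def fetch_with_backoff(fetch, max_attempts: int = 5, base_delay: float = 1.0, jitter: float = 0.5):
    """Call `fetch()` (any zero-argument callable, e.g. a wrapped EPSS or KEV
    request), retrying failures with exponential backoff plus random jitter."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # retries exhausted; surface the last error
            # Sleep 1s, 2s, 4s, ... (scaled by base_delay) plus jitter,
            # so many workers don't retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, jitter))
```

Wrapping each external call, e.g. `fetch_with_backoff(lambda: get_epss_score(cve))`, keeps the retry policy in one place instead of scattered through the pipeline.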


Summary & Key Takeaways


Prioritizing AppSec fixes effectively is not about addressing every reported vulnerability, but about strategically tackling those that pose the most significant risk to your organization. By moving beyond static CVSS scores and embracing dynamic exploitability data, you transform vulnerability management from a reactive, overwhelming task into a proactive, data-driven security advantage.


  • Shift from CVSS-centric to Exploitability-driven: Leverage EPSS, CISA KEV, and commercial threat intelligence to assess real-world likelihood of exploitation.

  • Build a Custom Risk Framework: Combine exploitability intelligence with your unique business impact and system criticality to generate meaningful risk scores.

  • Automate Data Integration: Implement tools and scripts to automatically pull, process, and merge data from various sources into a unified prioritization engine.

  • Focus Remediation Efforts: Direct your engineering teams to address vulnerabilities with high exploitability and high impact first, maximizing security posture improvement with limited resources.

  • Continuously Adapt: The threat landscape is dynamic. Regularly review and refine your prioritization logic and data sources to remain effective.

WRITTEN BY

Zeynep Aydın

Application security engineer and bug bounty hunter. MSc in Cybersecurity, METU. Lead writer for OAuth, JWT and OWASP-focused security content.
