Do We Need Yet Another Vulnerability Scoring System? For SSVC, That's a YASS

Written by Ben Edwards
Principal Research Scientist

Examining Yet Another Scoring System

The security world is awash in acronyms. As a niche within it, vulnerability tracking, measurement, and management is no stranger to inscrutable collections of capital letters. We’ve got NVD, CPE, CWE, CVSS, EPSS, CAPEC, KEV, and of course “CVE”. The common goal of all these frameworks is to help folks organize information around vulnerabilities and assess how their presence might increase an organization’s exposure.

Of these, CVSS is the OG for addressing vulnerability severity, attempting to assess a vulnerability on a variety of different metrics. It’s cropped up in our research before. But recently, CVSS has been criticized for not reflecting real-world risk. Several years ago this criticism led to the creation of the Exploit Prediction Scoring System (EPSS), which measures how likely a vulnerability is to have detectable exploitation attempts in the next 30 days. EPSS has updated its model three times since its inception in 2020, and updates scores daily to reflect the changing exploitation landscape.
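As a quick aside, current EPSS scores can be pulled from FIRST’s public API. Here’s a minimal sketch in Python, with endpoint and field names per FIRST’s documented API:

```python
# Fetch the current EPSS score for a single CVE from FIRST's public API.
import json
import urllib.request

url = "https://api.first.org/data/v1/epss?cve=CVE-2021-44228"
with urllib.request.urlopen(url) as resp:
    record = json.load(resp)["data"][0]

# "epss" is the estimated probability of observed exploitation activity
# in the next 30 days; "percentile" is its rank among all scored CVEs.
print(record["epss"], record["percentile"])
```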

In parallel, another scoring system was developed: Stakeholder-Specific Vulnerability Categorization (SSVC). Developed by the folks at Carnegie Mellon’s Software Engineering Institute, SSVC is not so much a score as a decision-making framework. Whereas CVSS takes 8 metrics related to a vulnerability’s severity and compresses them into a numeric score, and EPSS takes many more features and computes a probability of exploitation, SSVC attempts to guide its users to a decision about what to do with a vulnerability. It has been adopted by CISA’s Vulnrichment program, and is now part of the official MITRE data feed in CISA’s capacity as an Authorized Data Publisher (ADP).

It consists of two parts: decision points and outcomes. Decision points are assessments of the vulnerability along a variety of different features. For example, DHS’s Vulnrichment program provides information on three decision points: exploitation, automatability, and technical impact. The official documentation lists 14[1] potential decision points, each taking on various values; a tree built from all of them would have 331,776 possible combinations. Obviously, it’s unlikely that anyone would assess a vulnerability on all decision points, but this is a feature, not a bug. The decision points give those employing the framework the flexibility to choose their own adventure.

Once an organization has decided which of the myriad decision points they would like to use, they combine them into a decision tree. The leaves of this tree then need to be assigned outcomes: what you do after a vulnerability has been assessed at all the decision points. Currently there are four outcomes, and I’m going to just directly quote CISA (a small code sketch of how the pieces fit together follows the list):

  • Track The vulnerability does not require action at this time. The organization would continue to track the vulnerability and reassess it if new information becomes available. CISA recommends remediating Track vulnerabilities within standard update timelines.
     
  • Monitor (Track*)[2] The vulnerability contains specific characteristics that may require closer monitoring for changes. CISA recommends remediating Track* vulnerabilities within standard update timelines.
     
  • Attend The vulnerability requires attention from the organization's internal, supervisory-level individuals. Necessary actions include requesting assistance or information about the vulnerability, and may involve publishing a notification either internally and/or externally. CISA recommends remediating Attend vulnerabilities sooner than standard update timelines.
     
  • Act The vulnerability requires attention from the organization's internal, supervisory-level and leadership-level individuals. Necessary actions include requesting assistance or information about the vulnerability, as well as publishing a notification either internally and/or externally. Typically, internal groups would meet to determine the overall response and then execute agreed upon actions. CISA recommends remediating Act vulnerabilities as soon as possible.
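To make the tree mechanics concrete, here’s a minimal Python sketch combining CISA’s three published decision points with “mission”. The outcome logic is purely illustrative; it is not CISA’s official tree, so treat it as a scaffold rather than guidance.

```python
# A minimal sketch of an SSVC-style decision tree. The outcome logic here
# is ILLUSTRATIVE ONLY; it is not CISA's official mapping.
from itertools import product

EXPLOITATION = ["none", "poc", "active"]
AUTOMATABLE = ["no", "yes"]
TECHNICAL_IMPACT = ["partial", "total"]
MISSION = ["low", "medium", "high"]

def outcome(exploitation, automatable, technical_impact, mission):
    """Map one leaf of the tree to an outcome (illustrative logic)."""
    if exploitation == "active" and mission == "high":
        return "Act"
    if exploitation == "active" or (automatable == "yes"
                                    and technical_impact == "total"):
        return "Attend"
    if exploitation == "poc":
        return "Monitor (Track*)"
    return "Track"

# Every leaf of this small tree: 3 * 2 * 2 * 3 = 36 combinations.
for leaf in product(EXPLOITATION, AUTOMATABLE, TECHNICAL_IMPACT, MISSION):
    print(leaf, "->", outcome(*leaf))
```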

If this is all seeming a bit daunting, rest assured, I think so as well. In some ways, making SSVC flexible enough to fit every organization makes it somewhat unwieldy. In my experience, organizations often just want someone to “tell them which vulnerabilities they should fix.” This is likely part of the reason that, despite being presented in 2020, SSVC hasn’t seen wide adoption. CISA’s Vulnrichment seeks to change that.

SSVC Vulnrichment

Since it was announced in May of this year, the Vulnrichment program has attempted to fill the gap left by NVD and provide additional information about CVEs to those interested. In particular, Vulnrichment provides assessments for three SSVC decision points.

Being a data nerd, I wanted to dive in, see what this data looks like, and try to provide some guidance on using SSVC. As always, it helps to start with counting: every vulnerability in the Vulnrichment ADP has an SSVC assessment.
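That counting can be reproduced against a local clone of the cisagov/vulnrichment repository. The sketch below follows the JSON layout I’ve observed in the repo’s CVE records; treat the exact paths as assumptions rather than a stable API.

```python
# Count CVEs in a local clone of github.com/cisagov/vulnrichment and pull
# out their SSVC decision points. JSON paths per the record layout I've
# observed; verify against the repo before relying on them.
import json
from pathlib import Path

REPO = Path("vulnrichment")  # local clone of the repository

def ssvc_options(record):
    """Return the {decision point: value} dict from a CVE record, if any."""
    for adp in record.get("containers", {}).get("adp", []):
        for metric in adp.get("metrics", []):
            other = metric.get("other", {})
            if other.get("type") == "ssvc":
                # options look like [{"Exploitation": "none"}, ...]
                opts = other.get("content", {}).get("options", [])
                return {k: v for opt in opts for k, v in opt.items()}
    return None

assessed = 0
for path in REPO.glob("20*/**/CVE-*.json"):
    if ssvc_options(json.loads(path.read_text())) is not None:
        assessed += 1

print(f"{assessed} CVEs carry an SSVC assessment")
```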

Growth of CISA's Vulnrichment initiative.
Figure 1 Size of Vulnrichment since the repository was first made public.

There are now over 20k vulnriched CVEs, providing a pretty darn good sample for examining Vulnrichment data. DHS does seem to be focused on newer vulnerabilities: the Vulnrichment CVEs have a median age of about 3 months, compared to a median age of approximately 3 years for the overall corpus of vulnerabilities.

Histogram comparing the age of CVEs in MITRE's database vs. CISA's vulnrichment. CISA's vulnerabilities are considerably younger.
Figure 2 DHS Vulnrichment CVEs tend to be on the younger side.

As stated, only three decision points are provided in the Vulnrichment data, with a fourth, “mission”, left out of the public feed. This last decision point asks how critical the asset on which the vulnerability is located is to the operation of the organization. Since we are not privy to that information, we will take a risk-averse approach and assume that all assets are mission critical, the worst-case scenario.

We can use the tree paradigm to see how vulnerabilities break down across decision points under this worst-case assumption. Here we see some of the strength of SSVC in sharpening organizations’ focus toward the vulnerabilities that present actual risk. In particular, a small percentage (0.52%) would be classified as Act, and a larger but manageable 21.9% as Attend.

Sankey Diagram of SSVC scores.
Figure 4 Breakdown of SSVC values in the DHS Vulnrichment feed.

It’s worth noting that no particular category has grown out of proportion to another over the course of the program.

Evolution of various SSVC values.
Figure 5 The evolution of these values has been pretty consistent.

Guessing what’s under the hood

But how exactly is each of these decisions made? It differs by decision point, but currently the answer seems to be CVSS and CISA’s KEV Catalog. For technical impact, the vast majority of vulnerabilities with “total” impact have “high” CVSS impact on Confidentiality, Integrity, and Availability (88%), though it’s not universally true.

Breakdown of Total Technical Impact by CVSS impact
Figure 6 CVEs with Technical Impact “Total” overwhelmingly have CVSS Confidentiality, Integrity, and Availability values that are all “High”.

The same can be said for assessing “automatable”: 95% have a Network Access Vector, no privileges required, and no user interaction required.

Whether a CVE is automatable by CVSS metrics.
Figure 7 “Automatable” CVEs almost always have a CVSS Network Access Vector, with no privileges required and no user interaction required.

Now we can move on to the clearest risk signal: “exploitation”. SSVC breaks it down into three categories: “none”, “proof of concept”, and “active exploitation”. The obvious source of exploitation evidence for SSVC is CISA’s KEV: 97% of vulnerabilities scored as “active exploitation” are on the KEV (interestingly, 4 are marked exploited but are not on the KEV).

Proof of concept is a little tougher. When a CVE is published, it includes references, and those references are tagged with specific categories; if a reference refers to exploit code, it’s tagged “exploit”. My initial feeling was that this was where DHS might be obtaining proof-of-concept information, but that doesn’t appear to be the case: only 55% of CVEs marked proof of concept have a tag in MITRE indicating some exploit code is available. It’s likely they are pulling in extra information.
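For reference, checking a CVE record for an “exploit”-tagged reference looks roughly like this, assuming the CVE JSON 5.x record format (the file name below is hypothetical):

```python
# Does a CVE record (CVE JSON 5.x format) carry any reference tagged
# "exploit"? The path below follows the published record schema.
import json

def has_exploit_tag(record):
    refs = record.get("containers", {}).get("cna", {}).get("references", [])
    return any("exploit" in ref.get("tags", []) for ref in refs)

with open("CVE-2024-12345.json") as f:  # hypothetical local record
    print(has_exploit_tag(json.load(f)))
```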

What does this data diversion really mean? Mostly that you can likely assess vulnerabilities on these SSVC decision points in an automated way. For technical impact and automatability, it’s not unreasonable to take the corresponding substrings of the CVSS vector and use those to categorize the decision points. Similarly, if a vulnerability is on CISA’s KEV (or another KEV), you can mark it as exploited. If it’s not, but it is tagged as having an exploit in MITRE, you can mark it as proof of concept. The only decision point left to your organization is asset importance.
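Here’s a sketch of that whole shortcut: derive technical impact and automatability from a CVSS v3.1 vector string, and exploitation from the KEV catalog plus the exploit-tag check above. The thresholds simply mirror the patterns observed in the data; they approximate, rather than reproduce, whatever logic CISA actually uses.

```python
# Approximate the three Vulnrichment SSVC decision points from public data.
# Heuristics only; they mirror observed patterns, not CISA's actual process.
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def kev_cves():
    """Fetch the set of CVE IDs on CISA's KEV catalog."""
    with urllib.request.urlopen(KEV_URL) as resp:
        return {v["cveID"] for v in json.load(resp)["vulnerabilities"]}

def parse_vector(vector):
    """Turn 'CVSS:3.1/AV:N/AC:L/...' into {'AV': 'N', 'AC': 'L', ...}."""
    return dict(part.split(":") for part in vector.split("/")[1:])

def technical_impact(vector):
    m = parse_vector(vector)
    return "total" if all(m.get(k) == "H" for k in "CIA") else "partial"

def automatable(vector):
    m = parse_vector(vector)
    return "yes" if all(m.get(k) == "N" for k in ("AV", "PR", "UI")) else "no"

def exploitation(cve_id, kev, exploit_tagged):
    if cve_id in kev:
        return "active"
    return "poc" if exploit_tagged else "none"

kev = kev_cves()
vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H"  # Log4Shell's vector
print(technical_impact(vector), automatable(vector),
      exploitation("CVE-2021-44228", kev, exploit_tagged=True))
```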

Can SSVC reduce risk?

OK, given that we can pull ourselves up by our vulnerability management bootstraps and don’t have to rely on any government data handouts, the question remains: could SSVC help reduce risk from vulnerabilities? Let’s do a little experiment and see if we can get in the ballpark of an answer.

First, let’s define risk a little. There are whole conferences and frameworks dedicated to assessing cybersecurity risk, and they have examined the concept from more than a few points of view. But I am going to look at just two facets of risk for a single CVE:

  • The likelihood that it will be exploited
  • The prevalence of the vulnerability

The first tells us how likely it is that a vulnerability will be leveraged in an attack, and the second tells us how widespread a campaign targeting that vulnerability might be. For the first, we’ll use the aforementioned EPSS. For prevalence, we’ll take each of the vulnerabilities Bitsight scans for and measure the number of new detections of that vulnerability per day over the last year. Next, we bucket each CVE by its “official” SSVC score or, lacking that, one calculated via the method above from CVSS, the CISA KEV Catalog, and the tags included in the references. We then take the worst-case scenario with respect to asset importance to get our final SSVC outcome (Track, Track*, Attend, Act). What we get is below:

CVE Concentration by EPSS and SSVC Outcome
Figure 8 SSVC outcomes based on some real-life risk measures.
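For the shape of that computation: the grouping below is a pandas sketch with placeholder rows, since Bitsight’s detection telemetry isn’t public; the real version joins EPSS scores and per-day detection counts onto the computed SSVC outcomes.

```python
# Sketch of the experiment: compare median EPSS and detection rate per SSVC
# outcome. The rows here are placeholders, not real Bitsight telemetry.
import pandas as pd

df = pd.DataFrame({
    "cve": ["CVE-2021-44228", "CVE-2023-0001", "CVE-2022-0002"],
    "epss": [0.97, 0.02, 0.10],                  # probability of exploitation
    "detections_per_day": [3.8, 0.01, 0.4],      # new sightings per day
    "ssvc_outcome": ["Act", "Track", "Attend"],  # worst-case mission assumed
})

print(df.groupby("ssvc_outcome")[["epss", "detections_per_day"]].median())
```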

We’ve annotated the median values here. What’s interesting is that “Act” vulnerabilities are the riskiest, having both a near certainty of exploitation and the widest footprint, with Bitsight finding nearly 4 new instances per day. Everything else is orders of magnitude less risky in both dimensions. This indicates that SSVC is perhaps good, at least along these dimensions, at identifying the stuff you should be really worried about, but not so great at differentiating everything else. Sure, “Attend” vulnerabilities are a little more likely to be exploited and are found about 3 times a week, but this isn’t much more than Track and Track*.

But of course this isn’t the purpose of SSVC. We’ve been using the acronym for a while now, but it’s worth remembering that the “SS” stands for “Stakeholder-Specific”, meaning that your organization should think about which decision points to use, and not necessarily rely on my ad hoc analysis.

Conclusion

Do we need YASS? I think so. Assessing vulnerabilities and navigating the vast swaths of associated data can be daunting when we just want to know “how do I best protect my organization?” It’s worth noting that SSVC is not really a scoring system, but a set of helpful guideposts for arranging that information in a way that’s useful for stakeholders. I’ve examined data that has been made available and is plug-and-play ready for SSVC, but in many ways I took a shortcut.

I am going to conclude by advocating not necessarily for SSVC in particular, but for an SSVC approach to vulnerability management. First, figure out what’s important to your organization and which aspects of risk you are most concerned about. Then start collecting data to assess those risks. Based on that data, you should have a clearer vision of which vulnerabilities are at the forefront of your risk metrics. Prioritize the top, and work like the dickens to reduce risk. Not to drop into sales mode, but Bitsight can help with this.

[1] In the official documentation. The schema documented on GitHub defines 17 potential decision points.
[2] The official documentation calls this “Track*”, which is confusing. Discussions with the folks who developed SSVC suggested “Monitor” might be a better name, so we’ll use that here.