The Good, Bad and Ugly of CVSS Scores

Common Vulnerabilities and Exposures (CVE) is a catalog of publicly disclosed vulnerabilities that has become one of the best-known terms among cybersecurity practitioners and stakeholders. CVEs, and the associated Common Vulnerability Scoring System (CVSS), are easy to understand, look informative, and appear to be a primary source for assessing an organization's cybersecurity risk. However, CVEs are limited in their ability to describe vulnerabilities and accurately estimate cyber risk. In today's environment of growing cyber risks that translate into substantial financial costs, risk stakeholders in the control room and the board room alike need to look beyond CVE-based calculations to achieve true cyber resilience.

Intent versus Outcome

The CVE program was launched publicly by MITRE in September 1999 with the goal of enumerating officially known vulnerabilities and evaluating them in a structured way. Today, it has undoubtedly become a major standard for tracking vulnerabilities. However, the current approach to assigning CVEs is inconsistent and contains many inaccuracies. For example, many device and software vendors try to lower CVSS scores, the metric of a vulnerability's severity. They often do not include the full scope of their affected products, or choose not to assign CVEs at all. This problem affects ICS and OT organizations because many of the DCS and industrial vendors they rely upon choose to share vulnerabilities with them only on a one-on-one basis, share only partial details of a vulnerability, or both. This common practice reduces knowledge sharing and can harm an organization's ability to build proper mitigation plans and a scalable IT-OT cybersecurity strategy.

A notable example of this phenomenon, where vulnerabilities were likely underestimated and improperly shared and managed across an organization, is the WannaCry attack on Honda in 2017, which shut down production on a massive scale. A solid understanding of production assets, up-to-date information about the vulnerabilities WannaCry was exploiting, and knowledge of how the malware spread within the network could have helped build a mitigation plan before the infection.


CVE Scoring Limitations in Practice – A False Sense of Security

Let’s take a look at a particular example: a vulnerability in the Hitachi ABB Power Grids AFS Series (CVE-2020-9307) causes a denial-of-service condition on one of the ports in an HSR ring. With a simple script, the vulnerability can be exploited continuously so that the device becomes completely unavailable. The detailed vector for this vulnerability is AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H, which means: an attacker on an adjacent network (AV:A), needing low attack complexity (AC:L), no privileges (PR:N), and no user interaction (UI:N), can cause a high impact on availability (A:H) with no impact on confidentiality (C:N) or integrity (I:N), and without a scope change (S:U).

The CVSS score for this vulnerability is 6.5, rated “Medium.” However, we should keep in mind the context in which the vulnerable device or software is used: for example, its most common areas of use (in this case, the energy sector), or its centrality and relevance in the network and in the underlying industrial process. Depending on that context, this specific vulnerability should be treated as one with a higher score or higher risk if it affects a central asset, or as one with a lower score if it affects a non-critical asset at the edge of the network. Which one is the right one?
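As a sanity check, that 6.5 can be recomputed from the published CVSS v3.1 base-score equations. The sketch below uses the metric weights and Roundup function from the FIRST CVSS v3.1 specification; it is an illustration limited to Scope: Unchanged vectors, not a full CVSS implementation:

```python
import math

# CVSS v3.1 base-metric weights (Scope: Unchanged only)
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # PR values differ when scope changes
    "UI": {"N": 0.85, "R": 0.62},
    "C":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "I":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "A":  {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(value):
    """CVSS v3.1 Roundup: ceiling to one decimal place, via integer math."""
    int_input = round(value * 100000)
    if int_input % 10000 == 0:
        return int_input / 100000.0
    return (math.floor(int_input / 10000) + 1) / 10.0

def base_score(vector):
    """Base score for a v3.1 vector; scope-changed branch deliberately omitted."""
    m = dict(part.split(":") for part in vector.split("/"))
    assert m["S"] == "U", "this sketch only handles Scope: Unchanged"
    iss = 1 - (1 - WEIGHTS["C"][m["C"]]) \
            * (1 - WEIGHTS["I"][m["I"]]) \
            * (1 - WEIGHTS["A"][m["A"]])
    impact = 6.42 * iss
    exploitability = 8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]] \
                          * WEIGHTS["PR"][m["PR"]] * WEIGHTS["UI"][m["UI"]]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

print(base_score("AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H"))  # 6.5
```

Note that nothing in these equations knows whether the asset is a safety-critical controller or a lab device: the 6.5 is the same either way.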

Another problem is the CVE’s severity level, the CVSS score itself. The score is assigned by the vendor, and there are multiple examples where it is not properly calculated. Only after reviewing an exploit, a proof of concept, or other evidence of exploitation is it possible to verify a CVE’s severity.

The source of the data also causes inconsistency. Consider CVE-2020-7802, an Incorrect Default Permissions vulnerability in the Synergy Systems & Solutions (SSS) HUSKY RTU 6049-E70. The advisory published on the CISA website gives a CVSS score of 9.3 (critical), while the NIST National Vulnerability Database (NVD) gives a score of 5.3, with a different attack vector string.


The difference lies in the Scope and Integrity metrics: CISA believes the attack might cause a scope change (for example, escaping an isolated environment such as a sandbox or VM) and that it compromises data integrity. The vendor’s advisory contains two CVE descriptions without an impact breakdown or any CVSS score:

  1. An information exposure: the intentional or unintentional disclosure of information to an
    actor that is not explicitly authorized to have access to that information. An attacker can
    read sensitive information over the SNMP protocol.
  2. Incorrect permissions, which could allow network configuration changes on the device
    through SNMP communication.

What is Actionable for Risk Leaders?

So, back to the million-dollar question: what is more relevant for calculating risk? Is the right CVSS score 5.3 (medium) for a central asset, or 9.3 (critical) for an asset at the edge? We cannot verify that until we see the results of exploiting the vulnerability and understand the real impact.

There are discussions in the industry about creating an ICS-specific CVSS scoring system to better reflect cyber-physical impact in industrial environments. One of the best illustrations of this challenge was presented at the S4 conference in 2019, viewable here:


The problem of vulnerability handling is a big one: for some vendors it takes years to patch a vulnerability (because of a lack of software development flexibility, and an SDL without the first “S”).

Even then, the vendor may not include the full list of affected solutions. In the case of OEM technologies it gets even worse (hello ISaGRAF, CodeSys, VxWorks, Java, and other embedded technologies). And after that, the asset owner still needs to implement the patch. Another few years?

As for measuring cyber risk with CVEs, some firms offer services that claim to identify your most vulnerable assets by counting how many CVEs each asset has and looking at their CVSS scores. However, when CVE information, contextual details, CVSS scores, and other important inputs are inconsistent, the resulting cyber risk estimate can vary dramatically. Risk managers and leaders cannot operate on that basis.
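A toy comparison shows why CVE counting misleads. The asset names, CVE scores, and criticality weights below are invented purely for illustration; the point is that a count-based ranking and a context-aware ranking can put completely different assets at the top:

```python
# Hypothetical inventory: each asset's published CVSS scores plus a
# process-criticality weight (0..1) reflecting its role in operations.
assets = {
    "engineering-laptop": {"cvss": [9.8, 7.5, 7.5, 6.1, 5.3], "criticality": 0.2},
    "historian-server":   {"cvss": [6.5, 5.3],                "criticality": 0.6},
    "safety-plc":         {"cvss": [6.5],                     "criticality": 1.0},
}

# Naive ranking: the asset with the most CVEs comes first.
by_count = sorted(assets, key=lambda a: len(assets[a]["cvss"]), reverse=True)

# Context-weighted ranking: worst score scaled by process criticality.
by_risk = sorted(
    assets,
    key=lambda a: max(assets[a]["cvss"]) * assets[a]["criticality"],
    reverse=True,
)

print(by_count[0])  # engineering-laptop (5 CVEs, but a peripheral asset)
print(by_risk[0])   # safety-plc (one "Medium" CVE on a critical asset)
```

Both rankings use exactly the same CVE data; only the second one asks what the asset does for the process.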

What Else is Needed for Risk Leaders?

In the end, we cannot rely on CVEs alone to measure and quantify cyber risk. CVEs do not allow risk stakeholders to properly understand a vulnerability’s impact, nor do they provide the information needed to create a proper mitigation plan, operationally and financially speaking. They are one data point for cybersecurity and risk stakeholders – nothing more.

Beyond understanding the impact of a risk event, the set of stakeholders within organizations has expanded from the control room into the board room – and even to shareholders. As cyber threats grow more sophisticated, so does their impact on an organization’s bottom line. Managing them must become more comprehensive and cohesive, incorporating potential risk indicators beyond the scope of what a CVSS score can support.

The DeNexus Approach

At DeNexus, we use CVEs and their CVSS scores to measure the criticality of attack scenarios for each particular technological process. Adding the contextual information needed to understand the consequences and impact of attack scenarios on a specific technological process, and combining it with information about the level of implementation of security policies and controls, helps hedge the inherent uncertainty in CVSS scores.
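One way to picture that combination is sketched below with invented numbers. This is an illustration of the general idea only, not DeNexus's actual model: the published CVSS score is treated as a starting point, process context scales it, and implemented controls discount it.

```python
def contextual_risk(cvss_base, process_impact, control_coverage):
    """
    Illustrative blend, not a standard formula:
      cvss_base        - published CVSS base score, 0..10
      process_impact   - 0..1, consequence of losing this asset for the process
      control_coverage - 0..1, degree to which mitigating controls are in place
    """
    return round(cvss_base * process_impact * (1 - control_coverage), 2)

# The same "Medium" 6.5 vulnerability yields two very different risk pictures:
central = contextual_risk(6.5, process_impact=1.0, control_coverage=0.1)  # 5.85
edge    = contextual_risk(6.5, process_impact=0.2, control_coverage=0.7)  # 0.39
```

The exact weighting scheme matters less than the principle: two assets sharing one CVE should rarely share one risk number.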