DeNexus Blog - Industrial Cyber Risk Quantification

DeRISK QVM - A paradigm change in Risk-based Vulnerability Management for OT networks

Written by Jose M Seara | Apr 11, 2025 11:44:56 AM

There are thousands of vulnerabilities in the wild, thousands more discovered every month, and the growth is exponential. 

In an IT environment, hundreds or thousands of them are patched monthly as well, and the industry tracks KPIs such as Time to Detect, Mean Time to Mitigate, and Vulnerability Re-Open Rate to measure vulnerability management performance. Most of those KPIs are not relevant in OT networks, where patching a new vulnerability may not even be an option because the device is end-of-life or no longer supported, or because implementing a compensating control would require disrupting a highly critical industrial process. The result is OT networks with thousands of vulnerabilities to manage.

To help with vulnerability management and prioritization, the industry has developed scores that classify vulnerabilities by how easily they can be exploited and how critical they are. CVSS (Common Vulnerability Scoring System) provides a 0-10 score based on a vulnerability's technical characteristics to indicate its severity. EPSS (Exploit Prediction Scoring System) uses a model to predict the likelihood of a vulnerability being exploited in the wild, providing a dynamic score between 0 and 1. KEV (Known Exploited Vulnerabilities) is a catalog, maintained by the US Cybersecurity and Infrastructure Security Agency (CISA), of vulnerabilities known to be exploited in the wild, which US federal civilian agencies are required to remediate. Combining CVSS, EPSS, and KEV has become a “standard” way to classify vulnerabilities for prioritization, and many vendors offer proprietary scores built on that foundation. If a vulnerability in my network is critical and exploitable, it must be remediated.

The problem is that CVSS and EPSS still do not tell the entire story. What about the presence of compensating controls? What about the role of the affected device in the network? In OT systems, what about the criticality (or lack thereof) of the underlying industrial process? Or the impact of that industrial process on my business? Many of these questions can be answered by what amounts to tribal knowledge in your company: your OT network, OT cybersecurity, and operations teams know which devices matter most for the most critical industrial processes. But that is far from efficient, does not scale, is risky and prone to mistakes, and the information does not flow as it should. Decision makers may not even have access to that tribal knowledge, and many other circumstances argue for a more systematic approach.
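As an illustration, here is a minimal sketch in Python of what a score-based triage looks like when CVSS, EPSS, and KEV are combined; the weighting scheme and every score value below are hypothetical, not any vendor's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float   # CVSS base score, 0.0-10.0 (severity)
    epss: float   # EPSS score, 0.0-1.0 (probability of exploitation)
    in_kev: bool  # listed in CISA's KEV catalog?

def score_priority(v: Vulnerability) -> float:
    """Toy composite: scale severity by exploitation probability,
    and push anything on the KEV list straight to the top."""
    if v.in_kev:
        return 1.0
    return (v.cvss / 10.0) * v.epss

# Illustrative values only, not real CVSS/EPSS/KEV data
backlog = [
    Vulnerability("CVE-2019-12255", cvss=9.8, epss=0.90, in_kev=False),
    Vulnerability("CVE-2012-0221", cvss=4.0, epss=0.05, in_kev=False),
    Vulnerability("CVE-2014-3566", cvss=3.4, epss=0.10, in_kev=True),
]
for v in sorted(backlog, key=score_priority, reverse=True):
    print(f"{v.cve_id}: priority {score_priority(v):.3f}")
```

Note that nothing in this scheme knows anything about the device the CVE sits on or the process behind it; that gap is the subject of the rest of this post.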

So, how can you do that? How can you decide which vulnerabilities to address first among the hundreds or thousands in your OT network? The answer is Risk-based Vulnerability Management. The challenge is how to measure risk, or even how to agree on what risk truly means. Let me illustrate that with an example.

Most in the industry will consider some combination of CVSS and EPSS, with a proprietary algorithm behind it, a valid approach to Risk-based Vulnerability Management, when it should really be called Score-based Vulnerability Management. The graph below shows a sample of 10 different vulnerabilities found in the OT network of one of DeNexus' customers. The size of each bubble represents the number of impacted devices, ranging from 1 to 149 in this case:

[Figure: EPSS exploitation probability vs. CVSS severity for the 10 sample CVEs; bubble size indicates the number of affected devices.]

In a traditional approach to Risk-based (read: Score-based) Vulnerability Management, the cybersecurity team should first address CVE-2019-12255 and CVE-2019-12257: they sit in the top right, meaning highest exploitation probability and highest severity. A no-brainer, you may think.
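For readers who want to build this kind of view themselves, here is a minimal matplotlib sketch; the coordinates and device counts below are illustrative placeholders, not the customer's actual data:

```python
import matplotlib.pyplot as plt

# Illustrative placeholder data, not the customer's actual scores:
# (CVE, CVSS severity, EPSS probability, affected device count)
sample = [
    ("CVE-2019-12255", 9.8, 0.90, 40),
    ("CVE-2019-12257", 9.8, 0.85, 25),
    ("CVE-2012-0221", 4.0, 0.05, 149),
    ("CVE-2012-6441", 3.9, 0.04, 80),
    ("CVE-2014-3566", 3.4, 0.10, 60),
]

fig, ax = plt.subplots()
for cve, cvss, epss, devices in sample:
    ax.scatter(cvss, epss, s=devices * 5, alpha=0.5)  # bubble area ~ device count
    ax.annotate(cve, (cvss, epss), fontsize=8)
ax.set_xlabel("CVSS severity (0-10)")
ax.set_ylabel("EPSS exploitation probability (0-1)")
ax.set_title("Score-based view of an OT vulnerability backlog")
plt.show()
```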

But what happens if you add a financial risk dimension on top of the score dimensions? What is financial risk to begin with? Financial risk is the possibility of losing money or experiencing a financial loss, in our case caused by the presence of a vulnerability in our environment. For senior management, for the decision maker, that is what really matters, not how likely or how severe the exploitation of a given vulnerability would be. Let's look at the same graph with that financial risk dimension added as color, where green means less risk and red means more risk:

[Figure: the same bubble chart, with each bubble now colored by quantified financial risk, from green (low) to red (high).]

Suddenly, the situation is totally different. Instead of CVE-2019-12255 and CVE-2019-12257, the cybersecurity team should focus on CVE-2012-0221, CVE-2012-6441, and CVE-2014-3566, even though their severity scores are 4.0 or below.
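One way to see why the ranking flips is to frame the financial risk of a vulnerability as an expected loss: the likelihood of exploitation, times the chance the affected industrial process is disrupted, times what that disruption would cost the business. A minimal sketch of that framing, with every probability and dollar amount purely hypothetical:

```python
def expected_loss(p_exploit: float, p_disruption: float, impact_usd: float) -> float:
    """Expected loss from one vulnerability: likelihood of exploitation,
    times the chance the affected industrial process is disrupted,
    times the financial impact of that disruption."""
    return p_exploit * p_disruption * impact_usd

# A low-severity CVE on a device driving a critical process...
print(expected_loss(0.05, 0.60, 20_000_000))  # 600000.0 USD
# ...can dwarf a critical CVE on a well-segmented, non-critical device.
print(expected_loss(0.90, 0.02, 1_000_000))   # 18000.0 USD
```

Under assumptions like these, a vulnerability with a severity score of 4.0 can carry more than thirty times the financial risk of one scoring 9.8.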

Clearly, a true Risk-based Vulnerability Management approach is superior to a Score-based Vulnerability Management approach. But it requires being able to measure the financial risk associated with each vulnerability. Is that even possible? Now the answer is YES!
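In code terms, the change is small once each vulnerability carries a dollar figure: sort the backlog by quantified risk instead of by score. A toy sketch, with scores and dollar figures invented for illustration:

```python
# Hypothetical backlog: (CVE, composite score, quantified financial risk in USD).
backlog = [
    ("CVE-2019-12255", 0.90, 18_000),
    ("CVE-2019-12257", 0.85, 25_000),
    ("CVE-2012-0221", 0.05, 600_000),
    ("CVE-2012-6441", 0.04, 450_000),
    ("CVE-2014-3566", 0.10, 310_000),
]

by_score = [cve for cve, _, _ in sorted(backlog, key=lambda v: v[1], reverse=True)]
by_risk = [cve for cve, _, _ in sorted(backlog, key=lambda v: v[2], reverse=True)]

print("Score-based order:", by_score)  # the 2019 CVEs come first
print("Risk-based order: ", by_risk)   # the low-score, high-dollar CVEs come first
```

The hard part, of course, is not the sort; it is producing a defensible dollar figure for each vulnerability in the first place.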

A few weeks ago, DeNexus released our new DeRISK Quantified Vulnerability Management (QVM) product. DeRISK QVM comes after years of development and hundreds of deployments of our flagship Cyber Risk Quantification platform, DeRISK CRQ. DeNexus also completed the development and training of a new AI agent that automatically and reliably maps newly discovered vulnerabilities to the MITRE ATT&CK framework. DeRISK QVM reconciles that external vulnerability data with the vulnerabilities found inside our clients' OT networks, thanks to integrations with OT telemetry vendors that bring in asset and vulnerability inventory data, firewall data, and more. It then leverages DeRISK CRQ to calculate the financial risk associated with each vulnerability, enabling a paradigm change in Risk-based Vulnerability Management for OT networks.
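Conceptually, the reconciliation step looks something like the sketch below; the data, ATT&CK mappings, and function names are illustrative stand-ins, not DeRISK QVM's actual interfaces:

```python
# External intelligence: newly published CVEs mapped to MITRE ATT&CK for ICS
# techniques (mappings invented here for illustration).
external_feed = {
    "CVE-2019-12255": ["T0866"],
    "CVE-2012-0221": ["T0814"],
}
# CVEs found on assets via OT telemetry integrations (asset inventory, firewalls, ...)
inventory_cves = {"CVE-2012-0221"}

def quantify_financial_risk(cve: str) -> float:
    """Stand-in for the quantification engine: dollars at risk per CVE."""
    return {"CVE-2012-0221": 600_000.0}.get(cve, 0.0)

# Keep only the CVEs actually present in the monitored OT network,
# then rank them by quantified financial risk.
present = {cve: techniques for cve, techniques in external_feed.items()
           if cve in inventory_cves}
ranked = sorted(present, key=quantify_financial_risk, reverse=True)
print(ranked)  # ['CVE-2012-0221']
```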


If you want to learn more, get in touch with our team, or understand how the above is put to use to quantify and manage cyber risks at 250+ industrial sites monitored by DeNexus, you can contact us at https://www.denexus.io/contact.