
Validation of Cyber Risk Quantification Models

Written by Romy Ravines | Jul 10, 2024 4:03:00 PM

It is important to continually evaluate how well a cyber risk model captures the situations and scenarios it is meant to represent. This blog post describes the validation process we follow at DeNexus.

Validating a probabilistic modeling system involves evaluating how well it captures the situations it is intended to represent. This is challenging for industrial cyber risk quantification because sufficient historical real-world data is scarce. To validate our modeling system, we focus on two fronts: component validation and loss validation. In component validation, each module of the system is tested separately, while loss validation assesses the performance of the overall output.

Examining model assumptions and evaluating model performance are both critical aspects of validation. We continually review the model’s assumptions and run scenario tests to assess the quality of each module’s output. We then consider these results collectively to establish that there is sufficient evidence that the risk profile estimated by the modeling system is consistent with experience and expectations, and is therefore reliable.
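To make these two fronts concrete, here is a minimal sketch, in Python, of what component validation and loss validation can look like. This is illustrative only, not DeNexus production code: the module interface, the loss simulator, the expert-set bounds, and the metric names are hypothetical placeholders.

```python
import numpy as np

def validate_component(module, test_cases):
    """Component validation: exercise one module in isolation against designed scenarios."""
    results = []
    for case in test_cases:
        output = module(case["inputs"])                   # run the module on one scenario
        ok = case["lower"] <= output <= case["upper"]     # output must fall within expert-set bounds
        results.append({"case": case["name"], "passed": ok, "output": output})
    return results

def validate_loss(simulate_losses, profile, expected_range, n_sims=10_000):
    """Loss validation: check that the aggregated loss output matches expectations."""
    losses = simulate_losses(profile, n_sims)             # end-to-end run of the modeling system
    expected_loss = float(np.mean(losses))
    tail_99 = float(np.quantile(losses, 0.99))            # a 1-in-100 tail metric for reference
    return {
        "expected_loss": expected_loss,
        "tail_99": tail_99,
        "within_expected_range": expected_range[0] <= expected_loss <= expected_range[1],
    }
```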

Building and using synthetic profiles:  

As mentioned in our previous blog post on Calibration, we use synthetic profiles to validate our models. Synthetic profiles are particularly useful to test how the models behave on edge cases.  

Here is how we think about synthetic profiles and how we create them: 

  • First, we design hypothetical scenarios that encompass a variety of cybersecurity maturity profiles at industrial facilities in the specific sector we test our models for. 
  • Based on the types of security controls, we build edge cases that define the boundaries of the model’s behavior and the range within which the computed expected losses should fall. 
  • These hypothetical scenarios can be used in combination with any facility profile, allowing for additional flexibility and increasing the scope of our validation. 
  • Because we do not have observed loss distributions for hypothetical scenarios (the loss distribution is the output of our system), we combine the designed cybersecurity maturity profiles with hypothetical or real facility profiles to perform sensitivity or what-if analyses, as sketched below this list. Our subject matter experts then validate the results. 
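The what-if analysis mentioned in the last bullet can be sketched as follows. This is an illustrative, hypothetical example: the profile structure, control names, maturity levels, and the simulate_expected_loss function are assumptions for the sake of the sketch, not our actual schema.

```python
from copy import deepcopy

def what_if_analysis(simulate_expected_loss, base_profile, controls, improved_level="high"):
    """Improve one control at a time and report the resulting change in expected loss."""
    baseline = simulate_expected_loss(base_profile)
    deltas = {}
    for control in controls:
        scenario = deepcopy(base_profile)
        scenario["controls"][control] = improved_level    # e.g. "none" -> "high"
        deltas[control] = simulate_expected_loss(scenario) - baseline
    return baseline, deltas

# Example synthetic edge case: a facility profile with weak controls across the board.
weak_profile = {
    "sector": "power_generation",
    "controls": {"network_segmentation": "none", "ids": "none", "backup_recovery": "low"},
}
# Experts then check that each delta is negative (improving a control reduces expected loss)
# and that its magnitude is plausible for the sector.
```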

Synthetic profiles are only the first of the three test categories we use. We also validate our models against clients’ real-life profiles and against real-world cyber incidents, as described below. 

Using clients’ real-life profiles: 

  • We leverage scenarios based on real data collected from long-term clients of DeNexus to analyze the cyber risks facing industrial facilities. 
  • DeNexus has established extensive relationships with industrial clients in various sectors. The team has spent countless hours discussing cyber risks and assessing their security posture. 
  • In some cases, historical assessments conducted by third parties (not by DeNexus) are available as references for analyzing the state of cyber risk at these facilities. 
  • In other words, the results (the expected losses obtained) have been discussed in depth with the facilities’ owners. 
  • As of May 2024, we have actively worked with use cases from EDF, Apex, and a few other clients that we cannot disclose. 

Using real-life incidents:   

  • We have developed several real-world scenarios that simulate different classes of cyber incidents in the industrial sector.  
  • The cyber impact of these incidents is known and can be applied to other industries or facilities.  
  • We gather information from various sources to inform the scenarios. The purpose of this review is to gain a deeper understanding of the scenarios and their potential impact on industrial facilities. 
  • Direct information about real-world cases may not be available, but much can be inferred from publicly available reports and statements made by the impacted company. For example, in one case it was reported that Facility X installed SIEM and IDS systems in its plants five months after the attack, revealing that these cybersecurity solutions and their related controls were not in place at the time of the incident. The absence of certain controls can also be inferred from how the attack unfolded, such as the use of a generic user account with administrator permissions. Analyzing the controls involved in the attack path provides insight into their status and their influence on the attack; even where information on other controls is lacking, their impact can still be estimated from the available data. A sketch of how such an incident can be replayed follows this list. 
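Here is a minimal sketch of what replaying such an incident can look like. Again, this is illustrative rather than our actual implementation: the profile fields, control names, and the simulate_losses function are hypothetical, and the pass criterion shown (the reported loss falling inside the model’s central interval) is one simple choice among several.

```python
def replay_incident(simulate_losses, facility_profile, inferred_controls,
                    reported_loss_range, n_sims=10_000):
    """Run the model on a profile reconstructed from public reporting and compare to the reported impact."""
    profile = dict(facility_profile)
    # Pin the controls whose absence (or weakness) was inferred from public reporting;
    # controls we know nothing about keep their default, uncertain treatment.
    profile["controls"] = {**facility_profile.get("controls", {}), **inferred_controls}
    losses = sorted(simulate_losses(profile, n_sims))
    low = losses[int(0.05 * len(losses))]                 # 5th percentile of simulated losses
    high = losses[int(0.95 * len(losses))]                # 95th percentile
    # The test passes if the publicly reported loss range sits inside the model's central interval.
    return low <= reported_loss_range[0] and reported_loss_range[1] <= high

# Example: SIEM and IDS inferred absent at the time of the attack, as in the Facility X case above.
inferred = {"siem": "none", "ids": "none", "privileged_access_management": "low"}
```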

Here are four examples of the real-life incidents that we use to validate and refine our models:  

Business   | Vertical                             | Attack Type      | Initial Access Vector (IAV)
-----------|--------------------------------------|------------------|----------------------------
Ukrenergo  | Electric power transmission          | Industroyer      | Spear phishing
Natanz     | Enriched uranium in gas centrifuges  | Stuxnet          | USB drive
DMEA       | Power generation                     | Ransomware       | Phishing
EirGrid    | Electricity transmission             | Backdoor.Oldrea  | Supply chain

 

If you have questions about the above, or about the processes we follow to establish the validity of our models’ output, feel free to contact us. You can also read more about model validation and calibration, and about why you can trust our models, in the following two blog posts: