Calibration is an essential step to confirm that models produce meaningful outputs. In cyber risk, this stage of the modeling cycle is particularly challenging because of the complexity of cybersecurity and the scarcity of historical data.
At a high level, the DeNexus data science and AI research team works with two complementary approaches to achieve the best possible calibration of our models:
1. We perform an exhaustive analysis of the robustness of our results with respect to inputs, parameters, hypotheses, experiment setups, and more;
2. We have a team dedicated to building and maintaining the benchmark cases we use to analyze and validate each piece of the system, with the support of Subject Matter Experts (SMEs).
Even more importantly, no modeling system is useful without continuous learning and regular (re)calibration. As the British statistician George Box famously said, “All models are wrong, but some are useful.” Models must be refined over time with data and training to produce higher-fidelity results. Unfortunately, for cyber risk quantification, we also must account for the constantly changing nature of cyber threats that trigger risks, the rapidly evolving and varied digital technologies that are deployed, and the scarcity of historical data.
At DeNexus, we follow a method based on four well-defined validation activities:
- Robustness: First, we conduct a deep sensitivity analysis. We analyze hundreds of inputs to understand which ones are the most important contributors to risk. As new data and inputs come into play, it is critical to adjust the models accordingly. In this step, we ensure that our models are robust to changes in specifications and inputs (a minimal sensitivity-analysis sketch follows this list).
- Reliability: Next, we work to guarantee the convergence of our algorithms, since the output of our system is an empirical distribution. Empirical distributions do not assume any specific underlying theoretical distribution, which makes them flexible and applicable to a wide range of cases. Their accuracy, however, depends on the sample size: as the number of algorithm iterations grows, the output tends to approach the true underlying distribution. Setting the right number of iterations and running convergence tests is therefore imperative for reliable results with probabilistic simulations in general, and with cyber risk in particular, given its high degree of uncertainty (see the convergence sketch after this list).
- Consistency: We also check that our results are coherent and consistent from a business perspective. Since there is not enough evidence to measure the accuracy of the results directly, we work with tailored scenarios that allow us to compare our outputs with the limited loss data available, and with synthetic scenarios to evaluate edge cases. The more scenarios we run, the more cross-checks we can make on the consistency of the results (a simple comparison sketch also follows this list).
- Research and Expertise: Finally, we use our own subject matter experts on OT cybersecurity and the expertise of our business partners to ensure the validity of the models. In other words, we leverage domain knowledge and expertise to the fullest, bringing together the right level of internal and external expertise to validate our results.
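To make the robustness step concrete, here is a minimal, self-contained sketch of a one-at-a-time sensitivity analysis on a deliberately simplified stand-in for a loss model. The input names, distributions, and parameter values below are illustrative assumptions, not our production model:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulated_annual_loss(inputs, n_iter=10_000):
    """Toy stand-in for a risk model: annual loss as Poisson event
    frequency times lognormal event severity, driven by two inputs."""
    frequency = rng.poisson(inputs["event_rate"], size=n_iter)
    severity = rng.lognormal(mean=inputs["log_severity_mu"], sigma=1.0, size=n_iter)
    return frequency * severity

baseline = {"event_rate": 2.0, "log_severity_mu": 13.0}
base_mean = simulated_annual_loss(baseline).mean()

# One-at-a-time sensitivity: perturb each input by +/-10% and record the
# relative change in expected annual loss. Inputs that move the output
# the most are the ones that deserve the closest calibration attention.
for name in baseline:
    for factor in (0.9, 1.1):
        perturbed = {**baseline, name: baseline[name] * factor}
        delta = simulated_annual_loss(perturbed).mean() / base_mean - 1
        print(f"{name} x{factor:.1f} -> expected annual loss change {delta:+.1%}")
```

In practice this scales to hundreds of inputs and to more principled techniques, but the goal is the same: identify which inputs drive the risk estimate and confirm that small changes in specification do not produce erratic results.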
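For the reliability step, a simple convergence check tracks how a statistic of the empirical output distribution stabilizes as the number of simulation iterations grows. Again, the toy loss model below is an illustrative assumption only:

```python
import numpy as np

rng = np.random.default_rng(7)

def annual_loss_sample(n_iter):
    """Same toy loss model: Poisson frequency times lognormal severity."""
    frequency = rng.poisson(2.0, size=n_iter)
    severity = rng.lognormal(mean=13.0, sigma=1.2, size=n_iter)
    return frequency * severity

# Watch how a tail statistic (the 95th percentile of annual loss) changes
# as the sample size increases; small relative changes between successive
# runs are taken as evidence that the simulation has converged.
previous = None
for n in (1_000, 10_000, 100_000, 1_000_000):
    p95 = np.quantile(annual_loss_sample(n), 0.95)
    if previous is None:
        print(f"n={n:>9,}  P95={p95:,.0f}")
    else:
        print(f"n={n:>9,}  P95={p95:,.0f}  relative change {abs(p95 / previous - 1):.2%}")
    previous = p95
```

Tail percentiles converge more slowly than the mean, which is why convergence testing matters most for the extreme-loss estimates that drive cyber risk decisions.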
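And for the consistency step, one basic cross-check is whether the few observed losses available for a comparable scenario fall inside the central band of the model's empirical output distribution. The simulated distribution and "observed" figures below are made-up placeholders, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Empirical output distribution of annual loss for one hypothetical scenario.
simulated = rng.poisson(2.0, size=100_000) * rng.lognormal(13.0, 1.2, size=100_000)

# Placeholder "observed" losses from comparable real-world cases.
observed = [250_000, 1_200_000, 4_800_000]

# Flag observations that fall outside the model's 5th-95th percentile band;
# such divergences would prompt a review of inputs and assumptions.
low, high = np.quantile(simulated, [0.05, 0.95])
for loss in observed:
    status = "within" if low <= loss <= high else "outside"
    print(f"observed loss {loss:>12,.0f} is {status} the 5th-95th percentile band")
```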
We tackle the challenge of limited historical data for calibration in the following ways:
- Synthetic Scenarios: We use internally generated synthetic data to span the full range of security profiles, from facilities with no security controls to facilities with the most mature controls and everything in between, so that we can understand the behavior of the model results. We combine those security profiles with different generated vulnerability scenarios (a simple generator sketch follows this list).
- Incident-based scenarios: We use past cyber incidents that occurred in OT environments in specific industries to build bespoke scenarios for other industries. That is, we borrow information from real-world cyber incidents to build new scenarios. This is hard work because it relies on detailed research into what happened in order to infer all the inputs the system needs.
- Ongoing client assessments: We review assessments run with customers and prospects to evaluate whether the outputs of our models are in line with the client's past loss experience. Any divergence triggers a full audit of our chain of data and models. This is also how we have learned over the past years and refined our work for high fidelity.
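As a rough illustration of the synthetic-scenario idea, the sketch below enumerates made-up control families, maturity levels, and vulnerability scenarios (none of these names reflect DeNexus's actual taxonomy) and pairs the two extreme security profiles with randomly sampled ones:

```python
import random

random.seed(0)

# Illustrative control families and maturity levels spanning the space of
# security profiles, from "no controls" (0) to "most mature" (3).
CONTROL_FAMILIES = [
    "network_segmentation",
    "asset_inventory",
    "incident_response",
    "patch_management",
]
MATURITY_LEVELS = [0, 1, 2, 3]
VULNERABILITY_SCENARIOS = [
    "exposed_remote_access",
    "unpatched_firmware",
    "phishing_initial_access",
]

def random_profiles(n_samples=5):
    """Sample synthetic facility profiles: one maturity level per control
    family, paired with a generated vulnerability scenario."""
    for _ in range(n_samples):
        controls = {family: random.choice(MATURITY_LEVELS) for family in CONTROL_FAMILIES}
        yield {"controls": controls, "vulnerability_scenario": random.choice(VULNERABILITY_SCENARIOS)}

# Always include the two extreme edge cases (no controls, fully mature)
# alongside randomly sampled profiles, so that model behavior is
# exercised across the whole range.
edge_cases = [
    {"controls": {f: level for f in CONTROL_FAMILIES}, "vulnerability_scenario": scenario}
    for level in (0, 3)
    for scenario in VULNERABILITY_SCENARIOS
]

for case in edge_cases + list(random_profiles()):
    print(case)
```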
These activities will be detailed further in the third blog post of this series, "Validation of Cyber Risk Quantification Models."
The bottom line is that we understand the complexity of this effort and the impact our work can have on our clients. That is precisely why we have a dedicated team that continuously improves the models through training and applies the highest scrutiny to them.