Millions, if not billions, of dollars are at stake, both in cybersecurity investment (cybersecurity is a $227 billion market, according to Gartner) and in the cost of cyber incidents to the global economy, which is measured in trillions. DeNexus recognizes the critical importance of building and maintaining credibility in our models and data, especially given our focus on critical infrastructure: industrial environments with Operational Technology (OT), Industrial Control Systems (ICS), or Cyber-Physical Systems (CPS).
The consequences of inadequately estimating losses from cyber incidents in power production, energy transmission and distribution, transportation and airports, and data centers could be devastating to the business.
In this series of blog posts, we explain the work and multi-year investments we have made to ensure the reliability and trustworthiness of our models, which quantify cyber risk and translate detailed, technical cybersecurity data and signals into business metrics that CISOs, CFOs, executives, and board members can rely on to understand their cyber risk posture and make informed decisions about cybersecurity investments.
Expertise and Continuous Learning: Our differentiation and high-fidelity outputs are possible only because of the team of experts we have hired, who constantly challenge and revisit our models.
Both calibration and validation are baked into our development process. They take place when we first launch a new industry sector (we announced the availability of DeRISK™ for manufacturing and energy transmission and distribution in 2024), and they continue as we evolve our models to incorporate newly available telemetry and the evolving threat landscape.
Calibration: Calibration of AI or ML models involves small but meaningful adjustments to the model’s outputs to improve confidence in those outputs and make them consistent with observed reality; a simplified illustration is sketched below.
You can read more on how we approach calibration in our blog post: “Calibration of Cyber Risk Quantification Models”
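To make the idea concrete, here is a minimal, generic sketch of probability calibration in Python using scikit-learn's isotonic method on synthetic data. It only illustrates the general concept of adjusting raw model outputs so they match observed frequencies; the data, model, and method shown are assumptions for demonstration, not DeNexus's actual calibration procedure.

```python
# Minimal, generic illustration of probability calibration (isotonic method).
# Synthetic data and model choice are assumptions for demonstration only;
# this is not DeNexus's proprietary calibration procedure.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "incident / no incident" training signals.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Raw model vs. the same model wrapped with isotonic calibration.
raw = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
calibrated = CalibratedClassifierCV(
    GradientBoostingClassifier(random_state=0), method="isotonic", cv=5
).fit(X_train, y_train)

# Reliability check: for a well-calibrated model, a predicted 30% probability
# should correspond to roughly a 30% observed frequency in held-out data.
for name, model in [("raw", raw), ("calibrated", calibrated)]:
    frac_pos, mean_pred = calibration_curve(
        y_test, model.predict_proba(X_test)[:, 1], n_bins=10
    )
    print(f"{name}: mean calibration gap = {np.abs(frac_pos - mean_pred).mean():.3f}")
```

A smaller calibration gap after the adjustment is what "consistent with reality" means in practice: predicted probabilities line up with observed frequencies.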
Validation: This is the process of testing the reliability and performance of AI models in real-world settings. It is an essential part of developing trustworthy, high-fidelity models, helping ensure functional correctness, verify performance under varying conditions, and address data bias; a simplified illustration is sketched below.
You can read more on how we approach validation in our blog post: “Validation of Cyber Risk Quantification Models”
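As with calibration, a small, generic sketch can make validation tangible. The example below evaluates a model on held-out data, both overall and per hypothetical segment (for example, by sector), which is one simple way to check performance under different conditions and surface data bias. The segments, metric, and model are illustrative assumptions, not DeNexus's validation methodology.

```python
# Minimal, generic hold-out validation sketch: overall performance plus a
# per-segment breakdown to surface potential data bias. Segments, metric, and
# model are illustrative assumptions, not DeNexus's validation methodology.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=6000, n_features=15, random_state=1)
# Hypothetical segment labels (e.g., sector or facility type) for bias checks.
segments = np.random.default_rng(1).choice(
    ["energy", "manufacturing", "data_center"], size=len(y)
)

X_tr, X_te, y_tr, y_te, seg_tr, seg_te = train_test_split(
    X, y, segments, test_size=0.25, random_state=1
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]

# Overall reliability of predicted probabilities on unseen data.
print(f"overall Brier score: {brier_score_loss(y_te, probs):.3f}")

# Per-segment performance: large gaps between segments hint at data bias.
for seg in np.unique(seg_te):
    mask = seg_te == seg
    print(f"{seg}: Brier score = {brier_score_loss(y_te[mask], probs[mask]):.3f}")
```

The key point is that validation happens on data the model has never seen, and that performance is examined under multiple conditions rather than as a single aggregate number.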
If you have questions, are interested in a demo, or want to try DeRISK, our Cyber Risk Management and Planning platform, please contact us.