What is Risk-based Cybersecurity?

How to Make Better Security Investment Decisions

In my past 13 years in cyber, I have always taken the position that cybersecurity should be treated as a business issue and explained to executives in business terms. Don’t get me wrong: technology is by far the most important piece of the cybersecurity puzzle. But what’s the point of cool tech if nobody can explain what it does or articulate the business benefits of deploying it?

The success of cybersecurity professionals is measured by the absence of material cyber incidents. Since a traditional ROI (value created divided by project cost) cannot be calculated for cybersecurity, you should, at a minimum, describe how the cybersecurity solutions you deploy reduce risk. As you build a cybersecurity plan, this is the bare minimum needed to justify the investment.
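As a simple illustration of describing risk reduction in monetary terms, consider annualized loss expectancy (ALE), a common industry metric (not specific to this article). The scenario, figures, and control below are all hypothetical examples:

```python
# Hedged illustration: expressing a control's value as risk reduction
# using annualized loss expectancy (ALE = SLE * ARO).
# All dollar amounts and frequencies below are invented for the example.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Annualized loss expectancy in dollars per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical ransomware scenario: $2M per incident, expected once every 4 years.
ale_before = ale(2_000_000, 0.25)        # $500,000/year

# After deploying an endpoint control, assume likelihood drops to once per decade.
ale_after = ale(2_000_000, 0.10)         # $200,000/year

annual_control_cost = 120_000
risk_reduction = ale_before - ale_after  # $300,000/year
net_benefit = risk_reduction - annual_control_cost

print(f"Risk reduction: ${risk_reduction:,.0f}/year")
print(f"Net benefit:    ${net_benefit:,.0f}/year")
```

Even when a classic ROI is out of reach, a statement like "this $120K control removes roughly $300K of expected annual loss" gives executives a business-level justification.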


Despite the staggering amount of money poured into cybersecurity products every year (Gartner estimates cybersecurity spending at $215 billion in 2024), the world doesn’t seem to be getting any safer. This begs the question: are these investment decisions truly driven by the need to address specific cyber risks, or are they influenced by other factors?

Figure: Global cybersecurity market size and cyberattack trends.

Wandering the aisles of the RSA Conference in early May, I felt that adopting the latest AI-based technology is more the norm than making decisions based on the greatest risks to mitigate. It’s understandable: it’s difficult to identify cyber risks, let alone articulate their potential financial impact on the business.

The past five years spent in cyber insurance have kept me optimistic, because insurance can be the linchpin between cybersecurity technology and the business need to understand cyber risk in monetary terms. Insurance can act as a forcing function for all stakeholders to better qualify cyber risk: understand the value at risk, what can be done about it, the cost of remediation, and the residual risk, and then decide whether to mitigate or transfer that risk to insurance.

Quantifying cyber risk is a multidimensional problem that is only made more complex by the lack of standardization across industries and companies.

Every industry is regulated by slightly different security and privacy rules. Every company is unique and deploys and uses technology in a unique way. Cloud computing, especially public clouds, has started to rationalize how technology is deployed, at least at the lower levels of the technology stack. Unfortunately, this advance has been offset by the numerous opportunities for cloud misconfigurations that have triggered waves of cyber incidents.

Advanced analytics and artificial intelligence are bringing unprecedented computing power and techniques to make sense of the vast amount of data available to model cyber risk. Such analyses can be fully backed by evidence and inside telemetry from cybersecurity tools. Most vendors offer APIs to get the data required to evaluate security controls that are in place.
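To make the modeling idea concrete, here is a minimal sketch (not any vendor's actual model) of the kind of probabilistic analysis such tooling can drive: incident frequency drawn from a Poisson distribution, incident severity from a lognormal. Every parameter below is an illustrative assumption:

```python
# Minimal Monte Carlo cyber-loss sketch: Poisson event frequency,
# lognormal severity. All parameters are illustrative assumptions,
# not calibrated figures.
import math
import random

def poisson_sample(rng: random.Random, lam: float) -> int:
    """Draw a Poisson-distributed event count (Knuth's algorithm)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_annual_losses(years: int = 20_000,
                           event_rate: float = 0.3,       # expected incidents/year
                           loss_median: float = 500_000,  # median loss per incident
                           loss_sigma: float = 1.0,       # lognormal shape
                           seed: int = 42) -> list[float]:
    """Simulate the total loss for each of `years` hypothetical years."""
    rng = random.Random(seed)
    mu = math.log(loss_median)
    return [
        sum(rng.lognormvariate(mu, loss_sigma)
            for _ in range(poisson_sample(rng, event_rate)))
        for _ in range(years)
    ]

losses = sorted(simulate_annual_losses())
expected_loss = sum(losses) / len(losses)
p95_loss = losses[int(0.95 * len(losses))]  # a simple tail-risk estimate
print(f"Expected annual loss: ${expected_loss:,.0f}")
print(f"95th-percentile year: ${p95_loss:,.0f}")
```

In practice, the frequency and severity inputs would be fitted from the telemetry and control evidence mentioned above rather than assumed; the point is that the output is expressed in dollars, which is the language the business understands.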

The output informs decision-makers about which cyber risks to address first. Instead of focusing on the latest security technology innovations, CISOs can prioritize the risks that might negatively impact the business the most and align their security investments with business priorities based on financial, evidence-based analysis.
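One simple way to operationalize this prioritization, sketched here with entirely invented initiative names and figures, is to rank candidate mitigations by annual risk reduction per dollar spent:

```python
# Hypothetical sketch: ranking mitigation initiatives by annual risk
# reduction per dollar of cost. Names and figures are invented examples.
initiatives = [
    {"name": "MFA rollout",    "risk_reduction": 400_000, "cost": 50_000},
    {"name": "EDR deployment", "risk_reduction": 300_000, "cost": 120_000},
    {"name": "SIEM upgrade",   "risk_reduction": 80_000,  "cost": 200_000},
]

# Risk reduction per dollar: a simple, evidence-friendly ranking metric.
for item in initiatives:
    item["ratio"] = item["risk_reduction"] / item["cost"]

ranked = sorted(initiatives, key=lambda i: i["ratio"], reverse=True)
for i in ranked:
    print(f'{i["name"]:<16} ${i["risk_reduction"]:>7,}/yr  {i["ratio"]:.1f}x per $ spent')
```

The ranking metric itself matters less than the discipline: every line item in the security budget is tied to a quantified reduction in expected loss.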

With such an approach, budget and resources can be allocated to risk mitigation initiatives that will provide the greatest return. Here is an example of the type of output we provide to enable better decisions:

Figure 1: Understand the absolute and relative impact, in financial terms, of initial attack vectors based on identified vulnerabilities.


Figure 2: Understand your exposure to cyber risk in financial terms and by type of damage or loss using evidence-based analysis.