Reconciling & Harmonizing OT Cybersecurity Maturity Models for Consistent, Defensible Reporting by Donovan Tindill
Cybersecurity maturity is a journey — so why do we keep reporting it like a one-time project?
If you’ve ever tried to justify OT cybersecurity investment to leadership, you’ve probably seen the same pattern: cybersecurity is treated like a project with a finish line. Fund it, deliver it, close it. In reality, cybersecurity is a long-term roadmap to build capability, followed by an ongoing sustainment effort — especially in OT, where safety, availability, reliability, and vendor constraints shape what “good progress” looks like.
That gap in expectations is one reason maturity reporting matters. Leadership wants a simple answer (“How are we doing? How mature are we?”). Cyber teams want a fair, repeatable way to show progress year-over-year without gaming the score. And the industry wants consistency so maturity claims are comparable across sites, assessors, and frameworks.
That’s the motivation behind my technical paper, Reconciling & Harmonizing Cyber Maturity Models.
This blog is an introduction. The technical paper has the full methodology, the crosswalk tables, and the details.
Many are familiar with, or have used, the maturity frameworks available in the public domain: CMMI, NIST CSF, C2M2, and others.
All are used widely, and each brings something useful — but they don’t align cleanly, and three experts are likely to reach three different conclusions (i.e., insufficient consistency).
So what happens in practice?
The result is predictable: the struggle to determine maturity consistently begins to undermine its credibility with leadership precisely when you need it to be defensible.
Let’s be explicit: This is not a fourth maturity model. The goal is reconciliation and harmonization — a public mapping that enables “effortless mapping, conversion, or adoption” across CMMI, NIST CSF, and C2M2.
Every good consultant has developed their own crosswalks, but they are usually kept proprietary, which has impeded consistency and advancement across the community. My intent is the opposite: publish the logic openly so it can be scrutinized, improved, and reused.
The paper catalogs several recurring issues that OT security practitioners run into when maturity becomes the language of budgets, roadmaps, and executive reporting:
Even qualified assessors can rate the same environment differently because maturity language is broad and interpretive. When “good” isn’t defined at each level, teams invent local rubrics — which undermines benchmarking and repeatability.
OT security programs spend meaningful time on architecture, tooling deployment, process definition, and workforce enablement before the benefits are measurable. A common failure mode is brutal: leadership sees “still low maturity” despite major effort, because the model doesn’t credit formative progress. I have seen companies stuck at the same maturity level for multiple years, or pressured to report a one-level increase just to show movement.
The harmonization work follows a straightforward, intentionally transparent flow; the step-by-step details are in the technical paper.
This matters because the end goal isn’t prettier terminology — it’s reduced variance and improved defensibility when maturity is used to report current state, target state, and justify investment.
One of the most practical outcomes is the introduction of a Level 1.5 — “Developing” stage.
Why add it?
Because many frameworks don’t distinguish well between an organization that has done nothing and one where the work is genuinely underway but hasn’t yet met the next level’s bar.
In my experience assessing companies over the last 25 years, organizations can sit in “partial/ad hoc” states for years until the next level’s bar is met. That’s exactly the “we’re working hard but the score won’t move” trap.
So Level 1.5 is designed to recognize the formative phase: planning started, tools being deployed, training initiated — the work is underway but has not yet reached the Basic level.
Within the Developing maturity level, progress can be reported much like “percent complete” or the “S-curve” from project management. If infrastructure deployment is only 40% complete, the Developing stage can also be reported as 40%, provided budget is allocated and there is intent to reach 100%.
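To make that concrete, here is a minimal sketch (my own illustration, not the paper’s method) of a weighted percent-complete roll-up for the Developing stage; the work streams, weights, and completion figures are hypothetical:

```python
# Hypothetical roll-up of Developing-stage progress: each work stream
# contributes a weighted "percent complete" figure, as in project reporting.

from dataclasses import dataclass

@dataclass
class WorkStream:
    name: str
    weight: float        # relative contribution to the Developing stage
    percent_done: float  # 0-100, e.g. taken from project tracking

def developing_progress(streams: list[WorkStream]) -> float:
    """Weighted percent-complete across all Developing-stage work streams."""
    total_weight = sum(s.weight for s in streams)
    if total_weight == 0:
        return 0.0
    return sum(s.weight * s.percent_done for s in streams) / total_weight

streams = [
    WorkStream("OT network segmentation deployment", weight=0.5, percent_done=40),
    WorkStream("Monitoring tool rollout", weight=0.3, percent_done=30),
    WorkStream("Workforce training", weight=0.2, percent_done=55),
]

print(f"Developing stage: {developing_progress(streams):.0f}% complete")
# -> Developing stage: 40% complete
```

The point is not the specific formula: it is that the formative phase gets a visible, defensible number instead of sitting silently below the next level for years.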
A maturity scale is only useful if it constrains scoring inflation.
In the harmonized model, the rule is simple:
Must accomplish all requirements of the lower level. No rounding up.
This cumulative logic is borrowed from CMMI v1.3 sequencing and is intentionally preserved to improve repeatability across assessors and organizations.
Today, assessors review the brief criteria in NIST CSF, C2M2, et al. and then “pick the closest” level that resembles their progress. If the context is a security audit, there is an incentive to round up — and that is exactly the problem that undermines maturity model mapping. There are insufficient criteria and checklists to clearly signal what is complete and what is not.
In this harmonization exercise, and in keeping with the principle of not rounding up, a Requirements Checklist is provided to identify the requirements expected to complete each maturity level. If it is unclear how to achieve a requirement (e.g., “process is monitored”), refer to the mapped maturity model (i.e., CMMI, C2M2, CSF). Due to the detail and rigor that went into CMMI v1.3, it provides the detailed guidance that is absent from C2M2, CSF, and others.
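To illustrate the no-rounding-up rule in practice, here is a minimal sketch of checklist-based level determination; the level numbers and requirement statements are hypothetical placeholders, not the paper’s actual Requirements Checklist:

```python
# A minimal sketch of the "no rounding up" rule: the reported level is the
# highest level at which that level's checklist, and every lower level's
# checklist, is fully satisfied. Levels and requirements are placeholders.

checklist = {
    1.0: {"asset inventory exists": True, "basic access controls in place": True},
    1.5: {"roadmap and budget approved": True, "tool deployment underway": True},
    2.0: {"processes documented": True, "process is monitored": False},
    3.0: {"metrics drive continual improvement": False},
}

def assessed_level(checklist: dict[float, dict[str, bool]]) -> float:
    """Return the highest fully satisfied level; partial completion never rounds up."""
    achieved = 0.0
    for level in sorted(checklist):
        if all(checklist[level].values()):
            achieved = level
        else:
            break  # an incomplete requirement at this level blocks all higher levels
    return achieved

print(assessed_level(checklist))  # -> 1.5, because "process is monitored" is incomplete
```

The cumulative check is deliberately strict: one unmet requirement at a lower level caps the score, which is what makes results repeatable across assessors.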
A major design constraint in the paper is maintainability and ongoing progress: the mapping should evolve without becoming proprietary or opaque. The work is structured so that each definition and requirement is traceable to its inputs and reconciliation logic, so details can be added over time, and so additional public-domain frameworks can be incorporated in future revisions.
That transparency is intentional: it creates a mechanism for critique, contribution, and improvement — instead of asking the community to accept a black-box scoring rubric, or use the simpler maturity frameworks available today.
I’m deliberately not going deep on use-cases here — that will be the focus of a second blog.
But it’s important to state what this enables: once you have a harmonized, more granular maturity backbone that reduces scoring variance, you can build higher-order work on top of it.
That’s where maturity stops being a qualitative label and starts becoming a real decision-support tool.
If your OT cybersecurity program is being judged like a one-time project, maturity reporting can either reinforce the misunderstanding — or correct it.
My goal with this work is to make maturity reporting consistent, repeatable, and defensible.
If you want the full methodology, crosswalk tables, and level-by-level requirements, the technical paper is available at the link below.