CISA – Principles for the Secure Integration of Artificial Intelligence in Operational Technology
On 3 December 2025, CISA and a group of international cyber agencies released “Principles for the Secure Integration of Artificial Intelligence in Operational Technology.” It is one of the first major, joint pieces of guidance focused specifically on AI in OT and critical infrastructure – power, water, transport, manufacturing, and other systems that keep societies running.
If you are accountable for safety, resilience, or cyber risk in an OT-heavy organization, this document is effectively your playbook for how to say yes to AI in OT without compromising safety and reliability.
Below is an executive-friendly walkthrough of:
AI is already showing up in OT as:
The promise is compelling: better uptime, faster decision-making, lower costs. But in OT environments, any change that affects control logic, alarm handling, or operator behavior is not just an IT risk – it is a safety and availability risk.
The guidance is co-authored by:
In other words, this is not just a whitepaper. It is a signal to boards and executives that AI in OT requires structured, risk-informed governance.
The guidance is organized around four principles for critical infrastructure owners and operators:
Think of them as four lenses you can use to challenge any AI initiative that touches OT.
Before we dive into each principle, it is helpful to understand the risk themes that appear across the document.
AI introduces new ways for attackers to manipulate systems, including:
Because AI components are often connected to cloud services or central data lakes, they can also create new paths into OT if not properly segmented.
AI systems need a lot of data, including:
The guidance is blunt: this data is extremely valuable to adversaries and can expose how your plant or grid actually works. It also warns about:
Over time, systems change – new equipment, new operating modes, new conditions. AI models trained on historical data will naturally drift and become less accurate.
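To make drift tangible, here is a minimal sketch of what rolling drift monitoring could look like, assuming a simple prediction-vs-observation error check; the window size and threshold are illustrative assumptions, not values from the guidance.

```python
from collections import deque

class DriftMonitor:
    """Track rolling prediction error for an OT model and flag drift.

    Illustrative sketch only: the window size and threshold are
    assumptions, not values prescribed by the guidance.
    """

    def __init__(self, window: int = 500, max_mean_error: float = 0.05):
        self.errors = deque(maxlen=window)
        self.max_mean_error = max_mean_error

    def record(self, predicted: float, observed: float) -> None:
        self.errors.append(abs(predicted - observed))

    def drifting(self) -> bool:
        # Flag drift when rolling mean error exceeds the threshold; a real
        # deployment would also trigger review and retraining workflows.
        if not self.errors:
            return False
        return sum(self.errors) / len(self.errors) > self.max_mean_error

monitor = DriftMonitor()
monitor.record(predicted=72.1, observed=74.9)
if monitor.drifting():
    print("Model drift suspected: schedule revalidation before continued use.")
```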
The guidance also calls out LLM hallucinations and limited explainability, concluding that these make LLMs unacceptable for making autonomous safety decisions in OT. These models can still be useful as copilots, but should not be allowed to “press the red button” on their own.
Stated another way, one-way movement of data out to an AI engine to enhance decision-making can add a lot of value, because a human still makes the final decision and carries out any actions. Two-way integration, where AI both reads from and writes to the control system in a closed loop, carries a very different risk profile.
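As a minimal sketch of the lower-risk, advisory-only pattern (all function names here are hypothetical), the AI reads telemetry and suggests an action, but nothing reaches the controller without explicit operator approval:

```python
def recommend_setpoint(telemetry: dict) -> float:
    """Hypothetical AI advisory function: reads OT data, suggests a value."""
    # One-way: the model only consumes telemetry; it holds no write path.
    return telemetry["flow_rate"] * 0.98  # placeholder logic

def apply_setpoint(value: float, operator_approved: bool) -> None:
    """Only a human-approved action is ever written back to the controller."""
    if not operator_approved:
        raise PermissionError("AI recommendations require operator approval.")
    print(f"Operator-approved setpoint {value} sent to controller.")

suggestion = recommend_setpoint({"flow_rate": 100.0})
# The operator, not the model, decides whether the action is executed.
apply_setpoint(suggestion, operator_approved=True)
```

The design choice matters: in this pattern the model never holds write access, so even a compromised or hallucinating model cannot act on the plant by itself.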
For OT, being able to explain “why did the system do that?” is crucial – for:
Opaque AI models complicate all of this. The guidance emphasizes logging, monitoring, and explainability as key design requirements, not optional nice-to-haves.
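One concrete way to treat logging as a design requirement is to record every AI recommendation together with its inputs and the human outcome. A minimal sketch, with illustrative field names:

```python
import json, logging, time

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_ot_audit")

def log_ai_decision(model_id: str, inputs: dict, output,
                    operator: str, accepted: bool) -> None:
    """Record an AI recommendation with its inputs and the human outcome.

    Field names are illustrative; the point is that each recommendation is
    reconstructable later for incident response, audits, and regulators.
    """
    audit_log.info(json.dumps({
        "ts": time.time(),
        "model_id": model_id,
        "inputs": inputs,
        "recommendation": output,
        "operator": operator,
        "accepted": accepted,
    }))

log_ai_decision("pump-optimizer-v3", {"flow_rate": 100.0}, 98.0, "op_142", True)
```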
There is a human risk angle:
AI tends to increase system complexity and regulatory exposure:
With that backdrop, the four principles provide a structured way to manage these risks.
For leaders, this principle boils down to three expectations.
The guidance explicitly asks organizations to understand where AI sits across the OT layers (for example, using the Purdue model – from field devices up through control and supervisory systems).
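A lightweight starting point is a structured inventory that records where each AI component sits and how it connects. The fields below are an assumption about what such a record might contain, not a schema from the guidance:

```python
from dataclasses import dataclass

@dataclass
class AIComponent:
    """Illustrative inventory record for one AI component in the OT estate."""
    name: str
    purdue_level: int          # 0-5: field devices up to enterprise systems
    vendor: str
    data_consumed: list[str]   # e.g., historian tags, alarm streams
    can_write_to_ot: bool      # two-way integrations deserve extra scrutiny
    cloud_connected: bool      # external connectivity is a new path into OT

inventory = [
    AIComponent("predictive-maintenance", 3, "VendorX",
                ["vibration", "temperature"],
                can_write_to_ot=False, cloud_connected=True),
]
# Surface the riskiest integrations first: anything that writes into OT.
for c in inventory:
    if c.can_write_to_ot:
        print(f"Review closed-loop component: {c.name}")
```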
You should be able to answer:
The guidance aligns with existing secure AI system development recommendations (such as the joint CISA / NCSC-UK Guidelines for Secure AI System Development):
There is a strong emphasis on training and procedures:
This principle is a healthy counter to “AI-first” thinking.
The guidance explicitly asks: Is AI really the best solution here? Before deploying, organizations should:
If AI still makes sense, the guidance recommends following the secure AI lifecycle and consulting AI risk management frameworks such as NIST’s AI Risk Management Framework (AI RMF).
The document devotes significant space to data-related challenges, including:
In simple terms: you should know who can see, move, and use OT data – including for model training, fine-tuning, and inference.
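A deny-by-default, purpose-based access check is one simple way to operationalize that. A minimal sketch, with hypothetical roles, purposes, and datasets:

```python
# Illustrative purpose-based access policy for OT data. The roles,
# purposes, and datasets are assumptions, not taken from the guidance.
ALLOWED = {
    ("data_scientist", "model_training"): {"historian_export"},
    ("ml_service", "inference"): {"live_telemetry"},
}

def may_access(role: str, purpose: str, dataset: str) -> bool:
    """Deny by default: access requires an explicit (role, purpose) grant."""
    return dataset in ALLOWED.get((role, purpose), set())

assert may_access("data_scientist", "model_training", "historian_export")
assert not may_access("data_scientist", "fine_tuning", "live_telemetry")
```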
There is a clear message to push vendors harder. The guidance encourages owners and operators to demand:
This is where boards, CISOs, and risk leaders come squarely into the picture.
Rather than creating a separate AI bureaucracy, the guidance says: embed AI into your existing security and risk frameworks. That includes:
It explicitly references NIST’s AI RMF and MITRE ATLAS (the AI attack knowledge base) as resources to help structure this.
Assurance is not a one-off certification:
The guidance acknowledges that most AI standards today are IT-oriented, and OT-specific AI regulation is still evolving. It recommends tracking work such as:
The key message: do not wait for perfect, OT-specific AI standards. Use existing frameworks now and adapt them pragmatically.
As organizations integrate AI into OT and expand their attack surface, they still have to prioritize traditional vulnerabilities, misconfigurations, and control gaps. Risk-based and financially informed prioritization is difficult to do manually across complex ICS/OT estates. Quantified vulnerability management platforms such as DeNexus DeRISK QVM help security teams financially rank OT vulnerabilities – including those introduced by new AI-enabled components – by their contribution to overall financial risk, not just technical severity.
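To illustrate the general idea of financially informed ranking (a toy example, not the DeRISK QVM model), compare expected annual loss rather than CVSS alone:

```python
# Toy illustration: rank vulnerabilities by expected financial loss,
# computed as exploitation probability times business impact.
vulns = [
    {"id": "CVE-A", "cvss": 9.8, "p_exploit": 0.02, "impact_usd": 1_000_000},
    {"id": "CVE-B", "cvss": 7.5, "p_exploit": 0.20, "impact_usd": 5_000_000},
]

for v in vulns:
    v["expected_loss"] = v["p_exploit"] * v["impact_usd"]

# CVE-B outranks CVE-A despite the lower CVSS score,
# because its expected financial loss is far higher.
for v in sorted(vulns, key=lambda v: v["expected_loss"], reverse=True):
    print(v["id"], f"${v['expected_loss']:,.0f}")
```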
This principle focuses on day-to-day operations, oversight, and resilience.
The guidance is clear: humans remain responsible for functional safety. Owners and operators should:
Assume that:
To handle that, the guidance calls for:
Architecturally, it also favors patterns like pushing data out of OT to AI systems (rather than pulling from the outside in) and maintaining strong segmentation so AI does not become a backdoor from IT into OT.
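A minimal sketch of the push-out pattern, with a hypothetical analytics endpoint: data flows outbound only, and nothing in this code path accepts commands back into the control network.

```python
import json
import urllib.request

ANALYTICS_ENDPOINT = "https://analytics.example.com/ingest"  # hypothetical

def push_telemetry(tags: dict) -> None:
    """Publish selected OT tags outbound to an external AI platform.

    Note the asymmetry: data is pushed out, and this path provides
    no channel for commands to come back into the control network.
    """
    body = json.dumps(tags).encode()
    req = urllib.request.Request(
        ANALYTICS_ENDPOINT, data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

# Example (the endpoint above is hypothetical, so this call is illustrative):
# push_telemetry({"pump_01_flow": 100.0, "pump_01_temp": 61.2})
```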
It is worth looking specifically at how the guidance treats cyber risk assessment, cyber risk quantification, and risk management.
Cyber Risk Assessment
The document does not use the exact phrase “cyber risk assessment.”
However, it repeatedly calls for:
In practice, that is a clear expectation that organizations will carry out structured risk assessments for AI in OT – just not bound to a specific methodology.
Cyber Risk Quantification
There is no mention of quantitative or monetary risk models (for example, FAIR), and the term “risk quantification” does not appear anywhere in the document.
If you are already using quantitative approaches to determine the financial impact of cyber risk, you can absolutely apply them here, but you will not find prescriptive instructions in this document.
For organizations that want to move from qualitative scoring to financially quantified OT cyber risk, platforms such as DeNexus DeRISK CRQ can help translate AI-related OT cyber exposure into value-at-risk (VaR) metrics that boards and business leaders understand. This supports more defensible decisions about where to invest in controls, resilience, and monitoring as AI capabilities are introduced into production environments.
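As a small illustration of the VaR concept itself (not the DeRISK methodology), VaR at the 95% level is simply the 95th percentile of a simulated annual-loss distribution:

```python
import random
import statistics

random.seed(7)
# Assume simulated annual losses in USD are already available (illustrative).
annual_losses = sorted(random.lognormvariate(12, 1.5) for _ in range(10_000))

var_95 = annual_losses[int(0.95 * len(annual_losses))]
print(f"Mean annual loss: ${statistics.mean(annual_losses):,.0f}")
print(f"95% VaR:          ${var_95:,.0f}")
```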
Risk Management
Risk management is where the guidance is more explicit:
So, risk management is central, but framed at the framework and governance level, not at the level of specific scoring formulas.
Virtually all cybersecurity teams struggle with probability determination in the risk management equation. In the DeNexus DeRISK CRQ modelling system, probability is broken down into its component elements: (i) estimated frequency of attacks, (ii) vulnerabilities or missing safeguards, and (iii) the effectiveness of existing safeguards. Through probabilistic attack simulation, it reduces uncertainty by producing not a single probability value but a full distribution that covers both high-frequency, low-impact events and low-frequency, high-impact events. A risk management framework that recognizes this distribution, rather than relying on a single probability value, is better equipped to manage this risk.
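A toy frequency/severity simulation in that spirit (illustrative parameters, not the DeRISK model) shows how decomposing probability into attack frequency and safeguard effectiveness yields a full loss distribution rather than a single number:

```python
import math
import random

random.seed(42)
BASE_ATTACK_RATE = 4.0          # assumed attempted attacks per year
SAFEGUARD_EFFECTIVENESS = 0.7   # assumed fraction of attempts stopped
TRIALS = 10_000

def poisson(lam: float) -> int:
    """Sample a Poisson count (Knuth's method) to stay dependency-free."""
    k, p, target = 0, 1.0, math.exp(-lam)
    while p > target:
        k += 1
        p *= random.random()
    return k - 1

def simulate_year() -> float:
    """One simulated year: attack attempts thinned by safeguards, with a
    heavy-tailed (lognormal) loss for each attack that gets through."""
    loss = 0.0
    for _ in range(poisson(BASE_ATTACK_RATE)):
        if random.random() > SAFEGUARD_EFFECTIVENESS:
            loss += random.lognormvariate(11, 2)
    return loss

losses = sorted(simulate_year() for _ in range(TRIALS))
print(f"Median annual loss:             ${losses[TRIALS // 2]:,.0f}")
print(f"99th percentile (rare, severe): ${losses[int(0.99 * TRIALS)]:,.0f}")
```

The sorted output makes both tails visible: frequent nuisance losses near the median, and the rare, severe events in the upper percentiles that should dominate risk decisions.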
Practical Questions for Your Next Leadership Meeting
If you want to act on this guidance quickly, here are some concrete questions to bring to your next leadership or risk committee discussion.
If the answer to several of these is “we don’t know yet,” that is not a failure – it is your starting point. The CISA-led guidance gives you a shared language with regulators, partners, and vendors for how to move forward.
As you mature, financially quantified risk insights and quantified vulnerability management can help you prioritize where to act first. DeNexus DeRISK CRQ and DeRISK QVM are examples of platforms that operationalize these concepts for ICS/OT environments, helping organizations connect AI-in-OT exposure, vulnerabilities, and controls to measurable business impact rather than abstract scores.