CISA AI in OT Guidance: What Critical Infrastructure Leaders Need to Know

CISA – Principles for the Secure Integration of Artificial Intelligence in Operational Technology 

On 3 December 2025, CISA and a group of international cyber agencies released “Principles for the Secure Integration of Artificial Intelligence in Operational Technology.” It is one of the first major, joint pieces of guidance focused specifically on AI in OT and critical infrastructure – power, water, transport, manufacturing, and other systems that keep societies running. 

If you are accountable for safety, resilience, or cyber risk in an OT-heavy organization, this document is effectively your playbook for how to say yes to AI in OT without compromising safety and reliability. 

Below is an executive-friendly walkthrough of: 

  • The main risks the guidance highlights 
  • The four core principles for secure AI integration in OT 
  • What it actually says (and does not say) about risk assessment and risk management 

 

Why CISA’s AI-in-OT Guidance Matters 

AI is already showing up in OT as: 

  • Predictive maintenance models 
  • Anomaly detection 
  • AI-assisted diagnostics 
  • LLM-powered copilots for engineers and operators 

The promise is compelling: better uptime, faster decision-making, lower costs. But in OT environments, any change that affects control logic, alarm handling, or operator behavior is not just an IT risk – it is a safety and availability risk. 

The guidance is co-authored by: 

  • CISA (US) 
  • ASD’s Australian Cyber Security Centre 
  • NSA’s AI Security Center 
  • FBI 
  • Cyber security agencies from Canada, Germany, the Netherlands, New Zealand, and the UK 
      

In other words, this is not just a whitepaper. It is a signal to boards and executives that AI in OT requires structured, risk-informed governance. 

 

The Four CISA Principles for AI in OT at a Glance 

The guidance is organized around four principles for critical infrastructure owners and operators: 

  1. Understand AI 
  2. Consider AI use in the OT domain 
  3. Establish AI governance and assurance frameworks 
  4. Embed safety and security practices into AI and AI-enabled OT systems 

Think of them as four lenses you can use to challenge any AI initiative that touches OT. 

 

The Main Risks of AI in OT (in Plain Language) 

Before we dive into each principle, it is helpful to understand the risk themes that appear across the document. 

  1. New Cyber Attack Surface and AI-Specific Threats

AI introduces new ways for attackers to manipulate systems, including: 

  • Data poisoning – corrupting training or input data so the AI makes bad decisions 
  • Prompt / input injection – especially relevant for LLMs and agents 
  • Model tampering – modifying models or deployment pipelines to bypass safety or security 

Because AI components are often connected to cloud services or central data lakes, they can also create new paths into OT if not properly segmented. 
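The guidance does not prescribe specific controls for these threats, but one simple and widely used mitigation for model tampering is integrity verification of model artifacts before they are loaded into an OT-adjacent system. The sketch below is a minimal, hypothetical illustration of that idea (the file name and hash registry are made up, not taken from the guidance):

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list of approved model artifacts and their SHA-256 digests.
# In practice this registry would be maintained through your change-management process.
APPROVED_MODELS = {
    "pump_anomaly_v3.onnx": "a3f1c2...replace-with-real-digest",
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in chunks (works for large model files)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path) -> None:
    """Refuse to load a model whose digest does not match the approved registry."""
    expected = APPROVED_MODELS.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name} is not an approved model artifact")
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"{path.name} failed integrity check: {actual}")

# Example: verify the artifact before handing it to your inference runtime.
# verify_model(Path("/opt/models/pump_anomaly_v3.onnx"))
```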

 

  2. Data Quality, Data Security and Data Sovereignty

AI systems need a lot of data, including: 

  • Engineering configuration data (diagrams, PLC logic, sequences) 
  • Process data (flows, voltages, setpoints, telemetry) 

The guidance is blunt: this data is extremely valuable to adversaries and can expose how your plant or grid actually works. It also warns about: 

  • Poor-quality or biased data leading to unsafe or unreliable decisions 
  • Centralizing sensitive OT data in ways that increase impact if breached 
  • Jurisdictional risk when foreign vendors or clouds hold critical OT datasets (data sovereignty) 

 

  3. Model Drift, Hallucinations and Reliability

Over time, systems change – new equipment, new operating modes, new conditions. AI models trained on historical data will naturally drift and become less accurate. 

The guidance also calls out LLM hallucinations and limited explainability as unacceptable for making autonomous safety decisions in OT. These models can still be useful as copilots, but should not be allowed to “press the red button” on their own.

Stated another way, one-way movement of data out to an AI engine to enhance decision making can add a lot of value, because a human still makes the final decision and carries out any actions. Two-way integration – closed-loop control where the AI both reads from and writes to the control system – carries a very different risk profile.
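Neither the guidance nor this post prescribes a particular drift-detection technique, but the basic idea can be shown in a few lines: track recent prediction error and flag the model for human review when it degrades past its validation baseline. The sketch below assumes a hypothetical predictive-maintenance model whose outputs remain advisory:

```python
from collections import deque

class DriftMonitor:
    """Track recent absolute prediction error and flag the model for human review
    when it degrades beyond a tolerance band around its validation-time baseline."""

    def __init__(self, baseline_mae: float, tolerance: float = 0.25, window: int = 500):
        self.baseline_mae = baseline_mae      # error measured during validation
        self.tolerance = tolerance            # allowed relative degradation (25%)
        self.errors = deque(maxlen=window)    # rolling window of recent errors

    def record(self, predicted: float, observed: float) -> None:
        self.errors.append(abs(predicted - observed))

    def drifted(self) -> bool:
        if len(self.errors) < self.errors.maxlen:
            return False                      # not enough evidence yet
        recent_mae = sum(self.errors) / len(self.errors)
        return recent_mae > self.baseline_mae * (1 + self.tolerance)

monitor = DriftMonitor(baseline_mae=0.8)
# In operation: monitor.record(model_output, later_measured_value)
# If monitor.drifted() is True, raise an advisory alert for engineers –
# do not auto-retrain or let the model act on its own.
```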

  

  4. Explainability, Transparency and Auditability

For OT, being able to answer “why did the system do that?” is crucial for:

  • Root cause analysis after an incident 
  • Regulatory scrutiny 
  • Safety cases and compliance evidence 

Opaque AI models complicate all of this. The guidance emphasizes logging, monitoring, and explainability as key design requirements, not optional nice-to-haves. 

 

  5. Human Factors: Overload and Skill Erosion

There is a human risk angle: 

  • Too many AI-generated alerts can overwhelm operators and create noise that hides real issues. 
  • Over-reliance on automation can erode manual skills needed when systems fail, when AI is offline, or when OT must run in degraded mode. 

 

  6. Integration, Complexity and Compliance

AI tends to increase system complexity and regulatory exposure: 

  • New interfaces, APIs, cloud links and third-party dependencies 
  • Real-time and latency constraints that many AI systems are not designed for 
  • AI standards and regulations that are mostly IT-focused, leaving gaps for OT contexts 

With that backdrop, the four principles provide a structured way to manage these risks. 

 

Principle 1: Understand AI 

For leaders, this principle boils down to three expectations. 

Know How AI Interacts with Your OT Stack 

The guidance explicitly asks organizations to understand where AI sits across the OT layers (for example, using the Purdue model – from field devices up through control and supervisory systems). 

You should be able to answer: 

  • Which OT data sources feed AI models? 
  • Where does the AI output land – advisory dashboards, alarm thresholds, control loops? 
  • What could go wrong if the AI is wrong, manipulated, or offline? 

 

Treat AI Like Any Other Engineered System – with a Secure Lifecycle 

The guidance aligns with existing secure AI system development recommendations (such as the joint CISA / NCSC-UK Guidelines for Secure AI System Development): 

  • Secure design – threats and safety implications considered upfront 
  • Secure development/procurement – whether you buy, build, or customize 
  • Secure deployment – segmentation, least privilege, and robust verification 
  • Secure operation & maintenance – patching, monitoring, and model lifecycle management 

 

Invest in AI Literacy for OT Personnel 

There is a strong emphasis on training and procedures: 

  • Train OT teams on AI concepts and threat modeling. 
  • Teach operators how to validate AI outputs (for example, cross-checking with independent sensors – see the sketch after this list).
  • Maintain clear SOPs for when AI outputs look wrong or systems fail.  
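
To make the cross-checking idea concrete, here is a minimal sketch that compares an AI-estimated value against an independent reference measurement and flags disagreement for a human to investigate. The sensor values and threshold are hypothetical:

```python
def cross_check(ai_estimate: float, reference_reading: float,
                max_relative_error: float = 0.05) -> str:
    """Compare an AI-derived value against an independent sensor reading.
    Returns a status string for the operator rather than taking any action."""
    if reference_reading == 0:
        return "REVIEW: reference reading is zero, cannot compute relative error"
    relative_error = abs(ai_estimate - reference_reading) / abs(reference_reading)
    if relative_error > max_relative_error:
        return f"REVIEW: AI estimate deviates {relative_error:.1%} from independent sensor"
    return "OK: AI estimate consistent with independent measurement"

# Example: an AI-estimated flow of 102.3 vs an independent flow meter reading of 98.7.
print(cross_check(102.3, 98.7))
```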

 

Principle 2: Consider AI Use in the OT Domain 

This principle is a healthy counter to “AI-first” thinking. 

Start with the Business and Safety Case – Not the Technology 

The guidance explicitly asks: Is AI really the best solution here? Before deploying, organizations should: 

  • Compare AI options against existing deterministic controls or analytics. 
  • Evaluate security, safety, performance, complexity, and cost. 
  • Consider whether the organization can operate and maintain the AI system over time – this cannot simply be absorbed into existing staff workloads; plan for new training and team capacity to support the added complexity.
  • Account for the expanded attack surface (new hardware, software, networks).   

If AI still makes sense, the guidance recommends following the secure AI lifecycle and consulting AI risk management frameworks such as NIST’s AI Risk Management Framework (AI RMF). 

 

Treat OT Data as a Strategic Asset 

The document devotes significant space to data-related challenges, including: 

  • Data assurance and access control 
  • Data sovereignty when foreign vendors or cloud regions are involved 
  • Protection against poisoning and misuse 
  • Balancing data sharing with the need for segmentation 

In simple terms: you should know who can see, move, and use OT data – including for model training, fine-tuning, and inference. 

 

Get More from Your Vendors 

There is a clear message to push vendors harder. The guidance encourages owners and operators to demand: 

  • Transparency on embedded AI features and external connections 
  • Clear data usage policies (no silent model training on your operational data) 
  • Options to disable or run AI features offline 
  • Security and support obligations defined in contracts 

 

Principle 3: Establish AI Governance and Assurance Frameworks 

This is where boards, CISOs, and risk leaders come squarely into the picture. 

 

Build AI Governance on Top of Existing Structures 

Rather than creating a separate AI bureaucracy, the guidance says: embed AI into your existing security and risk frameworks. That includes: 

  • Clear roles and responsibilities across OT, IT, data, and cyber teams 
  • Regular security audits and risk assessments for AI systems 
  • Integration with existing standards and regulations in your sector 

It explicitly references NIST’s AI RMF and MITRE ATLAS (the AI attack knowledge base) as resources to help structure this. 

 

Test Early, Test Often, and Test Realistically 

Assurance is not a one-off certification: 

  • Start in low-risk environments with synthetic or non-production data. 
  • Use representative OT testbeds where possible (including hardware-in-the-loop). 
  • Ensure clear go/no-go criteria before moving AI into production (a minimal example follows this list).
  • Continuously monitor performance and re-validate after changes.  
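
The guidance does not define what go/no-go criteria should look like; the sketch below is one hypothetical way to encode them so the promotion decision is explicit and auditable rather than informal. All metric names and thresholds are illustrative:

```python
from dataclasses import dataclass

@dataclass
class TestbedResults:
    """Metrics gathered from a representative OT testbed run (illustrative fields)."""
    detection_rate: float      # fraction of seeded faults the model caught
    false_alarm_rate: float    # false alarms per operator-hour attributable to the model
    p99_latency_ms: float      # worst-case inference latency observed
    failover_verified: bool    # manual fallback exercised successfully

def go_no_go(results: TestbedResults) -> list[str]:
    """Return the list of unmet criteria; an empty list means 'go'."""
    failures = []
    if results.detection_rate < 0.95:
        failures.append("detection rate below 95%")
    if results.false_alarm_rate > 2.0:
        failures.append("false alarm rate above 2 per operator-hour")
    if results.p99_latency_ms > 200:
        failures.append("p99 latency above 200 ms budget")
    if not results.failover_verified:
        failures.append("manual fallback not demonstrated")
    return failures

unmet = go_no_go(TestbedResults(0.97, 1.1, 150, True))
print("GO" if not unmet else f"NO-GO: {unmet}")
```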

   

Navigate a Fast-Moving Regulatory Landscape 

The guidance acknowledges that most AI standards today are IT-oriented, and OT-specific AI regulation is still evolving. It recommends tracking work such as: 

  • ETSI’s “Securing Artificial Intelligence” technical reports and standards 
  • Sector-specific safety and performance regulations 
  • Emerging national AI security codes of practice 

The key message: do not wait for perfect, OT-specific AI standards. Use existing frameworks now and adapt them pragmatically. 

 

Quantified Vulnerability Management in the AI Context 

As organizations integrate AI into OT and expand their attack surface, they still have to prioritize traditional vulnerabilities, misconfigurations, and control gaps. Risk-based and financially informed prioritization is difficult to do manually across complex ICS/OT estates. Quantified vulnerability management platforms such as DeNexus DeRISK QVM help security teams financially rank OT vulnerabilities – including those introduced by new AI-enabled components – by their contribution to overall financial risk, not just technical severity. 

 

Principle 4: Embed Safety and Security Practices into AI and AI-Enabled OT Systems 

This principle focuses on day-to-day operations, oversight, and resilience. 

Keep Humans Firmly in the Loop 

The guidance is clear: humans remain responsible for functional safety. Owners and operators should: 

  • Maintain an inventory of AI components and dependencies. 
  • Log and monitor inputs and outputs to AI systems. 
  • Define thresholds for “safe behavior” and known-good states. 
  • Use human-in-the-loop or human-on-the-loop approaches for critical decisions. 
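
To make the last two points concrete, here is a minimal, hypothetical sketch of a human-in-the-loop gate: the AI may propose a setpoint change, but the proposal is clamped to a defined safe envelope and nothing is applied without explicit operator approval. Names and limits are illustrative only:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_gate")

# Hypothetical safe envelope for a single setpoint, agreed with process engineering.
SAFE_MIN, SAFE_MAX = 40.0, 60.0

def propose_setpoint(ai_recommendation: float, current_setpoint: float,
                     operator_approves) -> float:
    """Log the AI recommendation, reject anything outside the safe envelope,
    and apply a change only with explicit operator approval."""
    log.info("AI recommended setpoint %.2f (current %.2f)", ai_recommendation, current_setpoint)

    if not (SAFE_MIN <= ai_recommendation <= SAFE_MAX):
        log.warning("Recommendation outside safe envelope [%s, %s]; ignored", SAFE_MIN, SAFE_MAX)
        return current_setpoint

    if operator_approves(ai_recommendation):
        log.info("Operator approved change to %.2f", ai_recommendation)
        return ai_recommendation

    log.info("Operator rejected recommendation; setpoint unchanged")
    return current_setpoint

# Example with a stand-in for the operator decision (in reality an HMI confirmation):
new_setpoint = propose_setpoint(52.5, 50.0, operator_approves=lambda value: True)
```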

 

Design for Failure – and for Recovery 

Assume that: 

  • AI will sometimes be wrong. 
  • AI will sometimes be attacked. 
  • AI will sometimes be unavailable. 

To handle that, the guidance calls for: 

  • Clear fallback modes to manual control or non-AI automation. 
  • Updated functional safety procedures that explicitly account for AI. 
  • Integration of AI into the cybersecurity incident response plan – including how to respond if the AI itself is compromised or misbehaves. 

Architecturally, it also favors patterns like pushing data out of OT to AI systems (rather than pulling from the outside in) and maintaining strong segmentation so AI does not become a backdoor from IT into OT. 
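
Architecture details will vary by site, but the fallback idea itself is simple to express: if the AI path is unavailable, stale, or low-confidence, revert to the existing deterministic logic. The sketch below is illustrative only; the schema and thresholds are assumptions, not from the guidance:

```python
import time
from typing import Optional

STALE_AFTER_SECONDS = 30   # advisory data older than this is treated as unavailable
MIN_CONFIDENCE = 0.8       # below this, ignore the AI recommendation

def select_mode(ai_result: Optional[dict]) -> str:
    """Decide whether to use the AI-assisted path or fall back to deterministic control.
    `ai_result` is expected to contain 'timestamp' and 'confidence' (illustrative schema)."""
    if ai_result is None:
        return "fallback: AI service unavailable"
    if time.time() - ai_result["timestamp"] > STALE_AFTER_SECONDS:
        return "fallback: AI output is stale"
    if ai_result["confidence"] < MIN_CONFIDENCE:
        return "fallback: AI confidence below threshold"
    return "ai-assisted: advisory output may be used"

print(select_mode(None))
print(select_mode({"timestamp": time.time(), "confidence": 0.92}))
```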

 

What Does the Guidance Say About Risk Assessment and Risk Management? 

Leaders often ask specifically about cyber risk assessment, cyber risk quantification, and risk management. Here is where the guidance lands on each.

Cyber Risk Assessment 

The document does not use the exact phrase “cyber risk assessment.” 

However, it repeatedly calls for: 

  • Risk-based evaluation of AI business cases 
  • Incorporating AI into existing “risk evaluation, mitigation, and monitoring processes” 
  • Conducting regular security audits and assessments of AI systems 

In practice, that is a clear expectation that organizations will carry out structured risk assessments for AI in OT – just not bound to a specific methodology. 

 

Cyber Risk Quantification 

There is no mention of quantitative or monetary risk models (for example, FAIR), and the term “risk quantification” does not appear anywhere in the document.

If you are already using quantitative approaches to measure the financial impact of cyber risk, you can absolutely apply them here, but you will not find prescriptive instructions in this document.

For organizations that want to move from qualitative scoring to financially quantified OT cyber risk, platforms such as DeNexus DeRISK CRQ can help translate AI-related OT cyber exposure into value-at-risk (VaR) metrics that boards and business leaders understand. This supports more defensible decisions about where to invest in controls, resilience, and monitoring as AI capabilities are introduced into production environments. 

 

Risk Management 

Risk management is where the guidance is more explicit: 

  • It adopts NIST AI RMF’s definition of “risk” as probability × consequence, and then tailors it to “AI risk” in OT. 
  • It recommends consulting AI risk management frameworks such as NIST’s AI RMF when the business case for AI in OT has been established. 
  • It emphasizes embedding AI systems into existing security and cybersecurity frameworks and risk processes, rather than treating AI separately. 

So, risk management is central, but framed at the framework and governance level, not at the level of specific scoring formulas. 

 

Virtually all cybersecurity teams struggle with the probability term in the risk equation. In the DeNexus DeRISK CRQ modelling system, probability is broken down into its component elements: (i) the estimated frequency of attacks, (ii) vulnerabilities or missing safeguards, and (iii) the effectiveness of existing safeguards. Through probabilistic attack simulation, it reduces uncertainty by producing not a single probability value but a full distribution that captures both high-frequency low-impact and low-frequency high-impact events. A risk management framework that works with that distribution, rather than a single number, is better equipped to manage cyber risk.
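
DeRISK’s models are proprietary, but the underlying idea of working with a loss distribution rather than a single probability can be illustrated with a toy Monte Carlo simulation. The frequencies and loss figures below are invented purely for illustration:

```python
import random
import statistics

random.seed(42)

# Illustrative scenario parameters (not real data): expected attack attempts per year,
# probability a given attempt defeats existing safeguards, and a skewed loss-severity model.
ATTEMPTS_PER_YEAR = 6.0
P_SAFEGUARDS_FAIL = 0.05
SIMULATIONS = 100_000

def simulate_annual_loss() -> float:
    """Simulate one year: random attempt count, chance that safeguards fail,
    and a lognormal severity so that rare events dominate the tail."""
    attempts = sum(1 for _ in range(24) if random.random() < ATTEMPTS_PER_YEAR / 24)
    loss = 0.0
    for _ in range(attempts):
        if random.random() < P_SAFEGUARDS_FAIL:
            loss += random.lognormvariate(mu=13.0, sigma=1.2)  # median incident ~ $440k
    return loss

losses = sorted(simulate_annual_loss() for _ in range(SIMULATIONS))
expected_loss = statistics.mean(losses)
var_95 = losses[int(0.95 * SIMULATIONS)]

print(f"Expected annual loss: ${expected_loss:,.0f}")
print(f"95% value-at-risk:    ${var_95:,.0f}")
```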

 

Practical Questions for Your Next Leadership Meeting 

If you want to act on this guidance quickly, here are some concrete questions to bring to your next leadership or risk committee discussion. 

  1. Inventory and Exposure
  • Where are AI or AI-like capabilities already present in our OT and adjacent systems? 
  • Which of those can materially affect safety, availability, or regulatory compliance? 
  2. Business Case and Risk
  • For any new AI proposal in OT: what specific business problem is it solving? 
  • Have we explicitly weighed the benefit against added complexity and cyber/safety risk? 
  3. Data and Vendors
  • What OT data is being shared with external vendors or cloud services for AI? 
  • Do our contracts clearly address data use, model training, security responsibilities and support? 
  4. Governance and Assurance
  • Who is accountable for AI risk in OT – at the board, executive and operational levels? 
  • How are we testing and validating AI models before and after deployment? 
  5. Operations and Resilience
  • Do we have documented fallback modes if AI is wrong, unavailable, or compromised? 
  • Are AI scenarios covered in our incident response and safety procedures? 

If the answer to several of these is “we don’t know yet,” that is not a failure – it is your starting point. The CISA-led guidance gives you a shared language with regulators, partners, and vendors for how to move forward. 

As you mature, financially quantified risk insights and quantified vulnerability management can help you prioritize where to act first. DeNexus DeRISK CRQ and DeRISK QVM are examples of platforms that operationalize these concepts for ICS/OT environments, helping organizations connect AI-in-OT exposure, vulnerabilities, and controls to measurable business impact rather than abstract scores.