Summary
Zero Trust Meets the Plant Floor: What the NSA ZIGs Mean for ICS/OT—and What’s Feasible Today
Zero Trust is no longer an abstract philosophy—it’s being operationalized into phased implementation playbooks. In January 2026, the U.S. National Security Agency (NSA) published Phase One and Phase Two of its Zero Trust Implementation Guidelines (ZIGs), designed to move organizations from “Discovery” to a defined Target-level Zero Trust maturity.[1]
That’s a big deal in enterprise IT. But what happens when you try to push the same rigor into industrial control systems and operational technology (ICS/OT)—down to Purdue levels 0–3, where deterministic control, safety constraints, and legacy device capability dominate?
This blog summarizes (1) what the NSA ZIGs are, (2) what is not feasible today in OT environments, and (3) what you can do now vs. what’s hard vs. what likely requires new technology.
1) What are the NSA Zero Trust Implementation Guidelines (ZIGs)?
The NSA Zero Trust Implementation Guidelines (ZIGs) are a structured, phased set of implementation activities intended to help practitioners achieve Target-level Zero Trust as defined by the U.S. Department of Defense/Department of War (DoW) CIO Zero Trust framework. NSA’s press release describes Phase One and Phase Two as mapping “activities, requirements, precursors and successors,” emphasizing modularity and customization.[1]
A phased “activity model,” not a single checklist
The NSA ZIG program currently consists of:
- Primer
- Discovery Phase
- Phase One
- Phase Two
The Primer explains how these phases organize the framework’s activities:
- Discovery: 14 activities / 13 capabilities
- Phase One: 36 activities / 30 capabilities
- Phase Two: 41 activities / 34 capabilities
This is explicitly aligned to NIST SP 800-207 (Zero Trust Architecture) concepts and language.
The conceptual core: identity- and policy-driven enforcement
NIST SP 800-207’s framing is useful because it clarifies what “doing Zero Trust” tends to require in practice: moving from perimeter-based trust to explicit authentication and authorization (including device identity), making access decisions per-request, and using policy decision points (PDP) and policy enforcement points (PEP) as first-class architectural components.[2]
The ZIGs operationalize that into implementation guidance: they break down activities into tasks, list dependencies, and describe expected outcomes and end states—aimed at “skilled practitioners.”
So the ZIGs are best understood as a playbook for building:
- strong identity and access foundations,
- device and service identity (including non-person entities),
- segmentation and enforcement points,
- telemetry/analytics, and
- automation/orchestration
…in a way that approaches “continuous verification” rather than “authenticate once, trust forever.”
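To make the PDP/PEP idea concrete, here is a minimal, illustrative Python sketch of a per-request decision flow: a policy decision point evaluates every request against explicit allow rules plus device posture, and a separate enforcement point applies the verdict. It is not taken from the ZIGs or NIST SP 800-207, and every name and attribute in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    subject: str      # authenticated user or service identity
    device_id: str    # authenticated device (non-person entity)
    resource: str     # e.g., "historian-api"
    action: str       # e.g., "read"

# Hypothetical policy: explicit allow rules; everything else is denied.
POLICY = {
    ("ot-engineer", "historian-api", "read"),
    ("ot-engineer", "eng-workstation", "login"),
}

# Hypothetical device-posture feed keyed by device_id (a policy information point).
DEVICE_POSTURE = {"ews-01": "healthy", "ews-02": "unhealthy"}

def decide(req: AccessRequest) -> bool:
    """Policy decision point (PDP): evaluate every request, default deny."""
    if DEVICE_POSTURE.get(req.device_id) != "healthy":
        return False
    return (req.subject, req.resource, req.action) in POLICY

def enforce(req: AccessRequest) -> str:
    """Policy enforcement point (PEP): apply the PDP verdict per request."""
    return "permit" if decide(req) else "deny"

if __name__ == "__main__":
    print(enforce(AccessRequest("ot-engineer", "ews-01", "historian-api", "read")))  # permit
    print(enforce(AccessRequest("ot-engineer", "ews-02", "historian-api", "read")))  # deny (posture)
```

The point is the default-deny shape and the per-request evaluation, not the toy policy store.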
2) What isn’t feasible today in ICS/OT environments?
If you try to implement the ZIGs “literally” down to Purdue levels 0–1 (sensors/actuators, PLCs/RTUs/IEDs), you quickly hit constraints that are structural, not merely “immature tooling.”
NIST’s OT security guide is blunt: some OT components (e.g., PLCs, controllers, HMI) may not support the technologies or protocols required to fully integrate with a Zero Trust Architecture, and therefore ZTA “will not be practical for some OT devices.” NIST recommends applying ZTA to compatible devices, typically found at higher functional levels (e.g., Purdue levels 3–5 and the OT DMZ).[3]
That single paragraph captures the heart of the issue: in OT, many lower-layer devices and workflows cannot tolerate the enforcement patterns and dependencies that enterprise Zero Trust assumes.
What breaks first: determinism, safety, and lifecycle reality
ICS/OT environments differ from IT in ways that directly oppose common ZT implementation patterns:
- Deterministic timing and operational continuity often trump confidentiality.
- Safety and process integrity are primary constraints.
- Legacy lifecycles mean devices can remain in service for decades, and patching or upgrading may be constrained by validation and downtime windows.
- Many endpoints cannot run agents (EDR, posture assessment, continuous telemetry collectors). NIST explicitly notes that anti-malware may not be available for systems like PLCs and DCS and recommends compensating controls instead.[3]
What is “not feasible” (today) at Purdue 0–1
In practical terms, the following are commonly not feasible across the bulk of installed ICS/OT at levels 0–1:
- Per-device continuous posture enforcement
“Comply-to-connect” or continuous health gating sounds great in IT; in OT it can create unsafe failure modes if a control workstation, HMI, or controller is blocked at the wrong moment. NIST cautions that encryption and other security controls can add latency and may not be suitable for all OT devices.[3]
- Universal strong device identity (PKI) across field/control devices
ZIG-like rigor assumes scalable identity, credential lifecycle, and revocation behavior. Many controllers and field devices lack the UX, secure key storage, time-sync reliability, maintenance tooling, and operational processes needed to manage certificates safely at scale.
- Inline policy enforcement per “resource request” on control traffic
NIST SP 800-207 expects PDP/PEP-style enforcement close to resources.[2] But many industrial protocols and cyclic control flows were never designed for inline authentication/authorization handshakes, and they cannot tolerate dropped packets, added latency, or temporary policy-engine outages.
- “Default-deny everything” at the message level
In OT you typically default-deny routes and conduits, not every intra-cell control exchange, unless the environment has been modernized and engineered specifically for that.
The right takeaway is not “Zero Trust doesn’t apply to OT.” It’s: Zero Trust must be implemented with OT-safe control points—often at boundaries, supervisory layers, and mediated access paths.
3) What’s achievable today vs. what’s difficult vs. what needs new technology in ICS/OT?
A workable strategy is to separate “Zero Trust principles” from “Zero Trust mechanics”:
- Principles (least privilege, assume breach, minimize implicit trust) are broadly applicable.
- Mechanics (continuous posture gating, per-request policy enforcement, PKI everywhere) are variably feasible depending on Purdue layer and device generation.
Below is a practical capability map.
Achievable today (high confidence, deployable now)
A) Brokered OT–IT connectivity via DMZ patterns
A modern best practice is to keep legacy/industrial control protocols inside isolated OT segments and broker OT–IT exchange through a DMZ, using secure interoperable protocols (e.g., OPC UA over TLS, MQTT over TLS, HTTPS) and historian replication patterns.[4]
This is “Zero Trust compatible” because you:
- reduce implicit trust by eliminating direct inbound IT-to-OT access,
- force access through a small number of enforceable choke points,
- and make authentication/authorization/logging viable where it matters.
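As a sketch of what a brokered, outbound-only exchange can look like, the snippet below pushes historian samples from an OT-side collector to a hypothetical DMZ endpoint over HTTPS with mutual TLS, using only the Python standard library. The host name, API path, and certificate file names are placeholders, not any product’s interface.

```python
import http.client
import json
import ssl

# Hypothetical certificate material issued to the OT-side collector.
CA_BUNDLE = "dmz-ca.pem"
CLIENT_CERT = "collector-cert.pem"
CLIENT_KEY = "collector-key.pem"

def push_samples(samples: list[dict]) -> int:
    """Push a batch of historian samples to the DMZ broker (outbound only)."""
    ctx = ssl.create_default_context(cafile=CA_BUNDLE)             # verify the DMZ endpoint
    ctx.load_cert_chain(certfile=CLIENT_CERT, keyfile=CLIENT_KEY)  # mutual TLS

    conn = http.client.HTTPSConnection("historian-replica.dmz.example", 443, context=ctx)
    try:
        conn.request(
            "POST",
            "/api/v1/samples",
            body=json.dumps(samples),
            headers={"Content-Type": "application/json"},
        )
        return conn.getresponse().status
    finally:
        conn.close()

if __name__ == "__main__":
    status = push_samples([{"tag": "FIC-101.PV", "value": 42.7, "ts": "2026-02-04T12:00:00Z"}])
    print("DMZ broker responded:", status)
```

Because the OT side initiates the connection, no inbound IT-to-OT path is required; the enforceable choke point lives in the DMZ.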
B) Strong user access controls for OT administration and remote access
Even if you cannot identity-enable every PLC, you can identity-enable the humans and systems that administer them:
- jump hosts / intermediate systems,
- MFA and privileged access workflows,
- session recording, time-bounded vendor access
This is standard practice in high-consequence environments. For example, NERC CIP-005-7 requires interactive remote access to use an Intermediate System such that the initiating asset does not directly access the target asset.[5]
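As a small illustration of “time-bounded vendor access” (not any specific PAM product’s API), a jump host or access broker can treat every vendor grant as an explicit, expiring record and deny anything outside the approved window:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class VendorGrant:
    vendor: str
    target_asset: str          # e.g., "PLC-line3-cell2" (hypothetical)
    not_before: datetime
    not_after: datetime
    mfa_verified: bool

def access_allowed(grant: VendorGrant, now: datetime | None = None) -> bool:
    """Deny by default: access requires MFA and an unexpired, pre-approved window."""
    now = now or datetime.now(timezone.utc)
    return grant.mfa_verified and grant.not_before <= now <= grant.not_after

# Example: a four-hour maintenance window approved in advance.
grant = VendorGrant(
    vendor="acme-integrator",
    target_asset="PLC-line3-cell2",
    not_before=datetime(2026, 2, 10, 8, 0, tzinfo=timezone.utc),
    not_after=datetime(2026, 2, 10, 12, 0, tzinfo=timezone.utc),
    mfa_verified=True,
)
print(access_allowed(grant))
```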
C) Zero Trust for Purdue level 2–3 endpoints and services
Most plants can meaningfully apply ZT-style controls to:
- engineering workstations,
- HMIs,
- SCADA servers,
- historians, OT application servers,
- patch management and backup services
This is aligned with NIST’s recommendation to apply ZTA more readily to the higher levels (e.g., Purdue 3–5 and the OT DMZ).[3]
Difficult today (possible in some environments, but operationally heavy)
A) PKI-based device identity at “controller scale”
Some industrial ecosystems support strong identity and encryption, but scaling it is the hard part.
Examples of technologies that can support stronger device identity/crypto:
- OPC UA uses Application Instance Certificates (X.509) for application-level security and secure channel establishment.[6]
- CIP Security (EtherNet/IP) supports endpoint authentication using X.509 certificates or pre-shared keys, with options for device enrollment.[7]
However, the operations side is the barrier: enrollment, renewal, replacement workflows, revocation handling, and safe behavior under outage conditions. Without OT-friendly lifecycle tooling, PKI becomes fragile and labor-intensive.
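For a sense of what the client side looks like where it is supported, the sketch below opens a signed-and-encrypted OPC UA session using the python-opcua library (asyncua offers an equivalent async API). The endpoint URL, node id, and certificate file names are placeholders, and it assumes the server already trusts the client’s application instance certificate, which is exactly the provisioning and lifecycle work described above.

```python
from opcua import Client  # python-opcua; asyncua is the maintained async successor

ENDPOINT = "opc.tcp://controller.example:4840"  # placeholder endpoint

client = Client(ENDPOINT)
# Security policy, message mode, and this application's certificate/key.
# The server must already trust this certificate (out-of-band provisioning).
client.set_security_string(
    "Basic256Sha256,SignAndEncrypt,client-cert.der,client-key.pem"
)

client.connect()
try:
    node = client.get_node("ns=2;i=1001")   # placeholder node id
    print("value:", node.get_value())
finally:
    client.disconnect()
```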
B) Fine-grained microsegmentation inside OT cells
Segmentation at zone boundaries is doable. Microsegmentation inside a control cell—without breaking cyclic traffic, multicast/broadcast dependencies, and vendor support assumptions—requires deep traffic baselining, rigorous change control, and often vendor coordination.
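To illustrate just the baselining step (with invented flow records), observed flow tuples can be aggregated into candidate allow rules for human review before any enforcement is switched on; real baselines need weeks of observation, maintenance-mode traffic, and vendor input.

```python
from collections import Counter

# Hypothetical flow records observed passively inside a cell: (src, dst, proto, dst_port)
observed_flows = [
    ("plc-01", "hmi-01", "tcp", 44818),   # EtherNet/IP
    ("plc-01", "hmi-01", "tcp", 44818),
    ("ews-01", "plc-01", "tcp", 44818),
    ("hmi-01", "historian-01", "tcp", 5450),
]

def baseline(flows):
    """Aggregate observed flows into candidate allow rules with hit counts."""
    counts = Counter(flows)
    return [
        {"src": s, "dst": d, "proto": p, "port": port, "observed": n}
        for (s, d, p, port), n in sorted(counts.items(), key=lambda kv: -kv[1])
    ]

for rule in baseline(observed_flows):
    print(rule)
# Anything NOT in the reviewed baseline would later be denied, which is why the
# review/change-control step matters more than the aggregation itself.
```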
C) Continuous monitoring that is “complete” enough for policy automation
OT teams can deploy logging and network monitoring at higher layers, but turning that into reliable automated enforcement (“orchestration”) is hard because false positives can have production consequences.
Likely to require new technology (or major product evolution)
If “ZIG-level rigor” means pushing identity and policy deeper—into levels 1–0—the industry will need better primitives and safer defaults.
A) First-class device identity with safe lifecycle automation
OT needs device identity that is:
- easy to commission,
- robust under replacement,
- automatically renewable,
- and safe under partial connectivity.
CIP Security’s work on profiles for constrained devices points in this direction, but broad adoption across the installed base is still limited.[7]
B) Signed/attested controller state and control logic
A future “ZT-native PLC” world would include:
- cryptographically signed logic/configuration,
- remote attestation of controller firmware and state,
- and vendor-neutral verification and inventory of what’s running where.
Today, pieces exist in pockets, but it’s not yet a broadly interoperable norm.
C) Determinism-friendly cryptography and enforcement patterns
NIST highlights that encryption adds latency and may not suit all OT devices.[3]
Newer OT architectures need security mechanisms designed explicitly for bounded latency/jitter budgets, with engineered fail-safe behavior.
D) Policy enforcement that fails safely (not merely “fails closed”)
Enterprise ZT often prefers “deny by default.” OT often must “fail operational” in tightly defined scenarios. That implies a different enforcement design philosophy: safety-aware PEPs, graded modes, and predictable degradation.
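A conceptual sketch of that philosophy (not a product design): a safety-aware PEP distinguishes requests it can safely deny from traffic it must forward in a degraded “monitor-only” mode when the policy engine is unreachable, while logging everything for later review. The traffic classes and modes below are invented for illustration.

```python
import logging
from enum import Enum, auto

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ot-pep")

class Mode(Enum):
    ENFORCE = auto()        # PDP reachable: deny by default
    MONITOR_ONLY = auto()   # PDP unreachable: fail operational, log everything

# Hypothetical classification of traffic by operational criticality.
SAFETY_CRITICAL = {"cyclic-io", "safety-interlock"}

def pep_decision(traffic_class: str, pdp_allows: bool | None, mode: Mode) -> bool:
    """Return True to forward the traffic, False to block it."""
    if mode is Mode.MONITOR_ONLY:
        # Degraded mode: never block, but record what would have been denied.
        log.warning("monitor-only: forwarding %s without a PDP verdict", traffic_class)
        return True
    if traffic_class in SAFETY_CRITICAL:
        # Even in enforce mode, safety-critical flows are engineered, not gated per request.
        return True
    allowed = bool(pdp_allows)
    if not allowed:
        log.info("denied %s by policy", traffic_class)
    return allowed

print(pep_decision("vendor-remote-session", pdp_allows=False, mode=Mode.ENFORCE))  # False
print(pep_decision("cyclic-io", pdp_allows=None, mode=Mode.MONITOR_ONLY))          # True
```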
Bringing it together: a realistic “ZIG-to-OT” approach
A pragmatic interpretation is:
- Apply ZIG-style Zero Trust directly at Purdue levels 2–3 and in the OT DMZ, where identity, logging, patch/vuln processes, and enforceable boundaries are viable.[3]
- Apply ZIG principles indirectly at levels 0–1 by securing around them: isolate protocols, restrict conduits, broker access, and reduce the number of paths through which those devices can be reached.
- Modernize selectively where new protocols and devices support stronger identity and cryptography (e.g., OPC UA security, CIP Security), but treat “full depth” ZT as a modernization program, not a retrofit expectation.[6]
In other words: the NSA ZIGs are an excellent blueprint for where Zero Trust maturity is heading, but in ICS/OT, the “last mile” into Purdue 0–1 is constrained by physics, safety engineering, and the long tail of legacy control technology.
If you want more details, continue reading.
Deep-Dive Version
What NSA released and how the guidance is meant to be used
On Jan. 30, 2026, NSA published Phase One and Phase Two of its Zero Trust Implementation Guidelines (ZIGs) to outline the activities needed to reach “Target-level” Zero Trust maturity as defined by the DoW CIO Zero Trust Framework.[1]
A few framing points NSA makes up front:
- The ZIGs are phased and modular: Phase One + Phase Two are designed to move an organization from Discovery to Target-level implementation by mapping out activities, requirements, and activity relationships (e.g., precursors and successors).[1]
- The ZIGs are not positioned as a “one-size-fits-all,” prescriptive, sequential checklist, and they are vendor-agnostic (technology examples are representative, not exhaustive).
- NSA explicitly encourages readers to start with the earlier releases (Primer and Discovery) before Phase One/Two.[1]
How the ZIGs are organized (the “elements”)
1) The DoW CIO Zero Trust Framework alignment
The ZIGs are organized around the DoW CIO framework’s phased activity model. A key diagram shows:
- 152 total activities in the framework
- Target level: 91 activities (Discovery 14, Phase One 36, Phase Two 41)
- Advanced level: 61 activities (Phase Three 37, Phase Four 24)
2) Pillars → Capabilities → Activities
NSA’s methodology closely follows the framework structure:
- Pillars are the top-level structure.
- Each pillar contains Capabilities.
- Capabilities are implemented via Activities, and the ZIG methodology treats the Activity level as the “lowest-level element,” decomposing activities into discrete tasks and recommended actions/processes.
3) What’s inside each Capability
For each capability, the ZIGs include:
- a Scenario (illustrative use cases),
- Positive Impacts (benefits),
- and Technology (a representative list, not all possible technologies).
4) What’s inside each Activity
Activities are presented in a structured “Activity Table” format that includes:
- ID
- Description
- Predecessor(s) and Successor(s)
- Expected Outcomes
- End State
Then the ZIG expands each activity with:
- Considerations (prereqs, challenges, dependencies, lessons learned),
- Implementation (a roadmap and high-level tasks/process steps derived from the framework’s description/outcomes/end state),
- Summary (including items like a readiness assessment, strategic insights, and expected outcomes, presented as a workflow-style summary).
5) Definitions, roles, and appendices
A few additional structural elements that matter in practice:
- The ZIGs define “Enterprise” (the higher-level entity responsible for policies/guidance) versus “Component” (the organization implementing ZT).
- The ZIGs include appendices for Terms/Definitions, Acronyms, References, and Activity Task Diagrams.
Core Zero Trust principles and design concepts emphasized by NSA
Drivers and mindset
The guidelines anchor Zero Trust adoption to U.S. government direction such as EO 14028 and NSM-8, and they describe a ZT mindset as assuming traffic/users/devices/infrastructure may be compromised—driving rigorous authentication/authorization and continuous validation.
The “Adopt a Zero Trust Mindset” section emphasizes practices like:
- coordinated monitoring/management/defensive ops,
- continuously verifying resource requests and traffic,
- continuously validating device and infrastructure security posture,
- preparing for rapid response and recovery.
Guiding principles (NIST SP 800-207–aligned language)
NSA highlights core ZT principles as:
- Never trust, always verify
- Assume breach
- Verify explicitly
Design concepts (mission-driven and DAAS-centered)
NSA frames ZT architecture design around:
- defining mission outcomes and identifying critical DAAS (Data, Assets, Applications, and Services),
- architecting “inside out” around protecting critical DAAS,
- determining who/what needs access to DAAS to create access control policy,
- inspecting/logging traffic “before acting.”
Strategic goals and enabling functions
The documents also reference four high-level strategic goals (as described in the DoW ZT strategy context):
ZT cultural adoption, secured and defended information systems, technology acceleration, and ZT enablement. They also call out policy and training as important enablers (even if these are not handled in depth within the ZIG scope).
The framework pillars (as depicted in the ZIGs)
The ZIGs use seven pillars (shown in the pillar diagram), with the intent that they work together:
- User (continuous authentication and monitoring of user activity patterns)
- Device (device health/status to inform risk decisions)
- Application & Workload (secure applications through runtimes like containers/VMs/hypervisors)
- Data (transparency/visibility, encryption, data tagging)
- Network & Environment (segment/isolate/control the network with granular policy)
- Automation & Orchestration (automated security response actions based on defined processes/policy, potentially AI-enabled)
- Visibility & Analytics (analyze events/behaviors, potentially applying AI/ML for better detection and real-time access decisions)
Highlights from Phase One (Target level foundation)
NSA describes Phase One as 36 activities to build on/refine the environment and establish a secure foundation supporting 30 ZT capabilities in this phase.[1]
Below are practical highlights (by pillar) based on the Phase One contents:
User pillar: establish strong identity and privileged access foundations
Phase One includes capabilities/activities centered on:
- Organizational MFA and Identity Provider (IdP) integration
- Privileged Access Management (PAM) initial implementation/migration steps (Part 1)
- Identity Federation / Credentialing and Identity Lifecycle Management (ILM) foundations
- Default-deny for users (“deny user by default”) and initial continuous authentication constructs
Device pillar: inventory, manage, and continuously assess endpoints
Phase One emphasizes getting the endpoint/device baseline under control:
- Device inventory and identity linkage (including NPE/PKI concepts as shown in the contents list)
- Remote access capabilities
- Default-deny for devices
- Asset/vulnerability/patch management
- Endpoint management (UEM/UEDM/EDM Part 1 concepts in the contents list)
- EDR integration with comply-to-connect style enforcement (as listed)
Application & Workload pillar: build a secure delivery pipeline and baseline workload controls
Phase One’s contents highlight:
- building a DevSecOps software factory (Parts 1 and 2)
- approved binaries/code practices
- a first wave vulnerability management program (Part 1)
- initial resource authorization and software-defined compute (SDC) resource authorization activities (Part 1)
Data pillar: governance + tagging + initial protection and monitoring
Phase One focuses heavily on data fundamentals needed for ZT policy:
- data tagging standards and interoperability standards
- implement tagging/classification tools
- file activity monitoring (Part 1)
- implement data rights management (DRM)/protection tools (Part 1)
- establish enforcement points
Network & Environment pillar: map and segment
The Phase One contents emphasize the network steps you need before finer-grained policy can work:
- data flow mapping (to understand what must communicate)
- software-defined networking (SDN) programmable infrastructure
- macro-segmentation (e.g., datacenter macro-segmentation)
- micro-segmentation
Automation & Orchestration + Visibility & Analytics: start building the feedback loop
Phase One also includes early building blocks for scalable ZT operations:
- policy decision/orchestration building blocks (as a pillar theme)
- SOAR tooling (listed as an activity)
- API standardization (Part 1)
- SOC/IR workflow enrichment (Part 1)
- log handling and alerting foundations (e.g., log parsing, threat alerting Part 1, asset/alert correlation, initial analytics/CTI program activities)
Highlights from Phase Two (Target level integration)
NSA describes Phase Two as 41 activities that initiate integration of core ZT solutions inside the component environment and enable 34 capabilities specific to the phase.[1]
NSA’s own Phase Two “Purpose” section is explicit about the progression:
- Discovery collects information about the environment (DAAS, users/person/non-person entities, etc.).
- Phase One builds/refines the environment to establish a foundation.
- Phase Two marks the beginning of integrating distinct ZT fundamental solutions within the environment.
User pillar: conditional access, behavior/context, and stronger ICAM integration
Phase Two’s contents show a move toward richer access decisions:
- Conditional user access (including application-based permissions and rule-based dynamic access activities)
- PAM continuation (Part 2)
- more explicit behavioral/contextual identity (including UEBA/UAM tooling as listed)
- periodic authentication as part of continuous auth evolution
- an integrated ICAM platform and enterprise PKI/IdP activities
Device pillar: compliance-to-connect, real-time inspection, BYOD/IoT, and XDR
Phase Two expands device controls toward continuous authorization:
- device detection/compliance tied to comply-to-connect (C2C) and compliance-based network authorization
- real-time inspection elements (including application control / file integrity monitoring tools as listed)
- managed BYOD/IoT support
- endpoint management continuation (e.g., EDM Part 2)
- XDR tools and integration with C2C (Part 1)
Application & Workload pillar: automate remediation and validate continuously
Compared with Phase One’s baseline pipeline establishment, Phase Two highlights:
- automated application security and code remediation (Part 1)
- vulnerability management continuation (Part 2)
- continual validation
- resource authorization continuation (Part 2) including SDC authorization continuation
Data pillar: policy-driven storage/access and analytics-enforced protections
Phase Two’s contents show a shift from “standards + initial tooling” toward enforceable, integrated data policy:
- develop software-defined storage (SDS) policy
- manual data tagging (Part 1) as a step in making tags usable at scale
- monitoring/protection continuations: file activity monitoring (Part 2), DRM tools (Part 2)
- DRM enforcement via data tags and analytics
- DLP enforcement via data tags and analytics
- explicit data access control integrations (including DAAS + SDS policy integration, and integrating solutions/policy with enterprise IdP)
Network & Environment pillar: more granular segmentation + protect data in transit
Phase Two pushes into deeper segmentation and transport protection:
- segmentation of flows into control/management/data planes (SDN-oriented)
- macro-segmentation constructs (e.g., base/camp/post/station macro-segmentation)
- application and device micro-segmentation
- protect data in transit
Automation & Orchestration + Visibility & Analytics: operationalizing ZT with ML and baselining
Phase Two’s contents emphasize the operational maturity needed to run ZT continuously:
- enterprise security profile and broader workflow provisioning/integration
- machine learning (ML) use for data tagging/classification (as listed)
- API standardization Part 2
- SOC/IR workflow enrichment (Part 2)
- analytics maturity: log analysis, threat alerting (Part 2), user/device baselines, baseline behavior, and continued CTI integration activities
The Phase One → Phase Two “through-line” (what changes as you progress)
A useful way to interpret the two phases (based on NSA’s own wording) is:
- Phase One = “make the environment ZT-ready”
Establish identity/device/app/data/network baselines and the enforcement/monitoring scaffolding required for ZT controls to work reliably.[1]
- Phase Two = “start integrating the core ZT solutions”
Move from baseline controls toward integrated policy decisioning, richer context/behavior signals, tighter data-policy enforcement, and more automated operations.[1]
Finally, NSA flags that Phase Two’s alignment may not perfectly match older NSA ZT CSI publications and notes an intent to update the CSIs in 2026 to better align with these ZIGs.
Challenges Implementing the NSA ZIGs in ICS/OT
Implementing the NSA Zero Trust Implementation Guidelines (ZIGs) “as-written” all the way down into ICS/OT Purdue levels 0–3 runs into a hard reality: the ZIGs assume you can apply identity-centric, policy-driven, continuously verified access control and telemetry across users, devices (including non-person entities), networks, and services. Many lower-layer OT components either cannot support those control points or cannot tolerate the latency and fragility of enforcing them inline.
NIST says this plainly in its OT security guide: some OT components (e.g., PLCs, controllers, HMI) may not support the technologies/protocols required to fully integrate with a Zero Trust Architecture, and therefore a ZTA “might not be practical for some OT devices”; it suggests applying ZTA to compatible devices typically found at Purdue Levels 3–5 and the OT DMZ instead.
Below is what tends to break, what can be done today, and what would likely require new(ish) OT product capabilities to achieve NSA-ZIG-like rigor at levels 0–3.
What the NSA ZIGs imply when pushed into OT (why this is hard)
A few ZIG expectations become especially challenging in lower Purdue layers:
1) Centralized identity + MFA, and retiring local accounts
Phase One guidance explicitly emphasizes centralizing identities via an IdP/MFA and retiring local/built-in accounts, denying access if multiple factors aren’t presented.
That maps well to engineering workstations, jump hosts, historians, OT application servers—but much less well to PLCs, RTUs, safety controllers, and field devices that may only support local users, shared accounts, or vendor-specific mechanisms.
2) “Non-person entity” (device) identity, PKI, lifecycle management
Phase Two puts substantial weight on PKI governance and certificate lifecycle processes spanning User/Person Entities and Non-Person Entities.
In OT, doing true device identity at scale means you need (at minimum): device certificate enrollment, renewal, revocation handling, secure time, resilient key storage, and safe failure modes. Many legacy devices don’t have the compute, storage, UX, or maintenance model for this.
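A sketch of the lifecycle behavior OT-friendly tooling would need (purely illustrative, tied to no specific vendor mechanism): renewal starts well before expiry, and a bounded grace period keeps a device that cannot reach its CA operating on its current credential instead of dropping control communications outright.

```python
from datetime import datetime, timedelta, timezone

RENEW_BEFORE = timedelta(days=30)   # start renewal attempts this far before expiry
GRACE_PERIOD = timedelta(days=7)    # tolerate expiry this long if the CA is unreachable

def credential_action(expires_at: datetime, ca_reachable: bool,
                      now: datetime | None = None) -> str:
    """Decide what a device agent should do about its current certificate."""
    now = now or datetime.now(timezone.utc)
    if now >= expires_at + GRACE_PERIOD:
        return "alarm-and-escalate"          # never silently trust forever
    if now >= expires_at:
        return "operate-in-grace-and-retry"  # fail operational, keep retrying renewal
    if now >= expires_at - RENEW_BEFORE:
        return "renew-now" if ca_reachable else "retry-later"
    return "ok"

exp = datetime(2026, 3, 1, tzinfo=timezone.utc)
print(credential_action(exp, ca_reachable=False, now=datetime(2026, 2, 15, tzinfo=timezone.utc)))
# -> "retry-later": inside the renewal window, but the CA is unreachable
```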
3) Defined policy enforcement points (PEPs/PDPs) and “cataloging” enforcement components
Phase Two also assumes you can enumerate and manage ZT components like PDPs, PEPs, PIPs and integrate them into a service catalog/CMDB.
In OT, you can do this at Level 2/3 and at cell/area boundaries, but you usually cannot insert a “PEP per resource request” in front of a time-critical PLC I/O cycle without risking operational impact.
OT-specific obstacles at Purdue levels 0–3
Core constraint: timing, uptime, safety, and resource limits
NIST highlights OT’s time-critical/deterministic response requirements, high availability expectations, and resource constraints that often exclude “typical contemporary IT security capabilities,” warning that indiscriminate use of IT security practices can cause availability and timing disruptions and that adding resources/features “may not be possible.”
This is the single biggest reason “full ZIG-style ZT everywhere” collides with plant-floor reality.
Purdue Level 0 (field I/O: sensors/actuators)
Typical obstacles
- Often not IP-based (4–20 mA, HART, discrete wiring, fieldbus variants), or uses very constrained embedded comms.
- No practical place to do identity, MFA, posture checks, EDR, agent-based telemetry.
- Security is dominated by physical access control, safety design, and segregation.
What “ZIG-like” would look like
- You’d push controls upward: secure zones/conduits, hard boundaries, and brokered data exchange rather than authenticating every L0 element directly. NIST shows OT segmentation patterns using Purdue levels and a DMZ.
Bottom line
- Full ZIG alignment at Level 0 is generally not feasible with today’s typical instrumentation. Security here is mostly achieved indirectly (architecture + physical + boundary controls).
Purdue Level 1 (basic control: PLCs, RTUs, IEDs, controllers)
Typical obstacles
- Many controllers run proprietary/embedded OS, are resource constrained, and may lack encryption/logging/password protections.
- Legacy/installed-base protocols often weren’t designed for strong authn/authz.
- High consequence of misconfiguration; changes require outage windows and heavy validation.
What can be supported today (selectively)
Modern industrial ecosystems are moving toward stronger cryptography/identity in some segments:
- OPC UA supports secure channels with confidentiality/integrity/authentication, application instance certificates, and security modes like Sign / SignAndEncrypt.[8]
- CIP Security (EtherNet/IP) uses TLS/DTLS, supports endpoint authentication (X.509 or PSKs), integrity, and optional encryption—and even defines a resource-constrained profile and certificate provisioning approaches.[7]
- DNP3 Secure Authentication provides cryptographically strong authentication and message integrity.
What remains hard
- Retrofits: adding strong identity/encryption to an existing mixed-vendor Level 1 environment is often blocked by device capability and operational risk.
- Certificate lifecycle (enroll/renew/revoke) for hundreds/thousands of controllers is still operationally heavy unless vendors provide “OT-friendly” tooling and safe defaults.
Purdue Level 2 (supervisory control: HMI, SCADA servers, engineering workstations)
This is where many ZIG practices become achievable—with OT-specific care.
What’s achievable now
- Centralized IdP and MFA for users accessing OT systems (especially remote/privileged actions) aligns strongly with ZIG Phase One intent.
- EDR/allowlisting/FIM on Windows-based HMIs and servers is feasible if validated for operational impact (NIST explicitly warns to ensure tools do not adversely impact performance/safety).
- Privileged access workflows and session recording via jump hosts.
What’s hard
- Continuous posture enforcement and automatic quarantine (“comply-to-connect” style) can create unsafe failure modes if it blocks an HMI/operator station at the wrong time. OT often needs “fail operational” designs, not “fail closed” by default.
Purdue Level 3 (site operations: historians, batch/MES interfaces, OT domain services, patch mgmt)
Level 3 is where you can implement a lot of “ZT plumbing” without touching Level 0/1 timing.
What’s achievable now
- Strong segmentation between L3 and L2/L1.
- Service identity and API-based data flows.
- Central logging/monitoring, asset inventory, vulnerability/patch management (often via passive discovery + carefully scheduled maintenance).
NIST shows the Purdue segmentation concept and the role of boundary devices/DMZs to control communications.
The “architecture move” that makes ZIG-to-OT workable: broker, don’t expose
A practical way to reconcile ZIG expectations with OT constraints is to treat OT control protocols as toxic-to-route across trust boundaries, and instead broker data/services through controlled chokepoints.
A recent OT connectivity guidance document recommends:
- Restrict industrial control protocols (e.g., Modbus, OPC DA, EtherNet/IP) to isolated OT segments
- Broker OT–IT data exchange through a DMZ
- Use secure standardized protocols for interoperability (e.g., OPC UA over TLS, MQTT over TLS, HTTPS)
- Replicate historians into the DMZ via a unidirectional mechanism where needed, and have IT query DMZ data via secure HTTP-based APIs rather than directly accessing OT.
That pattern is very compatible with ZIG ideas (least privilege, explicit authorization, strong enforcement points), because you move “policy enforcement” to places you can safely control.
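As a sketch of the “IT queries the DMZ, never the OT network” idea, the DMZ replica can expose a deliberately narrow, read-only HTTP API. Everything below (port, path, data) is hypothetical, and a real deployment would put TLS termination and authentication in front of it.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical replicated historian data living in the DMZ, not in the OT network.
REPLICA = {"FIC-101.PV": {"value": 42.7, "ts": "2026-02-04T12:00:00Z"}}

class ReadOnlyHistorianAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # Only one narrowly scoped, read-only route is exposed to the IT side.
        if self.path.startswith("/api/v1/tags/"):
            tag = self.path.rsplit("/", 1)[-1]
            body = json.dumps(REPLICA.get(tag, {})).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    # No do_POST / do_PUT / do_DELETE: writes toward OT are simply not implemented.

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ReadOnlyHistorianAPI).serve_forever()
```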
Can today’s DCS/SCADA/PLC technology support “NSA ZIG-level” rigor at L0–L3?
Short answer
- Level 2–3: Mostly yes (with engineering discipline and OT-safe failure modes).
- Level 1: Partially—depends heavily on vendor generation, protocol stack (OPC UA / CIP Security / secure DNP3 / IEC 62351 adoption), and how much you can redesign.
- Level 0: Generally no—you secure it indirectly.
This aligns with NIST’s recommendation that full ZTA integration may not be practical for some OT devices and should focus on higher levels (L3–L5 + OT DMZ).
What “support” really means in OT
In OT, “supporting ZT” often means:
- You don’t enforce identity+policy on every sensor/actuator message.
- You do enforce identity+policy at:
- remote access points,
- engineering action points (logic download/upload, firmware change),
- inter-zone conduits,
- OT–IT data egress points,
- and on the systems that mediate control.
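For example, a mediating engineering gateway (hypothetical, not a specific vendor feature) could require an explicit, logged authorization before forwarding a high-consequence action such as a logic download, while letting routine monitoring reads pass:

```python
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("eng-gateway-audit")

# Hypothetical high-consequence engineering actions that must be explicitly authorized.
GATED_ACTIONS = {"logic_download", "logic_upload", "firmware_update"}

# Hypothetical change-ticket approvals: (user, target, action) tuples.
APPROVED_CHANGES = {("alice", "plc-line3-cell2", "logic_download")}

def mediate(user: str, target: str, action: str) -> bool:
    """Forward the action only if it is routine or explicitly approved; audit both paths."""
    if action not in GATED_ACTIONS:
        audit.info("routine action %s by %s on %s forwarded", action, user, target)
        return True
    approved = (user, target, action) in APPROVED_CHANGES
    audit.info("gated action %s by %s on %s -> %s",
               action, user, target, "approved" if approved else "denied")
    return approved

print(mediate("alice", "plc-line3-cell2", "logic_download"))  # True (approved change)
print(mediate("bob", "plc-line3-cell2", "firmware_update"))   # False (no approval)
```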
What this looks like by industry
1) Typical manufacturer (discrete or process manufacturing)
Likely state: heterogeneous brownfield, long-lived PLCs, vendor remote support needs, flat-ish networks in pockets.
A realistic ZIG-to-OT implementation
- Level 3 / 3.5: Build a true OT DMZ, broker IT/OT data (historian replication), central logging, asset inventory, and remote access gateways.
- Level 2: MFA + PAM for engineering workstations and operator consoles; hardening; controlled admin paths.
- Level 1: Cell/area segmentation; protocol-aware firewalls; selectively adopt secure protocols where equipment supports it.
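To make the cell/area segmentation item above concrete, a zone-and-conduit policy can be written down as data and enforced at the boundary firewalls. This is a toy representation with invented zones, protocols, and ports, not any firewall’s rule syntax.

```python
# Hypothetical zone-to-zone conduits; anything not listed is denied at the boundary.
CONDUITS = {
    ("cell-area-2", "site-ops"): {("tcp", 44818)},   # EtherNet/IP up to supervisory
    ("site-ops", "ot-dmz"):      {("tcp", 443)},     # brokered HTTPS northbound
    ("ot-dmz", "it-network"):    {("tcp", 443)},     # IT queries DMZ APIs only
}

def conduit_allows(src_zone: str, dst_zone: str, proto: str, port: int) -> bool:
    """Default deny between zones; only explicitly engineered conduits pass."""
    return (proto, port) in CONDUITS.get((src_zone, dst_zone), set())

# Direct IT-to-OT access is not a defined conduit, so it is denied:
print(conduit_allows("it-network", "cell-area-2", "tcp", 44818))  # False
print(conduit_allows("site-ops", "ot-dmz", "tcp", 443))           # True
```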
Main obstacles
- Production downtime risk, validation burden, vendor constraints, and mixed device capability. OT’s availability and timing constraints are fundamental here.
2) Hyperscale datacenter
There are two “OTs” here:
- Datacenter IT (servers/network/storage) — already very ZT-friendly.
- Facility OT (power/cooling, BMS/DCIM, generators, switchgear controls) — more like classic OT.
Practical picture
- The IT side can implement ZIG-like controls deeply (service identity, mutual TLS, microsegmentation).
- Facility systems should follow the broker/DMZ model: isolate building/power controls, strictly govern remote access, and export telemetry via secure APIs/brokers.
Common advantage
- Hyperscalers often have the budget and automation maturity to operate PKI and identity at scale—making some Phase Two PKI/NPE practices more reachable than in manufacturing.
3) Power generation / grid environments
Power environments already have regulatory drivers around segmentation and controlled remote access. For example, NERC CIP requires for interactive remote access:
- Use an Intermediate System so the initiating asset does not directly access the cyber asset
- Use encryption terminating at the intermediate system
- Require multi-factor authentication for interactive remote access sessions
Where ZIG rigor fits
- ZIG-aligned controls map well to: access governance, jump hosts, MFA/PAM, logging/analytics, and strict conduits between zones.
- Deep cryptographic protection on substation/process bus traffic is possible in some cases (e.g., the IEC 62351 family for power communications, plus secure DNP3, which provides cryptographically strong authentication and message integrity).
Main obstacles
- Deterministic performance and operational safety/reliability constraints (NIST’s OT constraints apply strongly).
- Interoperability maturity: even when standards exist, product enablement and operationalization may lag.
Would this level of rigor be needed?
It depends on consequence and threat model, not on whether the environment is “OT.”
- For national critical infrastructure with high consequence (grid stability, major hazards, defense production), the principles behind ZIG-level ZT are absolutely relevant: minimizing implicit trust and reducing lateral movement can materially reduce systemic risk.
- For many manufacturers, a full “Target-level ZIG everywhere” program is likely beyond what is operationally justified—but adopting the high-leverage subset (identity + remote access + segmentation + monitoring + brokered data flows) often is justified.
A key OT takeaway from NIST is that security measures that impair safety/availability are unacceptable, and that OT environments are time-critical and resource constrained.
So OT ZT rigor must be applied selectively, with explicit safety and operational impact analysis.
What can be achieved today vs. what’s difficult vs. what needs new technology
Achievable today (high confidence)
1. Strong remote access controls
- Intermediate/jump systems, MFA, encrypted sessions, session recording, time-bound vendor access.
- This is consistent with both ZIG Phase One direction (MFA/IdP focus) and established power-sector practice (Intermediate System + encryption + MFA).
2. Segmentation + brokered OT–IT data exchange
- Keep control protocols inside OT zones; broker via DMZ; export via OPC UA over TLS/MQTT over TLS/HTTPS; replicate historians.
3. ZT for Level 2–3 endpoints
- Hardening, logging, controlled privilege, application allowlisting/FIM, careful EDR deployment (validated not to harm safety/performance).
4. Selective “secure protocol islands”
- Use OPC UA security where supported.[8]
- Use CIP Security/TLS/DTLS where supported (especially for new builds or refreshed cells).[7]
- Use secure DNP3 where relevant.
Difficult today (but sometimes doable with constraints)
1. Continuous device posture enforcement / auto-quarantine
- OT failure modes make “deny/quarantine automatically” risky (you need engineered safe states and runbooks).
2. Device identity (PKI) at controller scale
- Phase Two-style PKI governance and lifecycle management for NPEs is operationally heavy unless your vendors have strong automated enrollment and safe renewal mechanisms.
3. Fine-grained microsegmentation inside control cells
- You can segment at zone boundaries, but making it highly granular without breaking broadcast/cyclic industrial communications is hard and requires excellent traffic baselining and testing.
Likely to require new or significantly improved OT technologies
1. Pervasive, interoperable controller/workcell identity
- “First-class” device identities with automated certificate lifecycle that OT teams can operate safely.
2. Signed/attested control logic and control actions
- Widespread support for signing ladder logic/configs, hardware roots of trust, remote attestation of controller state, and vendor-neutral verification tooling.
3. Determinism-friendly cryptography and enforcement patterns
- Security mechanisms designed not to violate strict timing/jitter constraints (NIST’s determinism and performance constraints are a core limiter).
4. Policy enforcement that fails safely
- OT-specific PEP designs that degrade gracefully and predictably during outages, rather than “fail closed” in a way that halts production or compromises safety.
A pragmatic way to interpret “NSA ZIGs down to Purdue 0–3”
If you’re aiming for the spirit of the ZIGs in OT (minimize implicit trust, reduce lateral movement, make access explicit and auditable), the best practical approximation is:
- ZT “inside” Level 2–3 (users, endpoints, servers, remote access, monitoring)
- ZT “around” Level 1–0 (zones/conduits, brokered flows, protocol isolation, strict boundaries)
- Cryptographic modernization opportunistically (OPC UA security, CIP Security, secure DNP3) where your device stack supports it [8]
- DMZ-mediated OT–IT exchange by default
Relevant DeNexus Products
Quantified Vulnerability Management (QVM) — Relevant when you need risk-based vulnerability prioritization and remediation planning.
Cyber Risk Quantification Management (CRQ) — Relevant when you need to quantify cyber risk in financial terms for decision-making and reporting.
References
[1] National Security Agency. “NSA Releases Phase One and Phase Two of the Zero Trust Implementation Guidelines.” Press Release, 30 Jan. 2026, https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/4393480/nsa-releases-phase-one-and-phase-two-of-the-zero-trust-implementation-guidelines/.
[2] Rose, Scott, Oliver Borchert, Stu Mitchell, and Sean Connelly. “Zero Trust Architecture.” NIST Special Publication 800-207, National Institute of Standards and Technology, Aug. 2020, https://nvlpubs.nist.gov/nistpubs/specialpublications/NIST.SP.800-207.pdf.
[3] Stouffer, Keith, Victoria Pillitteri, Suzanne Lightman, Marshall Abrams, and Adam Hahn. “Guide to Operational Technology (OT) Security.” NIST Special Publication 800-82 Revision 3, National Institute of Standards and Technology, Sept. 2023, https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-82r3.pdf.
[4] Internet Crime Complaint Center (IC3). “Secure Connectivity Principles for Operational Technology (OT).” Cybersecurity Advisory, 14 Jan. 2026, https://www.ic3.gov/CSA/2026/260114.pdf.
[5] North American Electric Reliability Corporation. “CIP-005-7 — Cyber Security – Electronic Security Perimeter(s).” Reliability Standard CIP-005-7, https://www.nerc.com/globalassets/standards/reliability-standards/cip/cip-005-7.pdf. Accessed 4 Feb. 2026.
[6] OPC Foundation. “UA Part 2: Security — Certificate Management.” OPC UA Online Reference (Version 1.04), https://reference.opcfoundation.org/Core/Part2/v104/docs/8. Accessed 4 Feb. 2026.
[7] ODVA. “CIP Security™ | Common Industrial Protocol.” ODVA Technology Standards, https://www.odva.org/technology-standards/distinct-cip-services/cip-security/. Accessed 4 Feb. 2026.
[8] OPC Foundation. “UA Part 2: Security — OPC UA Security Architecture.” OPC UA Online Reference (Version 1.04), https://reference.opcfoundation.org/Core/Part2/v104/docs/4. Accessed 4 Feb. 2026.