The Silence After the Question: Why “We Have a Data Diode” Is the Most Dangerous Sentence in OT Security

The Question That Stops the Conversation

Every CISO I talk to who runs data diodes opens the same way.

“We’re fully secure on the OT side. We have a diode.”

I let that sit for a moment. Then I ask one question.

“When was your last firmware update on the controllers behind it?”

Silence. Sometimes thirty seconds. Sometimes a long inhale and a redirect – “well, that’s a separate process.” Sometimes the honest answer: “I don’t know. I’d have to check.”

The silence is the most diagnostic moment in any OT security conversation. It is not the silence of unpreparedness. CISOs running diodes care deeply about their environments. The silence is the silence of a CISO realizing, in real time, that a question they should be able to answer in their sleep – when did we last patch our most critical assets? – has no clean answer.

The diode kept threats out. It also kept patches out. And in 2026, an unpatched controller behind a diode is not protected. It is preserved – frozen at whatever firmware level was current when the diode was installed, exposed to every CVE published since, and increasingly visible to insurance underwriters, auditors, and threat actors who all understand exactly what physical-access-only patching produces.

This article documents what stale firmware behind data diodes actually costs CISOs, why the architecture made sense in 2014 and doesn’t in 2026, what the alternative looks like, and how to migrate without opening the network. The silence has an answer. The architecture that produces it does not.

What “Stale Firmware Behind a Diode” Actually Means

The term “stale firmware” sounds abstract. In OT environments, it is highly specific.

Claroty’s 2025 OT vulnerability research found that industrial environments take an average of 180 days to patch known vulnerabilities – and that the majority of successful attacks use freely available tools against weaknesses that have been documented for months or years. CISA’s Known Exploited Vulnerabilities catalog lists OT-relevant CVEs with federal patching deadlines that most utilities cannot meet because the patching process requires physical site visits.

Behind a diode, this gets worse. The diode prevents inbound connections, which means:

Firmware files cannot be pushed remotely. The vendor cannot upload a patch over a remote connection. Every update requires a physical visit, a USB stick, an air-gapped file transfer process, or an out-of-band channel that bypasses the diode.

Vulnerability scanning becomes asymmetric. Inbound scanning from the IT side is impossible. The OT team scans internally, generates a report, and shares it with the IT side via the diode’s outbound channel. The remediation conversation happens with a 24-hour latency.

Patch validation is delayed. Even when a firmware patch is available, the validation process (test in lab, test in pilot zone, deploy to production) takes weeks. Each phase requires physical access. A patch released Tuesday for a critical CVE doesn’t reach production until the next quarterly maintenance window – sometimes 90 days later. Sometimes longer.

Inventory drift accumulates. Without continuous remote management, the actual firmware versions running in production diverge from the documented inventory. A CISO who believes 80% of controllers are on firmware version 4.2 may discover during an incident that the real distribution is 4.0, 4.1, 4.2, 4.3, and one site running 3.9 because nobody flew out to update it last September.

The result is a documented gap between what the architecture claims to protect and what the architecture actually contains. A practical examination of how to secure OT networks without opening inbound firewall ports addresses the underlying architectural question – eliminating inbound exposure without freezing the patching cadence.
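The inventory-drift scenario above can be made concrete with a simple reconciliation check: compare the firmware versions the documented inventory claims against what each site actually reports. The site names and version numbers below are illustrative, not real deployment data.

```python
# Sketch of an inventory-drift check: documented baseline vs. actual
# firmware versions reported per site. All data here is illustrative.
documented = {"site-1": "4.2", "site-2": "4.2", "site-3": "4.2",
              "site-4": "4.2", "site-5": "4.2", "site-6": "4.2"}
actual     = {"site-1": "4.2", "site-2": "4.1", "site-3": "4.3",
              "site-4": "4.0", "site-5": "4.2", "site-6": "3.9"}

# Collect every site whose running version differs from the documented one.
drift = {site: (documented[site], actual[site])
         for site in documented if documented[site] != actual[site]}

print(f"{len(drift)}/{len(documented)} sites off the documented baseline")
for site, (doc, real) in sorted(drift.items()):
    print(f"  {site}: documented {doc}, running {real}")
```

Without continuous remote management, this reconciliation can only happen during a physical visit, which is exactly why the drift accumulates unnoticed.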

The Numbers That Define the Stale Firmware Problem

| Metric | Industry Average | Source |
|---|---|---|
| Average time to patch OT vulnerabilities | 180 days | Claroty 2025 |
| Verified OT intrusions starting from internet-facing remote access | 82% | Claroty 2025 |
| Ransomware groups targeting industrial organizations (2025) | 119 (+49% YoY) | Dragos 2026 |
| Industrial organizations impacted (2025) | 3,300+ | Dragos 2026 |
| OT IR cases with operational disruption (2025) | 100% | Dragos 2026 |
| System intrusion breaches involving ransomware | 75% | Verizon 2025 DBIR |
| OT environments with comprehensive visibility | <10% | Dragos 2026 |
| Detection time – comprehensive visibility vs. industry average | 5 days vs. 42 days | Dragos 2026 |

The 180-day patching average is the foundational number. The 82% intrusion vector via internet-facing remote access is what happens when patching falls behind. The 119 ransomware groups represent the threat landscape that targets exactly these gaps.

For a deeper analysis of how the 2025 threat landscape specifically exploits stale OT systems, the Dragos 2026 OT cybersecurity findings document the attack pattern – VPN credentials → RDP/SMB lateral movement → VMware encryption → OT shutdown – that depends on legacy systems remaining unpatched long enough to be exploited.

The Reframe: The Diode Isn’t Protecting You. It’s Preserving Vulnerabilities.

Data diodes were a brilliant 2014 architecture. The threat landscape was different. Remote access tools were less mature. Compliance frameworks were less specific. Cyber insurance asked fewer questions. Air-gapping made operational sense for utilities with no remote support requirement and no immediate vendor maintenance pressure.

None of those conditions hold in 2026.

The threat landscape now includes 119 ransomware groups specifically targeting industrial organizations. Vendors now require remote access to keep support contracts commercially viable. Compliance frameworks (NERC CIP, AWIA, IEC 62443, NIS2, CISA ZTMM) require evidence of patch management cadence that physical-only architectures cannot produce. Cyber insurance underwriters now ask specifically about patch latency, with documented patch management as one of the five baseline controls that drive premium calculations.

What the diode protects you from in 2026:

  • Inbound TCP connections from the internet
  • Inbound TCP connections from the IT network
  • Inbound TCP connections from anywhere

What the diode does not protect you from:

  • CVEs in the firmware running on devices behind it
  • Insider threats from anyone who has physical access
  • Supply chain attacks via firmware files transported in via USB
  • Compliance gaps when auditors ask “when was your last patch validation?”
  • Insurance premium increases when underwriters score documented patch management
  • Regulatory findings when regulators audit specific update cadence

This isn’t security. It’s vulnerability preservation that looks like security from the outside. The architecture stops attacks. It also stops the updates that would have prevented them. The math hasn’t gotten worse over time – the math has stayed exactly the same. What’s changed is everyone else’s threshold for accepting it.

Why CISOs Have Defended This Architecture

CISOs operating diode environments are not making mistakes. They are managing complex tradeoffs in environments that have historically rewarded the most cautious choice. The architecture that locks down access is the architecture that fewer auditors flag.

But “fewer auditors flag it” was the dynamic of 2018. In 2026, the dynamic has inverted. Now auditors specifically ask about patch latency, vulnerability remediation timelines, and continuous monitoring evidence. The architecture that prevented audit findings five years ago now generates them. CISOs who haven’t updated their mental model are operating from outdated playbooks.

The objections we hear consistently:

“We can’t risk opening the network for remote updates.”

This is the right concern, mapped to the wrong architecture. Traditional remote access (VPN, RDP gateway) does open the network – inbound port 443, inbound RDP, exposed services. Reverse Access architecture inverts the connection direction. The Access Controller inside the OT network initiates an outbound connection to a Gateway. No inbound ports are opened. The network does not open. The patching does.

“Our auditors accept the diode as our compensating control.”

In 2018, yes. In 2026, increasingly no. The CISA ZTMM evaluates patch management as part of the Devices pillar. NERC CIP-007 requires documented security patch management. NIS2 Article 21(2)(e) requires “security in network and information systems acquisition, development and maintenance, including vulnerability handling and disclosure” – a requirement a frozen patch cadence cannot meet. AWIA requires water utilities to demonstrate cybersecurity preparedness that includes vulnerability remediation. The compensating control argument is shrinking, not expanding.

“We do quarterly maintenance windows for patches.”

Quarterly = 90 days. Critical CVEs require 14-day or 30-day patching under most frameworks. CISA KEV catalog deadlines are typically 21 days. Quarterly is documented non-compliance. The maintenance window cadence was acceptable when patch criticality was rated quarterly. Threat actors have moved faster than the maintenance schedule.

“The vendor handles patches during scheduled visits.”

Three problems. First, the vendor visits on the vendor’s schedule, not the threat schedule. Second, the vendor visits a single site at a time – multi-site organizations stagger patches across weeks or months. Third, vendor visit costs are increasing while frequency is staying the same. The patching cadence is constrained by economics, not by security need.
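The quarterly-window objection above reduces to plain arithmetic. The figures used here are the ones cited in the text (90-day quarterly cycle, 21-day KEV deadline, 14-day critical-CVE requirement); the worst case assumes a CVE drops the day after a window closes.

```python
# The quarterly-window gap as plain arithmetic. Figures are the ones
# cited in the surrounding text; the scenario is illustrative.
quarterly_window = 90    # days between maintenance windows
kev_deadline = 21        # typical CISA KEV remediation deadline, days
critical_deadline = 14   # common critical-CVE patching requirement, days

# Worst case: the CVE is disclosed just after a window closes, so the
# patch waits a full cycle for the next physical-access opportunity.
worst_case_exposure = quarterly_window

print(worst_case_exposure - kev_deadline)       # days past the KEV deadline
print(worst_case_exposure - critical_deadline)  # days past a 14-day requirement
```

Even the best case, a CVE disclosed the day before a window opens, only meets the deadlines by luck of timing, not by design.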

A comprehensive guide to Zero Trust Network Access documents how modern remote access architectures handle the same security concerns the diode was designed for – without freezing the patching cadence the diode created.

The Specific CVEs That Are Probably Behind Your Diode

The abstract problem becomes concrete when CISOs map specific CVEs to specific equipment behind their diodes. A representative sample of vulnerabilities that affect equipment commonly found in industrial OT environments:

| CVE Class | Affected Equipment | Severity | Typical Patch Latency Behind Diode |
|---|---|---|---|
| Siemens S7 PLC firmware | S7-1200, S7-1500 controllers | Critical | 6–12 months |
| Rockwell ControlLogix | 5570, 5580 series | High–Critical | 6–18 months |
| Schneider Modicon | M580, M340, Quantum | High | 9–18 months |
| Wonderware/AVEVA InTouch HMI | InTouch, System Platform | Critical | 12–24 months |
| GE iFIX/Proficy | iFIX, CIMPLICITY | High | 12–18 months |
| Windows-based engineering workstations | XP Embedded, Win 7, Server 2008 | Critical (multiple) | Often never patched |
| VxWorks RTOS | Various controller families | Critical | 6–12 months |
| OpenSSL/OpenSSH on OT devices | Many controllers, gateways | Critical (Heartbleed-class) | 12–36 months |
| CODESYS runtime vulnerabilities | Multiple PLC vendors | Critical | 9–18 months |

Each of these CVE classes has had multiple critical disclosures in 2024 and 2025. The patch latency reflects the operational reality of physical-only updates – not vendor responsiveness or technical patching difficulty. The patches exist. They reach customers slowly because of how the architecture forces them to be deployed.

CISOs operating diode environments often discover during incident response that the equipment running behind the diode has CVE exposure that pre-dates the most recent maintenance visit by 12+ months. The compensating control was the diode. The diode was bypassed by something the diode wasn’t designed to address – the patches that didn’t get applied.

The Architectural Alternative

Reverse Access architecture solves the firmware update problem without opening the network. The mechanism is simple to describe and architecturally specific:

The Access Controller deploys inside the protected OT network. It runs in deny-all state for inbound connections. No listening services. No exposed ports.

The Access Controller initiates outbound connections to an Access Gateway. The Gateway sits in the DMZ or in cloud infrastructure. The connection is encrypted, mutually authenticated, and persistent.

Vendors and IT staff connect to the Access Gateway from outside. They authenticate with named accounts, MFA, and policy-based authorization. They never directly connect to the OT network – the Gateway proxies authorized traffic to the internal Access Controller.

Firmware files travel through the Gateway via Heimdall SMB Proxy. Every file is scanned by Content Disarm and Reconstruction before reaching the OT side. CDR rebuilds files from scratch – stripping active content, embedded scripts, and exploit structures – rather than scanning for known signatures.

Sessions are recorded. Every interactive session (RDP, SSH, HTTPS) is video and keystroke recorded. Every file operation is logged with identity attribution.

The result is an architecture where the network has zero inbound exposure (better than the diode), patches reach controllers within days instead of months (qualitatively better than the diode), and every operation is auditable (something the diode never provided).
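The connection inversion described above can be sketched in a few lines of plain socket code. This is a localhost toy, not the product architecture: plain TCP stands in for the encrypted, mutually authenticated tunnel, and the request path and firmware version are made up. What it demonstrates is the essential property – the OT-side component opens the only connection that exists, outbound, and requests ride back over it.

```python
# Sketch of the Reverse Access connection pattern: the component inside
# the protected network dials OUT; the gateway never dials in.
import socket
import threading

# Shared state for the sketch: the gateway publishes its port, the
# controller connects out to it, and the gateway records the reply.
ready, port_box, result = threading.Event(), [], []

def gateway():
    """Gateway side: waits for the controller's outbound call, then relays
    a request over that existing link."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))              # ephemeral port for the sketch
    srv.listen(1)
    port_box.append(srv.getsockname()[1])
    ready.set()                              # tell the controller it may connect
    conn, _ = srv.accept()                   # the controller connected outbound
    conn.sendall(b"GET /firmware/version")   # proxied request rides the tunnel
    result.append(conn.recv(1024))
    conn.close(); srv.close()

def access_controller():
    """OT side: no listening socket, deny-all inbound. Its only connection
    is the one it initiates outbound to the gateway."""
    ready.wait()
    c = socket.create_connection(("127.0.0.1", port_box[0]))
    request = c.recv(1024)                   # requests arrive over the outbound link
    if request == b"GET /firmware/version":
        c.sendall(b"4.2")                    # illustrative firmware version
    c.close()

threads = [threading.Thread(target=gateway), threading.Thread(target=access_controller)]
for t in threads: t.start()
for t in threads: t.join()
print(result[0].decode())  # the gateway got data with zero inbound OT ports
```

Scanned from outside, the OT network in this pattern shows no listening service at all – the same surface a diode presents, without the one-way constraint on patches.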

This is the architecture documented in the TerraZone truePass platform, which provides patented Reverse Access technology recognized in 22 countries.

The Migration Path: How CISOs Actually Deploy This

The migration follows a phased pattern designed to preserve the existing diode’s security posture during transition while progressively shifting patching responsibility to the new architecture.

Phase 1: Parallel Deployment (Weeks 1–4)

Deploy the Reverse Access infrastructure alongside the existing diode. The diode keeps running. No production patching workflow changes. The new platform integrates with the agency’s identity infrastructure and connects to the OT network via outbound-only connections.

Phase 1 deliverable: a fully operational Reverse Access architecture running in parallel – not yet handling production patching traffic.

Phase 2: First Patches via the New Path (Weeks 5–8)

Migrate a controlled patching activity to the new platform. Typical first migration: a non-critical firmware update on a single non-production controller. The vendor authenticates through the new platform with MFA, the firmware file passes through CDR, the session is recorded.

Both paths remain operational. If anything goes wrong, the patching falls back to the diode-and-physical-visit model. After 2–3 successful pilot patches, expand the scope.

Phase 3: Production Patch Migration (Weeks 9–14)

Migrate routine firmware updates and quarterly maintenance to the new platform. The diode remains in place but is no longer the primary patch path. Track the patch latency reduction in real-time – most utilities see 90-day cycles compress to 14-day cycles within Phase 3.

Phase 4: Diode Decision (Weeks 15–18)

After 2–3 successful production patch cycles, the CISO has data to make a strategic decision about the diode. Three options:

Retain the diode. Some CISOs keep the diode as a deeper-layer compensating control while the new platform handles all routine traffic. The diode becomes the failsafe for a specific class of high-sensitivity operations.

Replace the diode. Some CISOs replace the diode with the new platform’s controls. The Reverse Access architecture provides the same outbound-only data flow guarantee with operational flexibility the diode couldn’t.

Repurpose the diode. Some CISOs move the diode to a deeper Purdue level boundary (Level 2 → Level 1) where patch latency is genuinely acceptable, while the new platform handles the Level 3 ↔ Level 4 boundary where patch latency was the problem.

The decision is informed by 18 weeks of operational data. It is not made on day one based on architectural ideology.

The Compliance and Insurance Math

Beyond the operational improvements, two specific financial dimensions change when patch latency drops from 180 days to 14 days:

Compliance evidence improves. NERC CIP-007 requires documented security patch management with specific timelines. AWIA requires demonstration of cybersecurity preparedness. CISA ZTMM Devices pillar requires real-time inspection and patch management. The audit conversation shifts from “we have a compensating control” (defensive) to “here is our patch latency dashboard” (evidentiary).

Cyber insurance premiums adjust. Underwriters increasingly evaluate documented patch management as one of five baseline controls. Organizations that can demonstrate continuous patching with audit evidence see premium stabilization or reduction. Organizations stuck on quarterly patch cycles face premium increases of 15–30% per renewal cycle.

The financial impact is documented in detail in a comprehensive analysis of how Zero Trust connectivity reduces cyber insurance premiums – particularly the 50–60% premium reduction available to organizations that implement comprehensive controls including documented patch management.

The financial framing matters because the migration from diode to Reverse Access is rarely justified on security alone – even though the security argument is overwhelming. CISOs who present the migration to CFOs and boards win the conversation when they layer compliance evidence + insurance premium impact + operational efficiency on top of the security improvement. The Zero Trust ROI calculation typically shows positive financial impact within 12–18 months for industrial environments with significant physical-only maintenance overhead.
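The premium trajectory implied by the ranges above can be written out as back-of-envelope arithmetic. The $100,000 baseline is a made-up illustrative figure, and the midpoints (22.5% increase per renewal, 55% reduction) are assumptions drawn from the 15–30% and 50–60% ranges quoted in the text.

```python
# Back-of-envelope premium trajectory using the ranges quoted above.
# Baseline premium and midpoint rates are illustrative assumptions.
baseline = 100_000.0   # annual premium, illustrative

# Quarterly-patching org: 15-30% increase per renewal (midpoint 22.5%),
# compounded over three renewal cycles.
stuck = baseline
for _ in range(3):
    stuck *= 1.225

# Org with documented continuous patching: 50-60% reduction (midpoint 55%).
improved = baseline * (1 - 0.55)

print(f"after 3 renewals, quarterly-patching premium: ${stuck:,.0f}")
print(f"with documented controls:                     ${improved:,.0f}")
```

The point of the arithmetic is the divergence, not the absolute numbers: compounding renewal increases and a one-time controls-based reduction pull the two trajectories roughly 4x apart within three cycles.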

How truePass Gravity Addresses the Stale Firmware Problem Specifically

truePass Gravity is designed for exactly this OT cross-network use case – including the specific scenario where a diode currently restricts patching cadence.

The three-layer architecture:

Layer 1 – Reverse Access (Zero Inbound Ports). Patented technology eliminates inbound port exposure. The OT network remains architecturally invisible from the internet. The diode’s outbound-only data flow guarantee is preserved (or strengthened) by the new architecture.

Layer 2 – Heimdall SMB Proxy with CDR. Every firmware file, configuration backup, and engineering project file passes through Content Disarm and Reconstruction. Per-vendor named accounts. Per-operation policy. Identity-attributed audit trail for every file operation.

Layer 3 – Zero Trust Application Access with Session Recording. Per-session RDP, SSH, HTTP, and TCP access for engineering workstations, HMIs, and administrative interfaces. MFA per session. Video + keystroke recording. Time-bounded vendor sessions.

The complete Zero Trust application access capability extends across the full IT/OT boundary – not just file transfer for firmware, but also interactive session management, vendor coordination, and audit evidence production. The result is an architecture that does what the diode did (eliminate inbound exposure) and adds what the diode couldn’t (continuous patching, identity attribution, content inspection, session recording, audit evidence).

Frequently Asked Questions

Does this mean we have to remove the diode?

No. The migration is designed so that the diode keeps running through Phase 3. The decision about the diode’s long-term role is made in Phase 4, based on 18 weeks of operational data. Three valid outcomes: retain it as a deeper-layer control, replace it with the new platform, or repurpose it to a different Purdue boundary.

How quickly can patches actually reach controllers with this architecture?

In production deployments, organizations migrating from diode-based patching (90-180 day cycles) to Reverse Access (14-day cycles) see the change within Phase 3 – typically 8-12 weeks into the migration. The architectural ceiling is hours, not weeks. Organizational processes (change management, validation) determine actual cadence.

What about controllers that can’t be patched at all?

Some legacy OT equipment cannot accept firmware updates from any path – physical or remote. The Reverse Access architecture doesn’t change this. What it does change is the surrounding context: the unpatchable controller is now isolated through identity-based segmentation, traffic to and from it is recorded, and the compensating controls around it are auditable. The vulnerability isn’t eliminated, but its operational risk is bounded and documented.

Can our existing OT vendors work with this architecture?

Yes. The vendor authenticates through the new platform with named accounts and MFA. Their workflow – connect, perform updates, disconnect – is the same. What changes is that the connection is recorded, identity-attributed, and policy-controlled. Most vendors prefer this because it provides them with audit evidence of what they did, when, and with what credentials – protecting them from disputes after the fact.

How does this affect our NERC CIP / AWIA / IEC 62443 compliance?

It improves compliance posture across all three frameworks. NERC CIP-007 (security patch management) and CIP-005 (electronic security perimeters) both benefit from documented patch cadence and identity-attributed access. AWIA’s cybersecurity preparedness requirements become measurable. IEC 62443’s zone-and-conduit model is enhanced by per-operation policy enforcement at the conduit. Auditors generally welcome this architecture because it produces evidence the diode-based architecture cannot.

What’s the realistic deployment timeline?

18 weeks for a typical mid-sized utility with 6 sites, 40+ controllers, and an existing diode. Phase 1 (parallel deployment) takes 4 weeks. Phase 2 (pilot patching) takes 4 weeks. Phase 3 (production migration) takes 6 weeks. Phase 4 (diode decision) takes 4 weeks. Patch latency reduction is visible from Phase 2 onward.

What if our organization doesn’t have a diode but has VPN/jump server-based patching?

The same migration pattern applies, with one difference: the legacy infrastructure being replaced is more directly attack-surface-creating than a diode. VPN concentrators and jump servers create the inbound port exposure that 82% of OT intrusions exploit. Reverse Access eliminates that exposure entirely.

Conclusion

The CISO who described their OT environment as “fully secure because we have a diode” wasn’t wrong about the diode’s defensive properties. They were wrong about whether stopping inbound traffic is the same as security.

In 2014, those two things were close enough to identical for the architecture to make sense. In 2026, the gap between them is wide and growing. The diode prevents inbound attacks. It also prevents the patches that would prevent inbound attacks from succeeding when they bypass the diode through other channels – physical access, supply chain, USB-borne content, vendor laptops, insider threats.

The silence after the question “when was your last firmware update?” is the silence of an architecture that is structurally incompatible with continuous security in the current threat environment. It isn’t a knowledge gap or a process gap. It is an architectural gap that the architecture itself created and cannot solve.

Reverse Access architecture preserves what the diode did right (zero inbound exposure, outbound-only data flow guarantee) and fixes what the diode does wrong (frozen firmware, no audit evidence, no identity attribution, no continuous patching). The migration takes 18 weeks. The patch latency reduction is visible at week 8. The compliance and insurance improvements compound over the following two to three audit cycles.

The diode protected you. It also stopped you. The next CISO who hears the question “when was your last firmware update?” will have an answer – because the architecture they operate produces one continuously, in real time, without anyone having to fly anywhere.

The silence ends when the architecture changes.