Dale Peterson debuted the OTI Impact Score at S4x26 last month. The idea is a 0–10 crowdsourced metric for rating the real-world consequences of industrial cyberattacks, using the formula (Severity × Reach × Duration) / 100. Scores are published within 12 hours, with outliers trimmed. He calls it a Richter scale for OT cyber incidents.
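To make the mechanics concrete, here is a minimal sketch of how such a score could be computed. It assumes each factor is rated 0–10 per contributor, which is what keeps the divide-by-100 result on a 0–10 scale, and it assumes "trimmed" means discarding a fixed fraction of the highest and lowest submissions. Neither detail is spelled out in the published formula, so treat both as illustrative assumptions.

```python
from statistics import mean

def oti_score(submissions: list[tuple[float, float, float]],
              trim_fraction: float = 0.1) -> float:
    """Crowdsourced OTI-style impact score (illustrative sketch only).

    Each submission is (severity, reach, duration). Assuming each factor
    is rated 0-10, (S * R * D) / 100 stays on a 0-10 scale. The trim
    fraction and per-factor scales are assumptions, not the published spec.
    """
    # Score every individual submission with the published formula.
    scores = sorted(s * r * d / 100 for s, r, d in submissions)
    # Trim outliers: drop the top and bottom trim_fraction of scores.
    k = int(len(scores) * trim_fraction)
    trimmed = scores[k:len(scores) - k] if k else scores
    return round(mean(trimmed), 1)

# Ten hypothetical analyst submissions for a minor incident; the extreme
# votes at either end get trimmed before averaging.
votes = [(7, 6, 2), (6, 5, 3), (8, 6, 2), (7, 5, 2), (6, 6, 3),
         (7, 6, 3), (10, 10, 10), (6, 5, 2), (7, 5, 3), (1, 1, 1)]
print(oti_score(votes))  # 0.9
```

The open design question is the trim fraction: too small and a single sensational vote moves the published score, too large and genuine disagreement between analysts gets erased.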
The problem is real
Dale is right that media over-sensationalisation of OT incidents does serious damage to industries and organisations alike.
The Muleshoe Water incident is his clearest example, and it holds up. A negligible event, OTI Score 0.0, triggered congressional hearings and global headlines. Colonial Pipeline scored 3.9. That gap between media response and actual impact is not academic. It shapes regulation, budget cycles, and public perception in ways that make the industry’s job harder.
If you’ve spent time explaining OT risk to executives or board members, you’ve felt this directly. Every over-hyped minor incident creates noise that distorts the signal. Leaders who should be focused on genuine systemic risk get pulled into reacting to headlines instead.
What the OTI Score does well
The score addresses the industry-level signal problem. Over time, if the community adopts it consistently, it builds a credibility asset. Journalists and policymakers who want a reference point will have one. That matters.
The 12-hour publication target is the right instinct. Speed is what competes with the news cycle. A score published after the story has already run is far less useful than one that gives reporters something to check before they file. I also think the crowdsourced, trimmed-mean approach is defensible. It’s transparent about its methodology, which is more than most industry assessments manage.
The partner it needs: organisational resilience
The OTI Score addresses the external signal problem. It doesn’t address the communications gap that lets misinformation fill the vacuum in the first place. The 12-hour window matters only if the incident-response community has accurate information by hour three or four. And in our experience architecting a global OT security programme across more than 40 manufacturing sites, that is far from guaranteed.
The internal communications breakdown often happens before anyone starts thinking about external messaging.
How an organisation communicates during a crisis is not a separate discipline from how it responds. Organisations that can’t get accurate information to their communications team can’t get it to their own response coordinators either. The same structural gaps that break external messaging break internal coordination. If your incident communications are chaotic, your incident response almost certainly is too.
Case: Norsk Hydro and the resilience behind the communications
The 2019 LockerGoga attack on Norsk Hydro is the clearest positive case study I know. It deserves more attention than it gets in these discussions.
Here’s how their communications actually unfolded:
| Timing | Action | What they shared |
|---|---|---|
| Mon 18 Mar, ~midnight | Attack begins | Ransomware spreading across the network. The variant, later identified by researchers as LockerGoga, ultimately reached 22,000 computers across 170 locations in 40 countries |
| Tue 19 Mar, early morning | Oslo Stock Exchange notification | Mandatory disclosure: “victim of an extensive cyber-attack.” Share price dropped 2% |
| Tue 19 Mar, morning | Emergency executive meeting | Three decisions: no ransom, bring in Microsoft DART, full transparency |
| Tue 19 Mar, daytime | First press conference + webcast | CFO Eivind Kallevik: “a classic ransomware attack,” situation “quite severe,” no plans to pay ransom |
| Tue 19 Mar onwards | Facebook updates begin | Regular status updates via social media while main website was down |
| Wed 20 Mar onwards | Daily press conferences at Oslo HQ | Operational status by business area, journalists invited into operations control rooms |
| Thu 21 Mar | Detailed public statement | Energy and Primary Metal running normally, Extruded Solutions at ~50%. Malware contained |
| Week 1 | New website launched | Temporary site built as primary communications channel |
What’s worth noticing is what this timeline actually shows. Hydro did not have full clarity on day one. No one does. The early hours of any major incident are defined by confusion, contradictory information, and a situational picture that shifts as the investigation progresses. That is normal.
Anyone who has been through a major incident knows this. What Hydro got right was not instant transparency. It was honest communication paced to their actual knowledge. They said what they knew, admitted what they didn't know yet, and committed to regular updates at predictable intervals. That approach fills the information vacuum without overcommitting to facts that might change.
It also respects a reality that the “extreme transparency” label glosses over. There are legitimate reasons to limit what you share publicly. Reputation risk, share price impact, regulatory exposure, and the risk of giving attackers useful feedback all place boundaries on how open you can be. The skill is knowing where those boundaries sit and communicating openly within them, not pretending they don’t exist.
What made this work was not a communications strategy. It was mature incident response. Decision authority was established before the incident. Roles were defined and rehearsed. Internal coordination held under pressure.
The communications were a visible output of that operational maturity. Hydro could brief journalists accurately because their internal response structure could produce a current situational picture and get it to the right people fast enough. The same coordination that informed the CFO’s press conference informed the decisions about containment, production continuity, and recovery.
Communications and response ran on the same information flow.
The business outcome tells the story. The incident cost Hydro approximately $70M. The share price initially dropped 2%, then recovered and went up. Customers stayed. Regulators were supportive rather than adversarial. Insurers had a clear record of responsible action.
It protected the business relationships and market confidence that matter most when you’re recovering from an incident of that scale.
What Hydro demonstrated was not just good incident response. It was organisational resilience. They didn’t just contain malware and restore from backups. They switched to manual operations across production sites, maintained output in most business areas, kept stakeholders informed, and preserved market confidence throughout.
The technical recovery was one part of it. The ability to maintain business operations and stakeholder trust through the disruption was the rest.
That’s the bar. Most manufacturing organisations are nowhere near it.
Communications maturity as a litmus test
Architecting OT security improvements at scale, we saw the same structural problems across different business areas and regions. They compound each other, and they don’t just affect external messaging. They affect the entire response.
| Gap | Communications impact | Business impact |
|---|---|---|
| IT/OT/Comms split | Accurate information doesn’t reach communicators in time | Executives and response coordinators make decisions based on an incomplete or wrong situational picture |
| Legal paralysis | Default to silence; journalists fill the vacuum | Customer, regulator, and investor confidence erodes while the organisation says nothing |
| No operationally fluent spokesperson | Corporate comms can handle a data breach but can’t explain a production disruption under pressure | The market and regulators hear from journalists before they hear from the company |
| Exercises exclude comms | External communications never gets rehearsed | The full response chain, including the business continuity and stakeholder management dimensions, is never tested end-to-end |
Organisations that struggle to communicate during an incident are not organisations with a communications problem. They are organisations with a resilience problem that becomes visible through communications.
The IT/OT/Comms split is the clearest example. An incident happens. OT engineers understand what it is and what it isn’t. That technical assessment takes hours to reach the communications team through layers of management, legal review, and organisational caution. But the same delay hits the people making business decisions. The same organisational distance that prevents a timely press statement prevents a timely decision about production continuity, customer notification, or supply chain activation.
Building the other half
The OTI Score is something the community builds together. Incident response maturity is something individual organisations have to build for themselves, and most haven’t tested it end-to-end.
The starting point in most global manufacturing companies is fragmentation. Typically, IT runs major incident management centrally. Individual mills and production sites have their own physical security and crisis response procedures, developed for fires, chemical spills, or workforce safety events. OT sits somewhere between, often reporting into engineering or operations rather than IT, with its own vendor relationships and its own understanding of what constitutes an incident.
Corporate communications exists as a separate function again, typically experienced with financial reporting and product issues but not with explaining why a production line went down at 2am. These structures evolved independently for good reasons. The problem is that a cyber incident cuts across all of them simultaneously, and nobody designed the joins.
When we improved OT security across a global manufacturing programme, the approach was not to replace these structures but to connect them. OT incident response plugged into the existing IT major incident management and crisis management processes, extended to include relevant OT stakeholders at site, business area, and global level.
I wrote separately about why asset inventory is an operational necessity, not premature consensus, drawing on the same programme.
At mill level, the existing physical security crisis response organisation was extended to cover cyber events, because the site already had people trained to coordinate under pressure during a crisis. Between the site and global response, the IT service delivery manager for that site became the link, someone who already understood both the local operations and the central IT infrastructure.
Critically, decision authority was already distributed. The site general manager retained responsibility for mill-level decisions. When multiple mills were affected, the business area prioritised recovery order and resource allocation according to existing business continuity plans. The IT crisis response team had delegated authority for global incident response decisions. That meant extending the structure to cover OT didn’t require negotiating new decision rights in the middle of an incident.
None of this required building a parallel organisation. It required mapping the existing response structures, identifying where OT fell through the gaps, and connecting the pieces that were already there. The result was that when an incident happened at a site, the information flow reached both the local crisis team and the global response coordinators through established channels rather than improvised phone calls.
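As a sketch of what connecting rather than rebuilding can look like, the routing below maps incident scope to the decision owner who already holds that authority and fans notifications out through channels that already exist. All of the role names and scope labels are hypothetical stand-ins, not the actual structure of that programme.

```python
# Hypothetical mapping of incident scope to the decision owner that
# already holds that authority, per the "lowest capable level" idea.
DECISION_OWNERS = {
    "single_site": "site general manager",
    "multi_site": "business area lead (recovery order, resource allocation)",
    "global": "IT crisis response team (delegated global authority)",
}

# Channels that already exist for fires, spills, and IT major incidents;
# extending them to cover OT builds no parallel organisation.
NOTIFY_ALWAYS = [
    "site crisis response team",     # existing physical-security org
    "IT service delivery manager",   # the site-to-global link role
    "global response coordinators",  # via IT major incident management
]

def route_ot_incident(site: str, scope: str) -> list[str]:
    """Route a new OT incident through established channels in one step."""
    actions = [f"decision owner: {DECISION_OWNERS[scope]}"]
    actions += [f"notify: {channel} ({site})" for channel in NOTIFY_ALWAYS]
    return actions

for action in route_ot_incident("Mill A", "single_site"):
    print(action)
```

The code is trivial by design. The point is that the routing and the decision rights are written down before the incident, so nobody negotiates them mid-crisis.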
Norway’s four national crisis management principles informed this approach. The owner retains responsibility during a crisis. Decisions are made at the lowest capable level. Organisations use the same structures in crisis as in daily operations. Sectors coordinate rather than centralise. Hydro’s response reflected the same thinking.
That worked in that context. Running similar operations across multiple regions meant production could potentially shift to unaffected sites to minimise business disruption while recovery progressed. A just-in-time manufacturer serving a single regional market would face a very different set of constraints. The specific integration points will look different depending on how a company is structured, where OT reports, and what redundancy exists across sites. The principle is the same, though. Start with what you have, find the gaps where cyber falls through, and connect rather than rebuild.
A few more things would make a difference, and all of them need to happen before an incident, not during one.
Pre-written holding statements help more than most organisations realise. A holding statement doesn’t commit you to a position. It tells the public and the press that you know something has happened, you’re investigating, and you’ll have more information at a specific time. Issued within the first hour, it slows the speculation cycle. It also forces the organisation to have a situational assessment ready that fast, which is a response discipline, not just a communications one.
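As a sketch of how pre-written statements might be kept ready, assume a playbook of scenario-keyed templates with only the facts confirmed in hour one left as parameters. The scenario names, wording, and four-hour update cadence below are all hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical pre-approved holding statements, keyed by scenario.
# Legal and executive sign-off happens on these templates in advance,
# so filling one in on the day adds no fresh review cycle.
HOLDING_STATEMENTS = {
    "production_disruption": (
        "We are aware of a disruption affecting {site}. Our response "
        "team is investigating, and we will provide a further update "
        "by {next_update}."
    ),
    "suspected_cyber_incident": (
        "We are investigating a suspected cyber incident affecting "
        "{scope}. We have activated our response procedures and will "
        "share confirmed information by {next_update}."
    ),
}

def issue_holding_statement(scenario: str, update_in_hours: int = 4,
                            **known_facts: str) -> str:
    """Fill a pre-approved template with the few facts known in hour one."""
    next_update = datetime.now() + timedelta(hours=update_in_hours)
    return HOLDING_STATEMENTS[scenario].format(
        next_update=next_update.strftime("%H:%M"), **known_facts)

print(issue_holding_statement("production_disruption", site="one of our mills"))
```

Note what the function needs as input: a named scenario and one or two confirmed facts. If the organisation cannot supply those within the first hour, that is the resilience gap showing, not the template failing.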
A named spokesperson who can translate operational reality into business language under pressure. Not a PR professional learning about production systems on the day. Someone who understands the operations well enough to speak accurately and knows what not to say to protect the investigation. This person needs to be in the response structure, not waiting outside it for a briefing.
Hydro’s CFO filling that role was not accidental; it signalled to the market that the business was in control, not just the technical team.
Getting executive and legal sign-off on a messaging framework before an incident, rather than during one, removes one of the biggest delays. Pre-approved language for the most likely scenarios means the legal review step doesn’t add four hours to your response time on the day.
The one that makes the biggest difference is including communications in tabletop exercises. Not as an add-on at the end, but as part of the scenario from hour one. When you test whether your response can produce an accurate public statement within 60 minutes, you’re also testing whether it can produce accurate internal coordination within 60 minutes. Both the organisation and the wider community benefit when that capability exists before it’s needed.
Two parts of the same problem
Dale is solving the industry-level signal problem. Someone needs to. The OTI Score, if the community builds it consistently, gives credible external reference points that can compete with sensationalised headlines over time. But the reason those headlines run unchallenged in the first hour is that most affected organisations can’t get accurate, approved, spokesperson-delivered information out fast enough.
That’s not a communications gap. It’s a resilience gap that becomes most visible through communications.
Hydro won the narrative because they were resilient across the board. The OTI Score would have told the world that the LockerGoga attack was serious. Hydro’s resilience told the world that they could absorb a serious blow and keep operating, keep communicating, and keep their stakeholders informed while they recovered. Both things need to exist. The OTI Score is the community’s contribution. Organisational resilience, the ability to maintain operations, decision-making, and stakeholder confidence through a major disruption, is each company’s responsibility.
Boards and regulators are increasingly asking for exactly this. In the EU, the NIS2 directive and the DORA regulation make resilience an explicit obligation, not just good practice.
If you want to know whether your organisation is ready for crisis response, ask how quickly you could issue an accurate public statement that your nominated executive would stand behind.
The answer will tell you more about your resilience than any tabletop score ever would.