There’s an ongoing debate in business continuity circles about where resilience planning should start. One side says begin with a Business Impact Analysis (BIA). The other jumps straight to setting Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs).
The BIA-first approach is correct. Recovery targets should be derived from survival limits: how long the business can absorb disruption before the impact becomes unacceptable. That’s your Maximum Tolerable Period of Disruption (MTPD). RTOs and RPOs flow from there.
If MTPD is guessed, rushed, or copied from last year’s document, every recovery target built on top of it becomes unreliable. Either over-engineered or dangerously optimistic.
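To make that dependency concrete, here’s a minimal sketch of the derivation logic in Python. Every figure is hypothetical; the point is only that RTO and RPO are functions of the MTPD, so a guessed MTPD propagates straight into the targets built on top of it.

```python
# A minimal sketch of the derivation, with all figures hypothetical.
MTPD_HOURS = 24        # survival limit from the BIA: one day of disruption
SAFETY_MARGIN = 0.25   # buffer for detection, escalation, and verification

# The RTO must complete comfortably inside the MTPD, never at its edge.
rto_hours = MTPD_HOURS * (1 - SAFETY_MARGIN)        # 18 hours

# The RPO is bounded by tolerable data loss, itself capped by the MTPD.
max_tolerable_data_loss_hours = 4                   # hypothetical business input
rpo_hours = min(max_tolerable_data_loss_hours, MTPD_HOURS)

assert rto_hours < MTPD_HOURS, "RTO must sit inside the survival limit"
print(f"MTPD={MTPD_HOURS}h -> RTO={rto_hours:.0f}h, RPO={rpo_hours}h")
```

Garbage in, garbage out: change `MTPD_HOURS` to a guess and every downstream number inherits the guess.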
But here’s where the BIA and disaster recovery plan (DRP) process typically stops. It defines what needs to recover and how fast. It rarely examines whether recovery is actually achievable.
Five realities sit outside that formal scope. They’re where plans actually break down in practice.
1. The BIA and DRP are done in isolation
BIA and DRP requirements are typically gathered without the full business picture. Business needs, resource constraints, system architecture, and resilience requirements need to be considered together, not in separate workstreams run by different teams at different times. The BIA defines what matters. The DRP defines how to recover it. But neither process routinely examines whether the two are aligned with each other or with business reality.
There’s a significant difference between a business that cannot tolerate any downtime, one that can manage for a day, and one that can absorb a week of disruption. Each scenario puts emphasis and investment on entirely different parts of the system and recovery capability. Zero-downtime tolerance demands active-active architectures, real-time replication, and automatic failover. A week’s tolerance might need nothing more than reliable backups and a tested restore procedure.
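A rough way to picture that selection logic, with illustrative tiers and relative costs (none of these figures come from a real estate):

```python
# Illustrative recovery tiers, ordered cheapest first.
# (strategy, typical recovery hours, relative cost) -- all hypothetical.
RECOVERY_TIERS = [
    ("tested backups + documented restore", 72, 1),
    ("warm standby, manual failover",        8, 5),
    ("active-active, automatic failover",    0, 20),
]

def cheapest_fit(mtpd_hours: float) -> str:
    """Pick the cheapest architecture whose recovery time fits inside the MTPD."""
    for strategy, recovery_hours, cost in RECOVERY_TIERS:
        if recovery_hours <= mtpd_hours:
            return f"{strategy} (relative cost {cost}x)"
    raise ValueError("no tier recovers fast enough")

print(cheapest_fit(168))  # a week's tolerance: backups are enough
print(cheapest_fit(24))   # a day's tolerance: warm standby
print(cheapest_fit(0))    # zero tolerance: active-active, 20x the cost
```

The honest MTPD, not the aspirational one, is what drives the cost line.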
When the BIA and DRP are conducted without genuine business-side input about these tolerance realities, or when business stakeholders provide aspirational answers rather than operationally honest ones, the resulting targets look precise on paper but aren’t grounded in how the organisation actually functions. Both become documentation exercises rather than strategic alignment conversations.
Getting this right means bringing business leaders, IT, and operations into the same room for the same conversation. Not sequential sign-offs on a document that circulates between teams.
2. The system architecture wasn’t built for this
Most systems were designed for function. They were built to process transactions, serve customers, manage data. Resilience requirements came later, sometimes years later, and got bolted on through governance rather than engineering.
That’s not poor planning. It reflects how organisations grow. Business needs drove architecture decisions. Continuity frameworks were applied after the fact.
The result is a common but uncomfortable mismatch: recovery targets that the underlying systems physically cannot meet. You can run a thorough BIA, set defensible MTPDs, derive precise RTOs, and still face an architecture that will not recover within those windows.
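A hypothetical inventory check makes the mismatch visible: compare the RTO the BIA demands with the RTO the architecture as built can actually deliver. All names and figures below are invented for illustration.

```python
# (system, required RTO hours from the BIA, achievable RTO hours as built)
systems = [
    ("order_entry",    4, 36),   # single-instance app, restore-from-backup only
    ("warehouse_mgmt", 24, 12),  # clustered, comfortably inside its target
    ("reporting",      72, 48),
]

for name, required, achievable in systems:
    verdict = "OK" if achievable <= required else "ARCHITECTURE CANNOT MEET TARGET"
    print(f"{name}: required {required}h, achievable {achievable}h -> {verdict}")
```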
No amount of governance fixes a fundamentally fragile design. Acknowledging that is the starting point for honest conversations about what recovery actually requires.
3. Unknown dependencies fail first
Ask any BC professional where plans break down during real incidents and the answer is rarely ‘we got the process mapping wrong.’ The process layer usually holds up. The dependency layer doesn’t. Specifically, the dependencies nobody knew existed until they failed.
Critical processes get mapped well. The dependencies between them often aren’t. Consider:
- A third-party API that three business processes rely on, but nobody documented as a dependency
- A shared database that multiple teams assume someone else is responsible for recovering
- A key vendor whose own recovery capability has never been tested or even questioned
- A network segment that wasn’t classified as critical because it ‘only’ carries monitoring data
This is the norm. Dependencies exist in the spaces between teams, between organisations, between documented and undocumented infrastructure. They’re nobody’s explicit responsibility, which means they’re everybody’s blind spot.
The plans that fail in real incidents almost always fail at the dependency layer, not the process layer. Mapping dependencies with the same rigour applied to critical processes would address this, but it requires cross-functional collaboration that most organisations struggle to sustain.
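One way to hunt for these blind spots is to treat the estate as a graph and flag components that several processes rely on but nobody explicitly owns. A minimal sketch, with an entirely invented inventory:

```python
from collections import defaultdict

# process -> the components it relies on (hypothetical inventory)
depends_on = {
    "order_entry": ["pricing_api", "shared_db"],
    "invoicing":   ["pricing_api", "shared_db", "print_vendor"],
    "shipping":    ["pricing_api", "monitoring_segment"],
}

# component -> accountable owner; None marks "assumed someone else's problem"
owner = {
    "pricing_api": None,            # third-party API nobody documented
    "shared_db": None,              # each team assumes another recovers it
    "print_vendor": "procurement",
    "monitoring_segment": "network_team",
}

usage = defaultdict(list)
for process, deps in depends_on.items():
    for dep in deps:
        usage[dep].append(process)

# Flag the highest-risk blind spots: widely used, explicitly owned by nobody.
for dep, users in sorted(usage.items(), key=lambda kv: -len(kv[1])):
    if owner.get(dep) is None:
        print(f"UNOWNED dependency '{dep}' used by {len(users)} processes: {users}")
```

The exercise is less about the tooling than about forcing the ownership column to be filled in at all.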
4. The funding gap
In global manufacturing environments, there’s a phrase that captures this well: what you see is what you get. The current IT infrastructure (its capabilities, its limitations, its recovery posture) reflects exactly what the business has been willing to invest in over the years. No more, no less.
This is an important expectation to clarify early in any BIA or DRP conversation: IT and its suppliers can typically deliver whatever recovery capability the business needs. Four-hour RTOs, real-time replication, geo-redundant failover. All technically achievable. The constraint is financial, not technical.
The disconnect is between what the business expects and what it’s willing to pay for. RTOs get set at 4 hours where the actual budget supports 24+. Recovery capabilities get documented as though they exist at a level the organisation hasn’t invested in. The gap between business expectations and business investment is where most resilience programmes quietly fall apart. The Btec v. Nordlo case illustrates this starkly: a company paying €270/month for IT services assumed they had business continuity protection covering millions in operational risk.
This isn’t a criticism of leadership. It’s a resource allocation reality that every business function faces. The CFO doesn’t get unlimited finance staff. Marketing fights for every euro. IT juggles technical debt against new features. Why would business continuity be exempt from the same trade-offs?
The question isn’t whether a funding gap exists. It almost always does. The question is whether leadership understands that the current recovery capability is a direct reflection of current investment, and whether the gap between expectation and reality is visible, documented, and consciously accepted.
This is a negotiation you want to have before the major outage, not during one. When a critical system goes down and leadership asks ‘why is this taking so long?’, the answer is already determined by years of investment decisions. Clarifying what recovery capability current funding actually buys is one of the most productive conversations in resilience planning. It’s also one of the most avoided, because it forces an honest comparison between stated priorities and actual spending.
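The comparison that makes this conversation productive is simple arithmetic: residual exposure at the funded capability versus the cost of closing the gap. A sketch with entirely hypothetical figures:

```python
# Hypothetical numbers to make the expectation/investment gap visible:
# the cost of achieving the stated RTO versus the cost of not achieving it.
downtime_cost_per_hour = 50_000   # hypothetical revenue/penalty impact
incidents_per_year = 2            # hypothetical major-outage frequency

stated_rto_hours = 4              # what the business documented
funded_rto_hours = 24             # what current investment actually buys

exposure_at_funded = downtime_cost_per_hour * funded_rto_hours * incidents_per_year
exposure_at_stated = downtime_cost_per_hour * stated_rto_hours * incidents_per_year
annual_gap = exposure_at_funded - exposure_at_stated   # 2,000,000/yr in this sketch

upgrade_cost_per_year = 400_000   # hypothetical cost of a 4h-capable architecture

print(f"Residual exposure at funded capability: EUR {exposure_at_funded:,}/yr")
print(f"Exposure if the stated 4h RTO were real: EUR {exposure_at_stated:,}/yr")
print(f"Gap being implicitly accepted: EUR {annual_gap:,}/yr "
      f"vs EUR {upgrade_cost_per_year:,}/yr to close it")
```

Whatever the real numbers turn out to be, putting them side by side converts an unspoken assumption into a conscious decision.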
5. No specific plans for most probable threat scenarios
Pick up most disaster recovery plans and you’ll find generic procedures. Restore from backup. Failover to secondary site. Activate the crisis management team. The steps read the same regardless of what actually happened.
But a ransomware attack, a cloud provider outage, and a physical site loss require fundamentally different responses. The decision points are different. The escalation paths are different. The first 30 minutes look completely different.
How many DRPs contain pre-planned, scenario-specific playbooks for the organisation’s top five most probable threat scenarios? In my experience, very few. Most assume a generic response will flex to fit whatever occurs. That works in tabletop exercises where the facilitator guides the narrative. It falls apart when the incident doesn’t match the generic template and teams have to improvise under pressure.
Scenario-specific planning doesn’t mean predicting every possible event. It means identifying the threats most likely to materialise based on the organisation’s industry, infrastructure, and threat landscape, and working through the specific decisions each one demands. The difference between ‘we have a plan’ and ‘we’ve thought about this specific situation’ is the difference between a document and a capability.
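What ‘scenario-specific’ looks like in structure: the same fields, different answers per threat. A sketch with illustrative content only:

```python
from dataclasses import dataclass

@dataclass
class Playbook:
    scenario: str
    first_30_minutes: list[str]
    key_decision_points: list[str]
    escalation_path: list[str]

ransomware = Playbook(
    scenario="ransomware",
    first_30_minutes=[
        "Isolate affected segments before any failover (failover may spread infection)",
        "Verify backup integrity offline before any restore",
    ],
    key_decision_points=[
        "Restore vs rebuild, per system tier",
        "Engage law enforcement / incident response retainer?",
    ],
    escalation_path=["SOC lead", "CISO", "crisis management team", "board"],
)

cloud_outage = Playbook(
    scenario="cloud_provider_outage",
    first_30_minutes=[
        "Confirm provider status page and published ETA",
        "Decide: wait out the provider vs trigger cross-region failover",
    ],
    key_decision_points=["Failover cost vs provider ETA", "Customer communication timing"],
    escalation_path=["platform lead", "CTO", "crisis management team"],
)
```

Two scenarios, identical structure, almost no overlapping content: that divergence is exactly what a generic procedure papers over.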
What the standards actually cover
These five points aren’t just practitioner observations. They map directly onto gaps in the major BC/DR standards. Checking ISO 22301, ISO 22317, ISO 27031, NIST SP 800-34, the BCI Good Practice Guidelines, and the FFIEC BCM handbook against all five reveals a consistent pattern: most standards partially acknowledge these realities in abstract language, but none require organisations to address them systematically.
BIA and DRP in isolation: ISO 22301 and the BCI GPG both reference ‘holistic’ analysis and tiered prioritisation. Neither prescribes how to integrate business context, IT architecture, and financial constraints into a single conversation, or how to ensure the BIA and DRP are aligned with each other. The standards say you should. They don’t say how. In practice, both become questionnaire exercises run by BC coordinators disconnected from the people who understand architecture, budgets, and business strategy.
System architecture capability: Only ISO/IEC 27031 directly addresses whether current technical architecture can meet stated recovery targets. But ISO 27031 sits outside the ISO 22301 certification path and is rarely implemented. An organisation can be ISO 22301 certified, with documented RTOs and recovery strategies, without ever validating whether its architecture can deliver them.
Unknown dependencies: All standards mention dependencies. ISO 22318 provides dedicated supply chain continuity guidance. The FFIEC handbook goes deepest, with specific requirements for assessing third-party recovery capabilities. In practice, known dependencies end up as flat lists in BIA templates. The unknown ones (undocumented APIs, assumed ownership, untested vendor recovery) don’t appear at all until they fail.
The funding gap: The most consistently absent element across every standard reviewed. ISO 22301 acknowledges ‘cost-benefit consideration’ in a single clause. The BCI GPG notes that solutions should be ‘considerate of costs and benefits.’ No standard requires quantifying the investment gap between stated RTOs and actual capability. Not one requires presenting leadership with the cost of achieving versus the cost of not achieving recovery targets. This is the gap that matters most, and no standard addresses it.
Scenario-specific plans: ISO 22301 deliberately avoids threat-specificity. Its core philosophy is to plan for the impact of disruption regardless of cause. The reasoning: organisations can’t predict every threat, so plans should be generic enough to address any disruption. This philosophy breaks down for threats with unique recovery dynamics. Ransomware invalidates core DR assumptions: backup integrity can’t be assumed, failover may spread the infection, and serial recovery of thousands of servers creates timelines measured in months rather than hours. Only the FFIEC handbook and supplementary NIST cybersecurity publications (SP 800-184, NISTIR 8374) require threat-intelligence-driven, scenario-specific testing.
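The ‘months rather than hours’ claim is back-of-envelope arithmetic. With hypothetical figures for fleet size and restore effort:

```python
servers = 3_000          # hypothetical estate size
hours_per_server = 1.5   # hypothetical: verify clean, restore, validate
parallel_streams = 10    # hypothetical restore capacity

total_hours = servers * hours_per_server / parallel_streams   # 450 hours
working_days = total_hours / 8                                # ~56 working days

print(f"~{total_hours:.0f} hours of restore work = ~{working_days:.0f} working days")
```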
The pattern across standards: Regulatory frameworks (FFIEC) consistently outperform voluntary standards (ISO, BCI) on all five points. Financial regulators have seen what happens when recovery plans fail and have written requirements that reflect real-world failure modes. Voluntary standards stay abstract to maintain broad industry applicability, which is precisely what allows these five gaps to persist.
One structural observation: the ISO family spreads the relevant requirements across ISO 22301 (certifiable) plus ISO 22317, ISO 22318, and ISO 27031 (all non-certifiable guidance). An organisation can hold ISO 22301 certification while never implementing the guidance documents that would actually surface these five realities.
What the effective practitioners do
The BC professionals I’ve worked with who navigate these five realities successfully share a common approach. They don’t fight the dynamics. They make them visible.
For the BIA and DRP, they insist on holistic conversations, bringing business needs, resource constraints, system architecture, and resilience requirements into the same room at the same time. They push business stakeholders for operationally honest answers, not aspirational ones. They ensure the BIA and DRP are aligned with each other and with the business reality they’re supposed to protect.
For system architecture, they quantify the gap between what recovery plans require and what the current architecture can deliver. They present this as an engineering reality, not a governance failure. Leadership can then make an informed investment decision.
For unknown dependencies, they push for cross-functional dependency mapping, not as a one-off exercise, but as an ongoing practice. They make dependency ownership explicit rather than assumed, and they actively hunt for the undocumented dependencies that only surface during incidents.
For funding, they clarify the dynamic early: what you see is what you get, and IT can deliver whatever the business is willing to fund. They document the accepted gap. When leadership sets an RTO at 4 hours but funds for 24, the effective practitioners get that disconnect acknowledged in writing. Not as a weapon for blame, but as transparent risk acceptance. That’s governance working as intended.
For scenario planning, they move beyond generic procedures. They identify the top five most probable threat scenarios and build specific playbooks for each, with the decision points, escalation paths, and first-hour actions that each scenario demands. Then they exercise them.
The thread that connects all five: treating resilience as a business decision with trade-offs, not a compliance exercise with right answers.
The bottom line
A BIA-first methodology is the correct foundation for resilience planning. But none of these five realities are systematically addressed by the major BC/DR standards, and the most critical gap (funding) isn’t addressed by any of them. They sit in the space between documented plans and operational reality.
Until organisations bring genuine business context, architecture constraints, dependency mapping, budget alignment, and scenario-specific planning into the same conversation as recovery targets, those targets remain theoretical. Comfortable on paper. Untested against reality.
A recovery target nobody funded isn’t a plan. It’s a hope.