Business Continuity Plan Examples: 4 Real-World Templates
TL;DR: A business continuity plan (BCP) documents how an organisation will keep critical functions running during a major disruption. A strong BCP example includes a business impact analysis (BIA), clear recovery time objectives (RTO), named owners for each task, and tested recovery procedures. Below are four complete BCP examples — IT outage, ransomware, pandemic, and supplier failure — plus a downloadable template.
Download the free BCP template (Word + Excel) — pre-populated with the structure used by regulated financial institutions under DORA Article 11.
What a good business continuity plan looks like
Every credible BCP example has the same seven sections. Skip any one and the plan fails its first test.
| Section | Purpose | Owner |
|---|---|---|
| 1. Business Impact Analysis (BIA) | Identifies which processes are critical and how long they can be down | COO / Risk Manager |
| 2. Recovery Time & Point Objectives (RTO / RPO) | Quantifies "how fast" and "how much data loss" per process | IT / CISO |
| 3. Recovery strategies | Documented playbooks for each disruption scenario | Business unit leads |
| 4. Resource requirements | People, systems, vendors needed to recover | COO |
| 5. Communication plan | Who talks to regulators, customers, staff, press | Communications / Legal |
| 6. Testing schedule | At least annually, ideally quarterly | Internal audit |
| 7. Maintenance & review | Quarterly updates after any material change | Risk Manager |
Firms regulated under DORA or NIS2, and those certified against ISO 27001, are expected to follow this structure. Even unregulated firms need most of it — the first ransomware attack is not the moment to start drafting.
Example 1 — IT outage (SaaS company)
Scenario: A regional AWS outage takes down authentication, customer dashboard, and billing pipeline for 6+ hours.
BIA snapshot
| Process | Max tolerable downtime | RTO target | RPO target | Dependencies |
|---|---|---|---|---|
| Customer login | 30 min | 15 min | 0 (session-based) | Auth0, RDS |
| Customer dashboard | 2 h | 1 h | 5 min | Primary app, CDN |
| Billing pipeline | 24 h | 12 h | 1 h | Stripe, queue |
| Marketing website | 72 h | 24 h | 24 h | Vercel, CMS |
Recovery procedure
- T + 0 min — Detection. Datadog alert fires. On-call engineer acknowledges within 5 min.
- T + 5 min — Classification. On-call confirms customer-impacting, escalates to incident commander.
- T + 15 min — Failover decision. If region outage confirmed, initiate failover to secondary region (us-west-2).
- T + 30 min — Customer communication. Status page updated. In-app banner activated. Support team briefed with scripted response.
- T + 45 min — Recovery. Service restored in secondary region.
- T + 2 h — Ongoing communication. Hourly status page updates, CEO-signed customer email at 6-hour mark.
- T + 24 h — Post-incident review scheduled within 5 business days.
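The failover decision at T + 15 min can be sketched as a simple rule: switch regions when recovering in place would breach the tightest RTO among affected processes. This is a hedged illustration, not the plan itself — the function name, the 30-minute failover cost, and the process keys are assumptions; the RTO values come from the BIA snapshot above.

```python
# Hypothetical sketch of the T+15 failover decision. Names and the
# assumed 30-minute failover cost are illustrative; RTO targets are
# taken from the BIA snapshot.

RTO_MINUTES = {
    "customer_login": 15,
    "customer_dashboard": 60,
    "billing_pipeline": 720,
}

def should_fail_over(affected: list[str], outage_minutes: int,
                     failover_cost_minutes: int = 30) -> bool:
    """Fail over if staying in-region cannot meet the tightest RTO."""
    tightest = min(RTO_MINUTES[p] for p in affected)
    # If elapsed outage plus the cost of switching already reaches the
    # tightest RTO, in-place repair is a losing bet: fail over now.
    return outage_minutes + failover_cost_minutes >= tightest

print(should_fail_over(["customer_login"], outage_minutes=10))    # True
print(should_fail_over(["billing_pipeline"], outage_minutes=60))  # False
```

In practice the threshold would also weigh confidence in the provider's own recovery estimate, but encoding even this crude rule removes debate from the critical first 15 minutes.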
Who owns what
- Incident Commander: on-call engineering manager
- Customer comms: VP Customer Success
- Regulator comms: General Counsel (if regulated customers affected)
- Technical recovery: SRE team
Example 2 — Ransomware attack (mid-size bank)
Scenario: A ransomware strain encrypts the core banking application's primary storage. Branches cannot process transactions.
BIA snapshot
| Process | Max tolerable downtime | RTO | RPO | Dependencies |
|---|---|---|---|---|
| Branch transactions | 4 h | 2 h | 15 min | Core banking, WAN |
| Online banking | 2 h | 1 h | 15 min | Core banking, mobile app |
| ATM network | 8 h | 4 h | 1 h | Core banking, Visa/MC |
| Payroll processing | 48 h | 24 h | 1 day | Core banking, HR systems |
Recovery procedure
- Isolation. Infected systems immediately segmented from network. Backups taken offline.
- Notification. BaFin and the national CERT notified within 4 hours (DORA Article 19 and NIS2 requirement).
- Forensics. Third-party IR firm engaged. Chain of custody preserved.
- Recovery from immutable backups. Restore core banking from the previous night's offline backup, then replay transaction logs forward; accept ~15 minutes of residual data loss, within the stated RPO.
- Customer communication. Branch staff switch to offline procedures. Customers notified via SMS, web, and branch signage.
- Regulator reporting. Interim report within 72 hours, final report within 1 month.
- Post-incident. Mandatory external audit of the ISMS and root-cause remediation.
Why this BCP example matters
Ransomware incidents are squarely within DORA's scope: classification as a major ICT-related incident falls under Article 18, and response and recovery obligations under Article 11. Without a tested BCP, fines under NIS2 can reach 2% of worldwide annual turnover. See our DORA incident reporting guide for the full reporting timeline.
Example 3 — Pandemic / prolonged remote operations
Scenario: Government restrictions require 90% of staff to work from home for an indefinite period.
BIA snapshot
| Process | Max tolerable downtime | RTO | RPO | Dependencies |
|---|---|---|---|---|
| Customer support | 1 day | 4 h | N/A | Remote phones, VPN |
| Development & deploy | 3 days | 1 day | N/A | Laptops, VPN, CI/CD |
| Finance close | 7 days | 3 days | 1 day | Remote ERP access |
| Physical mail | 14 days | 7 days | N/A | Mail-forwarding contract |
Recovery procedure
- Day 0 — Activate remote work. All laptops pre-imaged with VPN, MFA, and required software. Onboarding call for staff without home setup.
- Day 1 — Equipment delivery. Pre-packed kits (monitor, headset, ergonomic chair voucher) shipped to home addresses.
- Week 1 — Cadence normalisation. Daily standups, weekly all-hands, asynchronous updates in a single tool.
- Week 2+ — Monitor mental health and productivity. HR tracks wellbeing via pulse survey. Finance tracks project velocity.
- Return plan. Staged return (10% → 50% → 90%) contingent on public health guidance.
Lessons from 2020
Firms that had a pandemic BCP example on file before 2020 restored full productivity in 3-5 days. Firms without one took 4-8 weeks and lost an average of 18% of productive output. The plan does not have to be long; it has to exist.
Example 4 — Critical supplier failure
Scenario: Your primary cloud provider experiences a 3-day regional outage. No failover to another region exists.
BIA snapshot
| Process | Max tolerable downtime | RTO | RPO | Dependencies |
|---|---|---|---|---|
| Production workload | 4 h | 2 h | 15 min | Primary cloud |
| Data pipeline | 24 h | 12 h | 4 h | Primary cloud + warehouse |
| Internal tooling | 72 h | 48 h | 24 h | Primary cloud |
Recovery procedure
- Pre-incident (continuous). Infrastructure-as-code reproducible in secondary provider. Warm backups replicated daily.
- T + 1 h — Failover decision. If primary outage >2 h confirmed, initiate secondary provider deploy.
- T + 4 h — Workload running on secondary. DNS cutover. Read-only mode during data sync.
- T + 24 h — Full write traffic on secondary. Customer comms every 6 hours.
- T + 72 h — Failback plan. Only when primary is confirmed stable for 24 consecutive hours.
The third-party risk angle
DORA Article 28 explicitly requires financial institutions to document their exit strategy for every critical ICT third-party provider. A supplier failure BCP example is a direct implementation of that requirement. See our third-party risk management guide.
Business Impact Analysis template (the foundation of every BCP)
Copy this table into your BCP document and fill in one row per critical process.
| Process | Criticality | Max tolerable downtime | RTO | RPO | Dependencies | Recovery strategy | Owner |
|---|---|---|---|---|---|---|---|
| e.g. customer login | High | 30 min | 15 min | 0 | Auth0, RDS | Multi-region failover | VP Eng |
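The BIA table also lends itself to machine checking. The sketch below is a hypothetical illustration of one BIA row as structured data, with the two consistency checks reviewers most often have to catch by hand; the class and field names are assumptions, and the sample row mirrors the example above.

```python
from dataclasses import dataclass, field

CRITICALITY = {"High", "Medium", "Low"}

@dataclass
class BiaRow:
    """One row of the BIA table (hypothetical structure)."""
    process: str
    criticality: str
    max_downtime_min: int   # maximum tolerable downtime
    rto_min: int            # recovery time objective
    rpo_min: int            # recovery point objective
    dependencies: list[str] = field(default_factory=list)
    owner: str = ""

    def validate(self) -> list[str]:
        issues = []
        if self.criticality not in CRITICALITY:
            issues.append(f"{self.process}: unknown criticality rating")
        if self.rto_min > self.max_downtime_min:
            # An RTO looser than the tolerable downtime is a plan that
            # fails by design.
            issues.append(f"{self.process}: RTO exceeds max tolerable downtime")
        if self.criticality == "High" and not self.owner:
            issues.append(f"{self.process}: High-criticality process needs a named owner")
        return issues

row = BiaRow("customer login", "High", 30, 15, 0, ["Auth0", "RDS"], "VP Eng")
print(row.validate())  # [] — the example row is internally consistent
```

Keeping the BIA in a validated format means the quarterly review (section 7) can start from an automated diff rather than a line-by-line read.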
Criticality ratings:
- High — loss of process creates immediate regulatory, financial, or safety impact
- Medium — loss creates revenue or reputation impact within 24 hours
- Low — loss creates workflow inefficiency but no immediate business impact
How often should you test a BCP?
| Test type | Frequency | What it validates |
|---|---|---|
| Walkthrough | Quarterly | Everyone knows their role |
| Tabletop exercise | Semi-annually | Decisions flow correctly under pressure |
| Partial failover | Annually | Technical recovery works |
| Full failover | Every 2-3 years | End-to-end recovery is real |
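The cadence table above is easy to let slip; a minimal sketch of an overdue-test check, under the assumption that last-run dates are tracked somewhere (the dictionary keys and day counts here are illustrative):

```python
from datetime import date, timedelta

# Assumed cadences, mirroring the table above (days are approximate).
CADENCE_DAYS = {
    "walkthrough": 90,        # quarterly
    "tabletop": 182,          # semi-annually
    "partial_failover": 365,  # annually
    "full_failover": 1095,    # every ~3 years
}

def overdue_tests(last_run: dict[str, date], today: date) -> list[str]:
    """Return test types whose last run is older than the cadence allows.

    A test type with no recorded run at all is treated as overdue.
    """
    return [t for t, days in CADENCE_DAYS.items()
            if today - last_run.get(t, date.min) > timedelta(days=days)]

print(overdue_tests({"walkthrough": date(2025, 1, 10),
                     "tabletop": date(2024, 5, 1)},
                    today=date(2025, 2, 1)))
```

Wiring a check like this into the risk team's reporting turns "we should test more often" into a standing ticket.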
Regulated firms under DORA must conduct threat-led penetration testing (TLPT) every three years, which includes business continuity validation. See our DORA resilience testing requirements for the full scope.
Download: ready-to-use BCP template
The matproof business continuity plan template includes:
- Editable Word document with all seven sections pre-structured
- Excel BIA worksheet with 20 pre-filled example rows for common process types
- RACI matrix for incident roles
- Regulator notification templates aligned with DORA, NIS2 and GDPR timelines
Frequently asked questions
Q: What is the difference between a BCP and a disaster recovery (DR) plan?
A: The BCP covers the whole business (people, processes, suppliers, communication). The DR plan is a narrower subset focused specifically on IT systems recovery. A good BCP references the DR plan for IT-specific recovery procedures.
Q: Is a BCP legally required?
A: It depends on jurisdiction and sector. Financial institutions in the EU must have one under DORA Article 11. Critical infrastructure operators must have one under NIS2. ISO 27001 requires one (Annex A.17 in the 2013 edition; controls 5.29–5.30 on ICT readiness in the 2022 edition). Most other sectors have no hard legal requirement but may be contractually required by enterprise customers.
Q: How long should a BCP document be?
A: Between 20 and 60 pages for most mid-size organisations. Longer plans often fail because no one reads them. The most critical information should be on a 1-page quick-reference card for incident responders.
Q: Who should own the BCP?
A: A single named person, typically the Head of Risk, Chief Operating Officer, or Chief Information Security Officer. Ownership by committee means ownership by nobody.
Q: What is the single most common mistake in a BCP?
A: Not testing it. A plan that sits in a shared drive, untested for two years, will fail when you need it. Quarterly walkthroughs and annual failovers are non-negotiable.