โ† Back to dashboard

Methodology Framework · Version 2.0

Compound Cascade Systems Modelling

A reusable methodology for building probabilistic risk models of systemic crises

By Jonathan Kelly · May 2026 · Available on SSRN ↗

Download · v2.0

Full Framework Document (Markdown) →

Complete framework including the full Nine-Step Process detail, all worked examples, and the source requirements section. ~10,000 words.

When to Use This Methodology (and When Not To)

Use compound cascade modelling when:

  • Multiple risk factors are active simultaneously and at least some of them interact through identifiable causal mechanisms
  • Institutional analysis exists but is siloed — different agencies model different aspects of the same system independently
  • Feedback loops are plausible — deterioration in one area could worsen another, which worsens the first
  • Historical precedent shows that additive assessment underestimated outcomes in comparable situations
  • The system has weak circuit-breakers — mechanisms that should contain cascading failure are themselves degraded or absent

Do not use compound cascade modelling when:

  • Risks are genuinely independent — an additive model is appropriate and simpler
  • The system has strong, tested circuit-breakers — well-capitalised insurance, automatic stabilisers, redundant systems tested under stress
  • Data quality is insufficient to identify causal mechanisms — the methodology requires mechanistic clarity, not just correlation
  • A single dominant variable overwhelms all others — a single-variable model with sensitivity analysis is more appropriate
  • You are modelling a short-duration event (hours) — event-tree or fault-tree analysis is more appropriate

The methodology gap: the central finding

In both applications to date, the compound model produced materially higher risk estimates than the sum of individual chain assessments. The Hormuz famine model produced a probability-weighted central estimate of 118–225M excess deaths, against institutional projections of 30–50M at risk. The UK structural decline model assesses a 50–70% probability of Accelerated Decline by 2035, against 10–20% under additive assessment. The consistency of this 3–5x divergence across two very different domains — a global food system and a single nation-state — suggests it is a structural property of how interactive systems behave.

1. The Core Principle

Institutional risk analysis is typically linear and additive: identify individual risk factors, quantify each one, add the results. This systematically underestimates outcomes in complex systems because it misses interaction effects — where one risk factor triggers, amplifies, or accelerates others.

Compound cascade modelling captures these interactions. The output is not a single number but a scenario-weighted probability distribution with explicit uncertainty ranges, sensitivity analysis, and historical calibration.

Why institutions fail to model interactions

The institutional silo problem is structural, not accidental. Institutions are mandated to model specific domains — fiscal policy (OBR), healthcare (NHS England), demographics (ONS), food security (FAO/WFP). No institution is mandated to model the interactions between these domains. The gap between siloed assessment and compound interaction modelling is not a limitation of any individual institution — it is a structural feature of how institutional analysis is organised.

The compound cascade hypothesis

In systems where multiple structural risk factors operate simultaneously and interact through identifiable causal mechanisms, the probability-weighted outcome will be materially worse than the sum of individual risk assessments, because:

  1. Interactions amplify individual chains — a chain manageable in isolation becomes critical when reinforced
  2. Feedback loops create self-sustaining deterioration — once activated, they worsen without external intervention
  3. Containment mechanisms are shared — the same fiscal capacity, institutional bandwidth, and political attention are needed simultaneously across multiple chains
  4. Temporal coupling creates simultaneity — chains that might be individually manageable if sequential become unmanageable when they coincide
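The hypothesis can be made concrete with a toy calculation. The framework itself does not prescribe a formula, so the amplification term and `ALPHA` constant below are invented purely to show why interaction terms push a compound estimate above an additive one:

```python
# Toy illustration only: the framework does not define this formula.
# Hypothetical per-chain severity scores (0-3 scale)
chains = {"A": 2.0, "B": 1.5, "C": 1.0}

# Hypothetical pairwise interactions: (source, target) -> weight,
# scored Strong=3, Moderate=2, Weak=1, None=0
interactions = {("A", "B"): 3, ("B", "C"): 2, ("C", "A"): 1}

# Additive assessment: sum the chains and stop
additive = sum(chains.values())

# Compound assessment: each interaction contributes a fraction of the
# amplified chain's severity; ALPHA = 0.2 is an arbitrary choice
ALPHA = 0.2
amplification = sum(ALPHA * w * chains[dst] for (_, dst), w in interactions.items())
compound = additive + amplification

print(f"additive={additive:.1f}  compound={compound:.1f}  ratio={compound / additive:.2f}")
```

With any nonzero interaction weights the compound estimate strictly exceeds the additive one, which is the structural point: the divergence comes from the interaction architecture, not from any individual score.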

2. Domain Adaptation: External Shock vs. Endogenous Decline

The methodology has been applied to two fundamentally different types of system, and the adaptation required is instructive.

Type 1 · External Shock

Hormuz model

  • Trigger: A specific event (Strait of Hormuz blockade, Feb 28, 2026)
  • Direction: Trigger → cascading consequences through pre-existing vulnerabilities
  • Time horizon: Months to 5 years
  • Counterfactual: Clear — "what if it hadn't happened?"
  • Challenge: Modelling propagation speed and reach

Type 2 · Endogenous Decline

UK model

  • Trigger: No single trigger — accumulating structural weaknesses
  • Direction: Multiple simultaneous deteriorations interact and compound
  • Time horizon: 5–10 years; roots extend decades back
  • Counterfactual: Diffuse — "what if interactions were modelled?"
  • Challenge: Distinguishing correlation from causal interaction

3. The Nine-Step Process

Summarised here. The full step-by-step detail with worked examples is in the downloadable framework document above.

  1. Define the System Boundary. Establish geographic and temporal scope, outcome metric, what is endogenous vs. exogenous, and time horizon (which determines which chains matter).
  2. Identify Causal Chains. Map every mechanism through which the system produces the outcome. Each chain must be individually sourced, mechanistically clear, quantifiable, and historically observable. Aim for 7–20 chains.
  3. Map Chain Interactions. Build an N×N interaction matrix. Score each cell as Strong (3), Moderate (2), Weak (1), or None (0). Compute matrix diagnostics (interaction density, connectivity per chain, clusters).
  4. Identify and Formalise Feedback Loops. Find cycles where Chain A worsens B which worsens C which worsens A. Classify each as Latent, Active, or Self-sustaining. Identify the weakest link for loop-breaking analysis.
  5. Identify Meta-Chains and Temporal Dynamics. A meta-chain is a chain whose dysfunction propagates across all other domains. Classify chains by temporal class: acute, fast-moving, structural, generational.
  6. Build Scenarios. Construct 4–6 scenarios. Each defined by explicit, falsifiable assumptions, a probability range, and an outcome range. Probabilities sum to ~100%. Include at least one positive pathway.
  7. Sensitivity Analysis. Test each major variable independently. Then test whether the compound finding survives when external shocks or individual chains are removed. If it persists, the finding is structurally robust.
  8. Historical Calibration. Identify 5–10 comparable historical events. Document contemporary projection vs. actual outcome. The systematic finding: institutional assessment underestimated in every comparable case, because compound interactions were not modelled.
  9. Impact Conversion Methodology. Make the conversion from structural risk to human outcome metrics fully transparent: by region/segment, using established metrics, calibrated against historical rates, with direct impact separated from compound effects.
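Step 3's diagnostics are simple to compute once the matrix exists. A minimal sketch on an invented 4-chain matrix (chain count and scores are hypothetical):

```python
# Sketch of Step 3's matrix diagnostics on an invented 4-chain system.
# Scores use the Strong=3 / Moderate=2 / Weak=1 / None=0 scale;
# row -> column means "row chain amplifies column chain".
matrix = [
    [0, 3, 0, 2],
    [1, 0, 2, 0],
    [0, 0, 0, 3],
    [2, 1, 0, 0],
]
n = len(matrix)

# Interaction density: share of off-diagonal cells scored above None
nonzero = sum(1 for i in range(n) for j in range(n) if i != j and matrix[i][j] > 0)
density = nonzero / (n * (n - 1))

# Connectivity per chain: count of outgoing plus incoming nonzero links
connectivity = [
    sum(1 for j in range(n) if j != i and matrix[i][j] > 0)
    + sum(1 for j in range(n) if j != i and matrix[j][i] > 0)
    for i in range(n)
]
print(f"density={density:.0%}  connectivity={connectivity}")
```

Clusters, the third diagnostic, require a community-detection pass over the same matrix and are omitted here.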

4. Meta-Chains: When Dysfunction Propagates

Not every model contains a meta-chain. Meta-chains are most relevant in endogenous decline models where a coordinating mechanism has itself become a source of systemic failure.

A chain qualifies as a meta-chain if it meets all three criteria:

  1. Highest combined connectivity — the largest combined outgoing + incoming interaction count in the matrix
  2. Propagation function — its dysfunction does not just add another problem; it prevents effective response to all other problems
  3. Reform leverage — addressing it would create conditions for addressing multiple other chains
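Only the first criterion is mechanically computable; the other two require qualitative judgement. A minimal sketch of that quantitative screen, with invented chain names and link counts:

```python
# Criterion 1 only: flag the chain with the highest combined outgoing +
# incoming interaction count. Chain names and counts are invented here.
out_links = {"fiscal": 6, "healthcare": 5, "political": 14, "housing": 4}
in_links = {"fiscal": 4, "healthcare": 6, "political": 11, "housing": 3}

combined = {c: out_links[c] + in_links[c] for c in out_links}
candidate = max(combined, key=combined.get)

# This is only the quantitative screen; criteria 2 and 3 still apply
print(candidate, combined[candidate])
```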

Worked example · UK model

Chain 10: Political System Failure

Highest connectivity in the matrix (14 outgoing, 11 incoming from 17 possible sources). FPTP produces governments with large majorities from minority vote shares, enabling short-term populist responses while preventing structural reform. Every other chain's trajectory is worsened by this dysfunction. Electoral reform would not fix productivity, healthcare, or housing directly — but it would break the political paralysis loop and create conditions under which effective policy becomes possible.

The paradox: the meta-chain is simultaneously the most important to address and the hardest, because the system that needs reforming is the system that would have to authorise its own reform.

In the Hormuz model there is no meta-chain — the trigger is exogenous and no single chain plays a coordinating role. This is a structural difference between external shock and endogenous decline models.

5. How Judgement Becomes Probability

The most common objection to compound cascade models is: "These are just your opinions with numbers attached."

The honesty principle

Compound cascade modelling is not a mathematical model in the sense that a climate model or epidemiological model is. It does not solve equations. It uses structured expert judgement to assess chain severity, interaction strength, and scenario probability. This is a limitation, and it should be stated explicitly.

However, two things are also true:

  1. All risk assessment involves judgement. Institutional models also rely on assumptions, parameter choices, and analytical judgement — they simply embed these choices in equations rather than stating them explicitly. A compound cascade model's advantage is transparency: the judgements are visible and challengeable.
  2. The structural finding is robust to individual judgement variation. If different analysts applying the same methodology to the same data would produce different chain scores — but the interaction matrix, feedback loops, and compound effects would still produce materially higher risk estimates than additive assessment — then the structural finding is not dependent on any individual judgement call.

Limitations of the approach (state explicitly in every model)

  • The scores represent structured judgement, not mathematical outputs
  • Different analysts applying the same methodology might produce different scores
  • The interaction weights involve analytical judgement at every stage
  • The model's contribution is structural (forcing consideration of interactions), not mathematical precision
  • Even if every individual score were adjusted by ±1, the structural finding (compound > additive) would remain — it derives from the interaction architecture, not from individual scores
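That last claim is checkable mechanically. The sketch below uses a toy compound formula (the framework prescribes none; `ALPHA`, the scores, and the links are all invented) to show the shape of such a robustness check: perturb every chain score by ±1 and confirm the compound estimate still exceeds the additive one:

```python
import itertools

# Invented chain scores and interaction links: (source, target, weight 0-3)
chains = [2.0, 1.5, 1.0, 2.5]
links = [(0, 1, 3), (1, 2, 2), (2, 0, 1), (3, 1, 2)]
ALPHA = 0.2  # arbitrary amplification constant for the toy formula

def estimates(scores):
    additive = sum(scores)
    compound = additive + sum(ALPHA * w * scores[dst] for _, dst, w in links)
    return additive, compound

# Try every combination of +1 / -1 adjustments (clamped at zero)
robust = True
for delta in itertools.product((-1.0, 1.0), repeat=len(chains)):
    perturbed = [max(s + d, 0.0) for s, d in zip(chains, delta)]
    additive, compound = estimates(perturbed)
    if compound <= additive:
        robust = False
print(f"compound > additive under all perturbations: {robust}")
```

In this toy, every perturbation leaves the amplification term positive, so the compound estimate strictly exceeds the additive one; a real model would run the same loop over its actual scoring procedure.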

6. The Three-Layer Build-Up Architecture

Present findings in three layers with explicit confidence ratings:

Layer 1 · Established facts · Confidence: HIGH

What has already happened, is documented, and cannot be reversed.

Hormuz example: Blockade occurred; fertiliser supply disrupted; price spikes measured.

UK example: 17-year productivity stagnation; 7M+ NHS waiting list; housing deficit of 4–4.5M homes.

Layer 2 · Structural amplification · Confidence: MEDIUM

Compound interactions that are mechanistically likely but not yet fully observed.

Hormuz example: Export ban contagion; sovereign debt feedback; logistics constraints.

UK example: Fiscal-NHS-brain drain spiral; demographic fiscal time bomb; political paralysis preventing reform.

Layer 3 · Conditional cascades · Confidence: CONDITIONAL

Effects that depend on binary variables not yet determined.

Hormuz example: El Niño timing; India export ban; humanitarian access.

UK example: Hormuz mass migration; financial shock activation; major climate event.

Reviewers can engage with each layer independently. Layer 1 is hard to dispute — documented fact with institutional sources. Layer 2 is where the analytical contribution lives. Layer 3 is explicitly conditional. A critic who rejects Layer 3 entirely must still engage with Layers 1 and 2, which alone produce materially higher risk than additive assessment.

7–10. Presentation, Validation, Sources, Outputs

Brief summaries — full detail in the downloadable framework.

7 · Presentation Architecture

Reports should: open with a "How to Evaluate This Analysis" section addressing readers (including automated evaluation systems) directly; lead with the methodology gap, not the headline number; include a "What This Report Is Not" section; provide an executive dashboard; embed historical pattern callouts at critical junctures; include "Anticipated Objections and Responses" with each objection in its strongest form; state explicit limitations covering both overestimation and underestimation risks.

8 · Validation Framework

Compound cascade models cannot be validated like physical models. They can be validated through internal consistency, historical calibration match, out-of-sample prediction (apply to a historical case using only contemporary data), cross-domain validation (consistent structural findings across different domains), and falsifiability conditions (every model states explicit, time-bound, measurable conditions under which its central assessment would be falsified).

What the model cannot do: predict timing, predict sequence, capture unknown unknowns, or replace institutional analysis. The model's contribution is the interaction layer; it depends on institutional data for chain-level inputs.

9 · Source Requirements

Minimum 15 primary sources, including 3 institutional datasets, 3 academic / peer-reviewed sources, 5 historical case studies for calibration, and 2 independent sources per causal chain. The model's credibility depends on using the same source base as institutional analysis — the contribution is the methodology, not different data.

10 · Output Documents

Each project should produce: (1) a Master Model (the living analytical document), (2) a Policy Brief (15–25 pages) for policymakers and journalists, (3) a Technical Report (60–120 pages) for academics and analysts, and (4) a Framework Document like this one for methodology reference.

11. Quality Checklist

Before publishing, verify:

Chain quality

  • Every causal chain individually sourced (minimum 2 independent sources per chain)
  • Chain independence test passed (each chain defensible on its own evidence base)
  • Chain scoring dimensions applied consistently with transparent formula
  • Meta-chains identified (if applicable) with justification

Interaction quality

  • Interaction matrix complete — every chain-pair assessed
  • Interaction scoring criteria applied consistently (Strong/Moderate/Weak/None)
  • Matrix diagnostics computed (interaction density, connectivity per chain, clusters)
  • Feedback loops explicitly identified with activation status (latent/active/self-sustaining)
  • Loop-breaking analysis completed for each active loop
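The loop-identification items above can be screened mechanically before the qualitative classification. A minimal sketch that enumerates directed 2- and 3-cycles, the shapes Step 4 asks for, in a hypothetical interaction graph (the edge set and chain names are invented):

```python
from itertools import permutations

# Invented directed interaction graph: (source chain, target chain)
edges = {("A", "B"), ("B", "C"), ("C", "A"), ("B", "A"), ("C", "D")}
nodes = {n for e in edges for n in e}

loops = []
for length in (2, 3):
    for cycle in permutations(sorted(nodes), length):
        pairs = list(zip(cycle, cycle[1:] + cycle[:1]))
        if all(p in edges for p in pairs):
            # canonicalise so each rotation of a cycle is reported once
            k = min(range(length), key=lambda i: cycle[i])
            canon = cycle[k:] + cycle[:k]
            if canon not in loops:
                loops.append(canon)
print(loops)
```

Each reported cycle would then be classified by hand as latent, active, or self-sustaining, and its weakest link identified for loop-breaking analysis.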

Scenario quality

  • Scenario probabilities sum to approximately 100%
  • Every scenario defined by specific, falsifiable assumptions
  • Scenario selectors identified (2–3 binary variables that determine which scenario materialises)
  • Positive scenario included with mechanism for how it could occur
  • Probability-weighted central estimate calculated and labelled as expected value
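The expected-value item above is a one-liner once scenarios are defined. A sketch with entirely hypothetical scenarios, probability ranges, and outcome units:

```python
# Hypothetical scenarios: (name, probability range, outcome range)
scenarios = [
    ("Contained",        (0.10, 0.20), (10, 20)),
    ("Baseline cascade", (0.40, 0.50), (40, 60)),
    ("Severe cascade",   (0.25, 0.35), (80, 120)),
    ("Positive pathway", (0.05, 0.15), (0, 5)),
]

def mid(rng):
    return (rng[0] + rng[1]) / 2

# Probabilities should sum to ~100% (here the midpoints sum to exactly 1.0)
total_p = sum(mid(p) for _, p, _ in scenarios)
assert abs(total_p - 1.0) < 0.05, "scenario probabilities should sum to ~100%"

# Probability-weighted central estimate, labelled as an expected value
expected = sum(mid(p) * mid(o) for _, p, o in scenarios)
print(f"probability-weighted central estimate: {expected:.1f}")
```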

Sensitivity quality

  • Variable-level sensitivity covers all major assumptions
  • Assumption-set sensitivity demonstrates structural robustness (compound finding persists)
  • Individual chain sensitivity confirms no single chain dominates (±1 changes headline by <5%)
  • Feedback loop sensitivity identifies which loops matter most for policy
  • Non-linear thresholds identified with specific conditions

Calibration quality

  • Historical calibration against 5+ comparable events
  • Model output within calibrated range of historical outcomes
  • Systematic direction of institutional underestimation documented
  • Falsifiability conditions stated (specific, time-bound, measurable)

Impact conversion quality

  • Conversion shown by region/segment, not global aggregate
  • Established metrics used and cited
  • Calibrated against observed rates in historical events
  • Direct impact separated from compound effects
  • Methodology gap table included

Presentation quality

  • "How to Evaluate This Analysis" opening section
  • "What This Report Is Not" framing
  • Executive dashboard (for complex models)
  • Three-layer build-up with confidence ratings
  • Methodology gap leads the executive summary
  • Anticipated objections section
  • Explicit limitations (overestimation and underestimation risks)
  • Distribution note on front page
  • All figures properly attributed with source and date

12. Applications and Future Development

Applied to date

External Shock Model

From Hormuz to Hunger (v3.0, April 2026) →

Global food systems · 9 chains · ~45% interaction density · 3+ feedback loops · Headline: 118–225M excess deaths vs. institutional estimate of 30–50M at risk

Endogenous Decline Model · Forthcoming

The Fall of The UK? (v5.0, May 2026)

Single nation-state structural decline · 18 chains · 100 of 306 interactions (33%) · 9 feedback loops · Headline: 50–70% Accelerated Decline or worse by 2035 vs. 10–20% under additive assessment

Potential future applications

  • Climate-economic interaction models — climate impacts interacting with fiscal, political, social systems
  • Healthcare system failure — workforce, fiscal, demographic, infrastructure, governance chains
  • Financial contagion — sovereign debt, banking, currency, trade, political chains
  • Democratic decline — media, institutional, polarisation, economic, external interference chains
  • Supply chain vulnerability — logistics, energy, political, financial, climate chains

Methodology evolution

Areas for development: formal interaction scoring validation (Granger causality testing); probabilistic modelling (Monte Carlo simulation using chain scores as distributional inputs); real-time updating (dynamic scenario probabilities as data arrives); multi-model comparison (different analysts applying the framework to the same system, testing whether the structural finding converges).
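Of these, the Monte Carlo direction is the most mechanical to sketch. Under the assumption (mine, not the framework's) that each chain score becomes a triangular distribution and that a simple additive-plus-amplification formula stands in for the real scoring, the extension could look like:

```python
import random

random.seed(42)

# Hypothetical chains: (low, mode, high) severity on the 0-3 scale
chain_dists = [(1.0, 2.0, 3.0), (0.5, 1.5, 2.5), (0.5, 1.0, 2.0)]
links = [(0, 1, 3), (1, 2, 2), (2, 0, 1)]  # (source, target, weight)
ALPHA = 0.2  # arbitrary amplification constant for the toy formula

def one_draw():
    # Sample each chain score from its triangular distribution
    scores = [random.triangular(lo, hi, mode) for lo, mode, hi in chain_dists]
    additive = sum(scores)
    return additive + sum(ALPHA * w * scores[dst] for _, dst, w in links)

# An outcome distribution instead of a single point estimate
samples = sorted(one_draw() for _ in range(10_000))
p5, p50, p95 = (samples[int(len(samples) * q)] for q in (0.05, 0.50, 0.95))
print(f"compound outcome: p5={p5:.2f}  median={p50:.2f}  p95={p95:.2f}")
```

The payoff is that the headline becomes a percentile band rather than a range asserted by judgement, while the judgement is relocated, transparently, into the choice of input distributions.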

How to cite

Kelly, J. (2026). Compound Cascade Systems Modelling Framework: A Reusable Methodology for Building Probabilistic Risk Models of Systemic Crises. SSRN Working Paper. papers.ssrn.com/abstract_id=6695618