
CAL: A DSL Where the Cascade Is the Program

SWOT gives you four boxes. Porter gives you five forces. CAL gives you syntax. An introduction to the Cascade Analysis Language — a DSL that traces how failure and success propagate across six organizational dimensions, scores the result, and produces a deterministic decision.

Michael Shatny · 7 min read

The Problem With Frameworks

Business analysis frameworks are everywhere. SWOT gives you four boxes — Strengths, Weaknesses, Opportunities, Threats. Porter's Five Forces gives you five competitive pressures. PESTEL gives you six environmental categories. You fill them in, you discuss them in a meeting, you move on. The analysis lives in a slide deck.

The problem is not the frameworks — it is what they cannot do. They describe dimensions in isolation. A quality issue goes in one box. A revenue concern goes in another. But organizations do not fail in isolation. A quality issue cascades into customer dissatisfaction, which cascades into revenue pressure, which triggers regulatory scrutiny, which strains operations, which drives staff turnover. The cascade is the event — and no framework traces it.

CAL does.

| Traditional frameworks | CAL |
| --- | --- |
| Evaluate dimensions in isolation | Traces propagation across all six dimensions |
| Output is a filled-in template | Output is a scored, reproducible cascade map |
| Analysis depends on who is in the room | Same input always produces the same output |
| Decision is a discussion | Decision is a threshold: a number with a meaning |
| No validation of predictions | Prognostic cases close the loop empirically |

The Six Dimensions

CAL models organizations across six dimensions — not as independent categories but as a connected system. When one dimension moves, the others respond.

- D1 Customer: Market impact, user sentiment, adoption
- D2 Employee: Talent, workforce, human capital
- D3 Revenue: Financial health, pricing, market position
- D4 Regulatory: Compliance, legal exposure, policy risk
- D5 Quality: Risk management, product performance
- D6 Operational: Process, infrastructure, systems

The typical failure cascade runs: Quality issue (D5) surfaces, customer trust erodes (D1), revenue contracts (D3), regulators take notice (D4), operations are restructured under pressure (D6), talent exits (D2). CAL traces that chain explicitly — not as an observation after the fact, but as a declared program that scores each step.

D5 (Quality) → D1 (Customer) → D3 (Revenue) → D4 (Regulatory) → D6 (Operational) → D2 (Employee)

Typical failure propagation sequence, traced explicitly in CAL.
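The sequence above can be sketched as a simple ordered chain. The dimension IDs and names come from the article; the `trace_cascade` helper itself is an illustrative assumption, not part of CAL's runtime:

```python
# Failure cascade modeled as an ordered chain of dimensions.
# Dimension IDs/names are from the article; the helper is illustrative only.

CASCADE_ORDER = [
    ("D5", "Quality"),
    ("D1", "Customer"),
    ("D3", "Revenue"),
    ("D4", "Regulatory"),
    ("D6", "Operational"),
    ("D2", "Employee"),
]

def trace_cascade(trigger):
    """Return the propagation sequence from the trigger dimension onward."""
    ids = [dim_id for dim_id, _ in CASCADE_ORDER]
    start = ids.index(trigger)  # raises ValueError for an unknown dimension
    return [f"{dim_id} ({name})" for dim_id, name in CASCADE_ORDER[start:]]

print(" -> ".join(trace_cascade("D5")))
# D5 (Quality) -> D1 (Customer) -> D3 (Revenue) -> D4 (Regulatory)
#   -> D6 (Operational) -> D2 (Employee)
```

A cascade starting mid-chain (say, a revenue shock at D3) simply yields the downstream tail of the same sequence.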

What a DSL Changes

The key word is syntax. CAL is not a scoring rubric you fill in — it is a language with a parser, a runtime, and deterministic evaluation. You declare the conditions, the dimensions, the thresholds. The runtime executes it. The same program produces the same result every time.

That distinction matters more than it sounds. When analysis lives in a framework, it is only as good as the analyst applying it on a given day. When analysis is a program, it is auditable, reproducible, and independent of who is in the room.

CAL is a five-layer pipeline, each layer encoded as keywords:

| Layer | Keyword(s) | Role |
| --- | --- | --- |
| SENSE | FORAGE | Search for entities that meet declared conditions across dimensions |
| ANALYZE | DIVE INTO | Trace the cascade: what triggers what, in what sequence |
| MEASURE | DRIFT | Score the gap between what should be explained and what is |
| DECIDE | FETCH | Produce a numeric action score against a declared threshold |
| ACT | CHIRP / SURFACE | Emit alerts, surface results, close the loop |
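The five layers can be pictured as ordered stages that thread state through deterministically. The layer and keyword names come from the article; the handler-dict mechanics below are an illustrative assumption, not the published runtime:

```python
# Five-layer pipeline as ordered stages; same input always yields same output.
# Layer/keyword names are from the article; the mechanics are a sketch.

PIPELINE = [
    ("SENSE",   "FORAGE"),
    ("ANALYZE", "DIVE INTO"),
    ("MEASURE", "DRIFT"),
    ("DECIDE",  "FETCH"),
    ("ACT",     "CHIRP / SURFACE"),
]

def run_pipeline(state, handlers):
    """Apply each layer's handler in declared order, threading state through."""
    for layer, keyword in PIPELINE:
        handler = handlers.get(keyword)
        if handler is not None:
            state = handler(state)
    return state

# Usage: stub handlers that record the order of execution.
handlers = {kw: (lambda s, kw=kw: s + [kw]) for _, kw in PIPELINE}
print(run_pipeline([], handlers))
# ['FORAGE', 'DIVE INTO', 'DRIFT', 'FETCH', 'CHIRP / SURFACE']
```

The point of the sketch is the ordering guarantee: every run visits SENSE through ACT in the same sequence, which is what makes the output reproducible.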

A Minimal CAL Program

Here is a complete CAL program — sensing for cascade conditions, measuring the gap, and producing a decision:

example.cal

```
-- SENSE: find entities where conditions are elevated
FORAGE entities
WHERE sound > 7
ACROSS D1, D3, D5
DEPTH 2
SURFACE cascade_map

-- MEASURE: score the gap between methodology and performance
DRIFT cascade_map
METHODOLOGY 85
PERFORMANCE 35

-- DECIDE: produce an action score against a threshold
FETCH cascade_map
THRESHOLD 1000
ON EXECUTE CHIRP critical "Cascade conditions met — review D1, D3, D5"
```

FORAGE searches. DRIFT measures the gap — here, Methodology 85 minus Performance 35 equals a gap of 50, a significant teaching signal. FETCH produces a numeric score — Chirp × |DRIFT| × Confidence — and compares it against the declared threshold. If the score exceeds 1,000, the decision is EXECUTE. Below 500, it is WAIT.
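The scoring arithmetic above can be sketched directly. The DRIFT subtraction, the Chirp × |DRIFT| × Confidence formula, and the 1,000 / 500 thresholds come from the article; the sample Chirp and Confidence values, and the label for the middle band between the two thresholds, are assumptions for illustration:

```python
# Sketch of the FETCH scoring rule: score = Chirp * |DRIFT| * Confidence,
# compared against declared thresholds. Sample inputs are assumptions.

def drift(methodology, performance):
    """Signed gap between declared methodology and observed performance."""
    return methodology - performance

def fetch(chirp, drift_value, confidence, execute_at=1000.0, wait_below=500.0):
    """Return (score, decision) for the declared thresholds."""
    score = chirp * abs(drift_value) * confidence
    if score > execute_at:
        return score, "EXECUTE"
    if score < wait_below:
        return score, "WAIT"
    return score, "HOLD"  # middle band; this label is an assumption

gap = drift(85, 35)  # 50, as in the example program above
score, decision = fetch(chirp=25, drift_value=gap, confidence=0.9)
print(gap, score, decision)  # 50 1125.0 EXECUTE
```

With a Chirp of 25 and Confidence of 0.9 (hypothetical values), the gap of 50 yields 25 × 50 × 0.9 = 1,125, which clears the 1,000 threshold and produces EXECUTE.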

No discussion required. The program produces a decision.

CAL finds the hidden 70–90%

When an event visibly costs $100K, tracing the cascade reveals $700K–$1.1M in actual organizational cost. Traditional frameworks capture the visible event. CAL traces what propagates from it — across all six dimensions, scored and sequenced.

Production Evidence

CAL is not a proposed framework. It has been applied to 228+ case studies across 148+ sectors — banking, technology, geopolitics, healthcare, real estate, energy, sports, agriculture. FETCH scores range from 898 to 4,461.

The highest-scoring case is UC-039 — Silicon Valley Bank. Six of six dimensions compromised in 48 hours. FETCH score: 4,461. The cascade was traceable before the bank run accelerated — asset-liability mismatch, uninsured deposit concentration, an 18-month CRO vacancy. CAL maps the chain. The score reflects the severity.

The runtime is open source, published on npm, and carries a Zenodo DOI. The methodology is reproducible by anyone.

- 228+ case studies published
- 148+ sectors analyzed
- v1.3.0 current runtime version

What Comes Next

This post is the introduction. Three deeper concepts are worth their own treatments:

- DRIFT: the mathematics of the gap. How CAL encodes how much to explain as a signed number that determines adaptive communication.
- FETCH: deterministic decisions. Chirp × |DRIFT| × Confidence yields an action score with a semantic threshold; decision without opinion.
- RECALL in CAL: closed-loop validation. Prognostic cases fire at a future date, measure stated vs. actual confidence, and produce a calibration verdict.

Resources

Michael Shatny is a software developer, methodology engineer, and founding contributor to .netTiers (2005–2010), one of the earliest schema-driven code generation frameworks for .NET. His work spans 28 years of the same architectural pattern: structured input, generated output, auditable artifacts. CAL and RECALL are the latest expressions of that instinct.

ORCID: 0009-0006-2011-3258