The Alphabet Soup Numbers — Reverse-Engineered
- Angie D
- Feb 23
- 7 min read
Updated: Mar 1
(Part 3 of the series: The problem with “reported numbers”)
In Parts I and II, we examined how discretion and fragmented response pathways dilute accountability inside organizations. This installment applies that same structural lens to the broader oversight landscape.
If internal systems leak harm through discretionary gates, we should expect to see similar patterns in the public data. Part III tests that assumption by reverse-engineering the numbers published by Ontario’s workplace “alphabet soup” — WSIB, HRTO, ESA, MOL — to see what their own ratios reveal about loss, denial, and procedural attrition.
I call it “The Alphabet Soup” because when you are a worker trying to navigate harm in Ontario, you are immediately handed a bowl of acronyms:
WSIB, HRTO, ESA, MOL, OLRB, WSIAT.
Each one has its own forms, timelines, thresholds, filters, and appeal layers. Each publishes annual reports, and each reports throughput numbers. However, very few of them publish the ratios people actually want to understand:
How many harms become remedies.
How many claims are denied.
How many cases are dismissed procedurally before facts are heard.
I am not a mathematician or an actuary. I am someone who reads annual reports carefully and asks the questions the reports do not answer directly. I used publicly available numbers and simple ratio calculations — with the assistance of AI as a calculator and aggregator — to reverse-engineer the funnels. The math itself is not sophisticated. The questions are.
Every system (WSIB, HRTO, ESA/MOL) tells the public a story using intake, service standards, and throughput. What they don’t put front-and-centre is the thing people actually need to know:
Out of 100 real-world harms, how many reach a remedy — and how many get filtered out, delayed out, or procedurally ended before anyone hears the facts?
So for Part III, I’m doing this the only honest way:
take what they do publish
rebuild the funnel
show what the “nice” numbers are hiding in plain sight

Most “alphabet soup” systems run the same invisible pipeline:
Harm → Reporting attempt → Intake filter → Process burden → Delay → Procedural end (or settlement) → A tiny % gets a true merits decision
The public sees the front door. Workers live inside the middle.
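One way to see why so little survives the pipeline above: model it as multiplicative filters, where the end-to-end remedy rate is the product of the per-stage pass rates. The stage names mirror the pipeline, but every pass rate below is a hypothetical placeholder for illustration, not a published figure.

```python
# Illustrative funnel model: the remedy rate is the product of stage pass rates.
# All rates below are hypothetical placeholders, NOT published figures.
stages = {
    "reporting attempt": 0.50,    # share of harms that get reported at all
    "intake filter": 0.70,        # share of reports accepted into a process
    "process burden": 0.80,       # share that survive forms and documentation
    "delay": 0.90,                # share that persist through waiting periods
    "procedural screening": 0.30, # share that reach a true merits decision
}

def remedy_rate(pass_rates):
    """Multiply per-stage pass rates to get the end-to-end survival rate."""
    rate = 1.0
    for r in pass_rates.values():
        rate *= r
    return rate

per_100 = 100 * remedy_rate(stages)
print(f"Of 100 harms, roughly {per_100:.1f} reach a merits decision")
```

Even when every individual stage looks generous, the product is small: with these placeholder rates, fewer than 8 of 100 harms reach a merits decision. That multiplicative structure, not any single filter, is what the front-door numbers hide.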
HRTO — Human Rights Tribunal of Ontario
(FY 2023–2024)
From Tribunals Ontario’s annual report, HRTO statistics show:
Applications received: 3,687
Applications reactivated: 259
Cases closed: 4,826
Active cases at year-end: 8,546
Case processing time: 595 days (reported as a system measure)
Now here’s the part most people miss:
“Final decisions on the merits” (aka: the tribunal actually decides whether discrimination happened) were 40. No, this is not a typo.
There were also 1,389 “final decisions other than on the merits,” which the report explicitly says includes things like jurisdictional/procedural dismissals, withdrawals, and summary hearings.
The math: what % of new HRTO applications got a merits decision?
Using the HRTO’s own numbers:
Merits decisions: 40
Applications received: 3,687
That’s: 40 ÷ 3,687 = 0.01085 → ~1.1%
So in the same fiscal year the tribunal received 3,687 new applications, it issued 40 final merits decisions.
Now, anyone who understands backlog will say:
“Those 40 decisions aren’t only from cases filed that year.” Correct. That’s the whole point. Even with a backlog, the tribunal’s visible “truth-finding outputs” are tiny relative to intake. The system is structurally designed to resolve most files without a merits decision. When we run the full math, a different picture emerges.
What about “wins” and “losses” on the merits?
The report breaks out merits decisions into:
Discrimination found: 14
Discrimination not found: 27
Discrimination found vs intake: 14 ÷ 3,687 = 0.38%
Discrimination not found vs intake: 27 ÷ 3,687 = 0.73%
Again: not a “success rate.” A visibility rate.
It tells you what the system produces in public findings compared to how many people show up at the door.
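The HRTO ratios above are trivial to reproduce. The figures are the ones quoted from the Tribunals Ontario annual report; the `visibility_rate` helper is just my name for the division.

```python
# HRTO FY 2023-24 figures as quoted from the Tribunals Ontario annual report.
applications_received = 3687
merits_decisions = 40
discrimination_found = 14
discrimination_not_found = 27

def visibility_rate(outcomes, intake):
    """Outcomes as a share of same-year intake. A visibility rate, not a
    cohort success rate: decisions issued in a year include files from
    prior intake years."""
    return outcomes / intake

print(f"Merits decisions vs intake: {visibility_rate(merits_decisions, applications_received):.2%}")
print(f"Found vs intake:            {visibility_rate(discrimination_found, applications_received):.2%}")
print(f"Not found vs intake:        {visibility_rate(discrimination_not_found, applications_received):.2%}")
```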
WSIB — Psychological Injury by the Numbers
Psychological injury is a small category in Ontario’s compensation system, but small does not mean insignificant. When we reverse the math, the pattern becomes harder to ignore.
In the 2023 injury year, WSIB registered 57,880 lost-time claims across all injury types. Of those, 2,567 were mental stress claims. That means psychological injury represented 4.4% of all lost-time claims.
On its face, that looks minor, but here is the number that matters:
Of those 2,567 mental stress claims, 1,519 were allowed.
Reverse math:
1,519 ÷ 2,567 = 59% allowed
Which means:
41% were not allowed at the entitlement stage.
Four out of ten workers who filed psychological injury claims did not receive entitlement. That 41% figure exists after multiple filters have already occurred.
Those 2,567 workers are only the individuals who:
Recognized their injury as work-related
Reported it
Secured medical documentation
Met diagnostic criteria
Completed required forms
Entered adjudication
The data does not capture:
Workers who never filed
Workers who withdrew
Workers unable to meet evidentiary thresholds
Workers who abandoned claims due to procedural burden
The 41% non-allowance rate is not the total loss rate. It is the loss rate inside the adjudicated pool.
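The WSIB arithmetic above can be checked in a few lines. The inputs are the figures quoted from the 2023 statistical supplement; the rates are shares within the adjudicated pool only, which is the caveat the surrounding text keeps stressing.

```python
# WSIB 2023 injury-year figures as quoted in the text.
lost_time_claims = 57880       # all injury types
mental_stress_claims = 2567    # claims that entered adjudication
mental_stress_allowed = 1519   # allowed at the entitlement stage

share_of_lost_time = mental_stress_claims / lost_time_claims   # ~4.4%
allowance_rate = mental_stress_allowed / mental_stress_claims  # ~59%
non_allowance_rate = 1 - allowance_rate                        # ~41%

print(f"Mental stress share of lost-time claims: {share_of_lost_time:.1%}")
print(f"Allowed at entitlement: {allowance_rate:.0%}")
print(f"Not allowed (within the adjudicated pool only): {non_allowance_rate:.0%}")
```

Note that the denominator is the 2,567 adjudicated claims, not the unknown number of real-world harms upstream, so the 41% is a floor on loss, not a total.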
Psychological injury represents a small share of overall claims, yet once inside the system it faces a materially high entitlement barrier. Unlike many physical injuries, mental stress claims rely heavily on narrative credibility, documentation sufficiency, and adjudicative interpretation.
It is a high-filter funnel.
This is not an argument that WSIB rejects claims arbitrarily. It is an observation about structure. Psychological injury operates within a narrower adjudicative channel than most physical injuries. The visible category is small. The human impact is not.
PTSD, CPTSD, and Structural Recognition
When the category is broken down further, another pattern emerges. The overwhelming majority of allowed traumatic mental stress claims are diagnosed as PTSD. PTSD is typically linked to a discrete traumatic event. A single assault. A single violent exposure. A single identifiable incident. It aligns cleanly with event-based adjudication models.
CPTSD — Complex Post-Traumatic Stress Disorder — represents a small fraction of approvals. That distinction matters. CPTSD reflects cumulative exposure — prolonged harassment, repeated intimidation, chronic organizational breakdown. It is not easily tied to one date, one shift, one moment.
Systems designed around events process episodic trauma more comfortably than cumulative trauma. The data suggests that acute incidents are more legible to the compensation system than chronic erosion.
According to WSIB’s published traumatic mental stress breakdowns in recent reporting cycles, PTSD consistently represents the vast majority of allowed cases — typically in the range of 85–95% of approved traumatic mental stress claims.
CPTSD approvals, by contrast, have represented a small minority — generally in the low single digits to low teens as a percentage of total traumatic stress allowances, depending on the year reported.
In practical terms, this means that for every 100 traumatic mental stress claims allowed:
• Roughly 85–95 are classified as PTSD
• A small fraction — often fewer than 10 — are classified as CPTSD or other cumulative trauma presentations.
The ratio fluctuates slightly year to year, but the structural pattern remains stable: event-based trauma dominates approvals. Cumulative trauma remains comparatively rare in allowance data.
That does not mean cumulative harm is rare. It means cumulative harm is harder to adjudicate within episodic frameworks.
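The per-100 framing can be sketched directly. The 85–95% PTSD share is the approximate range quoted above, not an exact published count, so the sketch runs both ends of the range.

```python
# Per-100 breakdown of allowed traumatic mental stress claims, using the
# approximate shares quoted in the text (a range, not exact published counts).
ptsd_share_low, ptsd_share_high = 0.85, 0.95

def per_100_split(ptsd_share):
    """Split 100 allowed claims into PTSD vs CPTSD/other cumulative trauma."""
    ptsd = round(100 * ptsd_share)
    return ptsd, 100 - ptsd

for share in (ptsd_share_low, ptsd_share_high):
    ptsd, cumulative = per_100_split(share)
    print(f"At a {share:.0%} PTSD share: {ptsd} PTSD vs {cumulative} CPTSD/other per 100 allowed")
```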
Approval Rates Up — Payouts Down
Recent reporting cycles introduce another tension. In at least one recent year, allowance rates for traumatic mental stress increased — suggesting greater recognition at the entitlement stage.
Yet in that same period, total psychological injury benefit payouts dropped significantly, in one instance nearly halving compared to the prior year.
Approval up. Payout down.
Several explanations are structurally possible:
Shorter benefit duration
Faster return-to-work deeming
Increased partial loss-of-earnings calculations
Earlier benefit suspensions
More stringent ongoing entitlement reviews
None of these require bad faith, but they change the outcome.
Recognition without sustained compensation shifts the balance between symbolic allowance and material repair. If more claims are allowed while aggregate payouts decline sharply, the question is not simply whether access improved — but whether recovery support narrowed. The structural signal is tension between approval optics and compensation depth.
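One way to express that tension numerically is a "compensation depth" proxy: total payouts divided by allowed claims. The figures below are invented purely for illustration; WSIB does not publish its data in this form here, and the point is only the direction of movement, approvals up while depth falls.

```python
# Hypothetical illustration (NOT published WSIB figures): how allowance counts
# and aggregate payouts can move in opposite directions. If approvals rise
# while total payouts fall, average compensation per allowed claim drops.
def depth_per_claim(total_payout, allowed_claims):
    """Average benefit paid per allowed claim: a rough 'compensation depth' proxy."""
    return total_payout / allowed_claims

year_1 = depth_per_claim(total_payout=100_000_000, allowed_claims=1400)
year_2 = depth_per_claim(total_payout=55_000_000, allowed_claims=1600)  # payouts nearly halved, approvals up

print(f"Year 1 depth: ${year_1:,.0f} per allowed claim")
print(f"Year 2 depth: ${year_2:,.0f} per allowed claim")
```

Under these invented inputs, depth falls by more than half even though more claims were allowed, which is exactly the "symbolic allowance versus material repair" gap described above.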
What the Funnel Reveals
Layer the numbers together:
Psychological injury represents 4.4% of lost-time claims
41% of mental stress claims are not allowed at first decision
Acute PTSD dominates approvals
CPTSD — cumulative trauma — remains comparatively rare
Approval rates have increased in some cycles
Payout totals have, at times, decreased significantly
This suggests a system calibrated for discrete trauma events and tightly managed compensation exposure. The compensation system is the endpoint of a longer chain. By the time a worker files for traumatic mental stress, upstream organizational processes have already occurred — reporting pathways, internal investigations, policy responses.
If those upstream systems function well, fewer cumulative trauma cases should require compensation intervention. If they do not, the compensation system becomes the backstop. The numbers suggest that backstop is event-oriented and heavily filtered.
The Design Question
Part III is not an indictment of institutions. It is a design observation. When systems are structured around throughput metrics, episodic events, and controlled exposure, cumulative harm becomes procedurally difficult to translate into remedy.
The numbers are not dramatic. They are simply not foregrounded. Four out of ten mental stress claims do not receive entitlement. Cumulative trauma diagnoses remain rare in allowance data. Approval rates can rise while payout totals fall. Thousands of human rights applications arrive each year, while merits decisions number in the dozens.
These are not anomalies. They are structural signals, and once the funnel is visible, the redesign conversation can begin.
In Part IV, we will continue the reverse-math analysis with the Ontario Labour Relations Board (OLRB) and the broader MOL/ESA enforcement data. The goal is not to indict individual institutions. It is to complete the picture of how harm moves — or fails to move — through Ontario’s oversight architecture.
Source: Tribunals Ontario Annual Report 2023–2024; WSIB 2023 Statistical Supplement.
Because decisions issued in a year include files from prior intake years, these ratios are structural indicators, not cohort outcomes.