Response Behaviour vs Outcome
- Angie D
- Feb 20
Updated: Mar 18
What the Boards Measure — and What They Do Not
(Part 2 of the Institutional Response Design Series)

In the previous article in this series, I mapped how a single workplace harm event can move through internal grievance systems, employer investigations, and multiple external oversight bodies. The structural conclusion was straightforward: fragmentation is not incidental. It is embedded in statutory design.
This article turns to a different question. Not how systems are structured, but how they measure themselves.
Each oversight body reports within its mandate. The Ontario Labour Relations Board reports intake and closure rates. The Human Rights Tribunal of Ontario reports applications received and active caseload. The Workplace Safety and Insurance Board reports claims volume and appeal outcomes. Employment Standards enforcement reports investigations and monetary recovery. The Ministry of Labour reports inspections and compliance orders.
These metrics describe activity, outcomes, and how files conclude, but they do not describe how systems behave while matters are active.
In 2024–2025 alone, the Ontario Labour Relations Board closed approximately 3,024 matters while carrying roughly 1,577 pending cases forward. The Human Rights Tribunal of Ontario reported 5,056 new applications, 5,070 closures, and an active caseload of 17,779 open files. The Workplace Safety and Insurance Board registered approximately 64,000 claims in a single year, including roughly 6,700 lost-time claims in one quarter, and reported an appeal allowance rate of 32% — nearly one in three appealed decisions overturned or varied. Employment Standards enforcement initiated 11,940 investigations. The Ministry of Labour conducts tens of thousands of inspections annually, issuing thousands of compliance orders across sectors.
These figures represent more than administrative volume. They represent individuals navigating high-stakes workplace harm pathways.
Canadian population data suggests that approximately 8–10% of adults will meet criteria for PTSD at some point in their lifetime, with significantly higher rates among those exposed to chronic stress, workplace harassment, injury, or economic precarity. Estimates of chronic or complex trauma exposure are substantially higher in populations experiencing repeated institutional or workplace stressors. Even if only a conservative fraction of the 64,000 annual WSIB claimants experience trauma-related symptomatology — for example 10% — that represents 6,400 individuals in a single year navigating compensation decisions while potentially experiencing clinically significant stress responses.
If even 10% of the HRTO’s 17,779 active files involve individuals experiencing trauma-related symptoms linked to discrimination, harassment, or reprisal, that reflects 1,700+ people actively engaged in prolonged adjudicative processes under psychological strain at any given time.
These are conservative estimates. The scale is not theoretical.
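The back-of-envelope arithmetic above can be made explicit. This is a minimal sketch using the reported volumes; the 10% fraction is an illustrative assumption for sensitivity checking, not a measured prevalence rate:

```python
# Illustrative estimate only. The 10% trauma-prevalence fraction is an
# assumption, chosen as a conservative lower bound, not an empirical finding.
wsib_annual_claims = 64_000      # WSIB claims registered per year (reported)
hrto_active_files = 17_779       # HRTO active caseload (reported)

assumed_trauma_fraction = 0.10   # illustrative conservative fraction

wsib_affected = int(wsib_annual_claims * assumed_trauma_fraction)
hrto_affected = int(hrto_active_files * assumed_trauma_fraction)

print(f"WSIB claimants potentially affected per year: {wsib_affected:,}")
print(f"HRTO parties potentially affected at any given time: {hrto_affected:,}")
```

Varying the assumed fraction up or down does not change the structural point: even conservative inputs imply thousands of people under psychological strain inside active files.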
These are not peripheral disputes. These are high-volume governance systems operating continuously, at industrial scale.
When viewed independently, each body appears to function as designed. When examined collectively, however, a consistent structural pattern emerges. Across forums, intake volume remains high relative to adjudicative capacity, closure metrics dominate performance reporting, jurisdiction remains segmented, and procedural compliance functions as a gateway to substantive review.
Behaviour within the response phase is rarely tracked with the same visibility as final disposition.
Outcomes are recorded. The behavioural pathway that precedes those outcomes is not. Closure statistics tell us how files end. They do not reveal how individuals moved through the system before the file closed. They do not reflect the duration between filing and first response, nor do they reflect forum transfer, reconsideration loops, narrowing of scope, or procedural attrition prior to substantive review.
These are not peripheral features. They are structural features of institutional response architecture. They are also lived in real time.
In high-volume adjudicative environments, delay is often interpreted as an operational reality rather than a qualitative variable. Yet from a governance perspective, time is not neutral. Jurisdictional clarification is not neutral. Procedural filtering is not neutral. They shape the experience of navigating the system, even when the final outcome category appears administratively clean.
A matter recorded as “resolved” may have travelled through multiple redirections before closure. A dismissal may reflect threshold screening rather than adjudication on the merits. A withdrawal may reflect fatigue rather than settlement. Performance indicators do not distinguish among these pathways.
This is not a critique of individual actors. It is an observation about measurement design. What institutions choose to measure shapes what becomes visible in reform discourse. When outcome becomes the dominant metric, response behaviour disappears from view.
The distinction between behaviour-focused evaluation and response-focused evaluation clarifies this divergence.
[Graphic: behaviour-focused vs response-focused evaluation]
The graphic contrasts two analytic centres. A behaviour-focused framework evaluates whether a rule was violated and whether a sanction followed. A response-focused framework asks a different question: how did the institution behave once harm was disclosed? That shift in analytic centre changes what counts as integrity.
When oversight systems are examined collectively rather than in isolation, the pattern becomes difficult to ignore.
The Ontario Labour Relations Board reports high volumes of matters closed each year, with a significant proportion categorized as resolved without hearing. The Human Rights Tribunal of Ontario reports annual intake and closure figures that appear roughly balanced, while maintaining an active caseload in the tens of thousands. The Workplace Safety and Insurance Board processes tens of thousands of claims annually and reports an appeal allowance rate that indicates meaningful variance between decision layers. Employment Standards enforcement statistics record thousands of investigations initiated, while inspection-based enforcement under the Occupational Health and Safety Act documents thousands of workplace visits and compliance orders.
Each of these numbers is operationally important. Each reflects a system functioning within its statutory mandate. Yet across forums, similar structural characteristics emerge.
Intake volume remains high relative to adjudicative or investigative capacity. Closure metrics dominate reporting. Jurisdiction is segmented across multiple bodies. Procedural compliance operates as a gateway to substantive review. Behaviour during the response phase remains largely unmeasured.
A file that settles at mediation and a file withdrawn after extended delay both contribute to closure statistics. An inspection resulting in a compliance order and a reprisal complaint dismissed at adjudication appear as separate statistical events. An initial entitlement denial later overturned on appeal is recorded as two distinct decisions rather than a sequence.
The event may be singular. The reporting is fragmented.
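The compression described above can be shown with a toy example. The records and field names below are entirely hypothetical; the point is only that two very different pathways contribute identically to a closure statistic:

```python
# Hypothetical case records. All identifiers, fields, and values are
# invented for illustration; they do not reflect any real tribunal data.
files = [
    {"id": "A", "outcome": "settled at mediation", "days_open": 90,
     "events": ["filed", "mediation", "settled"]},
    {"id": "B", "outcome": "withdrawn after delay", "days_open": 640,
     "events": ["filed", "forum transfer", "reconsideration", "withdrawn"]},
]

# An outcome-focused report counts closures only: both files look the same.
closures = len(files)
print(f"files closed: {closures}")

# A response-focused report would also surface the pathway behind each closure.
for f in files:
    print(f["id"], f["days_open"], "days:", " -> ".join(f["events"]))
```

The closure count is identical for both files; only the event sequence distinguishes a settlement from attrition.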
From the perspective of institutional architecture, this fragmentation is predictable. Statutory design distributes authority across specialized bodies. Each evaluates a distinct legal question. Each records its own outcomes. None integrates the behavioural trajectory across the full response sequence.
From the perspective of measurement design, however, something else occurs. Closure categories compress meaningfully different pathways into uniform endpoints. Time-to-first-response, duration before hearing, frequency of forum transfer, abandonment prior to review, and appeal-layer variance are not typically foregrounded as integrity indicators.
What becomes visible are counts. What remains obscured is sequence. This distinction is particularly relevant when institutions adopt the language of trauma-informed or psychologically safe practice. Vocabulary can evolve independently of structure. Empathy training can coexist with opaque timelines. Statements of care can coexist with procedural narrowing.
When trauma-informed principles are embedded only at the level of language, the system’s architecture remains unchanged. When they are embedded at the level of design—through bounded discretion, predictable sequencing, transparent reasoning, and documented response timelines—the measurement model changes as well.
Outcome metrics are especially susceptible to distortion. A file may close within prescribed timelines and be recorded as resolved. Yet the individual may have experienced prolonged uncertainty, repeated evidentiary escalation, or jurisdictional transfer during the active phase of the matter.
Completion is measured. Response behaviour is not. This does not imply systemic bad faith. It reflects the consequences of evaluating governance at the endpoint rather than across the full response trajectory.
From Scale to Design
The numbers are large. Three thousand labour board matters closed in a year. Nearly eighteen thousand active human rights files. Sixty-four thousand compensation claims registered. Nearly twelve thousand employment standards investigations. Tens of thousands of workplace inspections.
These systems are not peripheral. They operate continuously, at industrial scale. Yet across forums, the dominant performance indicators remain endpoint metrics — closures, allowances, denials, recoveries, orders. They tell us how matters conclude. They do not tell us how matters unfold.
The space between filing and disposition is where individuals actually experience governance. It is where delay accumulates. It is where jurisdiction is clarified, documentation is repeated, evidentiary thresholds are applied, and forum transfers occur. It is also where trust is formed or eroded.
None of this requires malicious intent. High-volume systems prioritize throughput. Statutory mandates divide authority. Specialized bodies evaluate distinct legal questions. Fragmentation is not the product of individual failure; it is the product of institutional design.
When closure becomes the primary indicator of success, procedural friction can appear statistically neutral. Withdrawal can resemble resolution. Settlement can resemble efficiency. Appeal reversals can signal instability only after extended delay.
Without behavioural metrics, systems risk evaluating themselves only at the finish line. The question is not whether oversight bodies are functioning within mandate. The question is whether governance integrity can be meaningfully assessed without measuring the response phase itself.
If response quality shapes lived experience — and long-term institutional confidence — then response behaviour must become measurable. Not adversarially, but architecturally.
In the next article, I will turn from structural description to design. What would it mean to track time-to-first-response? To measure forum transfer frequency? To identify where attrition concentrates? To evaluate variance across decision layers rather than only at endpoints?
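To make these questions concrete, here is a minimal sketch of what response-phase metrics could look like, computed from a per-matter event log. The log structure, dates, and event names are hypothetical, invented purely to illustrate the shape of the measurement:

```python
from datetime import date

# Hypothetical event log for a single matter. Every date and event name
# below is invented for illustration; no real case data is represented.
events = [
    ("2024-01-10", "filed"),
    ("2024-04-02", "first_response"),
    ("2024-06-15", "forum_transfer"),
    ("2024-09-01", "forum_transfer"),
    ("2025-02-20", "closed"),
]

def days_between(start: str, end: str) -> int:
    """Calendar days between two ISO-format dates."""
    return (date.fromisoformat(end) - date.fromisoformat(start)).days

filed = next(d for d, e in events if e == "filed")
first = next(d for d, e in events if e == "first_response")

time_to_first_response = days_between(filed, first)
transfer_count = sum(1 for _, e in events if e == "forum_transfer")

print(f"time to first response: {time_to_first_response} days")
print(f"forum transfers: {transfer_count}")
```

Aggregated across a caseload, indicators like these would sit alongside closure counts rather than replace them, making the response phase itself visible in performance reporting.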
Governance reform cannot focus exclusively on how matters conclude. It must also examine how they unfold.
This article draws on publicly available 2024–2025 reporting from the Ontario Labour Relations Board, Tribunals Ontario (HRTO), the Workplace Safety and Insurance Board, Ontario Employment Standards enforcement statistics, and Ontario Ministry of Labour inspection publications.