How a $1,800 Repair Became a $3,000 Problem

When services don’t adapt, costs hit customers first and come back to the business

Over the December holidays, my car was broken into in New York City.
The rear glass was shattered. The car was exposed.

That context matters.

In NYC, a damaged car cannot just sit:

  • Garages will not take it

  • Street parking exposes it to the weather and theft

  • Winter turns a contained issue into something bigger, fast

Speed was not a convenience. It was damage containment, and delay was the cost.

This pattern shows up in services where conditions change faster than systems can adapt. When that happens, outcomes start to break.¹ ⁵

What Was Actually Happening

From the moment the claim was filed, the conditions looked like this:

  • No safe place to store the car

  • Active exposure increased damage risk

  • Holiday timing limited repair availability

At the same time, the system operated as designed—optimized for standard flow, not real-world variability:

  • A mobile-first flow built for delay, not urgency

  • Approved vendors with no near-term availability

  • A nearby dealer that could fix it immediately—but was blocked by policy

  • No case ownership, so every interaction reset the situation

The issue wasn’t the claim.
It was a system operating outside the conditions it was built for.

When real-world conditions fall outside a system’s design envelope, performance degrades predictably.¹ ⁵


Why It Escalated

Emotion isn’t noise.
It’s a real-time signal that the situation is no longer manageable.

People continuously appraise risk, control, and their ability to cope.² ³

As the car sat exposed, urgency increased, and confidence dropped.

There was no clear path forward.
No control over what would happen next.
The situation was getting worse over time.

This wasn’t just uncertainty. It was a loss of coping ability.

When people can’t see a credible path to resolution, they act to reduce risk.⁴


What Happened Next

The behavior was predictable. And measurable.

Customer behavior:

  • Repeated calls to establish urgency

  • Attempts to find faster solutions

  • Escalation as risk became visible

System behavior:

  • Adherence to vendor and policy constraints

  • Fragmented handling across agents

  • Delayed approvals while exposure continued

Each interaction restarted the case.
Context had to be rebuilt and reinterpreted every time.

This wasn’t resistance.
It was compensation for a system that couldn’t hold together.

Behavior shifts when capability, opportunity, or motivation is constrained.⁶


What the Delay Actually Cost

Original estimate:
~$1,805

Cost of delay:
~$2,500 to $3,400

More than the repair itself.

Where it showed up:

  • Customer time and lost wages

  • Rental extension

  • Duplicate inspections

  • Vendor churn and towing

  • Repeated handling and coordination

  • System overhead

This excludes additional damage from extended exposure.

None of these costs was part of the original problem. They were introduced by the system’s inability to adapt.

This is the cost of running a context-dependent need through a system designed for standard conditions.

The system protected the process.
The cost showed up everywhere else: operationally, financially, and experientially.


The Miss

Most teams diagnose this as a behavior problem:

  • The customer escalated too quickly

  • The process wasn’t followed

That misses the mechanism.

Escalation is not the problem. It’s the signal.

When systems cannot adapt to changing conditions:

  • Uncertainty rises

  • Control collapses

  • People compensate through workarounds and escalation

What looks like friction is the system revealing its limits.


Where AI Actually Helps

This is not primarily a data problem. It’s a signal detection problem.

The data is already there:

  • Repeat contact

  • Rising urgency

  • Vendor mismatch

  • Incomplete resolution steps

The system isn’t reading the signals early enough to act.

Escalation is not an exception; it is a detectable pattern.⁷

AI’s role is to:

  • Detect when cases exit expected conditions

  • Identify loss of control and rising uncertainty

  • Predict escalation before breakdown⁸

  • Trigger earlier intervention and ownership

This shifts optimization from managing workflows to managing system stability.
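As a toy illustration of the four roles above, the signals the article lists (repeat contact, rising urgency, vendor mismatch, incomplete resolution steps) could be combined into a simple rule-based score. This is a minimal sketch under assumed field names and thresholds, not a description of any real carrier's system.

```python
from dataclasses import dataclass

@dataclass
class CaseSnapshot:
    # All fields are hypothetical; a real system would derive them from claim data.
    contact_count: int        # repeat contact
    urgency_mentions: int     # rising urgency (e.g., exposure, theft, weather)
    vendor_available: bool    # vendor mismatch if False
    steps_completed: int      # incomplete resolution steps if below expected
    steps_expected: int

def escalation_score(case: CaseSnapshot) -> int:
    """Count how many escalation signals are currently active (0-4)."""
    signals = [
        case.contact_count >= 2,
        case.urgency_mentions >= 1,
        not case.vendor_available,
        case.steps_completed < case.steps_expected,
    ]
    return sum(signals)

def needs_intervention(case: CaseSnapshot, threshold: int = 2) -> bool:
    """Trigger earlier ownership once multiple signals co-occur."""
    return escalation_score(case) >= threshold

# A case resembling the one in the article: all four signals active.
case = CaseSnapshot(contact_count=3, urgency_mentions=2,
                    vendor_available=False, steps_completed=1, steps_expected=4)
print(needs_intervention(case))  # True
```

The point of the sketch is that none of these inputs is new data; each already exists in the contact log and case record, and the only addition is reading them together.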


What You Can Do Differently

These changes align systems with how people actually behave under uncertainty, drawing on research in service design, behavioral science, and system performance (see Additional Reading).

  • Detect non-routine conditions early
    (exposure risk, time sensitivity, vendor mismatch → measurable signals)
    → e.g., flag any case with 2+ contacts and no scheduled resolution within 48 hours

  • Stabilize context through ownership
    (reduce rework, improve coordination, increase accountability)
    → e.g., assign one case owner once escalation signals appear

  • Enable controlled exceptions
    (optimize total cost, not local compliance)
    → e.g., allow expanded vendor use when approved options exceed wait thresholds

  • Reduce uncertainty immediately
    (clear next steps, timing, fallback)
    → e.g., provide a same-day plan with timeline and fallback

  • Measure total system outcomes
    (not just process adherence, but downstream cost and experience)
    → e.g., track cost of delay alongside claim cost


The Real Opportunity

This isn’t specific to insurance.

It shows up anywhere a system encounters real-world variability:

  • Healthcare

  • Financial services

  • Customer support

When conditions change, and a service doesn’t adapt:

  • emotion escalates

  • behavior shifts

  • outcomes degrade

These outcomes are not driven by people. They are produced by the system.

This is a system design problem.


The Point

We don’t fix behavior.  We fix the conditions that drive it.

hello@stickwithglue.com

Methods note. This article uses established frameworks from health systems, emotion theory, uncertainty, and behavior change to explain how service conditions drive outcomes.¹–⁶ These references inform the framing and mechanisms described here; the cost figures and operational details are drawn from a single, real-world claim experience rather than a formal empirical study.

References

  1. Donabedian, A. (1966). Evaluating the quality of medical care. Milbank Memorial Fund Quarterly, 44(3), 166–203.

  2. Lazarus, R. S. (1991). Emotion and adaptation. Oxford University Press.

  3. Scherer, K. R. (2005). What are emotions? And how can they be measured? Social Science Information, 44(4), 695–729.

  4. Carleton, R. N. (2016). Fear of the unknown: One fear to rule them all? Journal of Anxiety Disorders, 41, 5–21.

  5. World Health Organization. (2021). Health system performance assessment: A framework for policy analysis. World Health Organization.

  6. Michie, S., van Stralen, M. M., & West, R. (2011). The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Science, 6, 42.

  7. Anirudh K., et al. (2020). Customer Support Ticket Escalation Prediction using Feature Engineering. arXiv:2010.06145.

  8. Zhao Y., et al. (2023). Developing an Artificial Intelligence-Guided Signal Detection System. Frontiers in Pharmacology.

Additional Reading

The works below are not directly cited in the main text, but they are relevant to the broader themes and empirical backdrop: claims friction, self‑service failure, escalation, and system design.

  • Dahle, L. H. (2016). Designing for people in crisis: Service design for an emergency room. Department of Product Design, Norwegian University of Science and Technology (NTNU).

  • Manderson, K., Taylor, N. F., Lewis, A., & Harding, K. E. (2025). Service-level interventions to reduce waiting time in outpatient and community health settings may be sustained: A systematic review. BMJ Open Quality, 14(1).

  • McKinsey & Company. (2016). The growth engine: Superior customer experience in insurance.

  • In2. (2024). Inefficiencies in insurance claim management and the legacy systems dilemma.

  • Insurance Information Institute. (2024). Auto insurance claims and loss severity.

  • Buell, R. W., Campbell, D., & Frei, F. X. (2025). Are self-service customers satisfied or stuck? Production and Operations Management.

  • Moon, Y., & Frei, F. X. (2000). Exploding the self-service myth. Harvard Business Review, 78(3), 26–27.

  • Gallup. (2014). Why great managers are so rare. Gallup Workplace.

  • Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

  • Tversky, A., & Kahneman, D. (1991). Loss aversion in riskless choice: A reference-dependent model. Quarterly Journal of Economics, 106(4), 1039–1061.

  • Weick, K. E. (1995). Sensemaking in organizations. Sage Publications.

  • Batalden, P. B., & Davidoff, F. (2007). What is “quality improvement” and how can it transform healthcare? Quality and Safety in Health Care, 16(1), 2–3.

  • Solar, O., & Irwin, A. (2010). A conceptual framework for action on the social determinants of health. World Health Organization.

  • Robotham, D., et al. (2016). Appointment reminder systems are effective but not optimal: Results of a systematic review and evidence synthesis. BMC Health Services Research, 16, 394.

  • Scherer, K. R. (2013). Driving the emotion process: The appraisal component. In M. D. Robinson, E. R. Watkins, & E. Harmon-Jones (Eds.), Handbook of cognition and emotion (Chapter 12). Oxford University Press.

  • West, R. (2020). A brief introduction to the COM‑B model of behaviour and the PRIME theory of motivation. Prevention Collaborative / University College London.

Next

Escalation has a Logic. So Does Resolution.