Resilient Data Futures
Method · M-0004 · draft

The 1:10:100 Cost Heuristic (Labovitz & Chang)

§7 · 2026-05-03 · 6 out · 6 in

A widely used heuristic in quality and reliability engineering, originally documented by Labovitz and Chang in 1992. The heuristic states:

  • $1 to prevent a defect at the source
  • $10 to detect and correct it after bad data propagates
  • $100 to handle it once bad data has driven decisions

The ratio is order-of-magnitude rather than exact. It expresses the geometric growth of cost when defects are absorbed downstream rather than prevented at the point of origin.

Applied to research data infrastructure:

  • Prevention is the Tier 3 deployment at near-zero marginal cost on existing institutional infrastructure (S-0080, S-0006, S-0007, S-0008).
  • Detection and correction is forensic recovery after loss — re-collection (when feasible), reconstruction from fragments, manual chasing of data through emeritus PI archives.
  • Handling once decisions have been driven is the FCA exposure, retraction cascade, faculty-flight cost, and reputational and competitive cost the paper quantifies in §5.

The heuristic gives the rest of §7 its structure. The paper prices Tier 1 ($1 per dataset on cloud archive), Tier 2 ($1K-$25K/year per institution for coordinated preservation), and Tier 3 (effectively zero marginal cost on existing institutional infrastructure), then prices the consequences of architectural failure (the ~$1.1B/year representative-R1 latent liability quantified by M-0003).

The heuristic does not require the exact 1:10:100 ratio to hold for the argument to bind. It only requires that prevention be substantially cheaper than the realized cost of failure — a property the paper documents directly by setting the cost-side numbers in §7 against the liability-side numbers in §5, yielding a multiple closer to 1:1,000,000 than 1:100 in the representative-R1 case.
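The back-of-envelope arithmetic behind that multiple can be made explicit. A minimal sketch, using the figures stated above (the ~$1.1B/year representative-R1 liability from M-0003/§5, and the $1K low end of the Tier 2 range as a stand-in for the annual prevention spend — the choice of the low end is an illustrative assumption, not the paper's claim):

```python
# Illustrative prevention-vs-failure arithmetic for the representative-R1 case.
# ASSUMPTION: the Tier 2 low end ($1K/institution/year) proxies the prevention spend.

classic_ratio = (1, 10, 100)  # Labovitz & Chang 1:10:100 (prevent : correct : consequence)

prevention_per_year = 1_000           # Tier 2 low end, $/institution/year (§7)
latent_liability_per_year = 1.1e9     # representative-R1 latent liability, $/year (M-0003)

multiple = latent_liability_per_year / prevention_per_year
print(f"prevention : realized failure ≈ 1 : {multiple:,.0f}")
# → prevention : realized failure ≈ 1 : 1,100,000
```

Taking the $25K high end of Tier 2 instead still leaves the multiple at ~1:44,000 — orders of magnitude past the classic 1:100, which is all the argument needs.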