Moral Hazard
A person with fire insurance might be less careful with candles. A bank expecting government bailouts might make riskier loans. A driver with comprehensive coverage might park in sketchy neighborhoods. When you’re protected from consequences, your behavior changes. This is moral hazard: the tendency to take more risks when someone else absorbs the downside.
The term originated in insurance. Insurers noticed that the insured behave differently than the uninsured — not because insurance attracts reckless people (that’s adverse selection), but because insurance changes incentives. The same person, given protection, becomes less cautious. Skin in the game matters.
Moral hazard appears wherever protection meets decisions. Managers risking shareholder money. Politicians spending taxpayer money. Employees with job security. Lenders who expect guarantors to absorb defaults. The structure isn’t “bad people do bad things” but “rational people respond to incentives.” Reduce consequences, expand risk-taking.
This doesn’t mean all insurance is bad — the value of protection can exceed the cost of behavior change. But pretending moral hazard doesn’t exist leads to naive system design. Every protection creates some incentive to exploit the protection. The only question is how much and whether the benefits outweigh the costs.
Controlling moral hazard means reinstating skin in the game: those who make decisions should bear consequences. Deductibles make the insured share in losses. Covenants restrict borrower behavior. Clawbacks recover executive bonuses when risks materialize.
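The effect of a deductible can be sketched as a toy expected-value model. Everything here is an illustrative assumption (the loss size, probabilities, and convenience benefit are invented numbers, not from the text): a self-interested agent compares the expected cost of being careful against being careless, but only counts the portion of the loss they personally bear.

```python
# Toy model of moral hazard (illustrative numbers, assumed for this sketch):
# an insured agent picks "careful" or "careless" behavior. Carelessness
# yields a private benefit but raises the chance of a loss. The agent only
# weighs the part of the loss they personally bear.

LOSS = 10_000                         # size of the loss if the bad event happens
P_CAREFUL, P_CARELESS = 0.01, 0.10    # loss probability under each behavior
BENEFIT_OF_CARELESS = 300             # convenience value of skipping precautions

def chooses_careless(deductible: float) -> bool:
    """Return True if the agent prefers carelessness at this deductible."""
    borne = min(deductible, LOSS)     # portion of the loss the agent pays
    cost_careful = P_CAREFUL * borne
    cost_careless = P_CARELESS * borne - BENEFIT_OF_CARELESS
    return cost_careless < cost_careful

print(chooses_careless(deductible=0))      # fully insured -> True (careless)
print(chooses_careless(deductible=5_000))  # shares losses -> False (careful)
```

With no deductible the agent bears none of the loss, so the private benefit of carelessness wins; a meaningful deductible flips the calculation, which is exactly the incentive repair the paragraph above describes.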
The deeper insight: systems that fully separate decision-making from consequence-bearing will be exploited. The exploiters might not even be malicious — they’re just responding to incentives. Good incentive design accounts for the behavior change that protection induces.
Related: skin in the game, incentives, adverse selection, principal agent, antifragility