Risk vs Uncertainty
Frank Knight drew the distinction in Risk, Uncertainty, and Profit (1921). Risk involves known probability distributions — you can calculate expected values, price insurance, optimize choices. Uncertainty involves unknown distributions — you don’t know the odds, maybe don’t even know the possible outcomes.
A roulette wheel presents risk. The probabilities are physical, observable, stable. A new market entry presents uncertainty. Will customers want this? Will competitors respond? What regulations will emerge? No reference class gives reliable frequencies. Each startup is, in meaningful ways, unprecedented.
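A minimal sketch of what “calculable” means here, assuming a standard American wheel (38 pockets, 35:1 payout on a single number):

```python
# Expected value of a $1 single-number bet on an American roulette wheel:
# 38 pockets (1-36, 0, 00); a win pays 35:1, a loss forfeits the stake.
p_win = 1 / 38
payoff_win, payoff_lose = 35, -1

ev = p_win * payoff_win + (1 - p_win) * payoff_lose
print(f"Expected value per $1 bet: {ev:.4f}")  # about -0.0526

# The same calculation for a new market entry has no defensible inputs:
# neither the outcome space nor the probabilities are known.
```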
Knight argued that profit comes from uncertainty rather than risk. Risks can be insured, hedged, diversified away. Insurance companies profit from risk by pooling it. But no one can insure truly uncertain outcomes because no one can price them. The entrepreneur who acts under uncertainty — where others can’t calculate odds — earns the profit that uncertainty creates.
This explains why entrepreneurship exists as a distinct function. If outcomes were calculable, capital markets would fund optimal portfolios and no one would need to bear uncertainty personally. It’s precisely because outcomes can’t be calculated that someone must exercise judgment, stake resources, and either reap rewards or absorb losses.
The distinction challenges the foundations of decision theory. Expected utility maximization requires probability distributions. Without them, you can’t compute expectations. Decision theorists developed alternative frameworks: maximin (maximize the minimum possible outcome), minimax regret (minimize worst-case regret), and preference models built around Ellsberg-style ambiguity aversion. None fully satisfies.
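A toy comparison of the first two criteria, applied to a made-up payoff table; the actions, states, and numbers are hypothetical, chosen only to show that the criteria can disagree:

```python
# Payoff table for a decision under uncertainty: rows are actions, columns are
# states of the world with no probabilities attached, so expected utility is out.
payoffs = {
    "expand":   [150, -20,  5],
    "hold":     [ 30,  10, 20],
    "withdraw": [ 10,   5, 10],
}

# Maximin: choose the action whose worst-case payoff is largest.
maximin_choice = max(payoffs, key=lambda a: min(payoffs[a]))

# Minimax regret: regret in a state = best payoff available in that state minus
# the action's payoff there; choose the action with the smallest maximum regret.
n_states = len(next(iter(payoffs.values())))
best_per_state = [max(p[s] for p in payoffs.values()) for s in range(n_states)]
max_regret = {a: max(best_per_state[s] - p[s] for s in range(n_states))
              for a, p in payoffs.items()}
regret_choice = min(max_regret, key=max_regret.get)

print(maximin_choice, regret_choice)  # "hold" vs "expand": the criteria disagree
```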
Keynes made the same kind of uncertainty central to economics. Investment decisions depend on expectations of the future, but the future is genuinely uncertain. “Animal spirits” — confidence, optimism, willingness to act despite not knowing — drive economic activity more than rational calculation (The General Theory, 1936). When confidence collapses, economies freeze regardless of fundamentals.
The practical implication: don’t pretend uncertainty is risk. Models that assign probabilities to genuinely uncertain outcomes create false confidence. Value-at-risk calculations before 2008 treated tail events as quantifiable. They weren’t. The models substituted computed numbers for admitted ignorance.
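A small illustration with made-up numbers (scipy assumed available): the same mean and volatility produce very different loss estimates depending on whether the tails are assumed normal or fat, and under genuine uncertainty even the fat-tailed choice is itself an assumption.

```python
from math import sqrt
from scipy.stats import norm, t

# Hypothetical daily portfolio returns: mean 0, volatility 2%.
mu, sigma = 0.0, 0.02
df = 3                                 # fat-tailed Student-t alternative
scale_t = sigma / sqrt(df / (df - 2))  # rescale so the t has the same std dev

for conf in (0.99, 0.999):
    alpha = 1 - conf
    var_normal = -norm.ppf(alpha, loc=mu, scale=sigma)
    var_fat = -(mu + scale_t * t.ppf(alpha, df))
    print(f"{conf:.1%} VaR  normal: {var_normal:.3f}  fat-tailed: {var_fat:.3f}")

# Approximate output:
#   99.0% VaR  normal: 0.047  fat-tailed: 0.052
#   99.9% VaR  normal: 0.062  fat-tailed: 0.118
# The models agree near the center and diverge exactly where losses get large.
```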
Better to acknowledge what you don’t know. Build systems that survive surprise rather than optimize for calculated scenarios. Maintain optionality for outcomes you can’t foresee. Uncertainty isn’t risk you haven’t yet measured — it’s a fundamentally different epistemic state requiring different responses.
Related: [[risk]], [[fat-tails]], [[ergodicity]], [[satisficing]]