Local Optima
Imagine searching for the highest point while blindfolded. You feel which direction goes up and walk that way. Eventually you reach a summit where every direction goes down. You’ve found a local optimum — the best point in your neighborhood, but not necessarily the best point overall. Higher peaks may exist across valleys you can’t see.
This is the hill-climbing trap. Greedy optimization — always moving toward immediate improvement — finds local optima quickly but can miss global optima entirely. The algorithm converges, but it converges to the wrong place. Path dependence matters: where you start determines which peak you find.
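To make the trap concrete, here is a minimal hill-climbing sketch in Python. The two-peak landscape, step size, and starting points are invented for illustration, not taken from any particular library or source.

```python
import math

def landscape(x):
    # Invented terrain: a local peak near x = -1.5 (height 1)
    # and a higher, global peak near x = 2 (height 2).
    return math.exp(-(x + 1.5) ** 2) + 2 * math.exp(-(x - 2) ** 2)

def hill_climb(x, step=0.1):
    """Greedy ascent: move to the best neighbor until none improves."""
    while True:
        best = max([x - step, x, x + step], key=landscape)
        if best == x:
            return x  # every neighbor is downhill: a local optimum
        x = best

print(hill_climb(-3.0))  # converges near -1.5, the lower local peak
print(hill_climb(1.0))   # converges near 2.0, the global peak
```

Both runs terminate, and each reports a "summit"; only the starting point determines whether that summit is the global one.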
Evolution faces this constantly. Natural selection can only see one generation ahead. A species can’t get worse to eventually get better; it must improve at every step or die. This traps lineages on local fitness peaks, unable to cross valleys to higher peaks. The eye evolved multiple times independently because each lineage climbed its own hill.
Organizations face the same trap. Processes that were once optimal become suboptimal as conditions change — but small adjustments keep the current system good enough. The pain of fundamental change exceeds the gain from the next incremental improvement. So companies optimize locally while competitors find better peaks entirely.
Escaping local optima requires accepting temporary regression. Simulated annealing adds randomness: occasionally make worse moves to explore the landscape. Genetic algorithms maintain population diversity: don’t converge too fast. Strategic pivots abandon the current position entirely: if you’re on the wrong hill, stop climbing.
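As a sketch of the first of these escapes, here is a bare-bones simulated annealing loop over the same invented two-peak landscape. The proposal width, starting temperature, and cooling rate are arbitrary choices for illustration.

```python
import math
import random

def landscape(x):
    # Same invented two-peak terrain as in the sketch above.
    return math.exp(-(x + 1.5) ** 2) + 2 * math.exp(-(x - 2) ** 2)

def anneal(x, temp=2.0, cooling=0.995, steps=5000):
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)
        delta = landscape(candidate) - landscape(x)
        # Accept improvements outright; accept regressions with
        # probability exp(delta / temp), which shrinks as temp cools.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
        temp *= cooling
    return x

print(anneal(-3.0))  # often settles near x = 2 despite starting in
                     # the basin of the lower peak
```

Early on, the high temperature lets the walk accept downhill moves and cross the valley; as it cools, regressions become ever less likely and the search settles onto whichever peak it last climbed.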
The practical wisdom: if you’ve optimized for a while but results feel stuck, you might be on a local peak. The path forward might require going down first. The best isn’t always adjacent to the good; sometimes it requires crossing territory that looks like failure.
Related: path dependence, selection, satisficing, constraints, phase transitions