The butterfly effect is the observation that an extremely small change in the initial state of a deterministic system can produce wildly different results. Before this was observed, it was reasonably assumed that a minute change in the starting conditions would have only a minute effect on the outcome.
Edward Lorenz recognized this effect while running a weather simulation. At some point he had to stop the program, so he printed out the intermediate results to re-enter when he restarted it. When he restarted the program, he typed in numbers rounded to three decimal places, even though the program had stored them to six. Rounding, he reasoned, wouldn’t make much difference in the simulation. It turns out he was wrong.
At first the simulation behaved more or less as expected. As it continued, though, the results drifted further and further from the original run, until they appeared to be completely random. They were not random; the program was deterministic. But there was no way to predict the outcome except by running the simulation.
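A quick way to see this for yourself is to integrate Lorenz’s convection equations twice, from two starting points that differ only by a rounding step. The sketch below is a minimal illustration, not Lorenz’s original program: it uses the classic Lorenz parameters (sigma = 10, rho = 28, beta = 8/3) with plain Euler integration, and the step size and starting point are arbitrary choices made for the demo.

```python
def lorenz_step(x, y, z, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one step with simple Euler integration."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

dt = 0.001
a = (1.000001, 1.000001, 1.000001)   # "true" initial state
b = tuple(round(v, 3) for v in a)    # same state, rounded to three decimals

for step in range(40001):
    if step % 10000 == 0:
        # Largest coordinate-wise gap between the two trajectories.
        diff = max(abs(p - q) for p, q in zip(a, b))
        print(f"t = {step * dt:5.1f}  max difference = {diff:.6f}")
    a = lorenz_step(*a, dt)
    b = lorenz_step(*b, dt)
```

The printed differences stay tiny for a while and then blow up to the full size of the attractor: the two runs end up in completely different states, even though every step of the arithmetic is deterministic.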
Lorenz wrote a couple of papers about this, including the 1972 talk that gave the effect its name: Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas? It’s a startling notion: the tiniest difference in a system’s starting state will eventually make its results unpredictable.
It is difficult to overstate the importance of this idea. Previously, it was assumed that small changes had small effects. A few chaotic systems had been recognized, but they were considered exceptions to the rule. Lorenz showed that they were the rule.
This is one reason why weather predictions suck so badly so often. The computer models are fine. The problem is that tiny errors in the measured starting values compound as the model runs forward, until the forecast becomes useless.
Lorenz’s seemingly simple observation opened up entire fields of scientific investigation, including chaos theory and the study of fractals.