# Footnotes

^{i} You might as well expand that to the relationship between story and science. It’s a vexing question. See, for example, Gelman and Basbøll.^{84}

^{ii} The classic discussion of the human creation of categories is Sorting Things Out: Classification and Its Consequences.^{85}

^{iii} For a thorough discussion of race on the census, see Snipp.^{86}

^{iv} For a fantastic list of 20 reasons why quantification is difficult in psychology, see Meehl.^{87}

^{v} For a really excellent exposition of the problems of counting “mass shootings,” see Watt.^{88}

^{vi} Nehemiah 11:1.

^{vii} For more on these two unemployment surveys and the difference between them, see U.S. Bureau of Labor Statistics.^{89}

^{viii} Actually 60,000 randomly chosen households, which is about 150,000 people. See U.S. Census Bureau.^{90}

^{ix} Similar, but not identical, because Bernoulli initially considered sampling “with replacement,” where each person might be chosen more than once. This is probably because sampling with replacement is mathematically simpler: Bernoulli worked with approximate formulas that become more accurate as the number of samples increases, rather than directly calculating the enormous number of possibilities, which requires a computer.
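
The difference between the two schemes is easy to see in code. A minimal sketch, with made-up population and sample sizes:

```python
import random

random.seed(0)
population = list(range(100_000))  # a hypothetical population of 100,000 people

# Sampling WITH replacement (Bernoulli's simpler setup): each draw is
# independent, so the same person can be chosen more than once.
with_replacement = [random.choice(population) for _ in range(150_000)]

# Sampling WITHOUT replacement (how surveys actually work): each chosen
# person appears at most once.
without_replacement = random.sample(population, 60_000)

# 150,000 draws from 100,000 people must repeat someone.
print(len(set(with_replacement)) < len(with_replacement))         # True
print(len(set(without_replacement)) == len(without_replacement))  # True
```

When the sample is small relative to the population, the two schemes give nearly identical statistics, which is why Bernoulli’s simplification works so well.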

^{x} I’m indebted to Mark Hansen for the phrasing of these two key sentences.

^{xi} Before I get hate mail: Yes, it is wrong to say that there is a 90 percent chance that the true value falls within a 90-percent confidence interval. The contortions of frequentist statistics require us to say instead that our method of constructing the confidence interval will include the true value for 90 percent of the possible samples, but we don’t know anything at all about this particular sample. The distinction is subtle but real. It’s also usually irrelevant for this type of sampling margin of error computation, where the confidence interval is numerically very close to the Bayesian credible interval, which actually does contain the true value with 90 percent probability. See e.g. Vanderplas.^{91}
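
The frequentist guarantee is easy to check by simulation. A sketch, with an illustrative true proportion and sample size, showing that the standard 90-percent interval does cover the true value in about 90 percent of repeated samples:

```python
import random

random.seed(0)
true_p = 0.3    # hypothetical true population proportion
n = 1000        # sample size
z = 1.645       # z-score for a 90 percent interval
trials = 2000   # number of repeated samples

covered = 0
for _ in range(trials):
    # Draw a sample and compute the usual confidence interval for a proportion.
    hits = sum(random.random() < true_p for _ in range(n))
    p_hat = hits / n
    margin = z * (p_hat * (1 - p_hat) / n) ** 0.5
    if p_hat - margin <= true_p <= p_hat + margin:
        covered += 1

coverage = covered / trials
print(f"coverage = {coverage:.2f}")  # typically close to 0.90
```

The guarantee is about the long-run behavior of the procedure, not about any single interval, which is exactly the frequentist contortion described above.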

^{xii} Whether or not anything is “truly” random is a metaphysical question. Perhaps the universe is fully deterministic and everything is fated in advance. Or perhaps more data or better knowledge would reveal subtle connections. But from a practical point of view, we only care if these fluctuations are random to us. Randomness, chance, noise: There is always something in the data that follows no discernible pattern, caused by factors we cannot explain. This doesn’t mean that these factors are unexplainable. There may be trends or patterns we aren’t seeing, or additional data that might be used to explain what looks like chance. For example, we might one day discover that the number of assaults is driven by the weather. But until we discover this relationship, we have no ability to predict or explain the variations in the assault rate, so we have little choice but to treat them as random.

^{xiii} For a fantastic history of these ideas, see Ian Hacking’s The Emergence of Probability.^{92}

^{xiv} Although the mathematics turn out the same, there’s a useful distinction between something which we must treat as random because we don’t know the correct answer (epistemic uncertainty) and something which has intrinsic randomness in its future course (aleatory uncertainty). The difference is important in risk management, where our uncertainty might be reduced if we did more research, or we might be up against fundamental limits of prediction.

^{xv} Peirce’s simple argument assumes complete statistical independence between the positions of every stroke in a signature. That’s dubious, because if you move one letter while signing, the rest of the letters will probably have to move too. A more careful analysis^{93} shows that an exact signature match is much more likely than one in 5^{30} but still phenomenally unlikely to happen by chance.
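
Peirce’s number itself is simple arithmetic: thirty strokes, each assumed (under his dubious independence assumption) to match by chance one time in five:

```python
# One chance in five of matching, for each of thirty strokes,
# assuming complete independence between strokes.
odds = 5 ** 30
print(odds)  # 931322574615478515625, roughly one in 10^21
```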

^{xvi} For a baggage-free introduction to applied Bayesian stats I recommend McElreath’s *Statistical Rethinking*, or his marvelous lecture videos.^{94}

^{xvii} I’m referencing the butterfly effect, the idea that the disturbances from a butterfly flapping its wings might eventually become a massive hurricane. More generally, this is the idea that small perturbations are routinely magnified into huge changes. The early chaos theorist Edward Lorenz came up with the butterfly analogy while studying weather prediction in the early 1960s. In practice, this uncertainty amplification effect means there will be random variations in our data, due to specific unrepeatable circumstances, that we cannot ever hope to understand.

^{xviii} This type of independent events model is also called a *Poisson distribution*, after the French mathematician Siméon Denis Poisson, who first worked through the math in the 1830s. But the nice thing about using a simulation of our intersection is that it’s not necessary to know the mathematical formula for the Poisson distribution. Simply flipping independent coins gives the same result. Simulation is a revolutionary way to do statistics because it so often turns difficult mathematics into easy code.
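
A minimal sketch of that idea, with made-up traffic numbers: flip one low-probability “coin” per car, count the heads per simulated year, and compare the resulting frequencies to Poisson’s formula:

```python
import math
import random

random.seed(1)
n_cars = 1_000  # chances for an accident in a year (illustrative)
p = 0.005       # probability that each one produces an accident
years = 5_000   # number of simulated years

# Each year's accident count is just a sum of independent coin flips.
counts = [sum(random.random() < p for _ in range(n_cars)) for _ in range(years)]

lam = n_cars * p  # expected accidents per year: 5
for k in range(8):
    simulated = counts.count(k) / years
    formula = math.exp(-lam) * lam**k / math.factorial(k)
    print(f"P({k} accidents): simulated {simulated:.3f}, Poisson {formula:.3f}")
```

The simulated frequencies land very close to the formula, without anyone needing to know the formula in advance.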

^{xix} Maybe both of your hypotheses are wrong, and something else entirely happened. Maybe your models, which are pieces of code, aren’t good representations of your hypotheses, which are ideas expressed in language. Maybe your data is the result of both a working stoplight and some amount of luck. Maybe the intersection was rebuilt after the second year with wider lanes and a new stoplight, and it’s really the wider lanes that caused the change. Maybe the bureaucracy that collects this data changed the definition of “accident” to exclude smaller collisions. Or maybe you added up the numbers wrong.

^{xx} Unemployment versus investment chart from Mankiw.^{95}

^{xxi} But sometimes it *is* possible to tell which of two variables is the cause and which is the effect just from the data, by exploiting the fact that noise in the cause shows up in the effect but not vice versa. See Mooij et al.^{96}

^{xxii} Michael Keller, private communication.

^{xxiii} I found this circulating on the Internet, and was unable to figure out who made it. Much love to the unknown creator.

^{xxiv} It probably wasn’t Bertrand Russell who first said, “The mark of a civilized human is the ability to look at a column of numbers, and weep.” But per http://quoteinvestigator.com/2013/02/20/moved-by-stats/ there is a history of quoting and misquoting a similar phrase. The original text is Russell’s The Aims of Education:

> The next stage in the development of a desirable form of sensitiveness is sympathy. There is a purely physical sympathy: A very young child will cry because a brother or sister is crying. This, I suppose, affords the basis for the further developments. The two enlargements that are needed are: first, to feel sympathy even when the sufferer is not an object of special affection; secondly, to feel it when the suffering is merely known to be occurring, not sensibly present. The second of these enlargements depends mainly upon intelligence. It may only go so far as sympathy with suffering which is portrayed vividly and touchingly, as in a good novel; it may, on the other hand, go so far as to enable a man to be moved emotionally by statistics. This capacity for abstract sympathy is as rare as it is important.

Many others attribute the pithier quote to Russell, but the original source for that is nowhere to be found. I really like the shorter quote no matter where it ultimately came from; it’s a fine string of words.

^{xxv} I’ll use *reader* as a generic name for the consumer of a story, with apologies to reporters working in other formats.

^{xxvi} Totally fun to say.

^{xxvii} Lifetime odds of being struck by lightning estimated at 1 in 12,000 by NOAA, based on 2004–2013 averages.^{97}