Autocorrections on the iPhone
Another example of surfacing editorial criteria in algorithms comes from Michael Keller, now at Al Jazeera but at the time working at The Daily Beast.27 He dove into the iPhone spelling correction feature to see which words, like “abortion” or “rape,” the phone wouldn’t correct if they were typed incorrectly.
Michael’s first attempt to sample this phenomenon used an API on the iPhone, with which he identified words from a large dictionary that weren’t getting corrected, essentially pruning down the space of inputs to see what the algorithm “paid attention” to. Eventually he noticed that some of the words the API did not correct were getting corrected when they were typed directly on an iPhone: there was a mismatch between what the API was reporting and what the user was experiencing. To mimic the real user experience he had to run an iPhone simulator on a number of computers, scripting it to act like a human typing in the word and then clicking the word to see if spelling corrections were presented.
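The published account doesn’t name the interface Keller queried, but a plausible stand-in is UIKit’s UITextChecker, which reports whether the system flags a word as misspelled and what corrections it would offer. A minimal sketch of the pruning step, with the probe list and language assumed, might look like this:

```swift
import UIKit

// Sketch of the pruning step: feed deliberately misspelled words through
// the system spell checker and keep the ones that come back untouched.
// UITextChecker is an assumed stand-in for whatever API Keller queried.
let checker = UITextChecker()
let language = "en_US"

func corrections(for word: String) -> [String] {
    let whole = NSRange(location: 0, length: word.utf16.count)
    let misspelled = checker.rangeOfMisspelledWord(in: word,
                                                   range: whole,
                                                   startingAt: 0,
                                                   wrap: false,
                                                   language: language)
    // NSNotFound means the checker considers the word well-formed,
    // so it would never offer a correction for it.
    guard misspelled.location != NSNotFound else { return [] }
    return checker.guesses(forWordRange: misspelled, in: word,
                           language: language) ?? []
}

// Hypothetical probe list; Keller ran a large dictionary through this step.
let probes = ["aborton", "rpae", "helllo"]
let uncorrected = probes.filter { corrections(for: $0).isEmpty }
print(uncorrected)   // words the API leaves alone: candidates for scrutiny
```

Any word that comes back with an empty list here is one the checker declines to correct, which is exactly the set Keller wanted to narrow down before doing anything more expensive.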
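Reproducing what the user actually sees is a different job. The article doesn’t describe Keller’s simulator scripts in detail, but a modern equivalent would be a UI automation test driving the iOS Simulator. In the sketch below, the app under test and its “probeField” identifier are hypothetical; typing a trailing space commits any pending autocorrection, so reading the field back reveals whether the keyboard intervened:

```swift
import XCTest

// A minimal UI-automation sketch, assuming a test target whose app shows a
// text field with accessibility identifier "probeField" (both hypothetical).
// Typing into the field on the simulator exercises the real keyboard,
// including autocorrect, which the bare API call above does not.
final class AutocorrectProbeTests: XCTestCase {
    func testAbortonSurvivesAutocorrect() {
        let app = XCUIApplication()
        app.launch()

        let field = app.textFields["probeField"]
        field.tap()
        // The trailing space commits any pending autocorrection,
        // mimicking a human typing the word in context.
        field.typeText("aborton ")

        let typed = (field.value as? String) ?? ""
        // If the text survives unchanged, the keyboard offered no correction:
        // the user-facing behavior Keller needed to observe directly.
        XCTAssertEqual(typed.trimmingCharacters(in: .whitespaces), "aborton")
    }
}
```

Run over the whole candidate list, any word that the API reports one way but the simulated keyboard handles another flags precisely the mismatch the story turned on.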
This example raises an important caveat. Sometimes algorithms expose inputs and make it possible to record outputs, but those outputs are then further transformed and edited before they reach the user interface. What really matters in the end is not just the output of the algorithm, but how that output is made available to the user. While this case is again an instance of full observability, it reminds us that we need to consider the output in context in order to understand and report on the algorithm’s consequences.