The Thinking Behind Restricted Access to Metrics

At a time when even legacy newspapers like The Washington Post have screens showing traffic numbers in the newsroom, The Times newsroom is notable for the conspicuous absence of such displays. While editors had access to analytics tools, including Chartbeat, reporters did not. Some reporters I spoke with were indifferent about how metrics rated their work, but many expressed a desire to see traffic data:

I don’t easily know how many people click on my stories. I would be curious to know that but I don’t have a way of easily knowing.

I would love to know [by] what paragraph my readers start to give up on me, because you know they’re not reading ’til the end, but we write it like they are, right? …And traffic could unlock those answers.

Why did The Times restrict reporters’ access to metrics? Two answers emerged in the course of my research. First, there was a concern that seeing metrics could lead reporters astray from their independent news judgment. Instead of covering topics that are important or newsworthy, they would start to focus on more frivolous subjects that are guaranteed to be popular. Said one reporter:

It would be a bad idea for us to be choosing stories based on how many people were reading them …I mean, if you go down that road, then you end up writing a lot about, you know, Angelina Jolie or whatever.

While interviewees often invoked this fear, it was usually voiced abstractly, as a hypothetical, worst-case scenario. The Times’s history, single-family majority ownership, and longstanding organizational culture made the adoption of purely metrics-driven decision-making seem highly unlikely to many staffers. “There’s no danger of that at The Times,” said one reporter, articulating a sentiment I commonly heard in interviews, “because the entire philosophy of the place and …woven deep, deep, deep into the fabric of the place is opposed to that.”

The second—and in my view, more important—reason The Times restricted reporters’ access to metrics was concern that they would misinterpret the data. We tend to assume that the meaning of metrics is relatively straightforward: Story A got more page views than Story B; Story B got the most Facebook shares of the week, and so on. But several Times staffers commented that they found metrics quite difficult to make sense of, let alone act upon. This is not because most journalists are “bad at math,” as the late Times media columnist David Carr put it.25 Rather, it is because journalists are trying to optimize for multiple and sometimes conflicting aims. They want to attract large audiences and grow their subscriber base, but they also want to bring about outcomes that are far more difficult to measure and quantify, such as having impact and causing change.xiii Add to this the fact that each story is arguably a qualitatively different entity, and interpretation of metrics becomes even more fraught. The result is that metrics at The Times are nowhere near as straightforward as, say, baseball statistics, which are collected and interpreted in the service of maximizing one end—winning as many games as possible. As a Times reporter put it:

If one story gets 425,000 hits and another one gets 372,000, is that meaningful, that difference? Where does it become important and where doesn’t it?

An editor echoed this theme of interpretive ambiguity:

When you’re looking at a raw number, it’s hard to know how that fits into what you would expect …It’s almost like, you rarely have an apples to apples comparison…. There’s so many other things kind of confounding it.

The fact that audience metrics could be interpreted in multiple ways, depending on who was doing the interpreting, was a source of concern for editors. Some worried reporters would use metrics to challenge their decisions. For instance, when asked why The Times newsroom restricted access to analytics, one editor described his annoyance at what he saw as reporters’ misreading of the most-emailed list (which, by virtue of its place on The Times home page, is one of the only metrics to which reporters had regular access):

People in here will say, “oh my gosh, look, my story’s number one on the most-emailed list, you should put it on the home page!” Well, no, we’re not making judgments based on that. We’re making judgments based on …what are the most interesting, or the most important stories for our readers.

To this editor, the fact that reporters drew incorrect conclusions from the most-emailed list meant that they should not have access to more data. A Chartbeat employee had encountered a similar line of thinking among clients. While creating an earlier version of the company’s dashboard, she had worked with a number of legacy news organizations that didn’t provide universal access to metrics; they gave it instead only to high-level editors for whom there was:

no fear about them misusing data, abusing data. And “abusing” means that they don’t know how to read it, therefore don’t understand it, therefore are …gonna make the wrong, like incorrect assumptions, or use it to their advantage.

There was also a concern among editors that metrics could demoralize reporters by disabusing them of common (though incorrect) print-era assumptions about their audience. A member of the internal team that spent six months studying the newsroom to produce The Times’s Innovation Report said the group had come across this fear:

Reporters …in the print universe, they’ve had circulation numbers. And you push them on this and they know it’s not true, but they all believe that the circulation number is sort of how many people read their story …They kind of really do have this inflated sense of readership. So there’s a real worry that delivering them hard data on digital readers will be demotivating.

In sum, metrics are a source of anxiety at The Times, not only because of their power to influence content, but also because of their potential impact on the organization’s internal dynamics. Metrics provided an alternative yardstick—aside from editors’ evaluations—by which reporters could judge the worthiness of their stories and their job performances more broadly. The data therefore threatened to undermine not only news judgment, but also the traditional hierarchical structure of The Times newsroom, in which editors were the final arbiters of the nebulous quality that is “newsworthiness.” If editors alone had access to metrics, they alone could control the way in which the data was interpreted and mobilized.

It is a common conception that data analytics will displace established “experts” who base decisions on their own experience, intuition, and judgment. Economist and Yale Law professor Ian Ayres concisely articulates this view: “We are in a historic moment of horse-versus-locomotive competition, where intuitive and experiential expertise is losing out time and time again to number crunching.”26 It’s what happened to baseball scouts in Moneyball; it’s what happened to political pundits during the 2012 election with the rise of Nate Silver.

It is not hard to see how a version of this narrative might apply to journalism. Editors could (and, at many organizations, do) find themselves increasingly displaced by metrics that demonstrate what content is winning large audiences and, in some cases, make suggestions about placement and story assignment. An online editor at The Times succinctly voiced this anxiety:

Really the only thing an editor has—like their full job is based on their judgment, ’cause that’s really what they do, is they just sit and use their judgment to edit stories and decide how important they are and where they should go on the site. And so, replacing that with metrics is some sort of massive threat to their livelihood and value in the job.

Thus, Times editors restricted access to metrics in order to minimize the perceived danger presented by the data. At the same time, it was clear to editors that metrics could be quite useful as a management tool, and many reported employing them in this way.
