Executive Summary

In a 2010 New Yorker profile, Gawker Media founder and CEO Nick Denton argued, “probably the biggest change in Internet media isn’t the immediacy of it, or the low costs, but the measurability.”1 Digital media scholars and commentators could debate this claim exhaustively (and have), but there is little doubt that the ability to extensively track news readers’ behavior online is indeed a profound shift from the pre-Internet era. Newsrooms can now access real-time data on how readers arrive at a particular site or article, how often they visit, and what they do once they get there (e.g., how long they spend on a page, how far they scroll, and whether they are moving their mouse or pressing any keys).
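
To make concrete what this kind of tracking involves, the sketch below shows, in a hypothetical and simplified form, how a news page might measure two common engagement signals in the browser: “engaged time” (seconds during which the tab is visible and the reader has recently moved the mouse, typed, scrolled, or touched the screen) and maximum scroll depth. It is illustrative only, not Chartbeat’s or any other vendor’s implementation; the `/pings` endpoint, the five-second activity window, and the reporting interval are all assumptions chosen for clarity.

```typescript
// Hypothetical client-side sketch of two engagement signals:
// "engaged time" and maximum scroll depth. Illustrative only.

let lastActivity = Date.now();
let engagedSeconds = 0;
let maxScrollDepth = 0; // fraction of the page the reader has seen

// Any recent mouse, key, scroll, or touch event counts as "activity."
const markActive = () => { lastActivity = Date.now(); };
["mousemove", "keydown", "scroll", "touchstart"].forEach((evt) =>
  window.addEventListener(evt, markActive, { passive: true })
);

// Track how far down the page the reader has scrolled.
window.addEventListener("scroll", () => {
  const seen = window.scrollY + window.innerHeight;
  const total = document.documentElement.scrollHeight;
  maxScrollDepth = Math.max(maxScrollDepth, seen / total);
}, { passive: true });

// Once per second: count the second as "engaged" only if the tab is
// visible and the reader was active within the last five seconds.
setInterval(() => {
  const recentlyActive = Date.now() - lastActivity < 5_000;
  if (document.visibilityState === "visible" && recentlyActive) {
    engagedSeconds += 1;
  }
}, 1_000);

// Periodically report the signals to an analytics endpoint
// ("/pings" is a placeholder URL, not a real service).
setInterval(() => {
  navigator.sendBeacon(
    "/pings",
    JSON.stringify({ engagedSeconds, maxScrollDepth, page: location.pathname })
  );
}, 15_000);
```

The point of gating on recent activity is that “engaged time” measured this way can differ sharply from raw time on page, which is one reason different analytics vendors’ numbers for the “same” metric rarely match.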

What does all this data mean for the production of news? In the earlier days of web analytics, editorial metrics had both enthusiastic proponents and impassioned detractors. Nowadays the prevailing view is that metrics aren’t, by definition, good or bad for journalism. Rather, the thinking goes, it all depends on what is measured: Some metrics, like page views, incentivize the production of celebrity slide shows and other vapid content, while others, like time on a page, reward high-quality journalism. Still, there are some who doubt that even so-called “engagement metrics” can peacefully coexist with (let alone bolster) journalistic values.

This report’s premise is that it will be impossible to settle these debates until we understand how people and organizations are producing, interpreting, and using metrics. I conducted an ethnographic study of the role of metrics in contemporary news by examining three case studies: Chartbeat, Gawker Media, and The New York Times. Through a combination of observation and interviews with product managers, data scientists, reporters, bloggers, editors, and others, my intention was to unearth the assumptions and values that underlie audience measures, the effect of metrics on journalists’ daily work, and the ways in which metrics interact with organizational culture. Among the central discoveries:

  • Analytics dashboards have important emotional dimensions that are too often overlooked. Metrics, and the larger “big data” phenomenon of which they are a part, are commonly described as a force of rationalization: that is, they allow people to make decisions based on dispassionate, objective information rather than unreliable intuition or judgment. While this portrayal is not incorrect, it is incomplete. The power and appeal of metrics are significantly grounded in the data’s ability to elicit particular feelings, such as excitement, disappointment, validation, and reassurance. Chartbeat knows that this emotional valence is a powerful part of the dashboard’s appeal, and the company includes features to engender emotions in users. For instance, the dashboard is designed to communicate deference to journalistic judgment, cushion the blow of low traffic, and provide opportunities for celebration in newsrooms.

  • The impact of an analytics tool depends on the organization using it. It is often assumed that the very presence of an analytics tool will change how a newsroom operates in particular ways. However, the report finds that organizational context is highly influential in shaping if and how metrics influence the production of news. For instance, Gawker Media and The New York Times are both Chartbeat clients, but the tool manifests in vastly different ways in each setting. At Gawker, metrics were highly visible and influential. At The Times, they were neither, and seemed—to the extent they were used at all—primarily to corroborate decisions editors had already made. This suggests that it is impossible to know how analytics are affecting journalism without examining how they are used in particular newsrooms.

  • For writers, a metrics-driven culture can be simultaneously a source of stress and reassurance. It is also surprisingly compatible with a perception of editorial freedom. While writers at Gawker Media found traffic pressures stressful, many were far more psychologically affected by online vitriol in comments and on social media. In a climate of online hostility or even harassment, writers sometimes turned to metrics as a reassuring reminder of their professional competence. Interestingly, writers and editors generally did not perceive the company’s traffic-based evaluation systems as an impediment to their editorial autonomy. This suggests that journalists at online-only media companies like Gawker Media may have different notions of editorial freedom and constraint than their legacy media counterparts.

The report calls for more research on analytics in a number of areas. More information is needed about readers’ responses to metrics. Are they aware that their behavior on news sites is being tracked to the extent that it is? If so, how (if at all) does this affect their behavior? The report also advocates for more studies using systematic content analysis to determine if and how metrics are influencing news content. Finally, I suggest further ethnographic research on the growing movement to create so-called “impact metrics.”

The report also makes three recommendations to news organizations. First, news organizations should prioritize strategic thinking on analytics-related issues (i.e., the appropriate role of metrics in the organization and the ways in which data interacts with the organization’s journalistic goals). Engagement with these big-picture questions should be insulated from daily traffic and reporting pressures but otherwise can take various forms; for instance, newsrooms that are unable to spare the resources for an in-house analytics strategist may benefit from partnerships with outside researchers. Second, when choosing an analytics service, newsroom managers should look beyond the tools and consider which company’s strategic objectives, business imperatives, and values best complement those of their newsroom. Finally, though efforts to develop better metrics are necessary and worthwhile, newsrooms and analytics companies should be attentive to the limitations of metrics. As organizational priorities and evaluation systems are increasingly built on metrics, there is danger in conflating what is quantitatively measurable with what is valuable.
