On August 26, 2013, the satirical news site The Onion published an op-ed purporting to be written by CNN digital editor Meredith Artley, titled “Let Me Explain Why Miley Cyrus’ VMA Performance Was Our Top Story This Morning.” The answer, the piece explained matter-of-factly, was “pretty simple.”
It was an attempt to get you to click on CNN.com so that we could drive up our web traffic, which in turn would allow us to increase our advertising revenue. There was nothing, and I mean nothing, about that story that related to the important news of the day, the chronicling of significant human events, or the idea that journalism itself can be a force for positive change in the world … But boy oh boy did it get us some web traffic.2

The piece went on to mention specific metrics like page views and bounce rates as factors that motivated CNN to give the Cyrus story prominent home page placement.
Of course, Artley did not actually write the story, but it hit a nerve in media circles nonetheless—especially since a story on Cyrus’s infamous performance at the MTV Video Music Awards had occupied the top spot on CNN.com and, as the real Meredith Artley later confirmed, did bring in the highest traffic of any story on the site that day.3 The fake op-ed can be interpreted not only as a condemnation of CNN, but also as a commentary on the sorry state of news judgment in the era of web metrics.
Media companies have always made efforts to collect data on their audiences’ demographics and behavior. But the tracking capabilities of the Internet, as well as the ability to store and parse massive amounts of data, mean that audience metrics have grown far more sophisticated in recent years. In addition to the aforementioned page views and bounce rates, analytics tools track variables like visitors’ return rates, referral sites, scroll depths, and time spent on a page. Much of this data is delivered to news organizations in real time.i
The widespread availability of audience data has prompted fierce debates in the journalism field. The Onion op-ed succinctly encapsulates one common position in this conflict, which is that metrics—or, more specifically, the desperate quest for revenue they represent—are causing journalists to abdicate their highest duties: to inform their audiences about the most important public issues of the day and to hold the powerful accountable. The more attention journalists pay to audience clicks, views, and shares, the more Miley Cyrus slide shows will beat out stories on important, difficult subjects like Syria or climate change. Proponents of an opposing view argue that the increased prominence of metrics in newsrooms is a powerful force of democratization in the media, offering a welcome end to the days when editors dictated which world events were important enough to be newsworthy.

Differing views on metrics have manifested in a range of organizational policies for distributing and using the data: Many news sites make metrics widely available to editorial staff; some, such as Gawker Media and The Oregonian, have even paid writers partly based on traffic. A smaller number of news sites, including The New York Times and Vox Media’s The Verge, actively limit reporters’ access to metrics.4

It’s not surprising that metrics have become a hot-button issue in journalism. Their presence brings a number of ever-present tensions in commercial news media crashing into the foreground. Among them: What is the fundamental mission of journalism, and how can news organizations know when they achieve that mission? How can media companies reconcile their profit imperative with their civic one? To the extent that the distinction between journalist and audience is still meaningful, what kind of relationship should journalists have with their readers?
In the midst of these normative questions, collective anxiety, and tech-evangelist hype, there is a pressing need for more empirical research into the role that audience metrics actually play in the field of journalism.ii To know what metrics mean for the future of news, we need to know how journalists interpret them and what exactly they do with this data in their daily work. Perhaps even more importantly, we need to know more about the values, assumptions, and motivations of the companies that collect and market data.
This report aims to help fill these gaps. I undertook an ethnographic study of three companies—Chartbeat, the prominent web analytics startup, and two of the media organizations that use its tools, Gawker Mediaiii and The New York Times. I conducted interviews with staff members at these companies and observed meetings and interactions when possible.iv While these organizations are influential enough to merit study in their own right, they are meant to serve primarily as case studies that shed light on broader dynamics, work routines, and ideas surrounding metrics. For that reason, while I am attentive to their specificities, my primary interest is in the ways in which they are not unique—in other words, the ways in which the dynamics I observed might be extreme versions of those occurring at similar organizations.
Ethnographic research is, almost by definition, slow; it takes time to get to know how a workplace operates and establish an open and trusting rapport with subjects. The digital media field, which continues to change at a dizzying speed, poses particular challenges for this kind of slower-paced research. As I discuss in the conclusion, the three companies I studied have changed in terms of personnel and, in the case of Gawker and The Times, organizational structure since I concluded my research; they undoubtedly will continue to do so. Even so, this research is intended as more than a snapshot of these companies’ orientation toward metrics at a particular point in time. The ever-changing nature of the digital media field presents a challenge, but also a valuable exercise: It forces researchers to zoom out from particular details (the newest metric, the latest newsroom shake-up) and identify the bigger analytic themes that characterize the creation, interpretation, and use of news metrics. That is, above all, what this report aims to accomplish.
In conducting this research, I was interested in three big questions. First, how are metrics produced? At a time in which “let the data speak” is a common refrain in popular media, and judgments made on the basis of “number crunching” are widely considered more objective and reliable than those made using other methods, it is easy to forget that numbers are socially produced—that is, they are made by particular groups in particular contexts. There is substantial value in studying what sociologists Wendy Espeland and Mitchell Stevens call “the work of quantification.”5 By examining the interests, values, and motivations of the individuals and groups performing this work, we can develop a deeper, richer understanding of the numbers they produce.
In the case of metrics, researchers know quite a lot about the interests and principles of the journalists using analytics tools, but not much about the programmers, data scientists, designers, product leads, marketers, and salespeople who make and sell these tools. How do they decide which aspects of audience behavior should be measured and how to measure them? What ideas—about both those whose behavior they are measuring (news consumers) and those who will be using their tool (journalists)—are embedded in these decisions? How do analytics firms communicate the value of metrics to news organizations?
My second big question: how are metrics interpreted? Despite their opposing stances, arguments that metrics are good or bad for journalism have one thing in common: They tend to assume that the meaning of metrics is clear and straightforward. But a number on its own does not mean anything without a conceptual framework with which to interpret it. Who makes sense of metrics, and how do they do it?
Finally, I wanted to know how metrics are used in news work. Does data inform the way newsrooms assign, write, and promote stories? In which ways, if any, is data a factor in personnel decisions such as raises, promotions, and layoffs? Does data play more of a role in daily work or long-term strategy? And how do answers to these questions differ across organizational contexts?
As the report ventures answers to these questions, it sidesteps a more familiar (though, I would argue, less fruitful) one: do metrics represent a healing salve for the troubled field of journalism, or a poison that will irrevocably contaminate it? There is little point in debating whether or not metrics have a place in newsrooms. They are here, and they don’t seem to be going anywhere anytime soon. At the same time, we must not unthinkingly adopt a technologically determinist view, in which the very existence of metrics will inevitably cause certain norms, practices, and structures to emerge. What metrics are doing to—and in—newsrooms is an empirical question, not a foregone conclusion, and it is the one this report aims to address.