NewsLynx

What Do Newsrooms Measure and Why?

One of the most surprising sentiments we heard echoed throughout our research was the importance still placed on quantitative measurements such as page views. The reasons generally fell into two categories: journalists collected these numbers because donors asked for them, or because they saw some utility in growth trends, as explained in the introduction.

For organizations that rely heavily on syndication (newsrooms that allow others to republish their articles in full), “reach” was also a major concern. While techniques for measuring it differ greatly, reach is generally calculated by multiplying the organization’s circulation or home-page traffic by a varying, unscientifically derived percentage. This practice might seem blasphemous until one considers that Google Analytics, one of the biggest and most popular platforms, developed and maintained by arguably the largest and most powerful technology company in the world, returns only estimates of any given metric; in our experience, for example, it reports metric values only in multiples of 12. Google Analytics can return more precise values through its enterprise product, but that cost is beyond the budget of all but a small number of news organizations, so imprecision is often the norm.
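By way of illustration, a back-of-the-envelope reach estimate of this kind might look like the sketch below. The function, the partner figures, and the assumed percentages are all invented for the example; no organization’s actual formula is reproduced here.

```python
# Hypothetical sketch of the syndication "reach" estimate described
# above: a partner's circulation (or home-page traffic) scaled by a
# rough, unscientifically derived percentage. All numbers are invented.

def estimated_reach(partner_circulation: int, assumed_share: float) -> int:
    """Estimate how many of a syndication partner's readers saw a story.

    `assumed_share` is the varying, unscientific percentage (as a
    fraction) of the partner's audience presumed to have seen the piece.
    """
    return round(partner_circulation * assumed_share)

# Example: three syndication partners, each with its own guessed share.
partners = [(250_000, 0.15), (80_000, 0.30), (1_200_000, 0.05)]
total_reach = sum(estimated_reach(c, s) for c, s in partners)
print(total_reach)  # 37500 + 24000 + 60000 = 121500
```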

Even when organizations acknowledged that both quantitative and qualitative measures were valuable, quantitative measures were still more closely tied to their business models. “We are working to diversify revenue sources and need strong metrics to buttress our qualitative measures,” wrote one growing investigative organization. One broadcast organization summarized this conundrum of revenue versus mission:

In some ways, audio-listens are the single most important thing we can track because that drives underwriting and donations. But we are also mission-driven, so journalism that affects laws and people’s lives and sense of themselves and their relations to others can be equally important.

Although many organizations use quantitative measures, they find the limited insight those measures provide frustrating. Organizations expressed a desire for a new metric that could satiate the hunger for quantitative simplicity while offering useful insight, usually in the form of more information about the audience’s relationship to their articles.

In response to the question, “How could measurement help your business or content strategy?” one organization wrote: “A qualitative metric we could present to shareholders showing the ROI of our investment in social media outreach, our marketing efforts, and our dedication to usability.”

To construct such a metric, you would need to agree on some proxy for popularity or discussion level on social platforms (likes, shares, mentions, and retweets all come with their own caveats) while accounting for promotional efforts on individual articles. You would then need to segment these results across devices and, if traffic or behavior patterns differ, be able to attribute those differences to either your internal efforts or external factors. Such an analysis would more realistically be expressed as multiple metrics, but the desire for it reflects the need to understand how audiences are reached, along with the pressure to explain what, if anything, is having a demonstrable effect.
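A minimal sketch of what one piece of such a composite might look like appears below. The weights, field names, and promotion discount are illustrative assumptions, not a validated or recommended metric; they exist only to make the caveats above concrete.

```python
# Hypothetical sketch of a composite social-engagement score.
# Weights, field names, and the promotion discount are assumptions.

from dataclasses import dataclass

@dataclass
class ArticleSocialStats:
    likes: int
    shares: int
    mentions: int
    retweets: int
    promoted_posts: int  # in-house promotional pushes for this article

# Each proxy gets a weight reflecting how strong a signal we assume
# it is; every one of these numbers is a judgment call.
WEIGHTS = {"likes": 1.0, "mentions": 2.0, "retweets": 2.5, "shares": 3.0}

def social_score(stats: ArticleSocialStats) -> float:
    raw = (WEIGHTS["likes"] * stats.likes
           + WEIGHTS["mentions"] * stats.mentions
           + WEIGHTS["retweets"] * stats.retweets
           + WEIGHTS["shares"] * stats.shares)
    # Discount activity the newsroom pushed itself, so the score leans
    # toward organic discussion; the 10% decay per promoted post is
    # arbitrary.
    return raw * (0.9 ** stats.promoted_posts)

# Segmenting would mean computing the score per device class (or per
# referrer) and comparing distributions, not reporting one number.
by_device = {
    "mobile": social_score(ArticleSocialStats(120, 40, 30, 15, 2)),
    "desktop": social_score(ArticleSocialStats(60, 12, 10, 5, 2)),
}
print(by_device)
```

Even in this toy form, the sketch makes the underlying problem visible: any difference between the mobile and desktop scores could come from the weights, the promotion discount, or genuine audience behavior, which is why a single composite number rarely settles the attribution question.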

The desire to know more about how the audience engaged was echoed elsewhere as well:

From a content perspective, [impact measurement] could help us figure out where to focus our energies, in theory. Given that we are a cash-strapped, resource-strapped nonprofit, should we be spending so much time making a piece of radio and then also adding stunning visuals and writing a compelling text story—or are those just bells and whistles that will get us minimal ROI? What’s the difference between the users who get our stuff on the radio, on the web, and via their phones (on our app or other apps), and are they significantly interested in different kinds of content and do they have different time constraints? What, if anything, might drive people who encounter us out of nowhere on social media to explore our other content, like it, and maybe one day become not just a return visitor but a member? What messaging and coverage encourages participation in user-generated stories (and are those things which can actually help us AND serve the public good?) as well as become part of the public radio family? This is just a start.

The sentiment was echoed at another organization:

We deeply distrust the page view stat and we see other organizations with more tech resources develop their own fancy metrics such as Medium’s Total Time Reading or Upworthy’s Attention Minutes and can’t help but feel we’re missing out on essential things about our audience. Google Analytics feels both too complicated and not powerful enough for the questions we want to answer about readers. It doesn’t help that Google, Facebook, Twitter, Quantcast, Comscore, and anything else we’ve used never agree on anything. And of course quantifying impact is tough and while we try very hard, some recognized external standards, if wise, could be useful.

We repeatedly encountered at other organizations the sentiment that existing analytics platforms are “both too complicated and not powerful enough.” By “not powerful enough,” users mean that the platforms don’t help answer sophisticated questions that could bolster arguments around, for example, content strategy. Should a radio station continue putting resources into text versions of its stories for the web? Are people not scrolling all the way down the page because the headline and first three grafs were succinctly written and the reader “got it,” or because the story wasn’t interesting? Or is the website’s design, rather than the journalism, contributing to a high bounce rate? Many organizations would like answers to complex questions like these, but for the moment they are being asked to report data in simpler forms, and technology platforms can’t answer these questions out of the box.
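The attention metrics the earlier quote mentions are typically built from periodic activity “pings” sent while a reader’s tab is open and active. The sketch below is a simplified, hypothetical version of that idea; the event format and thresholds are invented, and neither Medium’s nor Upworthy’s actual methodology is reproduced here.

```python
# Simplified sketch of an attention-time metric in the spirit of
# Medium's Total Time Reading or Upworthy's Attention Minutes.
# Heartbeat interval, gap threshold, and event format are invented.

from typing import Iterable

HEARTBEAT_SECONDS = 5   # client pings every 5s while the tab is active
MAX_GAP_SECONDS = 15    # gaps longer than this count as inattention

def engaged_seconds(ping_timestamps: Iterable[float]) -> float:
    """Sum time between consecutive pings, ignoring long gaps."""
    total = 0.0
    pings = sorted(ping_timestamps)
    for prev, curr in zip(pings, pings[1:]):
        gap = curr - prev
        if gap <= MAX_GAP_SECONDS:
            total += gap
    return total

# Example: a reader active for ~20s, distracted, then back for ~10s.
print(engaged_seconds([0, 5, 10, 15, 20, 120, 125, 130]))  # 30.0
```

Note that even a metric like this answers “how long,” not “why”: it cannot distinguish a succinct story the reader “got” from one that failed to hold attention, which is exactly the gap organizations kept describing.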

It’s important to point out that some newsrooms disregard quantitative metrics entirely or see them as having only limited value. As one small investigative organization wrote: “Our mission is to have impact and improve the public interest. For a while we chased traffic and found it negatively impacted our work, and brought no results.”

The pressure to provide quantitative metrics can also be a moving target, driven by the shifting tastes of funders or by a changing understanding of what constitutes meaningful measurement. In fact, the Media Impact Project is currently developing a two-sided booklet addressing this very dynamic: what newsrooms are currently measuring on one side, and what information funders are requesting (or should be requesting) on the other.iii

Nevertheless, many of these responses influenced our decision to keep a number of quantitative metrics in our system and augment their usefulness through comparison points and context.