NewsLynx

Conclusion

After more than two years of thinking about and, in part, building impact tools, we’re happy to see a markedly different landscape from when we started. Ideas that were then hypothetical are now being put into practice. In reviewing some of the older literature while preparing this report, we came across a 2012 Nieman Lab article by Jonathan Stray that concluded with a picture of what kind of technology could help guide the way through understanding the messy world of impact. “Ideally, a newsroom would have an integrated database connecting each story to both quantitative and qualitative indicators of impact: notes on what happened after the story was published, plus automatically collected analytics …”69 We like to think that NewsLynx made that hypothetical idea more concrete, letting us see how it played out in practice, what needed improvement, and how we can move forward.

With the platform, newsrooms were able to streamline their workflows and surface insightful elements of impact that would otherwise have been missed. They could tell stakeholders stories about their journalism’s audience exposure much more quickly, and with reliable data to back up those assessments.

In the larger field of media impact measurement, the amount of experimentation taking place, and the fact that the conversation has moved past the less interesting problems—the search for the holy grail of a universal taxonomy being one of them—make this an extremely exciting time for impact measurement. It has never been easier for a newsroom to design its own analytics geared toward the questions it wants answered. And herein lies the next challenge, which was really the challenge all along.

These technological advancements and the democratization of the data pipeline are most helpful, paradoxically, in that they drive us back to base assumptions and away from technology. “Tool-wishing”—statements that start with “if only we just had a platform to do X”—can blind organizations to the real hurdles at play. No tool, no matter how well designed or implemented, can tell a news organization what impact is or should be. As Stray continued in his piece, “but nothing so elaborate [as this proposed platform] is necessary to get started. Every newsroom has some sort of content analytics, and qualitative effects can be tracked with nothing more than notes in a spreadsheet.”70 Indeed, the newsrooms that got the most out of NewsLynx were those that had already started with “notes in a spreadsheet” and had previously worked through the harder problems of deciding what they care about measuring. In the end, computers are better, faster, and (sometimes) more reliable notebooks; but, just as in the physical world, fancy pens can’t make a writer tell a good story.

Going forward, we see a few trends, or if not yet trends, then helpful directions:

  • Automate more. We built the Approval River because newsroom staff have better things to do than search through multiple clipping services and other lists for hours each week. We still imagine a “human in the loop” system, but the more of these kinds of services can be automated to put ready-to-input, structured information into an article’s timeline, the better.

  • More context in metrics. By showing numbers in relationship to newsroom or topic averages, NewsLynx users were able to quickly get a sense of where each article stood. Efforts, like NPR’s Carebot, to contextualize metrics in terms of “what percentage of people shared this story” are a great way forward in this vein of experimentation.

  • Defining expectations. Similarly, we have known for years that not all articles are created equal, nor are they all expected to perform equally. Operationalizing this idea has been slow-going, however, because it’s hard to admit that not every article will be a star. Developing mission-driven metrics will be crucial to sell this kind of measurement to management.

  • Quantitative metrics aren’t going anywhere. Numbers will continue to be used because they are concrete, comparable, and easy to communicate. Their emotional utility is not to be underestimated, either. As Caitlin Petre recently examined in her Tow report’s chapter on the design and use of Chartbeat, even if you’re not a traffic-driven site, it feels great when you hit record figures.71

  • Impact measurement needs to know how to market itself to news organizations. This concern is smaller at organizations where impact is part of the business model. But at larger organizations interested in this field, how do you convince management to commit resources to something with generally only mid- to long-term benefits? Folded into this question is how to approach a wary audience of journalists who view impact measurement as at odds with impartiality. Again, this idea ties back to an organization’s goals: What are we here to do, and how can we measure that? Impact measurement with no objective can come across as purely self-congratulatory, with no organizational benefit.

In the future, we think the practices of impact measurement will align with healthy processes of understanding how one’s newsroom operates and, importantly, why it operates at all. Whatever that why turns out to be—whether it is purely to inform readers or to hold power to account—finding out what is required to get there should be an instrumental part of an organization’s mission, and achieving that mission a strong part of the newsroom culture. We hope that NewsLynx, or future NewsLynx-like systems, can help organizations year after year to keep filling those impact envelopes.