i. While this report focuses on tools that track audiences’ online
actions, it is important to note that there is a burgeoning movement
to measure offline media effects. Such results, like changes in laws
or increased civic participation, are also crucial aspects of
journalism’s impact and should not be neglected in conversations
about news metrics. The Center for Investigative Reporting and
ProPublica’s Richard Tofel have done valuable work to catalogue and
measure these offline impacts.

ii. For important exceptions, see work by C.W. Anderson, Pablo
Boczkowski, Angèle Christin, and Nikki Usher.

iii. To limit unwieldy terminology, from now on I will use the term
Gawker to refer to Gawker Media; the blog of the same name will be
referred to as Gawker.com.

iv. In some instances quotes from interviewees that appear in this
report have been edited for readability, always with an eye toward
maintaining their original intent and meaning.

v. Some employees were interviewed multiple times over the course of
the fieldwork.

vi. The numbers appearing here are pseudonumerals, not the client’s
actual metrics. Part of my access agreement with Chartbeat was that
I would not disclose client names or data. The numbers I use here,
however, are proportionally similar to the actual ones.

vii. For instance, if a site surpassed its target by 12 percent, the
site’s bonus amount for that month would be 12 percent of the site’s
monthly budget. Bonuses were capped at 20 percent of the site’s
monthly budget, though some sites routinely surpassed their growth
target by far more than 20 percent.

viii. As is discussed in greater depth in the conclusion, Gawker shifted
its policies around metrics as I was writing this report. Under the
leadership of newly appointed executive editor Tommy Craggs (who
formerly edited Deadspin, Gawker’s sports site) the company has, at
least for the moment, abandoned some of its traffic incentives (such
as the uniques-based bonus system) and made metrics less prominent
in the newsroom. However, some remnants of the previous system
remain, such as the Big Board and traffic counts on individual
posts. It is too soon to tell whether Gawker’s diminished emphasis
on metrics represents a permanent shift or merely a short-term
experiment. Either way, the metrics-driven period I studied makes
for a valuable case study, as analytics become increasingly
prominent, public, and powerful at a wide range of news
organizations.

ix. This sentiment was voiced at all levels of the editorial
hierarchy, from editorial fellows (Gawker’s new term for people who
are, essentially, paid interns) to site leads. Several site leads
said they did not want their teams to be overly focused on metrics
and that they took care to shield writers from their own anxieties
about traffic. During the days I spent sitting in on two sites’
group chats, metrics were never openly discussed. Still, the broader
organizational culture of Gawker, where metrics were on the wall, on
each individual post, and available to all editorial employees
through tools like Quantcast and Chartbeat, undercut site leads’
attempts to buffer their writers from traffic pressures. Said one
site lead about his writers consulting Chartbeat: “I kind of wish
they would be at peace with the fact that while it’s available, they
shouldn’t look at it, because please just do a good job and let me
stress about that … But I can’t say, ‘forget that password that you
found out’ … How would I tell them not to [look at metrics]?”

x. The Kinja bonus was capped at 2 percent of a site’s monthly
budget, far short of the 20 percent cap for the uniques bonus. To
many employees, this indicated that despite Denton’s insistence to
the contrary, traffic—measured in uniques—was still the company’s
true priority.

xi. This issue came to a head when anonymous Kinja users began
posting GIFs of violent pornography in the comments sections of
Jezebel posts. In the interest of guaranteeing tipsters’ anonymity,
Gawker does not save commenters’ IP addresses, which meant that
those banned by the Jezebel staff simply returned using new aliases.
This went on for months, until in August of 2014 the Jezebel staff
published a post entitled “We Have a Rape Gif Problem and Gawker
Media Won’t Do Anything About It.” (Jezebel staff, “We Have a Rape
Gif Problem and Gawker Media Won’t Do Anything About It,” Jezebel,
11 Aug. 2014.) In response, the company
reintroduced a commenting system it had once abandoned, by which
only comments from Kinja accounts staff members had previously
approved would be automatically visible under posts. (J. Coen,
“What Gawker Media Is Doing About Our Rape Gif Problem,” Jezebel,
13 Aug. 2014.) The saga
highlighted several of the key challenges for media companies trying
to build an interaction-focused model, from the often frightening
harassment women writing online endure to the role of anonymity in
enabling both valuable free expression and trolling.

xii. My conversation with Craggs suggested that a similar dynamic
might come into play as he and the other members of the newly formed
Gawker “Politburo”—a team of top editorial staffers—begin to
evaluate which sites merit monthly bonuses based on their content.
When some editors questioned the Politburo’s ability to fairly and
accurately assess posts on subjects in which they were not expert,
Craggs explained his response: “My case to them was, ‘look, last
year, Facebook determined your bonus. Do you trust Facebook more
than you trust me and the members of the Politburo, who’ve been
working at Gawker Media for a while, and who know what kinds of
stories are good?’ And the funny thing is, I think some people
privately, to themselves, probably said, ‘yeah we probably trust
Facebook more than the sports guy.’ ”

xiii. While they are beyond the scope of this report, it should be
noted that there are several efforts currently underway to
systematically measure forms of journalistic impact other than
traffic: most notably the Media Impact Project at the University of
Southern California, ProPublica’s Tracking Reports, and the Tow
Center’s Newslynx project.

xiv. *The Times* continues to use metrics in this way as the newsroom
tries to expand staffers’ online focus beyond just the home page.
For instance, the Innovation Report team collected data from the
organization’s business side about how many people receive *Times*
news alerts with hopes that sharing the large number would persuade
more reporters to file them.

xv. In 2012, Denton wrote a post explaining that employee evaluations
now looked not only at an “individual’s audience appeal but at their
reputation among colleagues and contribution to the site’s
reputation” because “relentless and cynical traffic-trawling is bad
for the soul.” Yet the company’s continued focus on individual eCPM
numbers in personnel decisions, as well as the installation of the
individual leaderboard just before the start of my research,
indicates that Denton’s statement did not amount to much. (F. Kamer,
“Nick Denton’s ‘State of Gawker 2012’ Memo: ‘Relentless and cynical
traffic-trawling is bad for the soul,’” *New York Observer*, 5 Jan.
2012.)

xvi. This is not to say that all analytics are created equal, nor to
suggest that efforts to make better metrics are for naught. An
organization that emphasizes so-called “engagement metrics,” such as
a user’s time spent on a site, number of visits, and number of
consecutive pages visited, is likely to have a much better user
experience than one that focuses primarily on page views. However,
it is dangerous to assume that a metric like time spent necessarily
incentivizes the production of more important or serious content.
Chartbeat recently found that stories about a cocktail dress that
appeared to be a different color depending on the viewer garnered
more clicks and more attention than stories about a federal court’s
net neutrality ruling. While the difference in attention between the
two stories was smaller than the discrepancy in clicks, it was still
substantial: stories about the dress gained 2.5 times the amount of
attention as stories about the net neutrality ruling. (A.C. Fitts,
“Can Tony Haile Save Journalism by Changing the Metric?” *Columbia
Journalism Review*, 11 Mar. 2015.)

xvii. As C.W. Anderson puts it, “in our rush to capture audience data,
we run the risk of oversimplifying the notion of informational
desire.” (C.W. Anderson, “Squeezing Humanity Through a Straw: The
Long-term Consequences of Using Metrics in Journalism,” Nieman Lab,
14 Sep. 2010.) For example, the headline on a piece from *The
Atlantic*’s Derek Thompson about this reads, “Why Audiences Hate
Hard News—and Love Pretending Otherwise.” Addressing readers,
Thompson continues: “If we merely asked what you wanted, without
*measuring* what you wanted, you’d just keep lying to us—and to
yourself.” (D. Thompson, “Why Audiences Hate Hard News—and Love
Pretending Otherwise,” *The Atlantic*, 17 Jun. 2014.)

xviii. For example, see A. Lee, S.C. Lewis, and M. Powers, “Audience
Clicks and News Placement: A Study of Time-Lagged Influence in
Online Journalism,” *Communication Research* XX(X) (2012), 1–26.

xix. Examples include the Media Impact Project at the University of
Southern California, the Newslynx project at Columbia University’s
Tow Center for Digital Journalism, and ProPublica’s Tracking
Reports.

xx. For, as Brian Abelson has pointed out, even widely lamented
metrics tend to have considerable staying power; once organizational
systems are built around a particular measure, it can be quite hard
to change them. (B. Abelson, “Whither the Page View Apocalypse?”
10 Oct. 2013.)
