Journalism, Terror, and Digital Platforms

This chapter will examine the increasingly important role of platforms such as Facebook, Google, Apple, and Twitter in providing information, connecting audiences to journalism, and framing narratives around terror news events.

The Power of the Platforms

The major platforms are increasingly the way the Western public accesses news about terror. Twitter, Facebook, Google, and Apple provide the infrastructure for mainstream news media to disseminate their material. Sixty-two percent of Americans now say they get news via social media. Sixty-three percent of American Twitter and Facebook users say they get news from those platforms, with Twitter especially popular for breaking news (59 percent). Facebook also owns the hugely popular photo-sharing app Instagram and the messaging app WhatsApp. Snapchat is increasingly used by news brands like CNN and Vice, which push content to users through Snapchat Discover.

Platforms also aggregate news stories through Apple News, Google News, and Twitter Moments. They make deals with news organizations to feature journalism, further shaping the dissemination and consumption of news. They are also starting to provide new production tools for journalists such as livestreaming on Facebook and YouTube or through apps such as Twitter’s Periscope. Journalists have lost control over the dissemination of their work. This is a crucial challenge for the news media overall, but the issue is especially acute when it comes to reporting on terror.

The platforms provide an unprecedented resource for the public to upload, access, and share information and commentary around terror events. This is a huge opportunity for journalists to connect with a wider public. But key questions are also raised: Are social media platforms now becoming journalists and publishers by default, if not by design? How should news organizations respond to the increasing influence of platforms around terror events? Facebook is becoming dominant in the mediation of information for the public, which raises all sorts of concerns about monetization, influence, and control over how narratives around terrorist incidents are shaped.

As Guardian editor Katharine Viner points out, we live in a world of information abundance, a world where “truth” is often harder to establish than before, partly because of social media:

Now, we are caught in a series of confusing battles between opposing forces: between truth and falsehood, fact and rumor, kindness and cruelty; between the few and the many, the connected and the alienated; between the open platform of the web as its architects envisioned it and the gated enclosures of Facebook and other social networks; between an informed public and a misguided mob.

It is in the public interest for these platforms to give people the best of news coverage at critical periods. But will that happen?

Facebook’s role in the dissemination of news is concerning because it is not an open and accountable organization. Recently, a Facebook moderator removed a story by Norwegian newspaper Aftenposten that featured the famous “napalm girl” image of a girl running from an attack during the Vietnam War. It was removed because the image violated the platform’s Community Standards on showing naked children. When Facebook deleted the image, Aftenposten’s editor accused Facebook’s Mark Zuckerberg of “an abuse of power”:

I am upset, disappointed – well, in fact even afraid – of what you are about to do to a mainstay of our democratic society.

However, even when the historical context of the image was pointed out, along with its importance to the news story, Facebook initially stood by its stance:

While we recognize that this photo is iconic, it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others.

Following a global outcry, including thousands of people posting the image on Facebook, the company backed down and said it would review its policy and consult with publishers.

This case was more than a one-off failure of judgment by Facebook. It is a symptom of a systemic, structural problem. CEO Mark Zuckerberg insists that Facebook is “a tech company, not a media company … we build the tools, we do not produce any content.” Yet Facebook aggregates news, and its algorithms and moderation teams influence what news appears in people’s streams. It recently reviewed its procedures in response to fears that human editors on the trending team might have a “liberal” bias. An internal inquiry did not find evidence of bias, but it did make clear that both algorithms and human curators make judgments in much the same way that a news organization filters information. Other platforms that curate news content, such as YouTube, face similar issues. They may not call themselves news or media companies, but they are editors of journalism.

This is a pressing policy problem, and the platforms are eager to engage in a dialogue. Tackling it is critical for them, partly because failing to do so might drag them into regulatory oversight that would limit their control over their own platforms. However, there is a fundamental clash of interests between publishers and platforms, which makes it hard to establish such policies. News is a good way of getting people to come to a platform, but it is a relatively minor part of the platforms’ business (more important for Twitter than for Facebook). How the platforms deal with this in regard to terrorism is an extreme case of a wider problem, but it brings the issues into sharp focus and reminds us of what is at stake.

The Platforms and Breaking News

When two men murdered the off-duty British soldier Lee Rigby in Woolwich, London, in 2013, it was a precursor of the attacks of summer 2016 in America and Europe. The attackers used the incident to promote their extremist Islamist ideologies. It provoked a limited anti-Muslim backlash, including an attack on a mosque, two potential copycat incidents, and at least one white supremacist “revenge” attack. The British government responded by setting up an anti-extremist task force.

As Rigby was being attacked in the street, bystanders were tweeting about it. One person recorded a video of one of the attackers—with blood still on his hands—talking about why he had carried out the killing. A research project by Britain’s Economic and Social Research Council that looked at the Woolwich incident concludes that social media is the place where this kind of news breaks, with important implications for “first responders.” The report also says social media is now a key driver of public understanding. This has implications for the authorities, the study states, but also for the platforms, which must consider their role in mediating the public reaction if they are to avoid negative outcomes in terms of both further incidents and community relations.

For mainstream media, this was a test case of how to handle user-generated content in a breaking terror news situation. As The Sun Managing Editor Richard Caseby said:

This was very graphic and disturbing content. Would it only serve as propaganda fueling further outrages? These are difficult moral dilemmas played out against tight deadlines, intense competition, and a desire to be respectful to the dead and their loved ones.

The video first appeared in full on YouTube. News channels such as Sky carried the footage of Michael Adebolajo wielding a machete and ranting at onlookers. ITN obtained exclusive rights to run it on the early evening bulletin, just hours after the incident and before 9 p.m., known in the UK as the “watershed,” after which broadcasters are permitted to air adult content. Those reports, unlike the YouTube footage circulated on social media, were edited and contextualized, and warnings were given. But there were still more than 700 complaints from the public about the various broadcasts, including on radio. The UK’s broadcasting regulator Ofcom cleared the broadcasters and said their use of the material was justified, although it did have concerns about “health warnings” and subsequently published guidelines.

For the platforms, the incident brought up two issues. First, it was through the platforms that the news broke, raising questions about their responsibility for content uploaded to their networks. Second, the incident raised the question of whether platforms should report users who post inflammatory material to the authorities. This second issue emerged during the trial of the second attacker, Michael Adebowale. Adebowale had posted plans for violence on Facebook, and its automated monitoring system had closed some of his accounts. This information was not forwarded to the security services. Facebook was accused of irresponsibility, including by the then UK Prime Minister David Cameron:

If companies know that terrorist acts are being plotted, they have a moral responsibility to act. I cannot think of any reason why they would not tell the authorities.

Facebook’s standard response is that it does not comment on individual accounts but that it does act to remove content that could support terrorism. Like all platforms, it argues that it cannot compromise the privacy of its users.

The three main platforms—Facebook, Twitter, and YouTube—all have broadly similar approaches to content curation during a terror event. All have codes making clear that they do not accept content that promotes terrorism, celebrates extreme violence, or constitutes hate speech. Twitter’s stance is typical:

We are horrified by the atrocities perpetrated by extremist groups. We condemn the use of Twitter to promote terrorism and the Twitter Rules make it clear that this type of behavior, or any violent threat, is not permitted on our service.

Twitter has taken down over 125,000 accounts since 2015, mainly connected to ISIS. It has increased its moderation teams and use of automated technology such as spam-fighting bots to improve its monitoring. It collaborates with intelligence agencies and has begun a proactive program of outreach to organizations such as the Institute for Strategic Dialogue to support online counter-extremist activities.

As Twitter has stated, these platforms are in a different situation to news organizations. They are open platforms dealing with a vast amount of content that can only be filtered post-publication. They are still developing the systems to manage the problem:

There is no “magic algorithm” for identifying terrorist content on the internet, so global online platforms are forced to make challenging judgment calls based on very limited information and guidance. In spite of these challenges, we will continue to aggressively enforce our Rules in this area, and engage with authorities and other relevant organizations to find solutions to this critical issue and promote powerful counter-speech narratives.

Google says the public assumes there is a technical fix, but in practice the volume and diversity of material (40 hours of video are uploaded every minute to YouTube) make it impossible to automate a perfect system of instant policing of content. Artificial intelligence and machine learning can augment systems of community alerts. But even when a piece of content is flagged, a value judgment has to be made about its status and what action to take. Should the material be removed, or a warning added?

This puts the platforms in a bind. YouTube, for instance, wants to hold onto its status as a safe harbor for material that might not be published elsewhere. When videos were uploaded showing the results of alleged chemical weapons attacks on rebels in Syria, YouTube had to make a judgment about the footage’s graphic nature and impact. Much of the material was uploaded by combatants, and YouTube had to judge how authentic or propagandistic it was. YouTube says it generally makes such judgments case by case; in this instance, many mainstream news organizations were then able to use the material from YouTube in their own reporting.

Live Streaming

This problem of balancing protection of the audience, security considerations, and social responsibility with privacy and free speech becomes even more acute with the arrival of new tools such as live video streaming from Facebook Live, Twitter Periscope, YouTube, and even Snapchat and Instagram. These give ordinary citizens the opportunity to broadcast live. Many people welcome this as an example of the opening up of media. But what happens when a terrorist like Larossi Abballa uses Facebook Live to broadcast himself after murdering a French police officer and his partner, holding their 3-year-old child hostage, broadcasting threats, and promoting ISIS? The Rigby killers relied on witnesses to broadcast them after the incident, but Abballa was live and in control of his own feed. That material was reused by news media, but edited and contextualized.

There is a case for allowing virtually unfettered access that gives citizens a direct, immediate, and unfiltered voice. Diamond Reynolds filmed the shooting of her boyfriend Philando Castile by a police officer in Falcon Heights, Minnesota, near St. Paul, live on Facebook. The video was watched by millions and shared across social media, as well as rebroadcast on news channels and websites. It attracted attention partly because it was the latest in a series of incidents in which black people were subjected to alleged police brutality. In this case, Facebook Live made a systemic injustice visible through the rapid reach of the platforms. Local police contested her version of events, but the live broadcast and the rapid spread of the video meant her narrative had a powerful impact on public perception. It was contextualized to varying degrees when reused by news organizations, but the narrative was driven to a large extent by Reynolds and her supporters.

News organizations need to consider how to report these broadcasts and what to do with the material. Research shows varying approaches to dealing with this kind of graphic footage, even when not related to terrorism. Should news organizations include direct access to live video as part of their coverage, as they might from an affiliate or a video news agency? In principle, they all resist becoming an unedited, unfiltered platform for live video broadcasts by anyone, with no editorial control.

Emily Bell, Director of the Tow Center for Digital Journalism at Columbia Journalism School, points out that this reflects a difference between news organizations and the digital platforms:

When asking news journalists and executives “if you could develop something which let anyone live stream video onto your platform or website, would you?” the answer after some thought was nearly always “no.” For many publishers the risk of even leaving unmoderated comments on a website was great enough, the idea of the world self-reporting under your brand remains anathema. And the platform companies are beginning to understand why.

Media organizations are having to negotiate with the platforms about how to inhabit the same space when these dilemmas arise. Sometimes, they have to act unilaterally. For example, CNN has turned off autoplay for video on its own Facebook pages around some terror events and routinely puts up warning slates for potentially disturbing content.

How the Platforms Handle Risk and Responsibility

The platforms are acting to protect users from harmful content, as well as to comply with security considerations. Facebook, for example, deactivated the account of Korryn Gaines (who was later shot and killed) at the request of police during a standoff. A mainstream media organization might well have complied with a similar request. However, it raised questions as to why that particular action was taken, but not others. The perceived inconsistency of the platforms’ policies comes from a lack of clarity and transparency. Twitter has removed ISIS-related material, but it does not always do the same for homophobic or racist tweets. In the wake of the Dallas shooting of police officers, there was a spate of extremist messaging that Twitter struggled to moderate. The company accepts it has a problem:

We know many people believe we have not done enough to curb this type of behavior on Twitter. We agree. We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it’s happening and prevent repeat offenders. We have been in the process of reviewing our hateful conduct policy to prohibit additional types of abusive behavior and allow more types of reporting.

These platforms insist they are not publishers, let alone journalistic organizations. Their business is built upon providing an easy-access, open channel for the public to communicate. The terms and conditions of use, however, allow them to remove content, including shutting off live video. This is now done according to a set of criteria that are enforced through a combination of automated systems that identify key words, flagging of offensive content by users, and decisions by platform employees to remove or block the content or to put up a warning. This sort of post-publication filtering is not the same process as a journalist selecting material pre-publication.
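To make the contrast with pre-publication editing concrete, the sketch below is a minimal, hypothetical model of such a post-publication pipeline: automated keyword matching and user flags escalate already-public content to a human reviewer, who then removes it, adds a warning, or leaves it up. The keyword list, flag threshold, and function names are invented for illustration and are not drawn from any platform’s actual system.

```python
# Hypothetical sketch of post-publication filtering: automated keyword matching
# and user flags escalate already-public content to a human reviewer, who then
# removes it, adds a warning, or leaves it up. Names and thresholds are
# illustrative assumptions, not any platform's real system.

from dataclasses import dataclass

WATCHLIST = {"beheading", "execution"}   # illustrative keyword list
FLAG_THRESHOLD = 3                       # user reports needed to escalate

@dataclass
class Post:
    post_id: str
    text: str
    user_flags: int = 0

def needs_human_review(post: Post) -> bool:
    """Triage step: escalate if a keyword matches or enough users flag the post."""
    keyword_hit = any(word in post.text.lower() for word in WATCHLIST)
    return keyword_hit or post.user_flags >= FLAG_THRESHOLD

def moderate(post: Post, reviewer_decision: str) -> str:
    """A human reviewer chooses 'remove', 'warn', or 'allow' for escalated posts."""
    if not needs_human_review(post):
        return "allow"  # the post stays up and never reaches a human
    assert reviewer_decision in {"remove", "warn", "allow"}
    return reviewer_decision

# The post is already public before any of this runs -- the key contrast with a
# journalist selecting material pre-publication.
example = Post("p1", "Graphic footage of an execution", user_flags=5)
print(moderate(example, reviewer_decision="warn"))   # -> warn
```

The point of the toy example is simply that every decision happens after the material is already in public circulation, which is why the chapter describes this as editing of a different kind.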

However, it is editing. It involves making calculations of harm and judgments about taste. Monika Bickert, Facebook's head of policy, has said the platform does not leave this decision to algorithms. Instead, decisions are made on the basis of what is uploaded and how it is shared. Someone condemning a video of hate speech might not, for example, have his or her account suspended, but someone sharing the same video in a way that incites further hatred might:

Was it somebody who was explicitly condemning violence or raising awareness? Or was it somebody who was celebrating violence or not making clear their intention or mocking a victim of violence?

The obvious, critical difference from a news organization is that the platforms do not have control over content creators as they create and publish material.

Because of their much wider structural role, platforms have agreed to cooperate more extensively with the authorities on counterterrorism than news organizations and journalists often do. In the UK, there is the formal D-Notice process that allows the authorities to make one-off arrangements with news organizations to delay publication of security-sensitive material. When The Guardian was preparing publication of the Snowden revelations, its editor, Alan Rusbridger, had conversations with British intelligence. However, the relationship between the authorities and the news media is always ad hoc and built on the idea of journalistic independence, even hostility. The Guardian ended up with British intelligence officers coming into its offices to destroy hard drives that carried the classified information. Technology companies have also resisted attempts by the authorities to gain greater access to their data, in order to preserve the privacy of their users. But the Snowden revelations suggest that intelligence agencies have been covertly, and successfully, targeting online communications.

Shaping the Narrative: Filter Bubbles and Polarization

It is important for journalists to understand how platforms shape the framing of issues and the public’s response. Posting on social media has a performative element; people say things because they are feeling emotional or because they want to signal a point of view. Especially during the coverage of terror events, the reaction of the online public will be instinctive, and not necessarily representative. That does not mean it should not be noted and taken into account. But the danger of narratives built on social media content, or that use social media as a proxy for what people are saying, is that they privilege a highly selective sample.

Currently, the platforms’ algorithms are tuned to deliver personalized content that heightens engagement. The danger of this approach is that it shapes the distribution of content toward what people like, and users may be more likely to see political content they agree with rather than a broad spectrum of opinions. This is particularly relevant to terror events because evidence shows the greatest polarization of opinion online happens around divisive issues of ideology and race. Research on online echo chambers has produced mixed results. But it does suggest that the polarization of politics is partially reinforced by social media, particularly by certain platforms such as Twitter.
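The sketch below is a deliberately toy illustration of that dynamic, not any platform’s actual ranking code: posts resembling what a user has already engaged with score higher, so agreeable political content rises to the top of the feed while dissenting content sinks. All topics, data, and names are invented for the example.

```python
# Toy illustration (not any platform's actual ranking system) of why
# engagement-tuned personalization narrows what users see: posts resembling what
# a user already liked score higher, so agreeable content crowds out opposing views.

from collections import Counter

def rank_feed(candidate_posts, liked_topics):
    """Order posts by overlap with topics the user has previously engaged with."""
    likes = Counter(liked_topics)

    def engagement_score(post):
        return sum(likes[topic] for topic in post["topics"])

    return sorted(candidate_posts, key=engagement_score, reverse=True)

# Invented example data: the user's history leans one way on a divisive issue.
user_history = ["anti-immigration", "anti-immigration", "security"]
posts = [
    {"id": 1, "topics": ["anti-immigration"]},
    {"id": 2, "topics": ["pro-refugee"]},
    {"id": 3, "topics": ["security", "anti-immigration"]},
]

# The dissenting post (id 2) sinks to the bottom of the feed.
print([p["id"] for p in rank_feed(posts, user_history)])  # -> [3, 1, 2]
```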

Sometimes this has a positive motivation. After the Paris attacks, Facebook encouraged people to add the French flag to their profiles to demonstrate solidarity with the people of France. That immediately raised the question of whether it would do the same for every country that suffered a terrorist incident. Facebook would prefer such curation to be done by algorithms, which are more powerful, faster, and cheaper than humans. Indeed, it has reportedly shifted further away from human curation on its trending news streams, partly because of allegations of a liberal human bias. Algorithms are ultimately programmed by humans, but the main work of selection and personalized dissemination of content will be done automatically. This is of particular concern when the subject is political. During the UK’s European Union referendum campaign, the social media activist Tom Steinberg, who founded mySociety, said that he found it almost impossible to find a view on the issue different from his own on his Facebook feed, even when he actively sought a more diverse diet.

The polarization of opinion around terror is also potentially worrisome.

One of the great advantages of the internet was the possibility of connecting to a greater range of sources and perspectives, but the algorithms of search and social media work against this. This raises serious questions about how public opinion forms around terror events: whether minority views will be excluded and whether a diverse debate on terrorism will be homogenized. Part of the role of a healthy news media is to provide that wider and deeper perspective and to include challenging as well as reassuring views. The platform algorithms seem to militate against that.

Should Platforms Become More Like Media Organizations?

The platforms are caught between competing pressures: corporate self-interest, their consumers’ demands for open access, the public interest in supporting good journalism, and the need to foster secure and cohesive societies. They are relatively young organizations that have grown quickly, and they are still accumulating institutional knowledge on these issues.

The platforms have accepted that they have a public policy role in combating terrorism. Facebook now has a head of counterterrorism policy. They have gone further than most Western news media in allowing themselves to be co-opted into counterterrorism initiatives. Yet any intervention raises questions. For example, Facebook has offered free advertising to accounts that post anti-extremist content. But which accounts, and how far should it go? The platforms all say this is a developing area, and they are still consulting to see what is most effective and most consistent with the goal of being politically neutral. Platforms like Google argue they are only part of an existing conversation with governments and international bodies. They point out that it is not for them to push a counter-narrative, as it probably would not be credible or authentic. Instead they see their job as enabling the capacity of others.

The platforms do provide an opportunity for building social solidarity in the wake of these incidents, far beyond the ability of news media. After the Lee Rigby killing, there was a widespread reaction on social media expressing shock and disgust at the attack, including from many Muslims. There were also positive social media initiatives that sought to pay respect to the victim. But some reaction was incendiary and anti-Islamic. Some people faced charges for inciting racial hatred on social media. At the height of the European attacks in July 2016, one study recorded 7,000 Islamophobic tweets daily in English, compared to 2,500 in April. More could be done to police these conversations, but as we have seen, there are limits at the moment to the efficacy of such policing. As Martin Innes, the author of a report on social media and terror, warns, this is still a nascent science:

Traditional “big data” science statistical methods can be misleading in terms of how and why events are unfolding after major terrorist incidents, due to the complex conflict and information dynamics. Theory-driven methods of data analysis need to be urgently developed to realize the potential of social media analytics.

British MPs recently criticized the platforms for not doing enough to counter ISIS. The then chairman of the Commons Home Affairs Select Committee, Keith Vaz, said:

They must accept that the hundreds of millions in revenues generated from billions of people using their products needs to be accompanied by a greater sense of responsibility and ownership for the impact that extremist material on their sites is having.

However, the platforms say they are already doing much to remove incendiary content. As the radicalization expert Peter Neumann, of King’s College London, has pointed out, media is only a part of the extremist strategy:

The vast majority of ISIS recruits that have gone to Syria from Britain and other European countries have been recruited via peer-to-peer interaction, not through the internet alone. Blaming Facebook, Google, or Twitter for this phenomenon is quite simplistic, and I'd even say misleading.

The platforms (and the news media) cannot police these networks alone. The authorities also have a responsibility to monitor and engage with social media, to actively counter bad information, and to provide reliable, real-time streams of information. Ultimately, the price of open access and exchange on these platforms might be an element of negative and harmful material.

However, just because these issues are complex does not mean platforms cannot adapt their policies and practices. One option might be a virtual switch to delay live feeds that contain violence. More “honest-broker” agencies such as Storyful or First Draft might emerge to act as specialist filters around terror events. One suggestion has been that platforms like Facebook should hire teams of fact-checkers. Another is that they should hire senior journalists to act as editors.

Of course, those last suggestions would make the self-declared tech companies more like news media. But we now inhabit what Andrew Chadwick calls a “hybrid media” environment where distinctions are blurred. News organizations have had to change to adapt to social networks, and platforms too must continue to develop the way they behave in the face of breaking news. Companies such as Facebook and Google are already reaching out to journalists and publishers to find ways of working that combine their strengths. Twitter and Facebook, for example, have formed a coalition with 20 news media organizations, organized by the verification agency First Draft, to find new ways to filter out fake news. Platforms and the news media both have much to gain in terms of trust by taking the initiative instead of waiting for angry governments to impose solutions that hurt creativity and freedom in the name of security. One only has to glance at more repressive regimes around the world to see the price democracy pays when reactionary governments restrict any form of media in the name of public safety.
