Amateur Footage: A Global Study of User-Generated Content

Verification

The theme of verification ran through all 64 interviews. Managers and senior editors were quick to emphasize the importance of verification in relation to user-generated content, as well as their fears about using material that turned out to be incorrect. Many people discussed famous examples of news organizations being duped by hoaxes, and their concerns about the same thing happening to them.18 As one put it: “I think the biggest issue for us is around verification because that’s where our reputation lies. If we end up putting stuff out which is wrong in any way, fabricated in any way, then our necks are on the line.”

There was, however, little awareness of the specific techniques and processes associated with verifying UGC.19 People knew it needed to be done, but journalists on news desks acknowledged that they didn’t feel they knew enough about how to do verification properly. One very honest description of an editorial meeting by a senior editor revealed how the process of verification is considered in that newsroom:

Verification is always an afterthought. It’s sort of like, “Let’s just get it on air and online and then not worry about it.” It’s always an afterthought. When someone puts something forward in an editorial meeting, you say: “Have you verified it?” And people groan. People are scared of the “v” word. They know it’s going to take a long time.

There was still a sense from many managers and senior editors (who don’t work with UGC every day) that with journalistic experience comes a gut instinct that enables you to know whether something can be trusted. There was also a sense that verifying a piece of content is black and white: something is either true or false, accurate or inaccurate. When pushed to describe specific technical checks that journalists could run on social content, very few interviewees displayed any knowledge of how geo-positioning and timestamps work on the different social networks, the power of mapping technology, or the information about a digital photograph available via EXIF data.

It was rare to hear people talk about verification as a process, like building a legal case. Journalists should be looking for clues that help build that case, and in almost all pieces of UGC there will be some doubt about one element of the material. Whether or not to run UGC then rests with the editor of a program, section, or article. Because the process of verification involves so many different factors and variables, these decisions are very rarely a simple case of true or false. Many people who didn’t regularly work with UGC described social media verification as having the same characteristics as any other type of fact-checking; those who work in roles where social media verification is part of the job, however, talked about it very differently.

Verification Processes

The AP has a clear process whereby the uploader of the content has to be verified separately from the events being shown in the footage. Similarly, Storyful verifies the source, date, and location of each video separately, labeling each element as either confirmed, corroborated, or unconfirmed. This information is then shared with clients.
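The per-element approach described above can be modeled as a simple data structure: each element of a video (source, date, location) carries its own status rather than the whole item being marked true or false. The sketch below is illustrative only; the class and field names are assumptions, not Storyful’s or the AP’s actual system.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    CONFIRMED = "confirmed"        # independently verified
    CORROBORATED = "corroborated"  # supported by other material, not yet verified
    UNCONFIRMED = "unconfirmed"    # no supporting evidence yet

@dataclass
class VideoVerification:
    """Hypothetical per-element verification record for one piece of UGC."""
    url: str
    source: Status = Status.UNCONFIRMED
    date: Status = Status.UNCONFIRMED
    location: Status = Status.UNCONFIRMED

    def summary(self) -> dict:
        """The kind of per-element breakdown a dopesheet might carry."""
        return {"source": self.source.value,
                "date": self.date.value,
                "location": self.location.value}

record = VideoVerification("https://example.com/video")
record.source = Status.CONFIRMED
record.date = Status.CORROBORATED
print(record.summary())
# {'source': 'confirmed', 'date': 'corroborated', 'location': 'unconfirmed'}
```

The point of the structure is that a client receiving this record sees exactly which elements remain in doubt, rather than a single pass/fail verdict.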
Information about the checks carried out by these two agencies is detailed on dopesheets, allowing their clients to undertake their own verification checks if they so wish. Some newsrooms carry out an additional layer of independent verification on material shared by the agencies, but the vast majority do not, believing that this “is what they are paying agencies for.”

Perhaps most alarming was the ignorance about the problem of scraping: the practice whereby people simply copy pictures and videos and upload them to their own accounts. As someone from an agency warned:

A fundamental problem that the entire industry faces is that user-generated content is used in its most available form as opposed to its original form. It is quite likely legitimate in the sense that it shows what happened, but I think that’s a major problem because it takes away all of the context and all of the original information that is connected to the video. I think it’s because people use technology to surface what’s essentially trending as opposed to finding the original piece of content or tunneling through to see what the original posting was. If there isn’t any original information and context, then what’s been added to it in the version you’re looking at may or may not be true.

Even analyzing the tweets and messages journalists send to people who have uploaded footage to the social Web immediately after a breaking news event demonstrates how rarely journalists think about these dangers. They will ask for permission to use the picture without asking whether that person actually took the photo or shot the video. This clearly raises issues related to copyright, but it raises even bigger issues related to verification.

Every interviewee was asked about checklists and systemized processes. There was resistance to the idea of standardized verification systems, with people arguing that every piece of content is different; on desks where UGC is regularly used, there was an acceptance that staff simply knew which checks had to be completed.

However, the people making decisions about output displayed the most ignorance about the technical checks that could be run, and about how those checks could be integrated with traditional verification and fact-checking techniques. They were also the ones most likely to rely on “gut instinct.” More UGC-savvy producers suggested implementing clear flagging or traffic light systems, so that pressured output editors could see at a glance which checks had been run, which elements had been confirmed, and which had been corroborated but not confirmed: a visual representation of the sliding scale that verification really is.
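A traffic light system of the kind these producers describe might reduce each piece of content to the weakest of its element-level checks, since an editor needs to know the least-verified part before broadcast. This is a minimal sketch under that assumption; the status names and the red/amber/green mapping are illustrative, not a scheme any interviewed newsroom actually uses.

```python
# Assumed mapping from per-element verification status to a traffic light flag.
LIGHTS = {"confirmed": "green", "corroborated": "amber", "unconfirmed": "red"}

def traffic_light(checks: dict) -> str:
    """Overall flag for a piece of UGC: the weakest element wins
    (red beats amber, amber beats green)."""
    lights = [LIGHTS[status] for status in checks.values()]
    for colour in ("red", "amber", "green"):
        if colour in lights:
            return colour
    return "red"  # no checks run at all: treat as unverified

print(traffic_light({"source": "confirmed",
                     "date": "corroborated",
                     "location": "confirmed"}))
# prints "amber"
```

An output editor reading a single colour per item gets the sliding scale at a glance, while the underlying per-element record remains available for anyone who wants the detail.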