Now more than ever, citizens are taking and sharing video of human rights abuse. In their efforts to secure justice, how do practitioners verify this video? Consider the questions below when sharing your comments in this discussion topic:
- What are the challenges of verifying citizen video?
- What are some tools and/or processes that can be utilized to verify that a video is what it says it is?
- What level of verification does the media need versus a human rights body versus a court?
- How can metadata be used to prove the authenticity of a video and what are the limitations of metadata?
Share your thoughts, experiences, questions, challenges and ideas by replying to the comments below.
For help on how to participate in this conversation, please visit these online instructions.
I wanted to sing the praises of a couple of resources to begin this vitally important strand of the conversation (see Beth's earlier post titled, Chain of Custody).
Here they are:
Much more soon! And if there are any other key resources out there we should all know about, please share.
To start with, I wanted to add two additional reads, which I found to be a good introduction to the verification field. These two resources come from journalists, from whom human rights workers can learn a lot in terms of tools and techniques for verification:
I'll post additional resources more focused on the human rights field later - looking forward to seeing other people's suggestions.
Sharing an excerpt of a recent blog post that announced a site to assist human rights workers in verifying citizen videos. The focus is on YouTube videos, but the tools and steps can be used with any video:
The Citizen Evidence Lab – launched earlier this month – is the first dedicated verification resource for human rights workers, providing tools for speedy checks on YouTube videos as well as for more advanced assessment.
This is increasingly relevant for human rights organizations. Earlier this month, a video showing armed men shooting dead several men in their custody, reportedly in Syria, came across my desk. It had been uploaded to YouTube and shared on social media the previous day, 3 July. Using the Citizen Evidence Lab, it took me two minutes to determine that the footage was, in fact, old.
A few months earlier, a graphic video allegedly from South Sudan was circulated among human rights groups. It turned out that the same video was shared on YouTube several years earlier – with some claiming it was from Kenya and others saying it was from Myanmar.
In both these examples, a simple “reverse image search” of thumbnails of the videos – something that can now be done by anyone with our new DataViewer tool – unearthed previous versions of the footage. During crisis situations, speeding up the process of filtering out the old from the new is crucial.
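The thumbnail trick described above can also be reproduced by hand: a video's thumbnail images can be submitted to a reverse image search engine (such as Google Images or TinEye) to surface earlier uploads of the same footage. The sketch below builds candidate thumbnail URLs for a YouTube video ID; note that the `img.youtube.com` URL pattern is a long-standing convention rather than a documented API, so treat it as an assumption (the DataViewer tool automates these steps and is not reproduced here).

```python
# Sketch: construct the conventional YouTube thumbnail URLs for a video ID,
# and build a reverse-image-search URL that can be opened in a browser.
# The img.youtube.com pattern is an assumption, not a guaranteed interface.
from urllib.parse import quote

# Conventional thumbnail names: "0"-"3" are frame grabs; the others are sizes.
THUMBNAIL_NAMES = ["0", "1", "2", "3", "hqdefault", "maxresdefault"]


def youtube_thumbnail_urls(video_id: str) -> list[str]:
    """Return candidate thumbnail URLs for a YouTube video ID."""
    base = "https://img.youtube.com/vi/{vid}/{name}.jpg"
    return [base.format(vid=video_id, name=name) for name in THUMBNAIL_NAMES]


def reverse_search_url(image_url: str) -> str:
    """Build a Google reverse-image-search URL for a given image URL."""
    return "https://www.google.com/searchbyimage?image_url=" + quote(image_url, safe="")


if __name__ == "__main__":
    # Hypothetical video ID for illustration only.
    for url in youtube_thumbnail_urls("dQw4w9WgXcQ"):
        print(reverse_search_url(url))
```

Each printed URL, opened in a browser, runs a reverse image search on one thumbnail; a match against an older upload is a strong signal the footage is not new.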
However, we often have to do more in-depth research into footage. During the violent clashes in Cairo in August 2013, a video was widely shared that reportedly showed protesters throwing a police car off a bridge. Reviewing a second video showing the same incident from a different angle, it appeared that the police car had collided with another car, rolled back, and fallen off the bridge. Additionally, the features seen in the videos—such as a soccer stadium next to the bridge—allowed me to confirm the exact location of the incident from my Washington, D.C. office by using online tools such as Google Earth.
Of course, checking for previous versions of the same video and confirming the location of an incident are only two steps in the verification process. A full list of steps and tools is now accessible through an interactive step-by-step guide – or a “Stress Test for YouTube videos”.
Sometimes we have to go beyond the standard video verification steps (source – date – location) and do a more in-depth content analysis that requires expert review. One example was the chemical weapons attack in Syria on August 21, 2013. Input from experts in this field allowed us to make some cautious public statements on the incident in a relatively short period of time.
Here’s a video that gives some insight into this, and a public output that includes analysis of the videos:
Using video as legal evidence is more complex; this has been discussed in depth in the first section here. I encourage readers to look especially at Kelly’s write-up on the chemical weapons attack in Ghouta.
Thanks for sharing this tip and example, Christoph! If anyone is interested in learning more about AAAS partnerships, and/or exploring the lessons learned from these kinds of partnerships, visit our previous online discussion that featured Jessica Wyndham of AAAS.
And here's a tip from Theresa Harris of AAAS from another online discussion:
One of the keys to building a strong, effective partnership is to clarify expectations at the outset of the relationship. Human Rights Projects: Guidelines for Scientists and Human Rights Organizations was created to help human rights advocates and scientists build strong partnerships, so some of the sections in this document are very specific to that type of relationship, but I think it is very useful for anyone building a new relationship with someone who doesn't share the same background or area of expertise.
Are there other examples out there of practitioners engaging experts to assist and bolster the content analysis of video?
- Kristin Antin, New Tactics Online Community Builder
As far as experts are concerned, Storyful has rightly developed a reputation in the last few years for verifying open source video (and other online citizen media, such as photos and tweets), and they are partners with WITNESS on our Human Rights Channel, sourcing and verifying the videos we feature. Most of their clients, however, are news outlets, which I imagine have a much lower bar for verification than the courts would. That said, Storyful is increasingly working with human rights organizations, and their work has driven innovation in verification techniques across industries.
These are all great resources, and it's fantastic to see so much work being done in this area. However, as Christoph noted, most of it has been led by the field of journalism, so expertise from those involved in ICJ and in video as evidence at various stages of the criminal justice process would go a long way toward bridging this work with criminal justice and the legal process.
I would add one resource we developed at WITNESS, which is a tipsheet on Authenticating Open Source Video (Spanish version here). It is a short PDF that goes through many of the processes and tools discussed in the Citizen Evidence Lab, such as the Google Reverse Image Search and processes of looking into the uploader's online history and geolocating the video. While I'm not sure if video that has gone through this process could be admitted as evidence in a legal proceeding, it has certainly been helpful for human rights advocates who are monitoring a situation from abroad.

For instance, one challenge for many people watching the protests in Venezuela earlier this year was a deluge of online footage that was not what it purported to be. (Here's one example of a video purporting to show police abuse, that actually appeared to be from Colombia). And yet, other videos did document abuses taking place in Venezuela. For observers monitoring or curating online video of abuse, the process of verification that these resources take one through is a necessary first step of knowing that what you're watching is what you think it is, or what it purports to be.
As far as metadata is concerned, I've sensed a lot of confusion from researchers about how metadata can be used to prove the authenticity of a video, especially when that footage is found online. While digital cameras and cell phones may (or may not, depending on the device and its settings) capture metadata related to when and where the footage was shot, once that file is uploaded to an online platform such as YouTube or Twitter, a new file is created, with new metadata reflecting the new file. If a mobile video is uploaded to YouTube, for instance, the metadata YouTube creates for that upload does not include the same metadata that was included in the original file. (There are some exceptions, like Vimeo Pro or the Internet Archive, that do not transcode videos, and thus leave the original metadata intact.)
This is why preserving the chain of custody of the original raw footage is so important. While a filmer might share a video publicly on YouTube, if the time comes to use that video as legal evidence, the metadata from the raw file can be critical to proving the video is authentic and untampered with.
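One common practice for supporting a chain of custody is to compute a cryptographic hash of the raw file as soon as it is acquired and record it alongside basic file details; re-hashing the file later demonstrates it has not been altered in the meantime. A minimal sketch follows — the choice of SHA-256, the record fields, and the file names are illustrative assumptions, not a prescribed evidentiary standard:

```python
# Sketch: hash a raw video file and build a simple custody record.
# Re-computing the hash later and comparing it to the recorded value
# shows the file bytes have not changed since the record was made.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks to bound memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def custody_record(path: str) -> dict:
    """Build a simple custody record: filename, size, hash, and timestamp."""
    p = Path(path)
    return {
        "filename": p.name,
        "size_bytes": p.stat().st_size,
        "sha256": sha256_of_file(path),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    # Create a small demo file standing in for raw footage (illustrative only).
    demo = Path("demo_footage.bin")
    demo.write_bytes(b"\x00" * 1024)
    print(json.dumps(custody_record(str(demo)), indent=2))
    demo.unlink()
```

A record like this is only one link in the chain; it works best when the original file is stored securely and each transfer of the file is logged against the recorded hash.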
One tool that allows filmers to capture valuable metadata that can be used by third parties to authenticate their footage is InformaCam, an Android app we developed at WITNESS with our partners at the Guardian Project. It allows a user to record a still photo or video on a cell phone, embeds rich metadata into that file identifying where and when it was taken, and lets the user securely and anonymously share the footage, along with its metadata, with a third party.