This online dialogue, Documenting Violations: Choosing the Right Approach, ran from January 27 to February 2, 2010. It brought together practitioners who have developed database systems for documenting human rights violations, organizations on the ground doing the documenting, and trainers who help practitioners choose the right approach and system for their documentation. We looked at options for collecting, storing and sharing human rights data safely and effectively.
Featured resource practitioners for this dialogue included:
- Vijaya Tripathi and Megan Price work with the Martus database developed by Benetech
- Bert Verstappen and Daniel D'Esposito work on the OpenEvSys database developed by HURIDOCS
- Nathan Freitas of the Guardian Project
- Jorge Villagran and Sofia Espinosa of the Guatemalan National Police Archive Team
- Patrick J. Pierce, head of the International Center for Transitional Justice - Burma Program
- Oleg Burlaca, who applies the HURIDOCS methodology and works on websites for the World Organisation Against Torture and the SOVA Center for Information and Analysis
- Patrick Stawski, Human Rights Archivist at Duke University and Seth Shaw, Duke's Libraries' Electronic Records Archivist
- Jana Asher, M.S., is the Executive Director of StatAid
- Agnieszka Raczynska of Red Nacional de Organismos Civiles de Derechos Humanos, Mexico
- Daniel Rothenberg is the Managing Director of International Projects at the International Human Rights Law Institute (IHRLI) at DePaul University College of Law
Summary of the Dialogue
In the dialogue titled Documenting Violations: Choosing the Right Approach, participants discussed the range of methods that can be used to thoroughly document human rights violations and to use that documentation to motivate a response. Participants shared a myriad of powerful examples from their own work, demonstrating the importance and wide-ranging impact of documentation.
What is documentation?
Documentation is a process of strategic and systematic gathering of quantitative or qualitative data. This process consists of several activities, namely:
- determining what information is needed and establishing means for acquiring it;
- recording the discovered information and storing such in appropriate containers (called documents) or collecting already-existing documents containing the needed information;
- organising the documents to make them more accessible; and
- actually providing the documents to users who need the information.
Before starting data collection, it is important to have a concrete end goal for the data, as that will largely influence the type and scope of data collected and determine the parameters of the collection process. Furthermore, it is essential to establish baseline data against which new data can be compared.
Documentation builds a strong platform for advocacy because it provides evidence that can counter what governments or newspapers are reporting. Here is a 10-step plan on how to use documentation for human rights advocacy.
An important lesson learned is to review the impact of documentation on particular human rights efforts and to store data safely.
Data Collection Software:
Two main kinds of software were mentioned throughout the dialogue – Martus and OpenEvSys. What tool for what purpose? The differences between the two documentation systems are discussed here.
Martus secures your data by encrypting it on your computer and (if you choose to) automatically backing it up to remote, dedicated secure servers around the world. If your computer is lost, destroyed or stolen, you can retrieve your information from the remote servers. Martus is a very good tool for use in countries with very repressive regimes, where you and your sources can get into serious trouble if your data is found.
OpenEvSys can be used both to collect and organize stories and to provide "who did what to whom" quantitative analysis of the violations in these stories: how many acts of torture by the military in X province, what the gender breakdown of the victims is, and so on. You can record, in as much detail as needed, what happens inside these stories: you record violations and link them to the victims, the perpetrators, and the sources. It is a fully relational system, so you enter perpetrator X only once, link that perpetrator to all the acts he or she has committed across all your stories, and you can then generate a "bio" of all the acts that perpetrator has committed.
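The relational linking described above can be sketched with a few tables. The schema below is a minimal, hypothetical illustration of the "who did what to whom" idea; the table and field names are invented for this sketch, not OpenEvSys's actual schema:

```python
import sqlite3

# A person (victim or perpetrator) is entered once; each act links to
# both by id, so queries can pivot on either side of the relationship.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE persons (id INTEGER PRIMARY KEY, name TEXT, gender TEXT);
CREATE TABLE acts (
    id INTEGER PRIMARY KEY,
    violation TEXT,
    province TEXT,
    victim_id INTEGER REFERENCES persons(id),
    perpetrator_id INTEGER REFERENCES persons(id)
);
""")
con.executemany("INSERT INTO persons VALUES (?, ?, ?)", [
    (1, "Victim A", "F"), (2, "Victim B", "M"), (3, "Perpetrator X", "M"),
])
con.executemany("INSERT INTO acts VALUES (?, ?, ?, ?, ?)", [
    (1, "torture", "X province", 1, 3),
    (2, "torture", "X province", 2, 3),
])

# "Bio" of Perpetrator X: every act linked to that single person record.
bio = con.execute("""
    SELECT a.violation, a.province, v.name
    FROM acts a JOIN persons v ON v.id = a.victim_id
    WHERE a.perpetrator_id = 3
""").fetchall()
print(len(bio))  # 2 acts attributed to Perpetrator X

# Gender breakdown of torture victims in X province.
breakdown = con.execute("""
    SELECT v.gender, COUNT(*) FROM acts a
    JOIN persons v ON v.id = a.victim_id
    WHERE a.violation = 'torture' AND a.province = 'X province'
    GROUP BY v.gender
""").fetchall()
print(dict(breakdown))  # {'F': 1, 'M': 1}
```

Because each perpetrator exists only once, correcting a name or adding biographical detail propagates to every linked act automatically.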
Compiling Different Documents
Although different organizations will use different software, the contents of their documentation are likely related. Advocacy efforts benefit by compiling data and creating a bigger picture of human rights violations.
Metadata, or simply “data about data,” is a set of structured data or content types that characterize an information object. Metadata can be used to compile data from multiple databases into a larger collection. Developing a useful metadata system for the human rights community could have tremendous impact, as it would allow practitioners to draw connections between different data sets and discover broader patterns of abuse.
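As a rough sketch of the idea, a shared metadata "crosswalk" can map each organization's field names onto one common vocabulary, so records from different databases can be merged into a larger dataset. All organization and field names below are invented for illustration:

```python
# Hypothetical crosswalk: each source database's field names are mapped
# onto a shared vocabulary (victim / violation / location).
CROSSWALK = {
    "ngo_a": {"victim_name": "victim", "abuse_type": "violation", "place": "location"},
    "ngo_b": {"name": "victim", "violation_category": "violation", "site": "location"},
}

def normalise(record, source):
    """Rename a record's fields to the shared vocabulary, dropping unmapped fields."""
    mapping = CROSSWALK[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

# Two records from two differently structured databases become comparable.
merged = [
    normalise({"victim_name": "A", "abuse_type": "torture", "place": "X"}, "ngo_a"),
    normalise({"name": "B", "violation_category": "torture", "site": "X"}, "ngo_b"),
]
print(merged[0])  # {'victim': 'A', 'violation': 'torture', 'location': 'X'}
```

Once records share a vocabulary, counts and patterns can be computed across the combined data rather than within each silo.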
What data can be collected?
- testimonies – For example, the Iraq History Project collected thousands of testimonies documenting the destructive impact of political violence under the Saddam Hussein regime.
- monitoring indicators – particularly helpful for documenting discrimination and ongoing oppression
- legal investigations & researching government data - archives of repressive regimes may contain important information. For example, the Guatemala Archive Project revealed that many government-supported atrocities were well documented in their own archives.
- scanning media
- documenting HR interventions
- anthropological research
- ecological studies
- realtime data - for example, a dynamic realtime geo-map of the post-election situation in Kenya and a range of projects on the use of mobile technology can be found here.
Qualitative or quantitative research?
A big challenge in the field of documentation is whether to rely on quantitative or qualitative data. Both are important: quantitative data draw the big picture, while qualitative data supply the emotive, social, and political aspects of a person's experience. A related question – how structured should documentation be? – poses a challenge to field research. Narrowly defined questionnaires will likely omit a large portion of a person's experience, whereas powerful individual testimonies are difficult to summarize into large reports that aim to quantify impact. The advantages and disadvantages of each approach are discussed here.
Documenting civil and political & economic, social, and cultural rights
Some of the traditional approaches (such as documenting violations) have been used primarily in the case of civil and political rights. However, the human rights community is strengthening its focus on the documentation of economic, social and cultural rights. Three broad categories of approaches to ESC rights were mentioned in the dialogue:
- state violations resulting from government actions, policies, and legislation.
- violations related to patterns of discrimination.
- violations related to the state’s failure to fulfill minimum core obligations of enumerated rights.
Challenges in documentation
- when released, some data can be harmful to the very individuals it aims to protect
- accuracy – it is important to be aware of our biases as those who collect documents: “record the story, not your interpretation of the story”
- activists vs. scientists – NGO documentation is sometimes not trusted by scientists; cooperation between experts and activists is key to solid documentation
- security – recognizing the need of organizations to combine their data for greater impact, it is all the more important to ensure secure transfer and storage of data that does not put people (both those documented and those documenting) at risk
Additional resources
- Protection manual for HR defenders – tactics to reduce the risks faced by those who document HR violations
- Information cycles in HR organizations - graphic representation
- Presentation on "Digital Democracy" - engaging technology at the local level to produce an international response
- Planning a large scale documentation project - "Who did What to Whom?" is a detailed resource describing the different aspects of a human rights campaign achieved through documentation.
- Core Concepts in Data Analysis
- Statistics Manual for Human Rights Research
- Methods, tools and framework in HR documentation - a set of tools developed by Metagora
For those who do not read the language of Molière, nthuman wrote (translated from French):
"In the field of human rights, I believe that documentation refers to the collection of information related to violations, with the purpose of assisting victims in procedures for requesting reparations.
From my point of view, documentation supports advocacy arguments and also nourishes research.
The documentation of human rights violations can be especially useful in the context of sub-Saharan Africa, where the poorest populations, those most exposed to such violations, are largely illiterate and lack a culture of writing. This also makes documentation complex for human rights organisations."
The point about the correlation between exposure to human rights violations and literacy is an important one, especially in a digital age. It pushes me to think that new media offers a chance to re-emphasize oral and visual documentation and move away from text- or data-oriented collection. In addition, developing devices, applications and user interfaces that employ metaphors more closely aligned with specific populations could enable a group to more directly document violations against them, without requiring literacy in a specific written language.
On a related note, my students from last semester at NYU helped design an application for UNICEF to assist in reconnecting children in IDP camps with their parents. One of the interesting challenges was that a child may not know the exact location of the place they are from, or be able to point it out on a map. Our approach was to use a combination of maps, satellite photos, street views, etc. to narrow down to a specific region or town. Again, instead of a single "Location" field on a form, a richer tapestry of media data can be woven together to more accurately represent the complexity of a situation.
Documentation can take on a number of forms. I think human rights documentation should be creative and innovative. Documentation approaches should be dictated by the end purpose, which is to improve respect for human rights. Whatever information is needed to further that goal should be collected and used, and it can indeed be a variety of things.
Yes, documentation can fuel research (to understand root causes or consequences), or to inform advocacy... but in practice it can take a number of forms.
The classic approach, as pointed out below by Jana, is the violations approach, meaning to investigate cases of violations, and collect the facts about victims and perpetrators, the circumstances of the act, and so on. This approach, often used for civil and political rights, is still valid today. A violation is a violation, and always one too many.
There are other approaches however. Monitoring indicators can be really valuable, in order to document issues such as discrimination in access to housing, access to justice, access to schooling, and so on. This does not involve documenting cases at all.
Documentation can also mean deep legal investigations for the purpose of strategic or public interest litigation.
Sometimes, documentation can mean researching the archives of a repressive regime to establish post mortem patterns of abuse, such as for Stalinist Russia, Baathist Iraq, and the Ethiopian Red Terror.
Human rights organisations also need to track their interventions, both with an eye towards efficiency and effectiveness (such as NHRIs or special rapporteurs that handle individual complaints, or NGOs that provide medical, legal or financial support to victims), and sometimes simply to collect success stories for their donors (who always like success stories!). This is also part of documentation.
Documentation can also mean scanning and searching the media, and transcripts of parliamentary discussions, for examples of discriminatory or xenophobic speeches, or anti-Islamic or anti-Semitic statements.
Documentation can also involve socio-economic research, such as the research conducted by several NGOs on the economic consequences of the West Bank Barrier on the populations of the occupied territories in Palestine.
Documentation can even involve anthropological research, to try to understand the consequences of violations for its victims. When I was at ICRC, we tried to better understand the plight of families of missing persons in Kosovo, in particular women. We found that in addition to the problem of unresolved grief, they faced all kinds of social and economical consequences due to the dominance of traditional law and customs in rural areas.
Documentation can even involve ecological studies. For example, to protect the rights of indigenous peoples in Siberia or the Amazon and ensure their livelihood, it may be useful to examine the effects of oil drilling on water systems and fish stocks, or the possible negative effects that pipelines can have on the migratory patterns of animals essential to these peoples' survival.
Have I missed anything?
Documentation is also dictated by practical considerations: what an NGO knows how to do, has the means to do, or wants to do. What information is available, where? What information has already been collected? What information is missing? What is important to document on the short term, medium term, and long term? Where can resources be deployed with the most added value? What information is needed to convince a particular target audience? These are also factors to be integrated in a documentation strategy.
Using videos to document stories can be useful in areas where literacy rates are low, and videos are easy to broadcast nowadays on sites like Vimeo or the HUB. I liked this Tactical Tech video on how this can be done.
What do we mean when we talk about human rights documentation?
An excellent question. Many of the featured practitioners here are documenting violations against political and civil rights. We create a record--hopefully a permanent record--of experiences of individuals who have survived violations of their right to life, to peaceful assembly, to security of person. In other words, we document violations of the following types: extrajudicial killings, torture, physical assault, sexual assault, property damage, destruction or theft, forced displacement, and forced consumption (typically of items that are nonedible or taboo, or forced overconsumption). In some cases that record is qualitative in nature; for example, the record might be a story, recorded verbatim, told by a survivor of the abuse. Or it might have come to us in the form of a newspaper report or official document. In some cases that record is quantitative in nature; it has come from a random sample survey or from the coding of qualitative information.
Both quantitative and qualitative documentation are useful; I like to think of them as "the forest" and "the trees." The quantitative information is akin to the forest, or an aerial snapshot of the big picture. The qualitative information is akin to the trees, or a picture of an individual tree. From the picture of the individual tree, you can see all of the details of the tree -- the pattern of the bark, the knotholes -- just like from qualitative information, you can learn the details of an individual's human rights violations experience. From the picture of the forest, you can't see details, but you get a different, equally important overview of the whole group of trees -- just like from quantitative information, you get a "snapshot" of what happened and the prevalence/intensity of the violations. I believe that the best overall picture of mass human rights violations is given by combining the forest and the trees--in other words, documenting what happened via both quantitative and qualitative information.
Going back to the types of human rights violations, there are violations against rights other than civil and political that we must turn our attention to in the future--I am thinking of violations of economic, social, and cultural rights, such as the right to food, water, shelter, a means of employment, medical care, and education. Yes, these types of violations are often harder to document, but then again, there are some clear-cut cases. Sometimes E/S/C rights violations are interwoven with P/C right violations. For example, if forced displacement results in a lack of access to food and water, then both a P/C violation and an E/S/C violation have occurred. As an example of a purely E/S/C violation, consider the case of a government which restricts the movement of humanitarian workers attempting to bring food aid to a population suffering a famine.
How do we, as practitioners, best document such violations? I believe this is an open question that deserves some debate and a great deal of research. First, we must agree that it is possible to document violations of this type. Much work has been done by the human rights community on defining E/S/C rights and the violation of those rights – it is time to translate those definitions into documentation practices.
Thanks for this good input.
A good deal of research has already been done on methods for measuring E/S/C rights.
Among various other resources, I would like to mention the large number of papers produced for the 2000 Montreux Conference Statistics, Development and Human Rights, available at http://www.portal-stat.admin.ch/iaos2000/index2.htm and Measuring Human Rights and Democratic Governance, published in the OECD Journal on Development and available at http://browse.oecdbookshop.org/oecd/pdfs/browseit/4308021E.PDF
Essential is the distinction, and complementarity, between the "indicators" method, which measures long-term impact, and the violations approach, which starts from the point that each violation is one too many. The latter approach can also be used for monitoring E/S/C rights; see among others Audrey Chapman at http://www.cceia.org/resources/publications/dialogue/1_10/articles/580.html
But maybe I am moving too quickly to the "how" - all of you, feel free to exchange views on the "why"!
Thank you for mentioning Montreux! For those that don't know, that first meeting in Montreux led to the creation of a project called Metagora; it was housed at the OECD and the web site for the project is still up: www.metagora.org. Audrey Chapman also had a role in Metagora, as well as spending many, many years helping to clarify definitions of economic, social and cultural rights.
However, I would still argue that the documentation of E/S/C rights is in its infancy, and not as far along as P/C rights documentation!
But what has been happening since Metagora, in the area of measuring these rights? Where is the innovation coming from?
I see interesting work from the UNDP human development reports, the work on MDGs, and specialist NGOs like Cohre, Fian, Social Watch and CESR.
But for generalist NGOs, it's a challenge to get started with monitoring ESC rights. How do you pick a high-leverage project, a feasible niche? What strategies: awareness raising, litigation, advisory services to policy makers?
ADH in Geneva is running a course about monitoring ESC rights; it runs May 10-14, at the same time as the ESC Committee session, and there will be high-level interaction with key Committee members. So it's a good place to start. Highly recommended.
Thanks for sharing your thoughts on innovations related to economic, social and cultural (ESC) rights.
Jana, your comment regarding "the lack of good baseline data for measuring improvements in terms of poverty levels, access to food, water, shelter, education, etc" really struck me.
New Tactics has a couple of excellent tactical notebooks that highlight NGO advocacy efforts working in this area.
Such baseline information would be incredibly important for tracking improvement data, as well as reversals, that are emerging. Improvement data would be very motivating for communities themselves - it is difficult to recognize incremental gains given the on-going state of their condition. What a great tool such data would be for NGOs to show governments and corporations that what NGOs are advocating for is having impact.
One such example from IDASA in South Africa is Using Government Budgets as a Monitoring Tool, one form of data collection that has been used for tracking incremental gains in poverty reduction. IDASA has used this budget information to hold government accountable to its commitments and to advance its advocacy for system changes, levels of investment, and other ongoing improvements.
Another example comes from FIAN's efforts to connect transnational corporations supported by German bank investments to hold corporations accountable for their abuses - see "Leveraging the Money: Enforcing human rights by influencing financial institutions".
Jana, your post about ESC rights and transnational corporations made me think of this case study by Tactical Tech, on how farm subsidies are spent.
It shows how to present complex data in ways that really talk to people.
Have a look at the video; it should inspire some really nice ideas. It's part of a series of videos on information activism, really inspirational.
Thanks for posting these resources. The 1997 article in Human Rights Dialogue by Audrey Chapman outlines a 3-part framework for monitoring E/S/C rights violations, arguing most vigorously for monitoring violations, meaning acts of commission, rather than assessing progress on attempts toward realizing greater fulfillment of the rights outlined in the International Covenant on Economic, Social and Cultural Rights (ICESCR).
To quote, she proposes a tripartite categorization of violations. (See Box, page 19.) The first category includes state violations resulting from government actions, policies, and legislation. The second contains violations related to patterns of discrimination. The third includes violations related to the state’s failure to fulfill minimum core obligations of enumerated rights. This article was written in 1997. I see the tools developed by HURIDOCS address this, but I'm wondering if there are additional resources that people could point to. In particular, I am looking for resources about arbitrary taxation as a human rights violation - both tools for monitoring and for data analysis.
The Project on Economic, Social and Cultural Rights at the Geneva Academy of International Humanitarian Law and Human Rights is pleased to announce that it is now accepting applications for its third annual training course on Monitoring Economic, Social and Cultural Rights. The course will be held in Geneva, Switzerland from May 10 – 14, 2010.
The course will provide participants with the know-how to get started in monitoring economic, social and cultural rights (ESC rights), as well as the in-depth knowledge to enhance the ESC rights monitoring work they may already be doing. The course will take place in parallel with the May session of the Committee on Economic, Social, and Cultural Rights in order to allow course participants to observe the Committee in session, to the extent time permits.
The course is designed for human rights professionals from small and medium-sized organisations (particularly NGOs) who monitor ESC rights or who wish to begin doing so.
For more information on the course, please see the attached flyer and application form or visit http://www.adh-geneva.ch/teaching_for_professionals/teaching.php?id_formation=1
Project on Economic, Social and Cultural Rights
Geneva Academy of International Humanitarian Law and Human Rights
Jana's comment is very important. I think that we in the human rights community very much need to open ourselves up to a serious discussion of what it means to document violations.
Jana's point about the tension between "qualitative" and "quantitative" is essential. Discussions of human rights documentation often treat the quantitative as a stand-in for "serious" and the qualitative as a stand-in for "emotion".
Actually, a great deal of quantitative human rights data is not gathered rigorously enough to be either accurate (for example, total numbers of victims) or representative of a population (if 50% of those interviewed were tortured, it obviously doesn't mean that 50% of the population was tortured if those interviewed were not selected in a rigorous and statistically meaningful manner, etc.).
And so much of qualitative data as presented is oriented towards the emotive aspects of human rights advocacy rather than serving as a means of better understanding what is actually going on. This is interesting, because human rights violations are only really sensible through an engagement with social and political context. People are not "tortured" in the abstract; they are tortured by certain actors in certain social environments. While it is true that the act of torture as a crime may not require a great deal of sensitivity towards context, it is equally true that any serious engagement with the meaning of the crime, or the means of responding to that crime, very much requires an understanding of context. In this way, the qualitative nature of data collection is essential for making sense of what is actually going on.
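The earlier point about non-representative interviews can be made concrete with a little arithmetic. The numbers below are invented purely for illustration; they show how a sample recruited in a way that over-reaches victims (say, through a survivor support network) can report a torture rate far above the true population rate:

```python
# Invented figures: a population of 100,000 with a true 2% torture prevalence,
# where victims are 50 times more likely to end up interviewed than non-victims.
population = 100_000
victims = 2_000
p_interview_victim = 0.10        # 10% of victims are reached
p_interview_nonvictim = 0.002    # 0.2% of non-victims are reached

interviewed_victims = victims * p_interview_victim                     # 200
interviewed_others = (population - victims) * p_interview_nonvictim    # 196

sample_rate = interviewed_victims / (interviewed_victims + interviewed_others)
true_rate = victims / population

print(round(sample_rate, 3))  # 0.505 -> about half of interviewees were tortured
print(true_rate)              # 0.02  -> but only 2% of the population was
```

The sample statistic is a perfectly good description of the interviews collected; it only becomes misleading when read as an estimate of the population, which is exactly the descriptive-vs-inferential distinction discussed below.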
To build on this topic further – as all of us participating in this dialogue know well, the discourse on human rights violations is peppered with demands for quantification. In the Benetech Human Rights Data Analysis Group (HRDAG) we engage in statistical analysis of human rights data when we assess that the data are of a high enough quality and quantity that we can conduct the analysis in a scientifically defensible and rigorous way. However, the conditions required to conduct that kind of analysis are hard to meet. We discuss some of the challenges we've identified over the years on our HRDAG 'Core Concepts' page in case it's of interest - I'm sure you'll recognize the challenges we've listed.
Because of this, while it’s generally possible to create ‘descriptive statistics’ that describe the contents of a documentation project database (for example), it’s much more difficult, and sometimes impossible, to infer from the observed violations captured in that dataset to the greater reality of the human rights context in that city, district, or country (as with the example Daniel gave in his earlier post).
Often, when people ask us to produce statistics on a conflict, our response is to ask them to instead focus on the qualitative data, because that’s where the strength of the documentation is. In our view, it is worse to have a bad quantification of a human rights situation than to have no statistics at all. If flawed statistics are put forth and then undermined by a critic, that kind of methodological attack can damage the credibility of the entire advocacy effort on that given issue.
Qualitative information is an incredibly rich source of detailed information about what’s happening to whom, by who, when, where and how, and a report that draws informed conclusions and recommendations about a human rights crisis based on observed phenomena is much stronger than a number which cannot be defended.
I agree with Dan and Vijaya that quantitative analysis is often a problem for human rights NGOs.
To complement Dan's example, we can take the case of an NGO that collected 20 torture cases in January, 40 in February, and 100 in March. It may be tempted to conclude that torture is increasing. This may be correct, but the increase may also be due to a variety of other reasons:
But I don't think, as Vijaya suggests, that NGOs should be discouraged from using numbers:
... our response is to ask them to instead focus on the qualitative data,
because that’s where the strength of the documentation is. In our view,
it is worse to have a bad quantification of a human rights situation
than to have no statistics at all. If flawed statistics are put forth
and then undermined by a critic, that kind of methodological attack can
damage the credibility of the entire advocacy effort on that given
Rather, they should be encouraged to engage, make mistakes, learn from them, and move forwards and upwards. If their credibility is attacked because their numbers and interpretations are wonky, fair enough; this should stimulate them to review their work, improve their methodology, and come back to the table with something better. It's a learning process; nobody gets it right on the first go.
Otherwise we risk creating a situation where quantitative analysis is the preserve of a few scientists, guardians of the Temple of Truth in Numbers, because only their numbers are scientifically valid, and only the scientifically valid is good enough... and the rest of us are to be content with collecting stories, however useful these may be. :-)))
NGOs want to count, and they always will. It's natural. And I would argue they will always be able to say something meaningful with their data, provided they have a little basic expertise to identify sources of bias and place the necessary caveats where they belong. A violation is a violation, 200 violations are 200 violations, and 20 thousand violations are 20 thousand violations.
An organisation in Africa has collected 20 thousand cases of torture, often with detailed medical affidavits. Now that's something. Maybe they cannot give a precise count of the whole phenomenon, or state the percentage of persons arrested who were tortured (since those who weren't go unreported), but they can certainly use their data to say that torture is a problem in this country, and that it's systematic, widespread, and persistent over time. That's already a strong and important kind of statement you cannot make with stories alone. It shows a policy of repression. And you need numbers for that: numbers connected to places and perpetrators and context. They also collect details on which methods are used, so they can certainly say something about the prevalence of certain methods in certain areas, or by certain perpetrators. They also collect biographical information about victims, so we can assume they can say something reasonably accurate about the profiles of the victims.
But even if an NGO has only 200 cases, and the government has only 11 because it is in denial of the problem, these may still be used quantitatively. I am referring to CATW in the Philippines, which successfully got an anti-trafficking law passed on the basis of the advocacy mileage from these 200 cases. So maybe the sample was on the small side, but at the end of the day they got their law; they won their battle, so they were right to use these numbers. It's maybe not scientific, but it worked.
At HURIDOCS, our approach is to provide both quantitative and qualitative possibilities with the tools we build, so people have options. For example, OpenEvSys can be used both to collect and organize stories, and to provide "who did what to whom" quantitative analysis of the violations in these stories: how many acts of torture by military in X province, what is the gender breakdown of the victims, etc.
We are currently working on the analytics of this tool, so if you want to take a look at the advanced search area:
Online demo: http://resperesaas.com/openevsys/
The user name is "admin" and the password is "password".
We are also working with Oleg Burlaca on a prototype for a similar "who did what to whom" system; have a look here: http://openevsys.burlaca.com/
And this is just the first step; we'd like to go as far as we can: custom reports, built-in production of graphs and charts, GIS and maps. NGOs want this, so we need to give it to them. And if they accept, we'll also provide them with advice on how to make meaningful statements from their data, and how to avoid the bias traps.
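The kind of "who did what to whom" queries described above can be sketched in a few lines of code. This is only an illustration with invented records and invented field names, not OpenEvSys's actual data model:

```python
from collections import Counter

# Hypothetical violation records in the "who did what to whom" shape:
# each entry links an act, a perpetrator type, a place, and a victim.
violations = [
    {"act": "torture", "perpetrator": "military", "province": "X", "victim_gender": "male"},
    {"act": "torture", "perpetrator": "military", "province": "X", "victim_gender": "female"},
    {"act": "torture", "perpetrator": "police", "province": "Y", "victim_gender": "male"},
    {"act": "arbitrary detention", "perpetrator": "military", "province": "X", "victim_gender": "male"},
]

# How many acts of torture by the military in province X?
torture_by_military_in_x = sum(
    1 for v in violations
    if v["act"] == "torture"
    and v["perpetrator"] == "military"
    and v["province"] == "X"
)

# Gender breakdown of victims across all recorded violations.
gender_breakdown = Counter(v["victim_gender"] for v in violations)

print(torture_by_military_in_x)   # 2
print(dict(gender_breakdown))     # {'male': 3, 'female': 1}
```

The point is simply that once stories are recorded in a structured way, these aggregate questions become trivial filters and counts over the same records.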
The scientists themselves often work with the same imperfect data, often collected by these NGOs at the risk of their lives in difficult circumstances. There is no such thing as a pure, absolute, scientific truth; there is no black or white, it's always shades of gray. The key difference is that the scientists know what can be said on the basis of this data, and what cannot be said.
I would say that no dataset is perfect; there will always be some trace of bias, especially in repressive regimes where the strategy of perpetrators is often to conceal violations, which only come out once it's too late. So scientific rigor is elusive. And NGOs don't have the luxury to wait; they have to make their case now, with the data they have, whether good or bad or somewhere in between. It's an information war, so let's help them win as many rounds as they can. Good enough is good enough; as long as you get from A to B, it doesn't have to be perfect.
On the contrary, we need to bridge the gaps between scientists and advocates; this is the real challenge. Organisations like StatAid and HRDAG have an important role to play in building the capacity of NGOs, because of their acute understanding of social research methods as they can be applied to the human rights context.
It's not easy to bridge the gaps. Activists are often distrustful of science and scientists, or they simply are not willing to acknowledge possible sources of bias, and react defensively to any suggestions. So what can we do? Any ideas would be appreciated.
It's important to bridge this gap because, as our world becomes more sophisticated, human rights actors need to be able to deploy ever more complex arguments to make their case. Not only to provide evidence of violations, but to understand causes and consequences, and to offer well informed and well articulated policy solutions. We need a multidisciplinary approach to documenting human rights, and many areas of expertise have a role to play: lawyers, communicators, statisticians, information scientists, IT systems developers, social researchers, economists, archivists, historians, public policy specialists...
It's a pity, because help is often just around the corner, at the local university. Online, HRDAG's core concepts offer a concise list of very valuable pointers. The Spirer manual on human rights statistics is also an excellent reference, but nobody knows about it. Herbert Spirer is a statistician who worked in economics before devoting his life to explaining how statistical methods can be applied to human rights.
As Bert mentions, HURIDOCS is currently putting together a digital library of the best resources, due in the coming two months. The idea is to work collectively on finding the best items to include in each section, because each of us has a few pieces of the puzzle.
Quantitative analysis extends beyond counting violations, too, as it also involves indicator-based approaches. Have a look at this Rosling video and imagine how tools like Gapminder World could be used for human rights, if we can link the right datasets to the human rights indicators.
Again, I'm not saying that numbers are better than stories, only that some carefully chosen and interpreted numbers will usually have their place. And that NGOs should not necessarily be discouraged from using them on the basis of their presumed lack of expertise, rather that they should be helped to use them cleverly, avoiding statements that are not backed up by their data.
Sorry for the long spiel, but it's a topic I feel strongly about!!!
I am glad that Daniel D'Esposito referred to the Spirer manual; it remains an essential resource for NGOs to apply simple statistical methods and techniques in the various stages of information processing: collection, documentation, analysis, reporting...
One important lesson from that book is that your data may be good or bad, but will never be perfect. Therefore, it is essential to explain your working methods and qualify your findings. When it comes to figures, it is also better to give qualified estimates rather than hard, round figures.
This is one important way in which the gap between NGO reports and "real science" can be bridged.
I completely agree with Daniel that one of the most important roles experts can play is to help NGOs (and other non-technical experts) "...use [numbers] cleverly, avoiding statements that are not backed up by their data." There is certainly nothing wrong with reporting the numbers of acts, victims, etc. that have been collected. We get into trouble when we attempt to infer from what has been observed to a larger, unobserved population. If the data has not been collected in a rigorous, representative way, then we cannot possibly draw these types of conclusions. As Jaya points out above, in these cases summary statistics about the observed data are the only responsible numbers to report.
Inferences about a larger population may not be necessary to win a battle. As Daniel points out, certain acts may be considered so egregious that any documented evidence of their occurrence is sufficient to enact policy change or affect public opinion. We do not always need to reach beyond what has been observed and attempt to make statements about broader patterns.
Certainly science only advances through trial and error, and of course researchers should constantly work to improve their methodology. However, at every phase of this process we must be modest and transparent in our conclusions - this is what I mean by sticking to the data. We can only report what the data tells us, and if the data was collected in an unrepresentative way, we only know about the data we have collected and observed. It does not tell us anything about larger patterns beyond what we observed.
Unfortunately, I have to disagree that the organization doing the reporting is the only one to suffer when these kinds of mistakes are made. As the recent controversy over mortality rates in the Congo shows us, when researchers publicly reach beyond what can be reasonably concluded from their data, everyone's credibility is undermined.
Hi Megan, could you tell us more about the Congo controversy? It's very interesting. Why was the issue controversial? What were the stakes?
I think it's important to link the credibility of NGO data, and the statements they derive from it, to the attitude of the authorities in charge of ensuring respect for human rights, usually the government. Are they acting in good or bad faith? Sometimes they may attack a report in good faith, because they have legitimate questions. More frequently, they attack the report in bad faith, and it doesn't matter if it's solid or not; they'll find some reason to undermine or bury it. Or they use other mitigating tactics, such as setting up an "Investigation Commission" that goes nowhere. But even that is better than ignoring the debate and attacking the NGOs in other ways, such as trumped-up legal charges, raids, or physical attacks.
Sure. Researchers at Simon Fraser University recently published a Human Security Report titled "The Shrinking Costs of War." In Chapter 3 they criticize some of the methodology employed by the International Rescue Committee in some of their surveys in the Democratic Republic of Congo. This story was picked up by numerous news outlets.
Now, it's important to note here that as a statistician I can point to strengths and weaknesses on both sides of this argument, and to reiterate that scientific debate is absolutely necessary for scientific progress. Much of the negative fall-out of this public debate is due to our inability as scientists to conduct the debate publicly in a way that is clear to a general audience.
However, those issues aside, my point here is that as a result of the IRC reaching beyond what could be reasonably concluded from their data collection methods, they left themselves open to scientific criticism. Now, I fear, in the Congo as in Darfur, all numbers are considered suspect. Whenever any group produces statistics to bolster their claims of the need for policy change or humanitarian efforts in these areas, those numbers are viewed with much more skepticism than statistics from geographic areas that have not experienced this sort of public debate over methodology. I believe this level of skepticism of numbers harms all groups, not just those actively involved in the current debate.
A very interesting example, thanks for sharing this. You can read a bit more about it on the HRDAG website. If you have similar experiences, please share them before the dialogue closes; it's very interesting!
Just a quick response to Daniel's question:
It's not easy to bridge the gaps. Activists are often distrustful of science and scientists, or they simply are not willing to acknowledge possible sources of bias, and react defensively to any suggestions. So what can we do? Any ideas would be appreciated.
Two issues come to mind, both of which are rather sensitive: financial resources and how experts engage as experts. The local Burmese groups have appreciated Benetech's engagement with their project because it has been sustained and there was an institutional commitment to see the project through. Scientists, especially if they are living in the global north where cost of living is high, are expensive. One of the most effective ways for scientists to engage is to not only cover their own costs but to use their access to donors to acquire funds for their local partner(s).
The other issue is the arrogance or perceived arrogance with which scientists engage local groups. (I've actually never witnessed such a thing; lawyers on the other hand ...) It's a matter of trust building and respect and that comes most easily through long-term engagement as well.
Vijaya raises an important point regarding summary statistics of data collected via non-random means; inference is extremely limited. I believe, however, that this is an issue of capacity building -- NGOs need to understand the limitations of quantitative analyses in that context. Similarly, newspaper reporters need to understand the limitations of quantitative analyses in that context! I am guessing that NGOs (and newspapers) will continue to attempt to quantify data whether we intervene or not -- better that they understand the risks and the issues.
There is also the issue of the purpose of the information. If the information is to come under intense scrutiny -- for example, in the case of legal proceedings and/or in the case of the results of a TRC (truth and reconciliation commission) -- then the analysis of the available information -- whether qualitative or quantitative -- needs to be as airtight as possible. In that situation, a quantitative analysis of data that are not derived from a random sample is very risky.
If the data are to be used in a context where there is less scrutiny -- perhaps in the case of humanitarian assistance or reparations -- there is less risk to the human rights cause in quantifying the data. However, if the data are not truly representative of the population, there is a risk of not providing assistance and/or support to members of that population that truly need and deserve it.
Thanks Jana, Vijaya, and Megan,
"I like to think of them as "the forest" and "the trees." The quantitative information is akin to the forest, or an aerial snapshot of the big picture. The qualitative information is akin to the trees, or a picture of an individual tree. From the picture of the individual tree, you can see all of the details of the tree -- the pattern of the bark, the knotholes -- just like from qualitative information, you can learn the details of an individual's human rights violations experience. From the picture of the forest, you can't see details, but you get a different, equally important overview of the whole group of trees -- just like from quantitative information, you get a "snapshot" of what happened and the prevalence/intensity of the violations. I believe that the best overall picture of mass human rights violations is given by combining the forest and the trees--in other words, documenting what happened via both quantitative and qualitative information. "
Wow, excellent metaphor, Jana.
If I understand your points correctly, by inference you mean that you cannot make a presumption about the whole, on the basis of the part that you can observe, unless the part is representative of the whole.
And that the part can only be representative of the whole, if it is chosen with completely random methods, to protect against any selection bias. For example, every fifth household in every fifth street.
Well, that may be possible in polling situations, where you have a stable environment, a nice detailed map of the city, and the financial means and human resources to do the work thoroughly, by the book.
But what to do in the kind of human rights situations NGOs face in Zimbabwe, or Russia for example, or Burma, where cross-border researchers don't even have good real-time access to the population at risk? In these situations, people work with what they can get.
I guess your answer would be to focus on collecting detailed testimonies, and forget the numbers.
But can there be no way to make use of the numbers at all? I mean, if you've collected 20 thousand torture testimonies, you don't have to infer the probable total number, which would only stir up polemics. The number of 20 thousand stands by itself. It's 20 thousand too many. It's enough to show that you're not talking about a few rogue policemen, that there is a policy behind it. Or is this mistaken?
What if you have many more testimonies than that, but your monitors were not so well trained or are not very articulate writers, so even if they risked their lives to collect these stories, most don't have many circumstantial details beyond the basic facts and characteristics of the victim? So what to do with this information? Throw it out?
Is inference possible by using some random sample surveys to get an idea of the whole? For example, asking a refugee population, during household food security surveys, if they have any persons in their family who were killed or tortured in relation to the conflict.
A personal situation I was faced with was receiving a regular trickle of families coming into the office to report enforced disappearances. We were not doing any outreach to influence the number of incoming visitors. The government may have been declaring the war over, but the trickle was still taking place. We didn't try to figure out the total number of missing; it was enough for us to conclude, for our advocacy interventions, that the pattern of disappearances was still ongoing. Was this wrong?
I also think that numbers and stories work together. They protect each other. For example, if you come forward with stories, it's easy for the perpetrators to dismiss the violations as the actions of a few rogue soldiers, or as deliberate exaggerations of politically motivated pseudo-victims told to gullible human rights workers. But you can use numbers to show that the violations are widespread enough to exclude the rogue soldier theory, or the exaggeration theory. So the numbers protect the story, by giving an idea of the scale, if not a precise count of the total phenomenon (which is not needed anyway).
Have a look at section 3 of this ICRC report, which describes torture of Iraqis by Coalition forces in 2003. It makes very careful use of stats, so as not to give their interlocutors any rope to hang them with. But it does use some numbers, very subtly. It speaks of a memorandum containing 200 allegations of ill-treatment. It speaks of another letter with 50 allegations. It speaks of a wide array of places. This cuts off any "rogue soldier" argumentation and shows that it is indeed a systemic problem; it's a rock-solid report, even though the "rogue soldier" tactic was eventually the public defense used by the Bush administration. So the numbers protected the stories!
Overall, what level of proof is to be expected from human rights NGOs for their day to day advocacy?
Thanks Daniel for continuing to work to clarify this important idea.
Yes, this is a correct interpretation of inference. However I want to add that random selection methods are but one piece of collecting representative data. In addition to being randomly selected, a large enough sample must also be selected to claim representativeness. “Large enough” is highly dependent on individual context.
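For readers wondering how sample size relates to "large enough," here is a rough back-of-the-envelope sketch: under a simple random sample, the 95% margin of error for an estimated proportion shrinks only with the square root of the sample size. Note that this formula applies only to genuinely random samples, which is exactly the point being made here:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5): the margin shrinks only with sqrt(n), so
# quadrupling the sample size merely halves the uncertainty.
for n in (100, 400, 1600):
    print(n, margin_of_error(0.5, n))   # roughly 0.098, 0.049, 0.0245
```

The formula says nothing about data collected by convenience; applying it to a non-random sample would give a misleadingly precise-looking number, which is the trap to avoid.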
I would never suggest forgetting the numbers. I completely agree that collecting a random sample is frequently impossible in situations where we need to gather data about human rights violations. As I hoped to make clear above, what I emphasize to our partners is to use those numbers modestly, to make sure to avoid claims that cannot be backed up by those numbers. That certainly is not the same as not using numbers at all.
No, you’re right, some violations are so egregious any documented evidence of their occurrence is enough – there is no need to generalize beyond what has been collected and observed. In fact, all the more reason to avoid generalizing, when that may open you up to criticism and undermine your cause.
No, this was right precisely because you “didn't try to figure out the total number of missing.” Data that is collected in this way, called convenience data, can be extremely useful. And in the case described above it can certainly serve your advocacy intentions. The appearance of any families in your office was enough to contradict the government’s claims. You did not need to generalize to the total number of missing to make your case. This is sticking close to the data – describing exactly what the numbers say, and nothing more.
Agreed. As Jana so eloquently says above, quantitative and qualitative information need to complement each other to tell complete stories of human rights violations.
Thanks for this, everything is much clearer now! I think I learned a lot these last days! Daniel D'Esposito, HURIDOCS
Frankly I do not see the tension nowadays.
One can record open ended interviews (qualitative) but then use natural language recognition and machine learning to extract quantifiable data from these interviews / pictures / etc.
The quantitative data may not be accurate at the individual level, but over large numbers of individuals one gets a decent picture for various research purposes.
I think a more important distinction is between machine readable and non machine readable data.
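As a toy illustration of this idea, here is a crude sketch of turning free-text testimony into countable categories. A real system would use trained natural language models rather than keyword matching, and the transcripts and keyword list here are entirely invented:

```python
# Crude stand-in for NLP extraction: map invented keywords to
# violation categories and count matches across transcripts.
transcripts = [
    "They took my brother at night and he was beaten at the station.",
    "Soldiers burned the houses and people were detained for weeks.",
    "She was detained, then released without charge.",
]

keywords = {
    "beaten": "ill-treatment",
    "detained": "detention",
    "burned": "property destruction",
}

counts = {}
for text in transcripts:
    lower = text.lower()
    for word, category in keywords.items():
        if word in lower:
            counts[category] = counts.get(category, 0) + 1

print(counts)  # {'ill-treatment': 1, 'detention': 2, 'property destruction': 1}
```

This illustrates Fernando's point: the per-interview coding may be noisy, but aggregated over many machine-readable interviews it can still yield a usable picture.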
The decisions that have to be made about the collection of data are based not only on how the information will be extracted but also on the realities in the field, where the interviewer meets the person who has witnessed or suffered violence. The examples I give above about the Burma database show this: the information that was collected using the "ask about everything" approach provided a broad picture, but for people who are looking at specific issues, the depth of the information is lacking. So the question is not only about extracting the data but about collecting it. How much time does the interviewer have? How long can she stay and talk with the person? How many people need to provide information for some meaningful analysis to take place?
In addition, sometimes the person's individual truth is exactly what's needed. I think dismissing these tensions as irrelevant is only reasonable if one has a very narrow purpose in mind. But as the rest of this dialogue is demonstrating, there are a variety of purposes, and thinking through these questions is essential given the reality on the ground for fieldworkers.
That said, Fernando, I would be interested to read more about the distinction between machine readable and non machine readable data.
I agree with all you have to say. Indeed I see no tension between what I suggested and the more nuanced data collection you have in mind.
My point is that you can collect any data you want - open-ended interviews, pictures, geolocations, video, voice, etc. - so long as it is in some electronic format.
From there you can feed that data to the computer to sort, interpret, find patterns, etc.. It is amazing what computers can do nowadays.
I am not saying this is all one should do, or that computers can solve everything, but rather that useful data nowadays goes well beyond a relational table or a structured interview, and that human coding of open-ended interviews into "data" is almost unnecessary.
So I think the key is to have the data - however qualitative it may appear - in electronic format.
I think this is an excellent idea regarding using speech recognition to create machine readable text from audio and video recordings. This is particularly useful in light of recent technology improvements in this area, such as the audio transcription that YouTube and Google Voice now provide for free (http://googlesystem.blogspot.com/2009/11/youtube-audio-transcription.html), not to mention the number of higher end commercial engines offered by Dragon, IBM and others.
In general, I think this just points to a trend towards the ability to link and analyze deeply within multimedia content which up to this point has been treated more as a block with tags.
One question that WITNESS (www.witness.org) is wrestling with in terms of documentation is how to understand burgeoning amounts of citizen documentation that are not gathered through formal human rights documentation processes. Does text/images/testimony need to have been conceptualized as 'human rights documentation' in order for it to be useful to human rights work? This relates very closely to the real-time dilemmas that I also see emerging in other threads.
This question arises particularly in light of the increasing amount of visual documentation (primarily of civil and political rights violations, e.g. Iran, Honduras, etc., but also including testimony-based documentation around ESC rights) that is being circulated online because of increasingly ubiquitous cameras on cell phones and the like. This kind of visual documentation was already fairly poorly assimilated into human rights documentation strategies (if better incorporated into mobilization and advocacy strategies), because of the slippery nature of its documentation value (how do we understand point of view, what's in and out of frame, etc.), and the speed, pace and volume of creation of a/v material is outpacing much of the human rights community's capacity to relate to it.
If we assume that in five, ten years time, perhaps 90% of visual documentation of human rights violations will have been gathered outside of the formal human rights infrastructure, how do we make sure that this citizen documentation is of value to more formal documentation/advocacy processes (which will continue to be important for securing protection and redress) both in real-time and after the fact?
I wonder what the boundary is between formal and informal human rights documentation, an issue that Sam raises here. It seems to me to be a matter of being systematic or not, and it applies not only to video material but to other ways of hearing people's stories as well.
For example, if I sit down and listen to the life story of someone who has suffered a series of human rights violations, and then I write that story up and post it on my blog, I am applying some of what might be called "formal" human rights documentation techniques, but not submitting it to any systematic or larger effort. I am also not making an effort to use that person's story to advocate for anything in particular, but just to raise awareness among people who might read my blog.
On the other hand, contributing that story to an effort to collect lots of those stories, analyze them, and use that analysis to bring about some kind of positive change is a systematic way of handling it. Can video documentation be treated the same way? How to make it of value to human rights advocacy?
I think one possibility is to develop systematic ways of storing, coding, and analyzing the information contained in the videos. (Sam, has Witness developed some methodologies for something like this? I see that there is a sophisticated search tool on the media search portion of the website. Can you talk about the process of applying keywords, etc.?)
The second possibility is to take it for what it is, value it for its immediacy. The videos of monks marching in Burma in 2007 and the subsequent crackdown brought unprecedented attention to the human rights situation in Burma. It also created a political environment internationally where promoting a human rights agenda (e.g. pushing for an arms embargo, calling for a commission of inquiry), got a wider hearing, though it was still difficult (and ultimately was not successful).
Perhaps the real-time evidence does not need to be incorporated into a formal effort, but can be seen as an essential tool that can be used alongside human rights documentation.
Formal versus informal: an interesting and useful distinction! Maybe formality can be applied not only to how the data is processed, but also to how it is disseminated: a blog would be informal, a communication to a special rapporteur would be more formal.
About videos, the key concept is metadata. As you can't "see into" a video without watching it, you need to apply metadata to retrieve it reliably. This includes an abstract, author, date, location, corporate author, format, length, etc., and index terms.
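To make this concrete, a sketch of such a metadata record, using the fields listed above with invented values, might look like this:

```python
# Hypothetical metadata record for one video document; every value
# here is invented for illustration.
video_record = {
    "title": "Testimony of displaced villagers, province X",
    "abstract": "Interview with three families describing forced eviction.",
    "author": "Field monitor A",
    "corporate_author": "Hypothetical Rights NGO",
    "date": "2009-11-14",
    "location": "Province X",
    "format": "MiniDV, digitized to MPEG-4",
    "length_minutes": 23,
    "index_terms": ["forced eviction", "displacement", "testimony"],
}

# With structured metadata, retrieval becomes a simple filter on index
# terms instead of watching every tape:
def matches(record, term):
    return term in record["index_terms"]

print(matches(video_record, "forced eviction"))  # True
```

The index terms are where a controlled vocabulary such as a thesaurus comes in: if everyone tags with the same terms, everyone can find each other's material.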
WITNESS has developed a really useful thesaurus of index terms; Sam, would it be possible to share it? HURIDOCS has a general list of index terms, also a useful starting point, but it needs a bit of refreshing.
Dan - this is Grace Lile, from the WITNESS archive. I am happy to send our subject index to anyone who contacts me at email@example.com; at some point we will make it available online but it is likewise in need of some updating.
There are myriad challenges in managing video as documentation, and yes, metadata is key. It is a simple fact that images, moving or still, never stand on their own as documentation - they must be accompanied by words - description, metadata - to describe, authenticate, evaluate, contextualize, reveal meaning. Perhaps this seems obvious, but the ease with which images are disseminated, duplicated, edited, recombined and recontextualized, especially in the digital world, and the current lack of technical tools for ensuring persistence of metadata across these processes, creates endless opportunities for misuse, misappropriation, error. The ability to identify, authenticate, retain and track content is imperative, not only in an evidentiary context, but to tell any story truthfully.
At WITNESS we have developed our own database to manage our video collection, which has been generated by a large number of remotely-based HRNGOs. A record for any given video document will ideally contain the following:
I would also note that in some spheres, including within the archival community, there has historically been a bias against visual information, regarding it as being of primarily illustrative (which it certainly can be as well) rather than documentary or evidentiary value. So standards for documenting with moving images have sometimes been poor, and there is a dearth of tools and best practices pertaining to audiovisual documentation. I'd love to hear from others who are grappling with this.
I admit that there is a historical bias against visual information within the archival community (Joan Schwartz referred to this as "archival othering": it isn't what we are used to so we treat it differently) as documentary record; however I must stress the historical part. I would be surprised to still discover this bias in any of my colleagues.
Like all records, good context is essential, but archivists have little to no capacity for adding item-level metadata and instead resort to describing aggregate bodies of records. Good documentation and metadata provided by the creators is like gold to us and to future researchers. Grace gave a great list of metadata fields, and the need for better standards and tools is certainly there. I must echo her statement: "The ability to identify, authenticate, retain and track content is imperative, not only in an evidentiary context, but to tell any story truthfully." Trustworthiness of the record is even more difficult to retain in a digital information world. We must keep that in mind and act accordingly.
I agree that metadata is key, not just for video, but for all born-digital as well as digitized records. HURIDOCS provides a set of terms, or a lexicon, which can be used to populate metadata fields, but as far as I know there are no standardized metadata schemas for human rights records. Such a metadata schema must meet the requirements of persistent, authentic records, as well as the requirements of the human rights practitioners who make and deploy these records. Such a descriptive standard is essential if the records we generate are to remain viable and useful, whether they be documents containing first-hand testimony, text messages from the field, complex data sets, or video or audio recordings.
Patrick and others,
I have found myself getting lost in the terms. There is a great deal of wonderful technical information being shared – perhaps in the New Tactics summary of this dialogue we could include a summary of terms to help people better grasp the powerful and insightful comments.
I found the information on Wikipedia on “metadata” helpful. Although even Wikipedia said that the article was in need of attention from an expert on the subject. Perhaps someone in this dialogue would have an interest in helping to provide a better understanding. Here is the link: http://en.wikipedia.org/wiki/Metadata
The definition provided on “metadata” is this:
Metadata (meta data, meta-data, or sometimes metainformation) is “data about data.” Metadata is an emerging practice with close ties to librarianship, information science, information technology and GIS. It can be applied to a vast array of objects including both physical and electronic items such as raw data, books, CDs, DVDs, images, maps, database tables, and web pages. Since the emergence of the Dublin Core metadata set and the internet, use of metadata has experienced a considerable growth in popularity as businesses and other organisations seek to organise rapidly growing volumes of data and information.
Yes Nancy, metadata is data about data. It helps you find things. For example, if you're looking for a book, you'll use keywords to narrow down your search, or if you click on a term in a tag cloud, you'll get the blog posts that have been given this term.
Your Wikipedia article quotes Dublin Core, which is a standard metadata element set, and it's good to work with existing standards.
Here is a very good example of metadata for a book: Who did what to whom, no less! http://shr.aaas.org/Ball/copyrite.html
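To make the "finding things" point concrete, here is a minimal sketch (hypothetical records, with field names borrowed from the Dublin Core element set) of searching by subject metadata:

```python
# Hypothetical book records described with Dublin Core-style fields.
records = [
    {"title": "Who Did What to Whom?", "creator": "Patrick Ball",
     "date": "1996", "subject": ["human rights", "information management"]},
    {"title": "HURIDOCS Standard Formats", "creator": "HURIDOCS",
     "date": "1993", "subject": ["human rights", "documentation"]},
]

def find_by_subject(records, term):
    """Return the records whose 'subject' metadata contains the given term."""
    return [r for r in records if term in r["subject"]]

matches = find_by_subject(records, "documentation")
print([r["title"] for r in matches])  # ['HURIDOCS Standard Formats']
```

The same idea underlies tag clouds and library keyword search: the search never touches the content itself, only the structured description of it.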
Thank you so much for sharing this great resource by Patrick Ball. The table of contents provides an easy way to see the overview of the "Who did what to whom" documentation planning and documentation process. But it also provides some great insights into what challenges organizations might face from WITHIN their own organization for setting up the kinds of documentation systems that you all have been talking about in this wonderful exchange of experiences.
I appreciate that you shared more information about the Dublin Core and connecting people to that resource as well.
Nancy and All - Although it is true that metadata is "data about data", perhaps a more useful definition would be:
"metadata is a set of structured data or content types (referred to as a metadata schema) that characterizes an information object (data)"
There can be different metadata schemas for different uses. For example, there are schemas related to descriptive data, preservation data, rights data, etc.
The best example of a metadata schema is the MARC record that we encounter in library catalogs across the USA. It is a set of standard fields (author, title, pub. date, etc.) that all libraries adhere to. Coupled with controlled vocabularies, MARC allows all libraries to describe similar objects (books) in similar ways, and, just as importantly, users do not need to re-learn again and again how to discover material. We can log into almost any online catalog across the country and easily find our way around because of the standardized metadata schema.
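As a rough illustration of the "standard fields plus controlled vocabulary" idea (the schema and the tiny vocabulary here are invented for the example, not any real standard):

```python
# Invented example: a minimal record schema plus a controlled vocabulary.
SCHEMA = {"author", "title", "pub_date", "subject"}
CONTROLLED_SUBJECTS = {"torture", "disappearance", "arbitrary detention"}

def validate(record):
    """Flag fields outside the schema and subject terms outside the vocabulary."""
    errors = []
    for field in record:
        if field not in SCHEMA:
            errors.append("unknown field: " + field)
    for term in record.get("subject", []):
        if term not in CONTROLLED_SUBJECTS:
            errors.append("uncontrolled subject term: " + term)
    return errors

record = {"author": "A. Archivist", "title": "Annual Report",
          "pub_date": "2009", "subject": ["torture", "rendition"]}
print(validate(record))  # ['uncontrolled subject term: rendition']
```

Records that pass this kind of check can be described, exchanged, and searched in the same way everywhere, which is exactly what MARC does for library catalogs.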
Imagine how useful a metadata schema approach could be for the human rights community:
1) We develop and promote standard (but dynamic) metadata schemas for describing different types of human rights information (audio-video, databases, publications, etc.);
2) We couple these with the controlled vocabularies managed by WITNESS, HURIDOCS, and other organizations;
3) We build these concepts, both schemas and lexicons, into data management tools such as Martus and the system developed by RED in Mexico;
4) Once data is standardized we could then work towards developing a federated, secure, access-controlled union catalog or index that will allow human rights practitioners to share information in an efficient and user-friendly fashion.
Thank you for sharing this very helpful explanation of metadata schemas. You also offer an excellent outline of the steps needed to move human rights data collection and usage toward the future we envision!
Thanks for the comment, Patrick!
Yes, HURIDOCS has published a series of human rights standards for bibliographic information, available below, but that was in 1985 and the world has moved on since then. The internet didn't even exist at the time! So, if you have some serious energy to devote to this, and think we should get a task force going on this, let's talk! http://books.google.com/books?id=37h_3KBm9uoC
HURIDOCS also put together 48 controlled vocabularies, the most important of which is the first one, a set of general index terms. But these are currently static, locked into a PDF that hasn't changed since 2001. And again, the world changes and new terms need to be added. For example, the WITNESS vocabulary includes the term "rendition", which has come into use fairly recently. This vocabulary needs to be alive. http://books.google.com/books?id=eUxo7yIUHRAC
So we're working on an online thesauri builder. People will be able to download the latest vocabularies in whatever format they need: PDF, SKOS, spreadsheet, etc. But more importantly, they will be able to sign up and add terms, or even whole vocabularies, so we can keep this up to date collaboratively. We could add the UNHCR refugee thesaurus, and others. If you are working on a thesaurus, you can add it and use the system to get comments.
Personally, I think that these general index terms will have enormous value in today's internet age, for automatically feeding information from various websites into a portal. But the problem is that many don't want to use certain standard terms, preferring their own terms instead. For example, some don't like Latin terms like "infanticide" or "ne bis in idem", preferring English equivalents. And that's fine. But it makes systematic information exchange difficult.
So I see the potential for online systems to connect these different equivalent terms and translations for the same thing, and map them to an international standard term, so that all these documents can be accessible through search engines and portals. Then if you search for "niños", you also get "children", and vice versa. We can find each other's documents even if we use different terms and speak different languages.
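A minimal sketch of that term-mapping idea, in the spirit of SKOS preferred/alternative labels (the thesaurus entries here are illustrative):

```python
# Illustrative thesaurus: variant terms and translations mapped to one
# standard (preferred) term, in the spirit of SKOS prefLabel/altLabel.
THESAURUS = {
    "children": {"ninos", "niños", "enfants", "kinder"},
}

def canonical(term):
    """Map any known variant to its preferred term; unknown terms pass through."""
    term = term.lower()
    for preferred, variants in THESAURUS.items():
        if term == preferred or term in variants:
            return preferred
    return term

def search(documents, query):
    """Match documents tagged with any variant of the query's preferred term."""
    target = canonical(query)
    return [d for d in documents
            if any(canonical(t) == target for t in d["tags"])]

docs = [{"title": "Informe", "tags": ["niños"]},
        {"title": "Report", "tags": ["children"]}]
print([d["title"] for d in search(docs, "ninos")])  # ['Informe', 'Report']
```

A shared, living thesaurus essentially maintains the THESAURUS table above collaboratively, so that every portal and search engine can do this expansion the same way.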
Patrick, as an archivist, did you ever use Atom?
I'd tend to agree that the boundary between formal and informal documentation is its systematic nature. The question for me is how much we ground this 'systematic' nature at the point of direct gathering/documentation versus at the organizing moment, especially with increasing quantities of non-NGO documentation/quasi-documentation of human rights violations or testimonies. There are already a number of examples of documentation that rely more on organizing data in the moment than on gathering it: for example, media monitoring as an approach to documentation (as Daniel D'Esposito cites), as is also the case in the real-time documentation apps that Nathan is referencing, such as Ushahidi/Swiftriver. I think we are potentially about to be confronted by new organizing challenges of data that we don't yet deal with. At WITNESS, the skillset we think about for human rights advocates now stretches from how to aggregate/foster/curate others' human rights video, through how to use it in evidence, to creating NGO-driven targeted advocacy videos, as well as, increasingly, how to use it in real-time mobilizing contexts.
In terms of how we think about how video contributes to larger, more systematic efforts of analysis and advocacy....
On the one hand, there is certainly already an established usage of videos as emblematic stories that represent complex patterns of abuse that the quantitative data has identified. An example from WITNESS' work is the video Dual Injustice, which uses the emblematic story of one woman's disappearance in Ciudad Juarez, and of her cousin who was arrested and tortured into confessing to her murder, to present a narrative that reflects multiple other cases of feminicide and police torture and misconduct. Another example might be a judicial case submission that uses some sample stories to reflect a broader pattern - which we've done in cases before the Inter-American Commission and African Commission.
The aspect that you are getting at, and that I think is more germane to our discussion, is whether the quantity of video data can contribute to large-scale systematic efforts. My sense is that the raw data value of video is not fully realized: most storing, coding and analyzing of video relies on tagging it with a thesaurus (WITNESS uses a modified version of the HURIDOCS thesaurus). The challenge for video is how to represent its quantitative/data aspects without reducing it back down to numbers/relationships that might as well have originated in another format (e.g. an interview documentation), and losing the qualitative value of video as a witnessing medium.
In this light, the most (moderately) successful experiments in this regard have been mapping exercises such as the Google Earth layers experimentation in Darfur where video/photos are left as video/photos, and via a visual interface the scale/scope of a problem is shown and reflects the widespread nature of a human rights crisis.
It seems a pity if we cannot find a way to incorporate this real-time data and citizen documentation (I'd distinguish them, though they often overlap) effectively into our work as baseline documentation as well as advocacy and mobilizing materials. I think we're making progress with new platforms like Ushahidi, though there is a lot further to go on the visual documentation/advocacy side.
In preparation for this dialogue, I have come across many great resources to help us identify what documentation is, and how it relates to human rights.
In chapter 1.1 Purpose and definition of an information management system, Patrick explains, "As you can see from the preceding examples, the term 'information management system' implies more than a computer database. Rather, it suggests an integrated system through which an organization collects data, organizes it, puts it somewhere, and then analyzes it. A good paper filing system is always an important component of the system, whether or not an organization uses computers. Good information management can be done without computers. We talk of an information management system in order to highlight the whole process by which an organization obtains and analyzes information." The four steps of an information management system are: collecting data, organizing it, storing it, and analyzing it.
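The four steps Patrick names (collect data, organize it, put it somewhere, analyze it) can be sketched as a toy pipeline; all data and function names here are illustrative:

```python
# Toy version of the four steps: collect, organize, store, analyze.
from collections import Counter

def collect():
    """Step 1: gather raw, free-text reports (hard-coded here)."""
    return ["arbitrary detention in town A", "torture in town B",
            "arbitrary detention in town B"]

def organize(reports):
    """Step 2: structure each report into named fields."""
    structured = []
    for r in reports:
        violation, place = r.split(" in ")
        structured.append({"violation": violation, "place": place})
    return structured

def store(records, archive):
    """Step 3: put the records somewhere (an in-memory list stands in
    for a database or a paper filing system)."""
    archive.extend(records)

def analyze(archive):
    """Step 4: look for patterns, e.g. counts per violation type."""
    return Counter(r["violation"] for r in archive)

archive = []
store(organize(collect()), archive)
print(analyze(archive))  # Counter({'arbitrary detention': 2, 'torture': 1})
```

As the quote stresses, the same four steps apply whether the "archive" is a computer database or a good paper filing system.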
In chapter 1.2 Why use a formal system for information management?, Patrick explains,
"The information management system gives the organization a way to accumulate many individuals' systematic efforts. Thus the organization's memory can slowly grow to be greater than any of its members' memories." An information management system, if done correctly, is also important for human rights work in that it helps us to see relationships between events - and trends.
This document points out right away that 'documentation' means different things to different people. Some see 'documentation' as the collection of documents (like a documentation centre). Others think of 'documentation' as the "act of recording the results of an investigation, inquiry, research or similar activity." These two types of 'documentation' can be considered: library-type of documentation (collection of docs) and the documentation of events (recording info on ongoing or recent events).
Documentation is a process consisting of several activities, namely:
Documentation could also mean a specific part of this process. Thus, documentation could refer to the act of recording information, or the act of collecting and organising documents. [taken from 'What is Documentation']
Judith Dueck, the Vice-President of HURIDOCS, gave a presentation on human rights documentation in 2009 at the Soul of the New Machine conference in Berkeley, California. This is a link to a 5 minute introduction to the growth of the human rights documentation field with a few great personal stories.
Kristin, thank you for the 'Who Did What to Whom?' book!
In October 2009 I read the HURIDOCS books before developing a prototype of a new version of the OpenEvSys system. The prototype is online (screencasts, working demo) and described in this paper. At the time, I felt I didn't have enough "expertise glue" to nicely stick together the two pieces: "Documenting Violations" and "Information Systems". I think Patrick Ball's book can help me find this magic glue, and it is a must-read for an IT developer. Once again, thanks for the link!
Under this main theme, please discuss these kinds of questions:
With so much information, and so little resources, where does a human rights organization start when trying to determine what kinds of information they should collect?
I did a little homework and found some good information in the 'What is Documentation' guide. The guide lists four aspects of determining what kind of info to collect:
Can you share any examples of what this process looks like? This seems pretty straight-forward, but I am sure it is not as easy as it seems...
In my view, a useful tool is the chain analogy: input, processing, output and dissemination. And the place to start is at the end of the cycle, with your organisational goals and strategies, your organisation's strengths and weaknesses, and the contextual constraints and opportunities. This gives you the overall picture of what information you need for your strategies, why you need this information, and what you can realistically expect to collect. And you work your way back from there.
Documentation is resource-intensive, and monitoring is usually a long-term activity. So it's important to pack your bag as if you're running a marathon, not a short sprint. Any excess activity will weigh you down over the long term, and you'll inevitably start to shed information collection that is not truly essential to your strategies, work in partnership with others who are already collecting that information, or resort to more efficient tools such as online storage and collaboration. You can talk of information economy, or cost-benefit analysis, to make sure you are investing your resources wisely.
So the starting questions could be:
A couple of other useful diagrams for planning documentation, kindly reproduced by Tom Longley from Tactical Tech:
ICRC's double cycle and Manuel Guzman's documentation cycle.