Thanks for such a nice intro, Kristin.
Users' needs analysis:
Sure, it's definitely the place to start. A properly designed IS has to start with users' needs, but every time I ask "OK, tell me what you need?" I get two kinds of answers.
As we all know, the solution to this problem is quite simple - ask the user to put it on paper. However, even then the result is far from perfect. In most cases we need to "predict" future needs, and as you may guess, that's a kind of "black magic", and it's definitely a problem. IMHO, it's good to have at least one person with a minimum of 10 years of practice in the analysis and design of information systems.
Back in 1992, when the war in Bosnia had just started, we built the first version of our database application to keep track of war crimes committed. After one year we realized a serious problem with our database: we hadn't predicted the occurrence of concentration camps and mass graves!!
Today, using data from RDC (and thanks to "Google Earth" and cheap GPS device), I made this.
Only a few years ago, was there anyone who could have predicted this? Definitely not, but as Daniel pointed out with his two diagrams of development cycles, we are doomed to constantly redefine our goals.
Determination of the possible sources of information:
The sources listed in the "What is Documentation" guide are good examples of possible sources. However, it's interesting that some sources we consider standard and well-known in some countries are treated differently in others. In June 2008 we conducted a "Training on the Information System for Human Rights Organisations", so I had a chance to share ideas and experiences with people from 7-8 countries. Here are the examples:
- obituaries in local papers
In the Balkans it's common practice to publish obituaries in local papers. I believe people publish obituaries all over the world, but in the Balkans (Bosnia & Herzegovina, Serbia, Croatia, Kosovo...) families of the deceased are particularly concerned about this. So, if we believe that a person was killed on a specific date, the chances are good that we can find an obituary in the local papers - a kind of press clipping focused on the last pages of the papers. It was an important source for the "Human Losses" project in Bosnia & Herzegovina. More info about this project is here (it's in Bosnian, but Google Translate will do the job).
- religious institutions
If one can get the books from a church or mosque, consider them a quite reliable source. After all, if there is a dead person, there must be some kind of priest involved, right?
- monuments and memorial sites, cemeteries
If the name has been carved into stone, it should be reliable.
- collecting living memories of the war through oral history-based research
- war crimes trial monitoring
There are even more, but I can't recall them all...
The rule is: there is no rule, search for your data everywhere!
Well, enough for my first post here ;-)
Kenan Zahirovic, MCITP
Thank you, Kenan, for contributing to this dialogue! What a great list of additional sources of information... it would be great to hear from even more practitioners out there on more types of sources for information on human rights violations.
From my brief experience attempting to create an information management system in Uganda, I can completely relate to the challenges around having the user write down on paper their needs for information/data. The more people involved in the data collection, the more complex it can become! Each person brings their own list of data points to be collected.
Another important source of information is the treatment/rehabilitation center working with survivors of human rights violations (such as torture treatment centers). These organizations are often vital to documenting violations, but the documentation is not the primary reason for these types of organizations. Their primary focus is on the treatment of their clients, as it should be. How can data collection focused organizations like HURIDOCS and Benetech partner with these types of organizations to assist them in their data collection needs?
If you work at a treatment/rehabilitation center - it would be great to hear about your organization's data collection priorities and also how you use the data that has been collected.
I think one of the factors that upsets this framework is that people involved in documentation projects usually start thinking through how to do it systematically after a significant amount of ad-hoc work has already been carried out. So the framework provided by Kristin and others is helpful, but people involved are stuck with the dilemma of what to do with existing processes and existing data. So I would add two steps - first to conduct an assessment of what has already been collected and how it is being stored, and then somewhere in the middle or end - make recommendations for incorporating the existing data into the new system.
Below is a 10-point plan that we use to structure our training of human rights NGOs in Iraq... We will have the whole manual ready in English and Arabic for public use soon. Where can we post it? Perhaps this brief outline will be of use to some...
Overview of the ten-step plan for human rights research and advocacy
There is no single way to do professional human rights research. However, certain steps are necessary. The following is an outline of ten key steps that take you from the identification of a problem through research, analysis, report writing, advocacy and subsequent planning.
Thanks for sharing this great training resource, Daniel! Looks like a good plan of action for practitioners.
You and all of our online community members are welcome to share any documents, videos, images, links, and conversations in our 'Documenting Violations' group space! Please join the group and add your resources. You can also use this space to keep in touch with documentation practitioners. I hope you'll find it useful!
With the increased implementation of crisis mapping tools, we are seeing the emergence of "realtime" acquisition and analysis of data about human rights violations. Rather than focusing on detective research after the fact, those interested in protecting populations at risk must now move toward more of an operational center model of tracking, vetting, organizing and disseminating data as it happens, with the dual goal of getting to the truth of what is happening at that moment while also doing what you can to stop a tragedy before it escalates.
Obviously, realtime tools will not be applicable in every case, especially for long-term violations related to the environment, sexual crimes or corruption.
I don't want to get into a discussion of the quality of this new data collection model here; rather, I just want to note that this new model exists, whether we like it or not, and that all of the questions asked in this thread must also take into account the fact that people suffering violations may be reporting them directly, in 140-character chunks, with questionable authenticity and quality. What we are still figuring out is how and when to respond to this, and where it fits among all of the other tools that exist.
Good point, Nathan! I trust many of us are familiar with Ushahidi.
Could you give some more examples of recent or ongoing "realtime" projects for communicating violations?
Hi Nathan and Bert,
I wanted to share a few good examples of realtime data collection tactics that have been mentioned in a few of our previous dialogues.
From our dialogue on Geo-mapping for human rights, the Ushahidi example that was mentioned was that of their work in post-election Kenya. Ushahidi called on Kenyans to text in the locations of current acts of violence. This data allowed Ushahidi to create and disseminate a dynamic real-time geo-map of the violence taking place throughout Kenya. The data also served as a 'place-holder' for journalists and human rights workers to follow up on the reports to further investigate and document the situation. (Also included in the Ushahidi example in the Geo-mapping dialogue is a mention of the use of satellite imagery to visually document evidence of human rights violations.)
Ushahidi is currently working on a realtime map for Haiti to visualize incidents of emergencies, threats, vital lines, and response locations.
Another interesting example that I think we are all familiar with at this point is the use of mobile phones to collect data for election monitoring, which was shared in our dialogue on Election Monitoring. It comes from Kenya, again regarding the 2007 elections. A collaboration of NGOs encouraged citizens to become their own journalists and activists by sending, via mobile phone, information on violations at polling stations. What they received were images of police brutality and instances of violence targeting opposition supporters.
I think that the use of mobile phones to collect information on violations is a promising tool for all the reasons that Nathan points out. Will it always, in your opinion, require follow-up investigation by documentation workers on the ground? Are there successful models of this practice in use today?
My two cents: I think that GoogleMaps or OpenLayers (the technology behind ushahidi.com) would be enough for HR geo-mapping needs, but you already can go 3D using GoogleEarth. You may find interesting the following links:
Stunning Examples of Data Visualization in Google Earth
KML Screenshots from Google Earth
Notice that the visualization may be dynamic, i.e. animated.
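For anyone curious what feeding data into Google Earth looks like in practice, here is a minimal sketch of generating a KML file from a list of incidents. The incident names and coordinates below are invented for illustration, not from any real dataset:

```python
# Hypothetical incident list; note KML orders coordinates as lon,lat,alt.
incidents = [
    ("Reported incident A", 18.43, 43.86),
    ("Reported incident B", 18.50, 44.20),
]

placemarks = "\n".join(
    f"""  <Placemark>
    <name>{name}</name>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>""" for name, lon, lat in incidents
)

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
 <Document>
{placemarks}
 </Document>
</kml>"""

# Open the resulting file in Google Earth to see the placemarks.
with open("incidents.kml", "w") as f:
    f.write(kml)
```

Animated/time-based visualizations add KML TimeStamp elements to each placemark, but the static version above is enough to get points on the globe.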
The use of mobile phones to capture real-time information on the ground is indeed exciting – the ability to immediately record in words, photo or video information about a human rights violation and communicate that instantaneously is powerful, especially when we consider that that person may be otherwise limited in their ability to gather and share that knowledge.
However, I think that the use of mobile technology will never remove the need for more traditional human rights documentation gathered 'on the ground'. Although they can complement one another, mobile technology meets a different need than qualitative human rights documentation. A 140-character SMS from someone experiencing or witnessing a human rights violation is likely to have very different information density and quality than an interview with that individual, even if that interview is conducted some time after the incident occurs. SMS provides thin, immediate knowledge/alert of an incident, while the interview may provide more information answering our questions of Who Did What to Whom, When, Where and How.
As a technology tool, it is also affected by certain key limitations – for example, it is not practical in areas with little to no cell phone reception. If we rely solely on mobile tech data, we might conclude that the situation is very bad in urban Area A, which has multiple reports via mobile phones, and stable in rural Area B, which has no reports. Is the situation really better in Area B? Or is there simply no cell phone reception in Area B? Or is it both? From the information alone, we can't be sure.
Another application of mobile tech that needs to be handled carefully is the mapping of mobile tech data. On a map where several incidents appear in a given area, depending on the quality of the data itself, it may be impossible to say whether each report describes a different incident or whether several people saw and reported the same high-profile incident. Because a map is a statistical product, this becomes a problem in the same way it would for statistics – how can we remove multiple reports of the same incident? The SMS data may be too thin for us to address this issue at all.
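To make the deduplication problem concrete, here is a rough sketch of one common heuristic: treat two reports as possible duplicates when they fall within a small window of space and time. The thresholds and sample reports below are assumptions for illustration, not from any real deployment, and as noted above, thin SMS data may defeat even this:

```python
import math
from datetime import datetime, timedelta

def close(r1, r2, km=1.0, minutes=30):
    """Heuristic: two reports may describe the same incident if they
    are within ~1 km and ~30 minutes of each other."""
    # Equirectangular distance approximation, adequate at city scale.
    dlat = math.radians(r2["lat"] - r1["lat"])
    dlon = math.radians(r2["lon"] - r1["lon"]) * math.cos(math.radians(r1["lat"]))
    dist_km = 6371 * math.sqrt(dlat**2 + dlon**2)
    return dist_km <= km and abs(r1["time"] - r2["time"]) <= timedelta(minutes=minutes)

def dedupe(reports):
    """Greedy clustering: keep one representative per cluster."""
    kept = []
    for r in reports:
        if not any(close(r, k) for k in kept):
            kept.append(r)
    return kept

reports = [
    {"lat": -1.286, "lon": 36.817, "time": datetime(2008, 1, 2, 10, 0)},
    {"lat": -1.287, "lon": 36.818, "time": datetime(2008, 1, 2, 10, 10)},  # likely duplicate
    {"lat": -1.300, "lon": 36.900, "time": datetime(2008, 1, 2, 12, 0)},
]
print(len(dedupe(reports)))  # → 2
```

The hard part, of course, is that "1 km and 30 minutes" is a guess: too tight and you double-count, too loose and you erase distinct incidents.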
I think that as the human rights community explores what is possible with mobile technology, we must balance its immense advantages with these limitations.
I think Vijaya's right about the potential limitations of mobiles as real-time reporting tools (both the thinness of data in SMS, and the bandwidth/reception limitations that prevent transmission and skew the information received). But if we think of mobiles as recording devices in and of themselves, not just sharing/communication devices - e.g. for audio/video - then they radically expand the number of potential documenters, particularly in contexts of limited literacy, as well as in situations where real-time documentation may be less important. And there's no reason an interview recorded on a mobile can't be as detailed as a 10-minute interview (or however long the mobile's memory allows) conducted by other means.
Nathan - can you talk to safety/security of documentation and the possibilities of mobiles to facilitate this?
This is in response to Sam's question about how mobile technology might improve the security of data and the safety of those documenting or being documented.
I'm going to brainstorm a bit here, to try to expand some of my ideas on this topic beyond the specific work I am already engaged in. So, here goes...
Those are just a few possibilities off the top of my head of ways in which mobile phones might provide for more security of data and increase the safety of those doing the good work of documenting. I would love to see others contribute more ideas, challenge my assumptions or find flaws in my approach to security, or provide additional case studies that might benefit from a bit of crypto creativity applied to them.
Hi Nathan, nice to meet you!
"A mobile phone can be used as a "something you have" in a two-factor security system, more specifically an application on the phone can generate one time passwords for use in a system like Martus or another tool that requires more than a simple password."
This is something really useful for us: a system that sends an OTP to a user as a second line of security. I like it because it's simple, people usually have mobile phones, and it doesn't involve hardware like tokens. Is this hard to implement? Can such messages be intercepted? I can imagine that if you start receiving OTPs you didn't ask for, you'll know your system is compromised and the sysadmin can quickly lock it down. What if the system instead prints out a PDF with a table of 100 OTPs that the user can store in his or her wallet? Which is safer?
What about a system using a digital object, like the Martus keypair? TrueCrypt offers a simpler alternative in the form of keyfiles, which can be anything... an MP3, a picture of your cat, that you carry on a USB stick. That seems really inconspicuous to me. Is that hard to implement? What do you think of keyfiles in comparison with other "what you have" security systems?
Could you also tell us more about your work on these projects with students at NYU, like the elections monitoring? Are you looking for projects to dig in to?
>Hi Nathan, nice to meet you!
Great to meet you and become acquainted with your work through this dialogue, Daniel.
>it doesn't involve hardware like tokens. Is this hard to implement? Can such messages be intercepted? I can imagine that if you start receiving OTPs you didn't ask for,
OTP over SMS is pretty common actually, along with simple mobile phone apps that can replace hardware tokens. It has its issues, but it can work if implemented properly.
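For a sense of how small such a phone app can be: the generic building block is the HOTP algorithm from RFC 4226, sketched below. This is the standard algorithm, not the implementation of Martus or any particular product, and the shared secret is of course an illustration:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The phone app and the server share the secret and a counter; both
# compute the same 6-digit code, so the server can verify the login.
print(hotp(b"shared-secret", 1))
```

The printed-table-of-100-OTPs idea is essentially the same scheme with the codes precomputed; TOTP (the time-based variant) just replaces the counter with the current 30-second interval.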
>PDF with a table containing 100 OTPs, that the user can store in his or her wallet? What is safer?
Wow, "One Time Pads" - that is very retro of you (http://en.wikipedia.org/wiki/One-time_pad), but it is also very, very secure. As long as you have a way of securely, physically transmitting the new set of passwords on a regular basis, this can be the ultimate in authorization and encryption.
>keyfiles in comparison with other "what you have" security systems.
I am not a fan of keyfiles, because they tend to get misplaced. However, they are better than weak passwords, as long as you have a way to centrally expire them.
>Could you also tell us more about your work on these projects with students at NYU, like the elections monitoring? Are you looking for projects to dig in to?
I taught last semester at NYU's Interactive Telecommunications Program (ITP), where Clay Shirky is a full-time instructor. ITP has been around for thirty years, with more of an art bent, but slowly they've been awakening to the design, technology and social challenges of activism and humanitarian efforts, largely due to Clay's guidance and presence. He taught another class this past Fall in partnership with UNICEF, focused on technology design challenges for IDP camps.
I was invited by the department chair to create a new course (which we ended up calling "Social Activism using Mobile Technology"), with the goal of bringing my hands-on, real life experiences into a program that has been pretty theoretical in the past. The course featured case studies of recent use of mobile technology (from Iran to Obama), guest speakers (in person and via Skype) and required team projects focused on using mobile technology to improve a social condition, whether it was in the local neighborhood or at a global level. You can see some of what was presented through my site at: http://openideals.com/itp2800
There were twenty students, and the final projects were all very successful. They ranged from iPhone apps to assist in suicide prevention training, to Google Voice for Homeless populations (free lifetime phone numbers!), and Project NOAH - a geo-bio tracking system for smartphones that plugs into the Encyclopedia of Life project. I will hopefully be teaching it again next Fall, and would love to connect with anyone on this list who'd like to propose projects, or collaborate in some other way.
Thanks, that's very, very useful!
I'm sure we'll be in touch. We have a lot of ideas and projects, and we need your experience on all the areas you've mentioned. I got your email from your blog.
Just a few thoughts in response to your excellent introduction on this topic.
>However, I think that the use of mobile technology will never remove the need for more traditional human rights documentation gathered 'on the ground'. Although they can complement one another, mobile technology meets a different need than qualitative human rights documentation.
I think the role of mobile technology today within HR documentation work is framed as more of an either/or. The mobile is seen as a low-end, limited, text-oriented device that members of a population may own and can use to self-document violations against themselves. In addition, a mobile is seen as something that requires an always-available GSM connection to function.
>A 140 character SMS from someone experiencing or witnessing a human rights violation is likely to have very different information density and quality than an interview with that individual, even if that interview is conducted some
I agree with this point, within the definition of mobile device that I laid out in my previous paragraph. The issue I have is that the more in-depth interview with an individual could also occur on a mobile device, albeit a more sophisticated one, or perhaps through a mobile phone call tied into a more sophisticated back-end system.
I was one of the developers involved in TwitterVoteReport (http://twittervotereport.com), an Ushahidi-like system that was developed in Nov 2008 to provide realtime election monitoring by US citizens of their own election. It was a "People's Monitoring System" by, for, and of the people, powered by mobile devices of many different varieties. We, of course, provided a baseline SMS number and Twitter interface that anyone could text a report into from a polling place, ideally using a basic set of tags to help indicate what they were reporting about (wait time, ID card issues, broken machines, etc).
We also provided interactive voice systems in English and Spanish that allowed the user to select menu items to provide more automatically structured information, as well as leave a voice audio message.
Finally, we released iPhone and Android applications which provided the most high fidelity and media rich tool for reporting on the election. The apps would automatically find the user's GPS location, linking their report to the nearest polling place. The apps also provided a rich form with a variety of slider widgets and checkbox options to visually present a way to quickly enter data with one finger. Most importantly, the apps provided a way to record audio, video and take photos. All of this data was wrapped into a bundle and uploaded to the VoteReport server databases (Ruby on Rails, MySQL based).
As the election reports began rolling in from east to west throughout the day, it was fascinating to see how different geographic and demographic areas corresponded with means of report. It was more evenly distributed than we expected, but there was also a definite increase in smartphone app usage on the west and east coasts. I expect that if we deployed this same tool today, we'd see much wider use of iPhone and Android across the country, due to the $99 price point of these devices and easy availability through Walmart and Best Buy.
My point with all this is to lay out how we are beginning to see the evolution of reporting beyond 140-character SMS, and I think people in the HRV documentation arena should be prepared for this, and in fact encourage it by utilizing pilot/seed hardware whenever possible.
>limitations – for example, it is not a practical tool in areas where there is limited to no cell phone reception. If we rely
The need for active, available cell phone reception is one that Java phones, both Nokia/J2ME and more sophisticated smartphones, particularly Android-based devices, are beginning to address. The key is to support "offline mode", in other words local databases on the device. My brother is currently working in Haiti using his Droid device without an active network (no Verizon in PaP). However, the device is still immensely useful thanks to its GPS, offline OpenStreetMap support, high resolution camera, multitude of applications, long battery life, strong construction, and so on. It is much better suited than a laptop, video camera or just about any other larger, bulkier device in that situation. In addition, because it is an open platform, it can be completely customized to provide all the functions he needs to complete the work while he is there in the field. Once the data is captured, he can then upload it to a database/server when he can find wifi or a 3G network... in my brother's case, as soon as he gets back to the Dominican Republic he will have Verizon service again and can transmit the data he has captured.
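The "offline mode" idea boils down to store-and-forward: capture reports into a local database on the device, and flush the queue when a network appears. A minimal sketch of the pattern (the table, function names, and sample report are mine, not from any real app):

```python
import sqlite3

# Local store-and-forward queue: reports captured offline are kept in
# SQLite on the device and flushed when a connection becomes available.
db = sqlite3.connect(":memory:")  # on a phone this would be a file
db.execute("""CREATE TABLE IF NOT EXISTS outbox (
    id INTEGER PRIMARY KEY,
    payload TEXT NOT NULL,
    sent INTEGER DEFAULT 0)""")

def capture(payload: str):
    """Record a report locally; works with no network at all."""
    db.execute("INSERT INTO outbox (payload) VALUES (?)", (payload,))
    db.commit()

def flush(upload):
    """Call with an upload function once wifi/3G is available."""
    rows = db.execute("SELECT id, payload FROM outbox WHERE sent = 0").fetchall()
    for row_id, payload in rows:
        upload(payload)  # if this raises, unsent rows stay queued
        db.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))
    db.commit()

capture("incident: roadblock, 10:40, GPS 18.43/-72.33")  # illustrative report
sent = []
flush(sent.append)  # stand-in for a real HTTP upload
print(sent)
```

Marking rows sent only after a successful upload means a dropped connection mid-flush simply leaves the remainder queued for next time.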
>I think that as the human rights community explores what is possible with mobile technology, we must balance its immense advantages with these limitations.
In closing, I do commend and agree with a sober attitude towards the adoption of new technology, as long as it is paired with a fairly dynamic, informed awareness of all the possibilities that rapidly evolving mobile technology provides.
Wow, am really amazed by this!
"Finally, we released iPhone and Android applications which provided the most high fidelity and media rich tool for reporting on the election. The apps would automatically find the user's GPS, linking their report to the nearest polling place. The apps also provided a rich form with a variety of slider widgets and checkbox options to visually present a way to quickly enter data with one finger. Most importantly, the apps provided a way to record audio, video and take photos."
Could you tell us more about this project? Do you have a case study or something similar? What was involved in developing the Android and iPhone apps? Do you have screenshots of these? I tried to download the iPhone app, but it's not available in the Swiss app store.
I was chatting about this with a friend from ANFREL, an NGO that monitors elections across Asia. He was pretty excited; it's exactly the kind of thing they need. So don't be surprised if you get an email from them.
I am also thinking that we may need an iPhone app to interact with a database we're planning for litigators, called Casebox, which would be a bit like Basecamp (we'd use Open Atrium as a code base). This would allow a litigation NGO to store and collaborate on case documents, manage court deadlines, access caselaw databases, and so on.
The iPad (or an Android tablet) would be a great way of accessing this database on the move, because it's more secure than a PC (no keyloggers in the app store) and very affordable too. The manager of a litigation NGO could work via wifi in the courtroom, review cases and deadlines, pull up a copy of a key affidavit and forward it by email, make comments by email, and even look up the lawyer in charge and call him or her directly from the iPad via SkypeOut or a similar VoIP app. What do you think?
Case studies of VoteReport, and its more light-hearted sequel "Inauguration Report", are available at: http://www.centerforsocialmedia.org/resources/publications/public_media_20_field_report_building_social_media_infrastructure_to_engage/
Source code for the Android application is available through my site here: http://openideals.com/guardian/greporter/
While I am a huge fan of Ushahidi, in many ways I feel the richness of the VoteReport engine, both for input methods and output rendering (widgets, maps, charts, multimedia support), was far superior. However, Ushahidi proved that dedication, persistence and organization trump short-term feature richness when it comes to open-source project development.
Regarding Casebox, you should check out the open-source issue/project management system called RedMine - http://redmine.org. It is a fine open-source replacement for Basecamp, and supports wikis, forums, and more. In addition, there are a number of iPhone apps and mobile web interfaces under development. You can secure Redmine hosting through SSL certificates (client and server) as well as IP filtering, or even using a VPN.
With regard to limitations in reception, another useful aspect of smartphones is their ability to transmit info via a wireless network.
One could record events like a protest and then do a data dump near a building with an open wireless network.
Moreover, one could set up a wireless hub/server in the back of a car even if it's unconnected to the internet. It would allow local phones to talk to each other and store data on the server, to be later uploaded to the internet at a data dump site.
I think this is all feasible.
Since I'm very new to the human rights domain (since August 2009), I decided to lurk around before making a post.
As pointed out, there are 3 steps:
If we summarize what has been said today, almost all the messages are about the acquiring stage. I completely agree that it's the most important step, and it's very hard to create a successful software product without a background in human rights violations. HRV is not accounting, and even if you (as an IT developer) have this background, it's hard to build a flexible product because the requirements may be fuzzy.
What I'd like to see is an online aggregation facility for HRV projects. A project would be composed of: a problem statement, a list of the tools that were used, the challenges, the tradeoffs and some use cases. A use case shows a specific solution to a specific problem, for example: how the 'who did what to whom' methodology was used to document a specific event. The idea behind such a resource is to have a complete list of the available software for the HRV domain, what kinds of problems this software solves, and how.
When Jana Asher talked about "quantitative and qualitative documentation", I recalled the idea of using OpenCalais to automatically extract "who, what, where and when" information out of articles published on OMCT. Almost all the articles on the OMCT website are about HRV. Since the results of applying the 'who did what to whom' methodology are similar to what OpenCalais can do, what if we automatically processed a big dataset (a collection of articles) and built a database of entities and relationships?! For example, we could extract the names of the persons (and eventually their ages) mentioned in an article, how these persons connect (Person1 killed Person2), what actions have been committed against a person, the location of the event, etc. Here is the result of processing this page using the Calais Viewer. You may read how Calais works and an article about OpenCalais. I fully understand that such an approach may not yield very accurate data, but it can help a lot if you quickly want to see the "forest, or an aerial snapshot of the big picture", as Jana said. This could be a complementary tool alongside OpenEvSys from HURIDOCS, for example.
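To show the flavor of what such automated extraction produces - OpenCalais itself is far more sophisticated, so this is only a toy stand-in with a small hand-made vocabulary of acts, and the names and sentence are invented:

```python
import re

# Tiny "who did what to whom" extractor: match a capitalized actor name,
# an act from a controlled vocabulary, and a capitalized victim name.
ACTS = ["arrested", "detained", "killed", "tortured"]
NAME = r"[A-Z][a-z]+(?: [A-Z][a-z]+)*"
pattern = re.compile(rf"({NAME}) ({'|'.join(ACTS)}) ({NAME})")

def extract_triples(text):
    """Return (actor, act, victim) triples found in free text."""
    return [(m.group(1), m.group(2), m.group(3)) for m in pattern.finditer(text)]

text = "Witnesses report that Officer Smith detained John Doe on 3 May."
print(extract_triples(text))  # → [('Officer Smith', 'detained', 'John Doe')]
```

Real NLP systems handle pronouns, passive voice, dates and locations, which a regex cannot - but even this toy shows how free text can feed a structured entity-and-relationship database.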
I agree... automated semantic analysis of data using tools such as OpenCalais, both during and after a crisis, can be very helpful in seeing a situation from new angles. In addition, HRV data should be infused with as much semantic data as possible as it is captured, via microformats and via platforms such as Semantic MediaWiki.
Excellent point and post, Oleg!
Using widely accepted (and well maintained) thesauri for recording HRV into a computer system, infused with as much semantic data as Nathan mentioned, will help us in the future when organizations that deal with HRV need to exchange information, or when the need for an aggregation service appears (automatically collecting information from many repositories). By the way, how many organizations in the world are using software to record HRV? How big are their databases in terms of the total number of acts of violence, for example? Can we cluster these repositories by domain? Probably ESCR (Economic, Social and Cultural Rights) and PC (Political & Civil) are two clusters.
I'm talking here about the need to use the same "language" across HRV organizations.
The infusion mentioned by Nathan can be inline or a complementary afterthought. By 'inline' I mean the use of microformats to mark up the content of a testimonial or an urgent appeal, for example. The 'complementary afterthought' is a well-structured and interconnected 'translation' of free text (the automated approach using OpenCalais is such an afterthought).
I've tried to make a mash-up of terms: microformats + HRV + geo mapping. The result was the discovery of the http://chicago.everyblock.com/crime/ website. This website is mentioned in the book Microformats: Empowering Your Markup for Web 2.0. You can download the chapter on the GEO and ADR microformats to get an idea.
Imagine we have X organizations in Kenya and Y in Brazil recording HRV. How can we automatically extract violations from the X+Y databases according to some criteria (acts of torture, for example) if every organization speaks its own language (custom-made thesauri)?
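One pragmatic answer is a per-organization mapping table into a shared controlled vocabulary. A sketch of the idea, with organization names and local terms made up for the example (a real deployment would map into something like the HURIDOCS standard formats):

```python
# Illustrative mapping from each organization's local thesaurus terms
# to a shared controlled vocabulary; all terms here are invented.
TO_SHARED = {
    "org_kenya": {
        "beating by police": "torture",
        "jailed without charge": "arbitrary detention",
    },
    "org_brazil": {
        "tortura": "torture",
        "detencao arbitraria": "arbitrary detention",
    },
}

def normalize(org: str, local_term: str) -> str:
    """Translate a local term into the shared vocabulary."""
    return TO_SHARED[org].get(local_term.lower(), "unclassified")

# Now records from both databases can be queried with one shared term:
print(normalize("org_kenya", "Beating by police"))  # → torture
print(normalize("org_brazil", "tortura"))           # → torture
```

Terms that fall through to "unclassified" flag exactly the vocabulary gaps that cross-organization aggregation would need humans to resolve.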
One of the tools available for those documenting human rights is Martus (Greek for 'witness'), a free and open source secure information management tool developed by the Benetech Human Rights Program.
We developed Martus because after years of working on human rights data analysis projects, we felt there was a need for an easy-to-use encryption tool as well as an information management solution for the human rights community. We designed Martus based on feedback from our project partners, and since 2003 have provided Martus outreach and support to users around the world, and continued releasing new versions of Martus.
Here’s how it works:
Martus is used by organizations around the world to document human rights abuses, protect sensitive information and shield the identity of victims or witnesses who provide testimony on human rights abuses.
You can read more about Martus and download the software here
Martus is just one of the resources available, however, and it’s not necessarily right for every individual or every organization. We always encourage people to try out the demo and think through their project needs and resources when they are considering whether to use Martus for their information management project. A given organization’s information management and security needs may well change over its lifetime as well, so that Martus may be right for it when it starts out but not a few years down the road – or vice versa. We’re always available to advise during the process of deciding what the correct tool is.
Hi Vijaya, what about Analyzer?
Can you let us know a little more about this Benetech tool? How is it to be used, what can you do with it exactly? And how can you import data from Martus? I guess you still have to go through each bulletin manually to extract the information on acts, victims, perps, and so on. Is this how it works?
Patrick Ball says you're updating it as a web app, any progress in this or screenshots to share? Would love to know!
Hi Jaya, Daniel and others,
I would be curious to know more about the relationship between HURIDOCS and Benetech, since there seems to be overlap in your work - specifically regarding the OpenEvSys database and the Martus database. Do you envision the two database systems becoming one system in order to pool resources for maintenance, support, training, etc.? Or, on the other hand, has it been a strategic decision to keep the two systems separate because it is important for practitioners to have a number of options for their documentation work? Thanks for this interesting discussion, everyone!
Thanks for the question, Kristin.
Simply put, Benetech's Martus and our OpenEvsys are two tools dealing with the same challenge of documenting violations, but in different ways.
Martus is above all about security, about locking your data in an encrypted safe on your computer, and exporting it easily to an out-of-country internet server. Once you're inside the safe, Martus allows you to record violations using free-text fields and attachments, and to categorize your stories about violations; you can then retrieve them by searching these fields. So it's very much for qualitative work: you enter your stories, and then you can search and retrieve them efficiently, like a library system.
I think Martus is a very good tool for use in countries with very repressive regimes, where you and your sources can get into serious trouble if your data is found.
OpenEvsys is different, in that you can also record in as much detail as needed what happens inside these stories. You can record violations and link them to the victims, the perpetrators, and the sources. It's a fully relational system, so you only enter perpetrator X once, and then you link that perpetrator to all the acts that he or she has committed, across all your stories; you can then get a "bio" of all the acts that perpetrator has committed.
You can even set specific fields for certain violations: for example for torture, in addition to generic fields like place and date, it has fields about whether a confession was forced. Or for house destructions, the value of the property.
Like Martus, with OpenEvsys you can then retrieve the stories according to the fields you chose to use. But you can also get a much more fine-grained analysis of your data, such as how many acts of torture committed by the police in 2008, then do a breakdown by province, or a time series analysis, and then get an analysis of the victims by gender, ethnicity, religion, political affiliation, and so on.
If you want to check it out, you'll find an online demo here. We're still fine-tuning the advanced search; it should be ready in a couple of weeks.
You can also record interventions, their status and impact and so on, which is useful for NGOs that provide legal or medical aid.
So it depends on what you want to do. I think Martus is a great, rock-solid tool; I've tried it out and noted its progression in terms of how you can add your own fields. Both Martus and OpenEvsys are flexible and can be adapted by the end-user. It's a question of the right tool for the right purpose: seek advice, read the manuals, and take the time to really understand your needs before choosing.
Documentation is like trekking: it's going to be a long enough slog anyway, so at least make sure you have the right shoes!
Some organisations have developed their own systems, like Karapatan in the Philippines.
One problem NGOs face is the "black box" syndrome. They spend a lot of time entering data, and it stays in the black box, never really put to use. Or a lot of time is spent extracting data, making graphs and tables, and pasting them into reports. This should be more efficient.
To solve this problem, I see a lot of potential in web systems that connect the organisation's website to its internal database. This will shorten the processing loop for quicker advocacy and allow more time to be spent on analysis, and less on repetitive copy-paste jobs. For example, as you enter the facts about a case, it would appear automatically on the website: the case abstract would appear in the right thematic or regional sections, and the statistical tables would be updated automatically. We are building such a system; it will come online in a few days. And we hope to build more.
But there are other generic needs as well, meaning needs that are shared by a large enough number of NGOs to justify a system for all to use.
Litigators will not be fully happy with either Martus or OpenEvsys. Yes, they can create a section for a case, and store case docs and so on. But they also need to be able to assign lawyers and staff to a case (not all lawyers should see all cases), work collaboratively on the brief, discuss on forums, retrieve jurisprudence from caselaw databases, manage court deadlines, and track lawyer time spent on each case. LCM is an example, which offers some of these functions:
Another area for tool development is search and intelligence, also known as media monitoring, although it's wider than that. This involves tracking and searching a number of online sources. A subject matter specialist then reviews the results and selects the items that are worth sharing within the organisation. These items are then disseminated via RSS feeds or email newsletters. Some very nice and free media monitoring tools already exist, such as Phase2's Tattler and Development Seed's Managing News, but there is room for a lot of development until we have something that is really powerful and easy to use.
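The core of that review workflow can be sketched in a few lines. This is a hypothetical illustration, not how Tattler or Managing News actually work; the watch terms and item fields are invented:

```python
# Media-monitoring sketch: flag items from online sources that mention a
# watch term, so a specialist can review and share them. A real tool would
# fetch and parse RSS feeds; here the items are already plain dictionaries.

WATCH_TERMS = {"arbitrary detention", "forced eviction", "torture"}

def flag_items(items):
    """Return items whose title or summary mentions a watch term."""
    flagged = []
    for item in items:
        text = (item["title"] + " " + item["summary"]).lower()
        matches = {t for t in WATCH_TERMS if t in text}
        if matches:
            flagged.append({**item, "matched_terms": sorted(matches)})
    return flagged
```

The flagged list would then go to the specialist's review queue, and the approved items out via RSS or a newsletter.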
In short, I don't think it's a question of integrating Martus and OpenEvsys into one super-tool that will make everybody happy. That would be like having a Swiss knife with 250 blades! Complete, but cumbersome and hard to maintain.
There are so many different needs, which require different tools, and thinking carefully about needs should be the starting point. In some cases, such as those mentioned above, there is scope to develop a tool that will serve many. Others will need to develop their own tools for very specific purposes, such as the tools Nathan mentioned for election monitoring, or simply because they want to experiment, which is great too. In some cases, a spreadsheet will be perfectly OK, or Google Alerts for media monitoring, or project management software like Basecamp.
That's all fine. The important thing is to share, as much as we can, so as not to reinvent the wheel each time. We try to track all existing initiatives, so we can match a need to a solution and save people time and energy. That's why this dialogue is so valuable for us.
Software is a very competitive area, even for free tools for nonprofits. An NGO will not use a tool unless it really matches their needs. It should provide the key functionalities and integrate well into existing work processes. It should not become an extra burden, but should make life easier. It should be the right size.
Again, the Swiss knife metaphor is useful: you have knives for campers, knives for techies, knives for sailors... each type of user must find their tool useful enough to be worth carrying in the pocket. Or it gets left in the drawer.
There is a debate between standard tools for use by all, and customized, bespoke, tailor-made tools that fit a particular organisation like a glove. Of course the latter is better - this very website is an example of why custom tools are so good. But few can afford them, especially in the South. So what to do? Our previous approach was to make OpenEvsys into a 250-blade Swiss knife, but we've learned that one tool cannot make everybody happy. So our new approach is to develop custom systems for specific NGOs, and if we note a generic need shared by many potential users, to develop those into a standard open source system for use by all.
Thanks, Daniel, for replying to my question about the difference between Martus and OpenEvSys. This is VERY helpful! And yes, I like your advice: one gigantic super-tool would be too cumbersome and hard to maintain. Having a super-tool would make it even more difficult to use these tools on the ground without experts.
You mention LCM - Legal Case Management - as a good tool for litigators and I looked into it because I hadn't heard of this before. The website mentions that LCM is no longer being maintained, but that they recommend using another great option - CiviCRM.
CiviCRM is a free, libre and open source software constituent relationship management solution. CiviCRM is web-based, internationalized, and designed specifically to meet the needs of advocacy, non-profit and non-governmental groups. Integration with both Drupal and Joomla! content management systems gives you the tools to connect, communicate and activate your supporters and constituents.
Now this is a great tool for many purposes, and is FREE and open-source. Many developers maintain it, including people passionate about human rights issues. This is a tool I have wanted to play around with for a while, so when I finally get the chance I will be sure to check back in here and let you know how it goes!
Thanks very much Daniel for that thorough and helpful explanation of OpenEvsys!
As you note, Martus’ design has been built around encryption, secure sharing and automated backup features. As far as I know, OpenEvsys does not have an encryption component.
As I mentioned above in the post about the Martus software - Martus is a tool for capturing information. You can structure the information you capture in Martus by customizing the tool. Some projects do develop quite sophisticated customizations, structuring the information to facilitate advanced searching and reporting and later analytic work. And, although you can create reports and run complex searches on your data, you can only count the number of bulletins you have that meet certain criteria (as opposed to counting how many people were actually killed).
For example, you can ask Martus ‘how many bulletins do I have which have “Location = Capital City” and contain “Human rights violation = Disappearance”?’ You cannot then say, from the number of bulletins, ‘There were X cases of Disappearance in Capital City’. The reason for our caution in developing the reporting (and counting) feature of Martus is that de-duplication and other important data processing steps haven’t yet been done.
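The distinction can be shown with a toy example (this is not Martus code; the names, dates and matching rule are invented). Two bulletins may describe the same killing, so counting bulletins over-counts victims; a crude record-linkage pass merges likely duplicates before counting:

```python
# Toy illustration of why "number of bulletins" is not "number of victims".
# The de-duplication here is naive (normalized name plus date); real record
# linkage is far more careful.

def count_bulletins(bulletins, violation):
    """Count bulletins reporting a given violation type."""
    return sum(1 for b in bulletins if b["violation"] == violation)

def count_unique_victims(bulletins, violation):
    """Count distinct (normalized name, date) pairs -- naive de-duplication."""
    keys = {(b["victim"].strip().lower(), b["date"])
            for b in bulletins if b["violation"] == violation}
    return len(keys)
```

With three bulletins where two describe the same victim on the same date, the first function returns 3 and the second returns 2.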
At Benetech we view Martus and OpenEvsys as human rights information management tools meeting some of the same needs and some different needs, and certainly not competitors! Because of this, we don’t feel that it makes sense for the two tools to merge.
When thinking about which of these (or other) tools to use, we strongly encourage groups or individuals documenting human rights abuses to inform themselves well about each tool, and ask the following questions:
It’s always a fascinating conversation – and it’s one that we regularly have with current and potential project partners. We are always happy to talk through project goals and discuss whether Martus is the right tool with a new potential user. If you do have questions about Martus, definitely check out the links I provided in my earlier post, and feel free to email us at firstname.lastname@example.org.
Hi Daniel (and everyone!)
Analyzer is a free and open source statistical database tool developed by Benetech that provides the structure required to quantify patterns of large-scale human rights abuse.
To answer your questions, Daniel - we are not ourselves developing a web app version of Analyzer, although a partner is developing a web app based on Analyzer. (We think it’s a good direction for the tool.)
It is certainly possible to import information initially captured in Martus into Analyzer, but this would require either careful coding of the information or development of an import script.
Below, I talk more about the software itself as well as the data processing core concepts that we strongly encourage all projects seeking to quantify information to consider as they develop their project. All of our Analyzer projects take these concepts/challenges into account.
From our experience, statistical projects that would benefit from using a tool like Analyzer require a considerable investment in technical development, data processing methods, and thinking about exactly what they intend to analyze. Analyzer does require a significant commitment of training, resources, and typically, a close collaboration with us. Projects that use Analyzer tend to have (or intend to have) thousands of raw documents that they need to process and analyze. Using Analyzer usually requires significant resources over multiple years, including technical database administration skills and in-depth staff training. One of our most recent Analyzer projects was with the Liberian Truth and Reconciliation Commission (see more about the project here: http://www.hrdag.org/about/liberia.shtml), and earlier projects include working with the Truth and Reconciliation Commission and the Campaign for Good Governance in Sierra Leone, member non-governmental organizations of the Human Rights Accountability Coalition (HRAC) in Sri Lanka, the Boroumand Foundation for the Promotion of Human Rights and Democracy in Iran based in Washington DC, and the Commission for Reception, Truth and Reconciliation (CAVR by its Portuguese acronym) in Timor-Leste.
Analyzer must be implemented in the context of a series of data processing steps and good practices in order to produce defensible results. As an introduction to some of the concepts behind modeling and quantifying human rights violation data, you might find “Who Did What to Whom?” (available at http://shr.aaas.org/www/contents.html) a useful resource.
Data processing steps are needed to address several challenges involved in accurately quantifying data about human rights violations, including: duplicate reporting by multiple sources; representing the structural complexity of human rights violations; and consistency in meaning and counting.
Here is some more information about each of these areas:
*Duplicate reporting - When projects collect testimony about human rights events, many different narratives may describe the same events. The same killing may be reported to the project by five different witnesses or sources. When trying to count the total number of abuses, it is important to go through a process called "record linkage" to identify repeated victims and violations so that they are not over-counted. To learn more, see http://www.hrdag.org/resources/source_judgment.shtml. It is important not to lose or delete duplicate reports. In fact, information from overlapping reports can be extremely valuable. We have pioneered the application of a statistical technique called Multiple Systems Estimation (MSE) (http://www.hrdag.org/resources/mult_systems_est.shtml) for human rights data analysis, which you may have heard about. MSE uses the pattern of overlap between data-gathering projects, or systems, to make inferences about how many violations were never reported to any project. We have employed MSE in human rights data analysis projects in Colombia, Guatemala, Kosovo, Peru and East Timor.
You can read more about our data analysis projects here: http://www.hrdag.org/about/projects.shtml
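The simplest case of the MSE idea mentioned above can be sketched numerically: with two independent lists of victims, the size of the overlap lets you estimate the total population, including victims on neither list (the classic capture-recapture, or Lincoln-Petersen, estimator). Real MSE work uses more than two systems and careful statistical modeling; the numbers below are invented for illustration.

```python
# Two-system capture-recapture sketch of the intuition behind Multiple
# Systems Estimation: estimate the total number of victims, including
# those never reported to either data-gathering project.

def two_system_estimate(list_a, list_b):
    """Lincoln-Petersen estimate of total population from two lists."""
    a, b = set(list_a), set(list_b)
    overlap = len(a & b)
    if overlap == 0:
        raise ValueError("no overlap between lists: estimator is undefined")
    # If list B captured len(b)/N of the population, it should also have
    # captured that fraction of list A, so N is about |A| * |B| / overlap.
    return len(a) * len(b) / overlap
```

For example, two lists of 100 and 80 victims sharing 40 names give an estimate of 200 victims in total, suggesting about 60 victims were reported to neither project.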
* Representing the structural complexity of human rights violations - There is a considerable amount of complexity (as many have mentioned in posts throughout this dialogue) that must be managed when identifying, classifying and enumerating victims and violations:
- Victims can suffer many violations;
- The violations can happen at many different times and places;
- Each violation may be committed by one or many perpetrators;
- Each perpetrator may commit one or many violations;
- People can play different roles in different events (i.e. a victim in one event may be a perpetrator in another, and vice versa).
Oversimplifying this complexity distorts the statistical results. Coded information from narratives must be entered into a specially designed database (such as Analyzer) to preserve the data integrity of the stories of human rights abuses.
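The bullet points above can be captured in a minimal relational sketch: people and acts live in separate tables, and each act links one victim and one perpetrator, so the many-to-many relationships survive intact and one person can appear in either role. The table and column names are illustrative, not Analyzer's actual schema.

```python
# Minimal "who did what to whom" relational sketch using an in-memory
# SQLite database. One person table serves victims and perpetrators alike.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE act (
    id INTEGER PRIMARY KEY,
    violation TEXT, place TEXT, date TEXT,
    victim_id INTEGER REFERENCES person(id),
    perpetrator_id INTEGER REFERENCES person(id)
);
""")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [(1, "Victim A"), (2, "Victim B"), (3, "Perp X")])
# Perpetrator 3 commits acts against two victims; victim 1 suffers two acts.
conn.executemany("INSERT INTO act VALUES (?, ?, ?, ?, ?, ?)", [
    (1, "detention", "Town", "2008-01-01", 1, 3),
    (2, "torture",   "Town", "2008-01-02", 1, 3),
    (3, "detention", "City", "2008-02-01", 2, 3),
])
# "Bio" of perpetrator 3: every act they are linked to, across all stories.
bio = conn.execute(
    "SELECT violation, date FROM act WHERE perpetrator_id = 3 ORDER BY id"
).fetchall()
```

Because perpetrator 3 is stored once and linked three times, the query recovers all three acts without any duplication of the person record.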
* Consistency in meaning - Narrative text must be "coded." Coding is the process by which raw narrative data is classified in consistent and repeatable definitions to distil the narrative elements, including witnesses, victims, perpetrators, personal information, and numbers and types of violations. (Please see http://www.hrdag.org/resources/controlled_vocab.shtml for more information about developing a controlled vocabulary – you can also refer to my other post about controlled vocabularies below.) Each project must develop a unique controlled vocabulary based on the specific nature of the information collected and the analytical objectives of the project. Ensuring consistency also includes conducting inter-rater-reliability (IRR) exercises, which measure the consistency with which the data entry team codes information, thereby ensuring high data quality and meaningful results.
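One common way to quantify inter-rater reliability is Cohen's kappa; the sketch below is a generic illustration (not an HRDAG or Analyzer utility) for two coders labeling the same narratives. Values near 1 mean the coders apply the controlled vocabulary consistently; 0 means agreement no better than chance. The labels are invented.

```python
# Cohen's kappa for two equal-length lists of categorical codes.
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement between two coders."""
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    freq1, freq2 = Counter(coder1), Counter(coder2)
    # Agreement expected by chance, from each coder's label frequencies.
    expected = sum(freq1[c] * freq2[c] for c in freq1) / (n * n)
    return (observed - expected) / (1 - expected)
```

An IRR exercise would run this over a sample of narratives coded independently by each team member, and retrain or refine definitions where kappa is low.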
Analyzer facilitates this process through several data processing utilities, as well as providing a robust model for storing the information once it has been processed - based on the "who did what to whom" human rights data model. See http://www.hrdag.org/resources/human_rights_data.shtml for more.
Projects working to accurately quantify human rights violations share the core "who did what to whom" data model that is at the heart of Analyzer. However, in addition to this core information about victims, violations and perpetrators, many of our human rights projects collect a wide range of other data specific to their contexts. Trying to anticipate the needs of every human rights project would be impossible - and would lead to an amount of detail in the database that would bog down individual projects! Because each project is different, we build a custom extension for each one, including the data fields it needs to capture.
I hope the above (very long) post is useful! Please contact us at email@example.com if you have further questions about Analyzer or these projects.
Thanks for the update and explaining Martus, Vijaya.
For the record, we should also mention the HRDAG core concepts.
Analyzer, OpenEvsys and the La Red systems are all true "who did what to whom" systems that can represent the relationships in a complex violation with several victims and so on, but Analyzer has the added advantage of being able to merge duplicate records while keeping a trace of the originals.
Here is this data model:
Taken from: http://openevsys.org/wiki/index.php/Events_Logical_Domain_Model
For those interested, the data model is explained in the Events Standard Formats manual: see page 55 I think, available on Google Books or as PDF on our website.
The "Who Did What to Whom" book makes the case for this kind of relational model over spreadsheets: a relational model is the only way to represent a complex human rights event, where several victims, acts and perpetrators are all connected to each other.
Spreadsheets seem fine at first, but you soon run into problems. They only work really well when you have simple events: one victim, one act, one perpetrator, one source, one intervention. Because only then are the relationships clear. The victims must also be unique: if some victims are mentioned several times in your spreadsheet, it will be hard to count the total victims, and it will be difficult to be sure whether it's two persons with the same name, or the same person mentioned twice.
I am sure many are familiar with the spreadsheet problem. An example: Organisation A does litigation, and they keep their 200+ cases on a spreadsheet. One of the columns is for the lawyers attached to the case. But some cases have several lawyers, so the litigation coordinator stuffs the field with several names. This makes it impossible to sort the list in alphabetical order of lawyer's name, to see what cases a lawyer has! It only works if there is one lawyer per case.
So a relational system is more powerful, where you can attach several lawyers to a case, like several books in a shopping cart on Amazon or several project experts in Basecamp. Then you can easily see which case has what lawyers, and which lawyer has what cases!
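The fix for the stuffed-lawyers-column problem is a junction table linking cases and lawyers, so both questions ("which lawyers on this case?" and "which cases for this lawyer?") become one query each. This is a generic sketch, not LCM's actual schema; the names are invented.

```python
# Many-to-many lawyers/cases via a junction table, in an in-memory SQLite
# database -- the structure a stuffed spreadsheet column cannot express.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lawyer (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE legal_case (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE assignment (
    case_id INTEGER REFERENCES legal_case(id),
    lawyer_id INTEGER REFERENCES lawyer(id)
);
""")
conn.executemany("INSERT INTO lawyer VALUES (?, ?)",
                 [(1, "Reyes"), (2, "Santos")])
conn.executemany("INSERT INTO legal_case VALUES (?, ?)",
                 [(1, "Case A"), (2, "Case B")])
# Case A has two lawyers; lawyer Reyes has two cases.
conn.executemany("INSERT INTO assignment VALUES (?, ?)",
                 [(1, 1), (1, 2), (2, 1)])

# All cases for lawyer Reyes, sorted -- impossible with a stuffed column.
cases = conn.execute("""
    SELECT c.title FROM legal_case c
    JOIN assignment a ON a.case_id = c.id
    JOIN lawyer l ON l.id = a.lawyer_id
    WHERE l.name = 'Reyes' ORDER BY c.title
""").fetchall()
```

Adding a third lawyer to a case is just one more row in the junction table; no column layout changes, and sorting and counting still work.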
For example, for OpenEvsys we also added a document entity to the data model. So you only need to upload a document once, and then you can attach it to the events or persons concerned (a testimony may relate to more than one event or case).
In some cases, spreadsheets are OK. For example, if you just need to track acts, date of act, place of act, number of victims... and the identity of the victim is not essential for your project, then it's OK.
In other cases, a simple one-table database can also be fine, with a combination of free text fields, date fields, drop-downs, and check boxes. Such a database is very easy to build with tools like Access, and data entry is easier than with a relational model. Martus is a bit like that: you have one main table called a bulletin, and you can customize the fields.
Spreadsheets are therefore the natural competitor of database tools like OpenEvsys, because they are easy to use at first. I'm not saying they won't work, just that it's important to consider the complexity of your cases, and your documentation objectives, before choosing the tool. It's good to invest time on this before starting: ask for advice, see what others are doing. Then your choice of tool will be the right one, and will serve you well.
Thank you Jaya for introducing us to the Martus database system, and thank you Daniel for sharing lots of good information on the OpenEvSys database system. For me, and maybe for others, the differences between the two database systems are still a bit fuzzy. Can someone point out the main differences between the two database options? Do either of these database systems allow users to share the data with others? I can see many security issues with allowing this functionality - but I have always been curious if it is an option. For example, can a human trafficking organization in the Philippines enter their data into a secure Martus or OpenEvSys application, store the data on a secure server, and then allow access to that data by an umbrella human trafficking organization? Is this why documentation practitioners use controlled vocabulary - to be able to combine and share data from a number of sources/NGOs? There is a consortium of torture treatment centers in the United States that share a number of common data-points. There is one organization responsible for collecting these data-points from 30+ torture treatment centers. Though I am not exactly sure what this consortium does with the data-points, I would imagine they are documented in a report and shared with advocacy groups in the US. Would a common documentation database make this kind of sharing of data less time-consuming? Thanks!
Vijaya (Jaya) Tripathi, Benetech
Great question, Kristin! A few quick thoughts in response -
Securely sharing information in Martus
In Martus, you can securely share information from your account to another Martus account. You have to exchange unique account information with the person you wish to send information to so that you can create a relationship in Martus identifying that user as a “Headquarters” account, or “HQ”. This terminology is not hierarchical, but rather ‘directional’ - it identifies them as someone you would like to send information to. You can share every record (‘bulletin’ in Martus) you enter with that user by calling them a ‘default’, or you can authorize them on a case-by-case basis. You can have as many HQs as you want. You are in complete control of who receives what information. Even if someone is your default HQ, you can still choose not to authorize them to view a given bulletin.
When the bulletin is backed up to a remote Martus server, the bulletin is stored for your account’s back up purposes, and the server also notes that the bulletin should also be available for your HQs to download.
The way that your HQs view information is by logging into their Martus account and downloading (‘retrieving’ in Martus) the bulletins they have been authorized to view from the server, over a secure connection, into their own Martus account.
Your HQ could be a colleague in a field office of your organization, a contact in an umbrella human rights organization, or a journalist you trust, or anyone else that you want to share your documentation efforts with. The important thing is that you have to specifically authorize that person to view your information.
All of these operations – backing up the bulletin to a Martus server, storing the bulletin on the server, downloading it to an HQ account – happen over secure connections. The bulletin itself is also saved in an encrypted format.
Making records, or ‘bulletins’, publicly available in Martus
Martus also allows users to publish information on the Martus Search Engine website. Every time you create a bulletin in Martus, you have the choice of saving it as ‘Public’ or as ‘Private’. If you save it as ‘Private’, it’s saved in an encrypted format, and when it’s backed up to the server it’s only available to you or your authorized HQs. When you save it as ‘Public’ and back it up to the Martus server, the server stores a copy for your back up purposes, and also publishes the bulletin on the Martus Search Engine website.
Please note that, unless you specifically save a bulletin as ‘Public’ and choose to back up your bulletins to a Martus server, your bulletins will not be published!
We encourage all of our users to consider carefully what information is sensitive and should not be published prior to saving it as Public. You also have the option in Martus to ‘disable’ the creation of Public bulletins to prevent the accidental creation of Public bulletins.
You can read more about these functions in the Martus User Guide, available on the Documentation page <http://martus.org/downloads/>
Controlled vocabulary uses:
If groups that are working together develop and use a shared controlled vocabulary, that certainly goes a long way towards improving their ability to share information effectively! Collaborating Group A and Group B can then be sure that when they refer to ‘extra judicial killing’ they have, in theory, identified the same kind of act.
It's important to remember that implementing a controlled vocabulary doesn’t stop when you finish writing the definitions down and hand the manual to your coding team. What comes next is even more important! You need to make sure 1) that your definitions are comprehensive/refined enough for your coding purposes, and 2) that your coding team applies the definitions consistently and agrees in their understanding of each term.
At HRDAG we’ve learned the importance of these two steps through our large scale data analysis projects, several of which have required the development of controlled vocabulary and this kind of follow up. I’ll talk a bit more about each of these steps in another post.
Thanks for the questions Kristin!
Regarding the 30+ torture treatment centers: if it's a question of aggregating data from a monthly form submitted by each member, a custom tool with online forms would be the best option. But it depends on what they need to do; it's hard to answer without knowing exactly.
Can a human trafficking organization in the Philippines enter their data into a secure Martus or OpenEvSys application?
Yes, OpenEvsys is built as a web application, like Facebook, so it's easy to share and collaborate on data. Each member would have a user account. Currently all members would see all data, but we can change this so that only HQ staff can see all the data, if this is needed.
Also, it would be possible to link the secure OpenEvsys to the public website of this organisation, to publish selected cases studies for example, or to publish live charts and tables, if useful. That would be a week or two of development work.
Do either of these database systems allow for users to share the
data with others? I can see many security issues with allowing for this
functionality - but I have always been curious if it is an option. For
example, can a human trafficking organization in the Philippines enter
their data into a secure Martus or OpenEvSys application, store the
data on a secure server, and then allow access to that data by an
umbrella human trafficking organization? Is this why documentation
practitioners use controlled vocabulary - to be able to combine and share data from a number of sources/NGOs?
Excellent remark, Kristin.
I've been thinking about the same issue recently. We have developed an application which is used in Bosnia & Herzegovina as well as Serbia and Kosovo. I could say it's similar to OpenEvSys with the addition of a module for Trial Monitoring.
Having the same software on both sides is great if one wants to exchange data. However, the real challenge is exchange between heterogeneous data sources and applications. Sooner or later, no matter how small or big your organization is, you'll need to export/import data. Here are a few examples:
In most cases, we are using good old plain text, or in some cases Excel.
Now, don't get me wrong, plain text is great, especially for ad-hoc tasks. In fact, you would be surprised to hear that many banks are still using good old TXT (because of legacy applications, but that's another issue). Nevertheless, it would be nice if we could move to something more advanced.
One solution could be the use of XML.
As you all probably know, XML is a basis for many standards. After all, if musicians and chemists have XML standards, why not some sort of standard for HR violations? Well, technically speaking, we need an XML Schema, but in essence it's a question of consensus - if we can agree on what data we want to exchange (data on victims, incidents, witnesses...) and what attributes (name, address, age...), then we can make our own "standard".
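As a purely hypothetical sketch of what such an agreed exchange format could look like, here is a record built and re-parsed with Python's standard library. The element and attribute names are invented to illustrate the idea; they are not taken from any existing standard or schema.

```python
# Build and parse a hypothetical XML violation record to show round-trip
# data exchange between two applications that agree on the same structure.
import xml.etree.ElementTree as ET

def build_record(victim_name, victim_age, violation, date):
    """Serialize one violation record to XML text."""
    event = ET.Element("event", date=date)
    ET.SubElement(event, "violation").text = violation
    victim = ET.SubElement(event, "victim", age=str(victim_age))
    ET.SubElement(victim, "name").text = victim_name
    return ET.tostring(event, encoding="unicode")

def parse_record(xml_text):
    """Read the same record back on the importing side."""
    event = ET.fromstring(xml_text)
    return {
        "date": event.get("date"),
        "violation": event.findtext("violation"),
        "victim": event.findtext("victim/name"),
        "age": int(event.find("victim").get("age")),
    }
```

An XML Schema would then pin down exactly which elements and attributes are required, so any two organizations implementing it could exchange files without custom glue code.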
Does it make sense, or is it too complicated? Or maybe we already have some sort of standard for this?
Hi Kenan, nice to meet you!
Could you please tell me more about the Trial Monitoring module? It's the first time I've heard about such an application, and I'm very curious to see how it works. Do you have a document or a screenshot?
Here is the XML Schema for OpenEvsys, and the export/import is an XML file. It relates to a data model which is based on what many consider a standard, the events standard formats.
They may need revising at some point, but they're quite comprehensive. You're right, any effort to produce standards should include all stakeholders, in a task force. This was how the Events standard formats were made. Some contributed more than others, but overall it was a collective effort, based on years of preceding discussions, which is why they're still relevant today.
We certainly recommend to use this model for those starting a new database project, to ensure interoperablity.
I think Martus also allows export to XML.
Martus, as a data capturing tool, has been flexible, secure and adaptable.
After more than 92k bulletins captured, we can attest to its excellent performance.
Our data comes from documents created by one of the most oppressive police forces in Latin America during the past century. This makes the analysis of this information a high-risk task.
In this context, Martus has helped us protect the information obtained from the documents and has enabled us to store digital images of each and every document included in the quantitative study.
It also has the advantage that it saves the data with the necessary structure to export it in formats that are compatible to databases, statistical analysis programs, spreadsheets, etc.
Our experience with this software has been very good; however, we haven't had experience with any other application of this type. Has anyone else had a similar experience with this or another software?
Thank you for sharing your experience using Martus - it is great to hear that it has worked well for you. It is also interesting to hear that you are able to store digital images on the bulletins! Why did your organization decide to use Martus over other documentation options? How did you hear about it? What was the process that your organization took to develop a documentation plan? How do you think we can get this information and training out to more grassroots organizations?
Well, first of all, there are two ongoing processes in the Archive: the archival process and the quantitative process. The first one is the priority because we need all the documents to be classified, properly preserved and safely stored to enable all human rights investigations, give the documents their proper validity and provide access to the public. All of the activities I mentioned (both archival and investigative) have been done simultaneously because of risks, political junctures, etc., proving that it is possible, but difficult, to keep track of progress.
The quantitative study is, on the other hand, independent from this archival process. It was intended to go deeper into the archive (in quantity) in less time, and to give a global picture of what the archive was and contained. Since it was basically unexplored territory, we didn't know what we would run into, and because of that we needed a flexible instrument to capture data.
We heard about Martus from different NGOs when the archive was found and the project was being designed. Because of the variety of information found in the archive and needed for consult, Martus didn't prove to be very useful for the qualitative process (archival-investigative). More specific documentation systems were designed depending on the information needed for consult or investigative processes.
However, it was the most practical instrument for the quantitative study. It needed to be secure so that all the data could be sent to Benetech for statistical analysis. It was quick to install and use, which made it perfect for the conditions under which we were working. The info is easily exported and, most of all, it's flexible. An example of that is that it's usually used for testimonies and we're using it for document data.
On your last question... well, we're wondering the same thing. We hope to present to the public everything we've done and encourage interest in the results as much as the process. You can read more about it in the papers presented at the JSM in Washington last year: http://www.hrdag.org/resources/publications.shtml. We hope everyone can learn from this experience.
One of the challenges we face in documenting human rights violations in Burma has to do with interview methodology. We have been training fieldworkers for a few years, and we consistently face the question of how structured the interviews they conduct should be.
On the "highly structured" end of the spectrum is a questionnaire. The advantage of this approach is that all the fieldworkers are gathering almost the same sets of data, so analyzing the responses across various interviews is made easier. This approach is also helpful for fieldworkers who do not have a lot of confidence or experience - they have very specific guidelines to follow. The biggest downside, as I see it, is that it puts the people being interviewed in a box and removes their power during the interview to talk about what they believe is important. And no matter how well designed a questionnaire is, it may not fully take into account the reality of a violent situation.
On the other end of the spectrum is an unstructured interview. Using this methodology, the interviewer may set the general parameters of the interview and then turn over the direction of the interview to the person telling his or her story. I have used this methodology in some interviews about attitudes toward peace and justice. I ask the interviewee to tell me about their life, starting with biographical information and childhood experience. As they go through the various stages of their life, I ask for more detail about the parts that I am most interested in (the violence they have suffered and their resistance, coping, and resilience that followed in response). The last question I ask is about what the interviewee thinks should happen to the person responsible for their suffering and what they want/need as a survivor in order to feel any satisfaction.
For the human rights documentation work that is linked to advocacy (to impact countries' foreign policies or UN actions on Burma) we tend to encourage a semi-structured methodology, between these two ends of the spectrum. Fieldworkers ask about specific incidents that they are interested in (e.g. with IDPs who have recently been forced off of their land). They are trained in the "essential elements" of the 15 categories of human rights violations that their network focuses on, and then ask who, what, when, where, how, why (and how do you know) for each element of the violation. So for forced relocation, the four essential elements are (1) removal from one's home or land, (2) arbitrary nature of the removal, (3) coercion or lack of consent, and (4) state action. The interview guidelines offer suggestions for the questions to ask to establish each violation:
1. Establishing the relocation
2. Establishing that the relocation was arbitrary
3. Establishing that the relocation was involuntary
4. Establishing state action
The guidelines for all 15 categories of human rights violations follow a similar pattern, so that by the end of a one-week training, the fieldworkers (ideally) do not need to go through the list of questions above - they flow logically. We are currently working on a "fieldworkers' pocket guide" that will include the essential elements for each of the 15 violations and the questions needed to establish each violation.
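As a thought experiment on the data side of this methodology, the "essential elements" approach lends itself to a simple checklist structure, so an interview record can show which elements of a violation have and have not been established. The sketch below is my own illustration (the category and element names are paraphrased from this post; this is not the Network's actual tooling):

```python
# Essential elements per violation category (illustrative subset;
# the real guidelines cover 15 categories)
ESSENTIAL_ELEMENTS = {
    "forced_relocation": [
        "removal from one's home or land",
        "arbitrary nature of the removal",
        "coercion or lack of consent",
        "state action",
    ],
}

def violation_established(category, established_elements):
    """A violation counts as documented only when every essential element
    has been established through who/what/when/where/how questions.
    Returns (established?, list of elements still missing)."""
    required = ESSENTIAL_ELEMENTS[category]
    missing = [e for e in required if e not in established_elements]
    return (len(missing) == 0, missing)

ok, missing = violation_established(
    "forced_relocation",
    {"removal from one's home or land", "state action"},
)
# ok is False; 'missing' lists the two elements still to be established
```

A structure like this could also feed training: after a practice interview, a fieldworker (or trainer) can see at a glance which follow-up questions were skipped.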
The overall idea with this methodology is to allow the interviewee to tell her or his story, and to kick into this more structured methodology when a violation is mentioned as part of the story. We're still rolling out this methodology through trainings and developing a curriculum to increase consistency across all the fieldworkers involved in the network. (See the link above for more information and to link to the documentation manuals developed by the Network for Human Rights Documentation - Burma).
I would be very interested to learn about how others have addressed this challenge, determined what interview methodology is best, etc. I'm also interested in understanding more about how the intended use of the gathered data impacts decisions about the collection methodology. And while I'm at it - I'm also interested in training materials for fieldworkers.
An excellent topic. In my own work (being a statistician) I have tended to work with a structured questionnaire for the simple purpose of making sure that the particular pieces of information desired -- timing of events, place of events, actors involved, etc. -- are captured appropriately (or at all). However, I train interviewers to allow respondents to tell their story in as natural a style as possible and to follow up with questions when details are lacking. In that way, the communication is not as constrained for either party (i.e., the communication resembles a natural discussion more than an inquisition) but the essential data elements are captured. So I believe this would fall under "semi-structured".
A related issue is how the interviewers/statement takers are trained. In "traditional" (i.e., Global North) survey houses (government agencies, research centers, commercial survey centers, etc.) a consensus has grown over time that the interviewer is to serve more as a "tape recorder" in order to remove "interviewer effects" and preserve the consistency of the questionnaire across its use. In that model, the interviewer must read the questionnaire by rote and has only a limited number of options if the respondent doesn't understand the question. The type of dynamic that Patrick is suggesting in his post is not allowed.
I believe that this model for the interviewer as a "tape recorder" is not viable in the context of documenting human rights violations for many reasons. One is the fact that during a fully structured interview, a respondent might be re-traumatized by what will appear to be an "insensitive" or "uncaring" interviewer (due to the inability to naturally respond to what is being said). Also, during conversation about traumatic events, a survivor needs to be able to tell his/her story in a way that is comfortable, even if that is not the most efficient way to tell the story in terms of filling out the questionnaire.
An important point is that training for interviewers who are to perform semi-structured interviews (of whatever type) requires more time so that a larger set of skills are developed. It is relatively easy to be a tape recorder; it is harder to interact with the survivor in a meaningful way, that doesn't re-traumatize them, yet allows collection of the best-quality data possible.
Jana and others -
Can you share materials that are used to train fieldworkers to conduct interviews (wherever they lie on the un- to -structured continuum)? I'm developing a training curriculum and some sample materials would be most helpful, and I suspect others reading this dialogue would benefit as well.
The Metagora Training Materials include some example tools -- that is, they include examples from actual projects. The link to those tools is http://www.metagora.org/training/example.html. There are all sorts of examples of training manuals for different types of data collection. I hope these help.
Patrick and others,
You asked about sharing of training materials. I hope others will respond and share their training resources.
I am wondering if you do already, or would find it helpful to share documentation examples during your trainings of how others have been collecting and using documentation. The New Tactics project seeks to share the creative work and successes being carried out around the world - to give inspiration and ideas for how others can improve and share their own efforts. I would like to share a couple of tactical notebooks written by human rights practitioners that show different ways that documentation processes are trained and implemented. The three in-depth examples below specifically engaged and trained local communities in the documentation processes.
In your trainings, do you - and others in this dialogue - use examples of how documentation is being applied? If so, what have you found to be most effective about using such examples?
If you are not using examples now, do you have ideas about how such examples could be helpful for the groups you are training?
Patrick, you may want to look at the Ukweli manual, written by Amnesty International Netherlands' Special Program on Africa. See here.
The SPA has also prepared a very nice training package for these manuals. Ping me for the contact.
The best training manual I know of is the DJ Ravindran manual by Forum Asia. It's a classic, a treasure. Unfortunately it's out of print, so we scanned the manual. It's a huge file, but I can try to send it to you by email if you want.
As you're in Thailand, it would be good to check with Forum Asia whether they have the original text file and could share it with us, so we could make a smaller PDF. It's a pity when valuable tools just disappear.
I'd like to know if anyone can share experience with alternative social research methods, as opposed to the traditional documentation of events/cases/incidents...
For example, using surveys, focus groups... but in the context of human rights documentation.
And what useful manuals on social research methods should we read? I found this free online primer, and I like it a lot, but there must be others.
I've used or advised random sample surveys on human rights violations in a variety of settings, where human rights violations are defined broadly and include social/cultural/economic violations. There are some manuals available for surveys in general, and some manuals related to surveys in developing countries, but not really manuals specific to human rights violations. However, some colleagues of mine and I did publish a book recently called "Statistical Methods for Human Rights." It is available from Springer-Verlag, and in the first chapter (Introduction) we outline some of the history of human rights violation data collection, looking both at collection of testimonies that are then coded into quantitative data and also random sample surveys that focus on human rights issues.
The book contains examples of multiple types of data gathering and analysis. There is even a project that utilized focus groups (the Philippines Metagora project) mentioned in one of the chapters. In any case, this might be a good resource for some practitioners reading this dialogue.
The disadvantage is that the book is not free -- I believe it is around USD 32 on amazon.com. One of the products my organization hopes to produce in the future is a set of training modules on different types of human rights data collection that would be freely available.
There is another free resource out there that might not have been mentioned as of yet: as part of the Metagora project, a set of training materials was created. The materials are specifically about using data to inform policy processes, but in the context of human rights and/or governance data. The link to the materials is at: http://www.metagora.org/training/ .
I almost forgot -- the United Nations produces some good resources as well -- for example, Household Surveys in Developing Countries.
Hopefully some of these links/resources will provide what you need.
Dear colleagues: These discussions are very interesting. I’m not sure if what I will post now is useful, but I decided to share with all of you a brief overview of a project that I have run here at the International Human Rights Law Institute (IHRLI) of the DePaul University College of Law. It is called the Iraq History Project (IHP). We documented the profoundly destructive impact of political violence under the regime of Saddam Hussein and following the US-led invasion by gathering thousands of testimonies with a methodology that plays off of many issues raised here. Over the past six and a half years, studies show that 100,000 to 800,000 Iraqis have been killed and the United Nations estimates that one out of every six Iraqis have fled their home because of violence, creating one of the world’s most significant refugee and displacement crises. All of this has occurred in a country that suffered devastating losses over three decades of authoritarian rule in which the Ba’ath Party government killed hundreds of thousands, displaced millions and forced the entire nation to live under a state of constant surveillance and brutal repression. The IHP has collected 8,911 testimonies representing over 55,000 pages of personal narratives recounting the individual experience of torture, massacres, assassinations, rape, kidnapping, disappearances and other violations. The goal of the project is to provide Iraqis with an opportunity to talk about their experiences and thereby document the truth of political violence while placing a human face on the suffering of the Iraqi people. In this way, the project is similar to truth commissions in South Africa, Guatemala, Peru and elsewhere as well as to projects like the Shoah Foundation’s database of Holocaust testimonies and the REMHI project in Guatemala (particularly since these initiatives were not officially sanctioned processes like truth commissions). 
The project has three main objectives: documenting past and present violations by collecting large numbers of testimonies from around the country; analyzing this material to reveal patterns of violence and repression; and encouraging the development of domestic and international policies to assist victims through reparations, memorialization, education and national reconciliation. The project seeks to contribute to an improved understanding of the scope, impact and severity of systematic political violence over the past four decades in Iraq and to aid a broad social process of transitional justice, national reconciliation and reconstruction. Here is an overview of the IHP methodology: For the IHP, we designed a qualitative methodology based on a review of similar large-scale human rights projects, such as truth commissions. The project trained an all-Iraqi team of over 100 interviewers who worked throughout the country speaking with nearly nine thousand Iraqis representing the country’s diverse ethnic/religious population. The interviews were carefully recorded by hand and then transmitted to a central office where they were entered into a secure and searchable database using Martus. Some of this material was transferred to a secondary analytic database using Access and SPSS. The quality of the material gathered relies to a large degree on the skills and training of the IHP interviewers. Interviewers were selected to represent diversity of gender and religious/ethnic background. They ranged in age and professional background and included physicians, professors, lawyers, and journalists. Interviewers used social networks, victims’ organizations, and local non-governmental organizations to identify and contact potential interviewees. Interviewers were paired with interview subjects in a manner that maximized their comfort and encouraged the collection of detailed testimonies.
For example, women were interviewed by other women, Kurds by other Kurds, Assyrians by other Assyrians, etc. In addition, interviewees generally worked in the governorate or region where they live. The interview process was designed to allow victims, their families and others to talk openly about their experiences in a manner that was both personally meaningful and useful for gathering material on specific violations and broad patterns of abuse. Since it is difficult for many victims to discuss their experiences of past repression, interviewers devoted special attention to approaching interviewees with kindness, respect and patience. Interviews typically lasted many hours and, in some cases, interviews took place over several meetings. The IHP interviewers arrived at a designated location to meet interviewees. They provided the interviewee with a clear overview of the project and would then seek to establish a positive, trusting relationship. Because it is often emotionally difficult and even traumatic for people to discuss their experiences of political violence and repression, interviewers were trained to approach interviewees with great care. Interviewers explained that the basic goal of the project was to prepare an account of political repression in Iraq during the regime of Saddam Hussein and after the U.S. led invasion through the personal stories of victims and their families. The interviewers described how these testimonies are gathered from victims and their families all over the country and that the material is stored in a database and that some of the results are to be published in various media with the goal of informing the Iraqi people about the suffering caused by the prior regime and in the past six and a half years. Interviewers sometimes discussed the possible use of the material in courses in schools and universities as well as the hope that the project might encourage the government to create programs to address victims’ needs. 
We have presented the material gathered in books, newspaper inserts and on call-in radio programs in Arabic and Kurdish that have been heard by an estimated audience of over 500,000 Iraqis. Interviewers highlighted the fact that participation was entirely voluntary and that there were no immediate financial or material benefits of being interviewed. Interviewers explained that interviewees should only participate if they were interested in contributing their story to the larger collection of testimonies about past political violence in Iraq. Interviewers then explained that they would answer any questions potential interviewees may have. They tried to ensure that interviewees had as clear an understanding as possible of the project. Interestingly, a number of interviewees later told our team that they were pleased that the project was honest in offering no direct benefits for participation. This was especially true for victims in places like Halabja where many prior interviews have been conducted, often alongside substantial promises of aid and assistance. The interview methodology began by allowing the interviewee to speak openly about his or her experiences. Interviewers asked interviewees to tell their story in an unstructured manner. The IHP methodology allowed interviewees to talk about their experiences of past human rights violations in the way that is personally meaningful. Interviewers worked in a focused, yet informal manner, listening closely to the stories presented and carefully recording the testimonies by hand. They would write down everything the interviewee said in a word-for-word manner, interrupting the narrative only when absolutely necessary or to slow the process down to make sure all of what was said is accurately recorded. Interviewers were encouraged to respect the fact that every person has their own way of telling their story. 
The methodology focused on the specific facts of each person’s story as well as the unique ways in which each interviewee chose to tell their own story. Interviewers were trained to work with each interviewee as an individual and to help victims feel as comfortable as possible while discussing these difficult and traumatic issues. After the interviewee finished their story with minimal questions and prompting (a process that often took hours), the interviewers began asking questions. These questions followed the testimony as it has been recorded and focused on clarifying data, such as names, dates, and other relevant information. The interviewer might ask questions about dates, times, as well as how a particular event occurred or how a series of violations progressed. The interviewer might also seek clarification on feelings, descriptions, witnesses, information on perpetrators and anything that seemed necessary to fully understand the testimony. Interviewers would then mark down each element of additional information, whether a specific fact or a lengthy description, as well as any new elements of the testimony that arose from these questions. After completing that stage, the interviewer would then read the testimony slowly, line by line, to the interviewee, asking him or her to correct or clarify any aspect of the narrative that they wanted changed. The purpose of that stage is to ensure that the interviewee fully accepted the testimony in terms of its accuracy, style and tone as a reflection of their personal experience. In this way, each interviewee approved the text of his or her testimony. Then, the IHP staff would thank the interviewee and return home with the relevant paper records from the interview. Each interviewer would then prepare a clean version of the testimony using set guidelines. All original paper and notes would be destroyed. 
The final testimony material would be coded for safety and then transferred to the main office for review and entry into the project database. The general data on each interviewee as well as the testimony would then be entered into the Martus database by staff working under the supervision of the database manager. The database encrypted the identifying information and narratives which are stored on a server located outside of the country to protect material from tampering, theft, or accidental damage. All paper records were destroyed after being entered into the database, which is password protected and can only be accessed by authorized staff. Following the data entry, we engaged in selective analysis of material based on specific violations, historical events and other issues. We worked closely with colleagues all over the world on this project, including excellent assistance from Benetech. This is an ongoing project and we will post a revised website in Arabic, English and Kurdish shortly (www.iqhp.org). I am planning to draft a detailed review of the methodology and to make available all of our training materials on how to conduct interviews, etc. Any comments?
Burmese human rights groups collect almost all of their data by conducting interviews with people who have witnessed or suffered human rights violations. In looking at comparative examples, I am consistently surprised and baffled by the data that has emerged from the files of repressive regimes.
I have visited the Guatemalan Police Archives (just after the files were discovered) and learned good lessons on how to preserve and organize that kind of material from experts at the Iraq Memory Foundation. The Ethiopian Red Terror Documentation and Research Center is now endeavoring to take on a similar project. I think there are a lot of questions to ask about those methodologies for organizing and preserving these kinds of data, but because in Burma we do not yet have access to those files, I have a different question for any of you who have worked on such projects. Based on what you have found, why do repressive regimes preserve their files? And how can our knowledge of their motivations help us in accessing and understanding the data?
We work with the Guatemala Archive Project and have worked with such administrative documentation in other projects. When I visited the Archive, I was astounded at the volume of documentation and grappled with the same question – why would an actor perpetrating abuses document its behavior so thoroughly?
I can only offer two personal thoughts on this that are by no means scientific. First, I believe the hubris of a dominant repressive actor or state regime can preclude them from contemplating how incriminating such documents would be if they were ever held accountable for their actions.
Second (and this is an insight from conversations with my team members), this documentation can be incredibly useful to such actors or regimes. For example, in certain systems it can be essential to know what operations or missions a police or military officer has successfully undertaken when considering whether or not to promote him/her. Generation and storage of that kind of information is part of what yields rich knowledge later when we try to understand the truth about what happened.
We recently completed a report showing that the former president of Chad, Hissène Habré, was informed of hundreds of deaths in prisons operated by his state security force. This study, ‘State Coordinated Violence in Chad under Hissène Habré: A Statistical Analysis of Reported Prison Mortality in Chad's DDS Prisons and Command Responsibility of Hissène Habré, 1982-1990’, is based on thousands of documents generated by the Documentation and Security Directorate (DDS). The DDS was the security force that pursued political opponents and operated notorious prisons during the Habré regime.
The documents were discovered by chance by Human Rights Watch at the abandoned DDS headquarters in N'Djamena.
Although I hope this example is useful, I don't think it answers your last question, Patrick. I'm not sure how our understanding of their motivations can help us access the data, since in both of these examples (the Archive and the DDS documents) the documents were discovered after the actors had abandoned them and conditions on the ground changed enough that others could access them. But, both are examples of actors with structural organization that generated the documentation to help themselves function as they perpetrated abuses.