Machine Learning and Human Rights: How to Maximize the Impact and Minimize the Risk

Conversation Details

Dates of conversation: Monday, June 18, 2018 to Friday, June 22, 2018

Technology is rapidly changing the world around us, offering human rights defenders new tools to use to their advantage; machine learning is one of them.

Machine learning is a powerful tool that offers tremendous opportunities in the field of human rights. It can help us detect patterns of corruption to support advocacy, predict poverty to support policy change, and analyze evidence of human rights violations for transitional justice. Alongside these opportunities, however, the same technology raises significant human rights concerns. Algorithmic biases have the potential to completely change the lives of individuals and to reinforce, even accelerate, existing social and economic inequalities, as flawed facial recognition systems, the misclassification of videos documenting war crimes as terrorist propaganda, and racist chatbots have shown.

Our goal as human rights defenders is to distinguish beneficial machine learning systems from harmful automated decision-making processes, in order to minimize the risks and maximize the impact of new technologies in human rights work. Good practices discussed include fair and transparent machine learning algorithms, and close collaboration and open conversation among experts from these different fields.

Thank you to our featured resource practitioners who led the conversation:

  • Enrique Piracés, Carnegie Mellon University
  • Natalie Widmann, HURIDOCS
  • Micaela Mantegna, Center for Technology and Society, San Andrés University
  • Nani Jansen Reventlow, Digital Freedom Fund
  • Bill Doran, Ushahidi
  • Santiago Borrajo, CELS Centro de Estudios Legales y Sociales
  • Adam Harvey, VFRAME
  • Vivian Ng, University of Essex

Where to start: Machine learning 101 for human rights defenders

Machine learning (ML) is a subfield of artificial intelligence whose goal is to enable computers to learn on their own. Through its algorithms, a computer can identify patterns, build models that explain the world, and make predictions without pre-programmed rules and models governing those predictions. Arthur Samuel described machine learning as giving “computers the ability to learn without being explicitly programmed”. Before ML, patterns for classifying data had to be defined manually, which was a laborious process. ML can drastically reduce the labor needed, which can then be redirected towards other human rights focused endeavors.

There are several tools and resources that provide information about ML. Distill is a journal that focuses on clarity and transparency in ML research. Google provides several learning tools and games that help people understand the concepts and methods of ML. Further, there are online tools such as R2D3 that introduce ML in more than 10 languages, which can be very helpful for non-English speaking practitioners. As these resources develop, practitioners have a better chance of entering the ML space, since relatively low-cost services, products, libraries and hardware are available. Resources such as TensorFlow, DeepLens, AWS Machine Learning, and Google’s Cloud AI make ML increasingly accessible.
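To make Samuel’s phrase concrete, here is a minimal sketch of a classifier that learns word patterns from labelled examples rather than from hand-written rules. It assumes the scikit-learn library; the tiny dataset, its labels, and the test sentence are hypothetical.

```python
# A minimal sketch: learning a text classifier from examples
# instead of manually defined rules (assumes scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled sentences (1 = describes a possible violation, 0 = does not)
texts = [
    "Police detained protesters without charge",
    "Journalists were threatened after publishing the report",
    "The committee met on Tuesday to review the agenda",
    "The weather was sunny during the conference",
]
labels = [1, 1, 0, 0]

# The pipeline converts text to word-frequency features and fits a
# classifier on them; no classification rules are written by hand.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Predict a label for a sentence the model has never seen
print(model.predict(["Police detained journalists at the protest"]))
```

With only four examples the prediction is unreliable, of course; the point is that adding more labelled data improves the model without any change to the code.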

Promise and perils: Machine learning applied to human rights practice

ML is still in its developing phases, but it has already been applied in some human rights work. ML programs can aid the detection of human rights abuses, improve existing systems, and help prevent dangerous situations. Human rights practitioners are often faced with reports, evidence and other data that need to be categorized. ML tools decrease the amount of time needed to accomplish this, for example through a tool that classifies sentences and is adaptable to the specific research questions human rights defenders are interested in. Other tools currently under development use video analysis to detect objects, sounds, speech, text and event types, allowing users to run semantic queries within video collections to discern what is happening. Such tools can also document human rights violations, help predict the outcomes of judicial hearings, and serve as open source computer vision tools for large video datasets. Video analysis is currently being used on footage from Syria in an attempt to provide verified videos that can be used as evidence of war crimes.
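As a rough illustration of the kind of video analysis described above, the sketch below samples frames from a clip and runs a general-purpose pretrained object detector over them. It assumes the OpenCV, PyTorch, and torchvision packages; the input filename, sampling rate, and confidence threshold are illustrative, and tools like VFRAME use far more specialized models trained for their domain.

```python
# A minimal sketch of frame-level object detection in a video
# (assumes opencv-python, torch, and torchvision).
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

video = cv2.VideoCapture("footage.mp4")  # hypothetical input file
frame_index = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % 30 == 0:  # sample roughly one frame per second
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            detections = model([to_tensor(rgb)])[0]
        # Record confident detections so analysts can later query
        # "which frames contain object X"
        for label, score in zip(detections["labels"], detections["scores"]):
            if score > 0.8:
                print(frame_index, int(label), float(score))
    frame_index += 1
video.release()
```

Indexing the detections per frame is what makes semantic queries over a large video collection practical: the expensive model runs once, and analysts then search the stored labels.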

However, there are still challenges to ML that pose obstacles and limitations. The Danish Institute for Human Rights has gathered a large database of information based on Universal Periodic Review (UPR) reports. They are currently exploring the use of ML for making predictions, but the database itself was only made possible by manually categorizing reports and recommendations, which have been made searchable through their website. In addition, the growth of ML raises concerns that it will be used in ways contrary to human rights standards. A common concern is the misuse of facial recognition, which could put human rights practitioners and vulnerable populations at risk. Other ML projects have been developed to address this risk, like Harvard Law School's EqualAIs, which slightly alters an image in a way that is undetectable to the human eye but prevents it from being identified by other ML technology.
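The general idea behind such imperceptible image alterations can be illustrated with an adversarial perturbation. The sketch below uses the fast gradient sign method (FGSM), a standard technique; this is not necessarily the method EqualAIs uses, and `model` stands in for any differentiable classifier.

```python
# A minimal FGSM sketch (assumes PyTorch): nudge each pixel slightly
# in the direction that most increases the classifier's loss.
# `model` is a placeholder for any differentiable classifier,
# not EqualAIs's actual system.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` (a CHW float tensor in [0, 1]) altered
    by at most `epsilon` per pixel so that `model` is more likely to
    misclassify it, while the change stays invisible to the human eye."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

For a small epsilon the perturbed image looks identical to a person, but the gradient-guided change can push it across the classifier's decision boundary.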

Impact of machine learning on society

ML technology provides exciting new opportunities for human rights defenders, as it can significantly decrease the time practitioners spend categorizing and classifying data. However, as mentioned above, ML also brings a set of new obstacles that must be addressed. The Toronto Declaration of 2018 serves as an example of public policy attempting to protect against discrimination and human rights abuses in ML technology. Not only are public policy officials working on mitigating the potential harms of ML, but companies who are leaders in ML are taking their own steps to ensure that it is used ethically; one example is Google's principles on AI. Some have raised concerns about ML systems used by governments, like the Netherlands' System Risk Indication (SyRI) program, which creates risk profiles of citizens to detect fraud. Many have argued that such systems reinforce their own findings and have a disproportionate impact on vulnerable members of society.

Further, some have highlighted the ethical question of whether ML should be used to assist or replace judicial decision-making, pointing out that legal systems are created by humans to ensure social order. It is suggested that ML can perhaps be used to make processes more efficient, but that for decisions revolving around critical aspects of personal and social life, humans should make the final call. To mitigate these practical and philosophical issues, special attention has been paid to ensuring that fairness, transparency and diversity are present in ML programs. Some solutions include applying the existing human rights framework to the use of ML technology. Others point to the EU’s General Data Protection Regulation (GDPR) as another source of guidelines for ML.

Struggles of machine learning practitioners in the human rights field

Despite the many useful applications of ML in the human rights field, there is a gap in human rights defenders' understanding of ML and its potential, while ML practitioners struggle to understand human rights practice. Suggestions to bridge this gap include open and diverse dialogue between the two groups, as well as more long-term projects where the different actors work closely together. In addition, the Fairness, Accountability and Transparency in Machine Learning (FAT/ML) community provides resources that can resonate with both human rights defenders and ML practitioners, as fairness plays a fundamental role within the human rights framework and connects to conversations about the design and development of ML systems.

Ushahidi is a non-profit tech company focused on helping marginalized people, and its decision to start using ML has placed it at the intersection of human rights and ML professionals. Through this work it has also faced challenges, particularly around data scarcity and sparsity: the data is often very specific and limited in volume, so it does not transfer well to other instances, which makes it hard to train ML algorithms for a new domain.
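One common way to cope with scarce, domain-specific data is transfer learning: reuse a model pretrained on general text and train only a small classifier on top of it. The sketch below is a generic illustration of that idea, not Ushahidi's actual pipeline; it assumes the sentence-transformers and scikit-learn packages, and the model name, dataset, and labels are hypothetical.

```python
# A minimal transfer-learning sketch (assumes sentence-transformers
# and scikit-learn): a pretrained encoder supplies general language
# knowledge, so only a small classifier is fit on the few local examples.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # pretrained on general text

# A handful of hypothetical domain-specific reports
reports = [
    "Roads flooded near the market, families stranded",
    "Clinic reports shortage of medical supplies",
    "Community meeting scheduled for next week",
]
labels = [1, 1, 0]  # 1 = urgent, 0 = routine (illustrative)

features = encoder.encode(reports)
classifier = LogisticRegression().fit(features, labels)

print(classifier.predict(encoder.encode(["Bridge collapsed, access cut off"])))
```

Because the heavy lifting is done by the pretrained encoder, even a very small labelled set can yield a usable starting point, though domain shift still has to be checked carefully.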

Open Discussion

While ML is still new in the human rights field, discussions on the strengths and challenges related to its use in the field are becoming more and more common. The HURIDOCS Collaboratory facilitates discussions between technologists and human rights defenders on the relationship between ML and human rights. Other panels, like "Beyond Explainability: Regulating Machine Learning in Practice" at Strata NY 2018, also touch on similar issues of ML and human rights.

