The Human Rights Risks of Artificial Intelligence (AI)

If you’ve been following the rapid developments in Artificial Intelligence (AI), you might think that robots are on the cusp of taking over the world. I recently watched the video of the world’s first human-robot press conference at a UN summit in Geneva. As much as the technologist in me wants to geek out at the “cool factor,” I have to admit, it left me feeling unsettled.

AI has brought both excitement and concern to the human rights community. As AI becomes more widely available, civil society organizations are examining the risks it presents. These risks are often more subtle than the headlines about autonomous robots and weapons systems might suggest. Activists and advocates are highlighting the urgent need for careful regulation of AI’s development and use to protect people from real harm.

Much of the conversation around AI and human rights centers on the right to truth. It is becoming harder to distinguish reality from propaganda and falsehood. Generative technologies are making deepfakes and synthetic media easier and easier to produce, and even AI detection software is not entirely accurate. However, while the impact of synthetic media is highly concerning, AI also causes subtler, more nuanced harms, which primarily affect historically marginalized communities.

So, what are the risks?

Below, I explain several of the existing and potential human harms of AI. Throughout the article, I link to more information and resources from the many incredible individuals and organizations who have been doing this work. In Part 2 of this series, I will explore whether there are ethical use cases for AI in human rights-based work.

1. Discrimination: A major concern is the discrimination, both potential and existing, perpetuated by AI systems. AI models train on the data available to them, which is often inherently biased. For example, AI outputs such as generated images tend to reinforce existing stereotypes and standards of beauty. We saw this in the recent, widely criticized AI-generated images of Barbie from countries around the world. Beyond reinforcing stereotypes, AI-generated data can lead to biased decision-making and discriminatory outcomes. AI models have shown discriminatory behavior in:

  1. Hiring: If historical hiring data shows a preference for candidates from certain backgrounds, an AI model may learn to favor those candidates, as the sketch following this list illustrates. This can lead to further marginalization of underrepresented groups.
  2. Law enforcement: Facial recognition technology has higher error rates when identifying women and people with darker skin tones. This can lead to misidentification and the unjust targeting of people of color. These systems have the potential to exacerbate existing racial disparities in law enforcement. AI systems that predict recidivism or determine sentencing have also shown biases. Models reflect the biases in historical crime data, leading to unfair outcomes for marginalized communities.
  3. Language and representation: AI models perpetuate dominant uses of language over less common ones and can generate biased outputs based on the language patterns in their training data. Chatbots may respond inappropriately or reinforce stereotypes when interacting with users. AI models do not yet handle diverse languages accurately, “threatening to amplify existing bias in global commerce and innovation.” AI companies rarely create opportunities and employment for speakers of these languages around the globe. The exclusion of certain people from AI systems reinforces existing power imbalances. Moreover, the work of moderating hate speech and extremist content often falls on individuals from marginalized backgrounds, leading to mental health struggles and high rates of PTSD.
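
To make the training-data mechanism concrete, here is a minimal sketch in Python. It uses entirely synthetic, hypothetical data (the groups, coefficients, and “skill” score are invented for illustration and do not come from any real hiring system) to show how a model trained on biased historical decisions reproduces that bias against equally qualified candidates:

```python
# A minimal, hypothetical sketch: synthetic data only, not a real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical skill distributions (equally qualified on average).
group = rng.integers(0, 2, n)    # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)

# Biased historical labels: past decisions penalized group B at the same skill level.
bias_logit = 1.5 * skill - 1.2 * group
hired = rng.random(n) < 1.0 / (1.0 + np.exp(-bias_logit))

# Train a model on the biased records, with group membership as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Score two equally skilled candidates (skill = 0), one from each group.
candidates = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])
# The group B candidate receives a noticeably lower predicted hiring probability,
# even though the two candidates are identical in skill.
```

Because the historical labels encode the past bias, the model scores two identically skilled candidates differently. Note that simply dropping the group feature often doesn’t fix this, since other features can act as proxies for group membership.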

These examples highlight how AI models can perpetuate discrimination and exacerbate social inequities. They underscore the importance of diversifying training data and implementing evaluation strategies to ensure fair AI systems. For more information on a human rights-based approach to AI that centers equity & inclusion, see the AI in Equality initiative’s toolbox.

2. Privacy Concerns: Another concern about AI is the erosion of privacy rights. AI technologies collect and analyze vast amounts of personal data, raising questions about consent, surveillance, and data protection. Consider, for example, the terrifying way in which the US uses biometric data and digital surveillance technologies to police migrants and asylum seekers on their journeys through Central America and Mexico. Collection of personal data infringes upon our rights to privacy, autonomy, and freedom of expression, and misuse of this data could have serious consequences. Data breaches have repeatedly exposed sensitive information. The lack of transparency and accountability in AI algorithms also poses a challenge to human rights: when decisions are based on complex algorithms, it can be difficult to hold actors accountable for the harm they cause.

The potential for AI to be used for repressive and authoritarian purposes raises serious human rights concerns as well. AI surveillance can infringe upon the rights to privacy, freedom of expression, and peaceful assembly. When deployed by oppressive regimes, these technologies can lead to surveillance states. The suppression of dissent is not a new tactic, but it becomes a scarier one with the growing availability of AI tools. Most recently, the European Court of Human Rights found that using facial recognition to locate and arrest a Russian protester violated his rights to freedom of expression and privacy.

Governments and organizations must put strategies in place to limit the repressive use of AI. Along with CSOs, academia, and industry, these actors must enforce compliance with regulations such as the EU AI Act. We must advocate for transparency. Ethical implementation frameworks are also crucial, as they can mitigate risks and promote responsible AI development. The Partnership on AI provides actionable guidance from diverse perspectives on AI. Tools like this can help ensure human rights due diligence and an inclusive economic future for AI use.

3. Workers’ and Creators’ Rights

As a content creator, I find this one highly disturbing. Data theft is a persistent practice behind AI systems, most notoriously the AI image generator Midjourney, which uses images from across the Internet to create derivative works. This poses a threat to artists and creative professionals, and the ingestion of copyrighted content by AI models raises massive ethical concerns. There is currently no “opt-in” policy or system of fair compensation for creators whose work is used by these systems.

In addition, AI has the potential to disrupt labor markets, raising concerns about workers' rights. Automation and AI-driven technologies can displace jobs, widen socioeconomic inequalities, and challenge the rights to work and livelihood. We must ensure that the benefits of AI are distributed equitably, and appropriate social safety nets are essential to protect individuals' economic rights.

4. Power Centralization

When I see more tech giants joining the AI rat race, I am plagued by the question: Do these companies really need more power and profit?

This is another concerning aspect of AI: the concentration of power and profit in the hands of a few tech billionaires. AI as it currently exists only serves to perpetuate wealth inequality and allows Big Tech to evade responsibility for the harms caused by AI systems. The lack of regulation in the industry further amplifies these issues.

This is why AI development requires comprehensive regulatory frameworks to safeguard human rights. A prominent figure in the field of AI and human rights is Timnit Gebru. Gebru founded the Distributed Artificial Intelligence Research Institute (DAIR) and Black in AI. Her work focuses on mitigating the harms of existing AI systems and advocating for a future of AI that incorporates diverse perspectives and deliberate development decisions. Her contributions highlight the agency that humans have in shaping the tools we create. Work like this gives me a grain of hope that better solutions are possible.

California and Washington state have both provided positive examples of successful AI regulation efforts. Yet we must remain aware of the ways in which regulations could also be potentially weaponized against human rights advocates, as seen in the case of Twitter. Additionally, the international regulation of AI presents challenges. It’s difficult for the United Nations to effectively oversee and enforce ethical standards across nations.

The environmental impact of AI systems is another critical concern. It’s easy for us to think that all of this infrastructure exists “in the cloud” and doesn’t impact real people. The reality is that data centers supporting AI technologies consume substantial amounts of water and energy. Again, climate impacts disproportionately affect the world’s most vulnerable. Overall, skepticism towards large tech companies is warranted. We must ensure that their monopolistic control does not further oppress marginalized communities.

Let’s build the kind of world we want to live in

The emergence of AI in various fields raises questions about the kind of world we want to inhabit. Do we want to live in a world where our everyday interactions with other human beings become automated by systems like AI? I certainly would prefer going in the other direction, a direction that values our connection to one another and the planet.

Despite the new technological landscape, many of the concerns listed above are not new, but rather old issues dressed up in new technology. Addressing them will require a multi-faceted strategy and innovative new tactics. It will be essential to:

  1. Hold organizations accountable for their actions. This involves establishing robust regulations and ethical guidelines. Guardrails are crucial to ensure ethical practices in AI development and deployment.
  2. Build a resilient civil society ecosystem that puts the human rights of individuals before technological advancement.

There are already enough systems working against the world’s most vulnerable. With every new system put in place, we must be sure that we are protecting the human rights and dignity of all who are affected.

For more information on the organizations engaging with this work, see the European AI & Society Fund’s report on opportunities and barriers to growing philanthropic engagement around AI. Stay tuned for Part 2 of this series coming soon! It will discuss whether there are ways in which AI can be ethically leveraged to promote human rights and social justice.


This perspective was contributed by Melissa McNeilly, New Tactics in Human Rights Project Manager for Digital Content Creation.