The Dark Side of AI: How Deepfake Pornography Violates Our Human Rights

By Hannah Lee

Published: 28th November 2024

Artificial Intelligence (AI) is advancing quickly, transforming industries and daily life in countless beneficial ways. Alongside these advancements, however, lie significant threats, particularly where AI is abused, with devastating consequences that infringe on people’s human rights. One example is the recent rise of ‘deepfake’ technology.

Deepfake technology produces highly realistic, synthetic, AI-generated content depicting people saying or doing things they never did. Although deepfake content has legitimate applications, such as in education, this form of media can be weaponised for harmful purposes, including disinformation, harassment, and non-consensual pornography (the focus of this article). These abuses raise serious human rights concerns, affecting privacy, dignity, and security globally.

Statistics show that in 2023, 98% of all deepfake videos online were pornographic, non-consensual, and publicly accessible, with 99% of those targeted being women. Once shared, this content spreads uncontrollably across social media, as a string of recent cases demonstrates:

  • Earlier this year, X (formerly Twitter) had to block searches for Taylor Swift after a surge of non-consensual deepfake pornography featuring her.
  • A Channel 4 News analysis found almost 4,000 individuals listed on the five most visited deepfake websites, including 255 British public figures.
  • In another disturbing case, Karl Marshall, 47, from Southport, pleaded guilty to creating 266 sexually explicit deepfake images of women and children between July 2023 and January 2024, which he then shared online.

Deepfake technology challenges fundamental human rights, such as our right to privacy, freedom from degrading treatment, and security of the person. The distribution of non-consensual pornographic content strips individuals of control over their personal image, creating severe psychological impacts which can ultimately lead to reputational harm, harassment, and even blackmail.

Amnesty International reports that victims of online image-based abuse experience higher levels of anxiety, depression, and PTSD. The right to security of the person is also compromised, as deepfake technology increases the risk of blackmail and stalking. The threat of deepfakes is global, although the problem is particularly prevalent in technologically advanced countries, where access to the software and to media is more widespread.

The gendered nature of deepfake abuse highlights a disturbing imbalance of power, where women disproportionately become victims. This reflects broader societal issues rooted in misogyny, where female sexuality is commodified and exploited. Studies reveal that men are predominantly responsible for creating and distributing non-consensual deepfake pornography, reinforcing a harmful cycle of digital exploitation and gender-based violence. This pattern not only perpetuates existing inequalities but also highlights the urgent need to address the cultural and systemic factors which enable such abuses.

In the UK, the Online Safety Act received Royal Assent in October 2023, criminalising the sharing of sexually explicit deepfakes without consent; in April 2024, the government announced a further offence covering their creation, regardless of whether they are ever shared. Offenders can face fines, a criminal record, and jail time if the content is distributed. However, a major weakness is that prosecution requires evidence of the perpetrator’s intent to cause harm, rather than focusing on the victim’s lack of consent. This loophole allows offenders to escape accountability, raising serious concerns about whether current legal protections are strong enough to safeguard human rights. Many advocates believe tougher regulations are essential to ensure both perpetrators and AI developers take responsibility for how these technologies are misused.

However, regulating AI abuses and deepfake production is challenging. Given the fast pace of development, laws and policies struggle to keep up. Furthermore, identifying and tracing deepfakes requires specialised detection technology and significant resources that many enforcement bodies lack.

Amnesty International condemns AI abuses and the distribution of deepfake pornography, describing it as a severe violation of human rights, particularly of privacy, dignity, and safety. The organisation advocates for stronger legal protections and regulations to prevent digital abuse and safeguard individuals, especially women, from gender-based violence online.

Moving forward, a comprehensive human rights-based approach is necessary to combat the misuse of AI and deepfakes: stricter laws, better enforcement mechanisms, and education programmes to raise awareness of the dangers of deepfakes. Governments, activists, and tech companies must work together to create a safer online environment that respects human dignity and safeguards individual rights.

Editor: Leah Russon Watkins
