The Growing Phenomenon of Deepfakes
Digitally edited pornographic videos featuring the faces of non-consenting women are drawing tens of millions of visitors to the websites that host them. These videos, known as deepfakes, are created with AI software that maps a person’s face onto an existing video and mirrors the original facial expressions. According to studies, almost 96% of deepfakes are sexually explicit and feature women who did not consent to the creation of the content.
Deepfake creators use online chat platforms to advertise their videos for sale and to offer custom creations. Initially, most deepfake videos used footage of female celebrities, but creators now also offer to make videos featuring anyone a paying customer requests.
Rampant Abuse and Misuse of Deepfakes
The ease with which deepfakes can be created has led to rampant abuse and misuse. Creators sell access to libraries of thousands of videos for subscription fees as low as $5 a month, and some deepfake websites even accept payment by Visa and Mastercard.
Only four US states have passed legislation specifically targeting deepfakes. People depicted in deepfakes can also ask Google to remove pages containing “involuntary fake pornography” from Search results. Even so, controlling the distribution of, and access to, such content remains incredibly challenging.
The Impact on Individuals and Society
The proliferation of deepfakes has significant consequences for individuals and for society at large. It harms victims of non-consensual pornography by violating their privacy and humiliating them publicly.
Moreover, because AI-generated content can influence the news cycle, there is a legitimate fear that deepfakes could be used to spread fake news or political propaganda. For instance, some political opponents of Donald Trump used AI image generators to create deepfakes of him, which were then shared on social media.
Midjourney Ends Free Trial of AI Image Generator
Midjourney CEO David Holz recently announced the end of free trials of the company’s AI image generator, citing “extraordinary demand and trial abuse.” Midjourney’s generator had been used to create deepfakes of Donald Trump and Pope Francis, raising concerns that bad actors could use it to spread misinformation.
The company acknowledged that it has struggled to establish content policies and hopes to improve its AI moderation to screen for abuse. Some developers impose strict rules on image creation, while others have relatively loose guidelines. There is also concern that generated images may effectively copy existing works, since the models draw on existing images as reference points.
Influence of AI-generated Images on Public Perception
Recently, AI-generated images depicting the arrest of former US President Donald Trump went viral on social media. The images, created by a British journalist, were meant as a joke and were not intended to deceive anyone. Despite their obvious fakery, they garnered over 5 million views and elicited a strong emotional reaction from the public.
The incident highlights the power of attention and spectacle in shaping public perception and generating hype. The growing influence of AI-generated content blurs the line between reality and hyperreality. Notably, shock is a finite resource: even if Trump’s arrest does happen, it may not provoke the emotional response that the fake images did.
In conclusion, the rise of deepfake videos has significant consequences for individuals’ privacy and raises larger questions about the use of technology in society. There is no easy solution to this problem, but continued efforts toward awareness and regulation are vital to fully address its impact.
Image Source: Wikimedia Commons