AI Porn Is a Serious and Terrifying Issue

As of 2019, 96% of deepfakes on the internet were sexual in nature, and virtually all of those depicted non-consenting women. With the release of AI tools like DALL-E and Midjourney, making these deepfakes has become easier than ever before. And the repercussions for the women involved are more devastating than ever.

Recently, a teacher in a small town in the United States lost her job after her likeness appeared in an adult video. Parents of her students found the video and made it clear they didn't want this woman teaching their kids. She was immediately dismissed from her position.

But this woman never actually filmed an explicit video. Generative AI created a likeness of her face and deepfaked it onto the body of an adult film actress.
She protested her innocence, but the parents of the students couldn't wrap their heads around how a video like this could be faked. They refused to believe her.

And honestly, it’s hard to blame them. We've all seen just how good generative AI can be. This incident and many others like it prove just how dangerous AI adult content is, and if left unchecked, it could be so, so much worse.

At first glance, AI pornography might seem harmless. If we can generate other forms of content without human actors, why not this one? Sure, it might reduce work for performers, but it could also curb some of the industry's more exploitative practices.
If the AI were only used to create entirely artificial people, maybe it wouldn't be so bad. But the problem is that generative AI has mainly been used for deepfakes that convince viewers the person they're watching is a specific, real person: someone who never consented to be in the video.

Speaking of consent: by convincingly portraying women in sexual situations, perpetrators depict them in sex acts they never agreed to. That, by definition, is a form of sexual assault.

But beyond fitting that definition, does using generative AI to produce these videos cause any actual harm? For the victims involved, the consequences of appearing in these videos are numerous.

QTCinderella is a Twitch streamer who built a massive following for her gaming, baking, and lifestyle content. She also created "The Streamer Awards" to honor her fellow content creators. One of those honorees was Brandon Ewing, aka Atrioc (ay-tri-oc).
In January of 2023, Atrioc was live streaming when his viewers noticed an open browser tab for a deepfake website. The moment was screenshotted and posted to Reddit, where users found that the site hosted deepfake videos of streamers like QTCinderella performing explicit sexual acts.

QTCinderella began being harassed with these images and videos, and after seeing them, she said: "The amount of body dysmorphia I've experienced since seeing those photos has ruined me. It's not as simple as just being violated. It's so much more than that."

For months afterward, QTCinderella was constantly harassed with reminders of these images and videos; some horrible people even sent the photos to her 17-year-old cousin. And this isn't a one-off case. Perpetrators of deepfakes are known to send these videos to victims' family members, especially when they don't like what the victim is doing publicly.

The founder of NotYourPorn, a group dedicated to removing nonconsensual porn from the internet, was targeted by internet trolls using AI-generated videos depicting her in explicit acts. Then, somebody sent these videos to her family members. Imagine how terrible that must feel for her and her relatives.

The sad truth is that even when victims can discredit the videos, the harm may already be done. A deepfake can hurt someone's career at a pivotal moment. QTCinderella was able to get back on her feet and retain her following, but the schoolteacher who lost her livelihood wasn't so lucky.

Imagine someone running for office and leading in the polls, only to be targeted with a deepfake video 24 hours before the election. Imagine how much damage could be done before their team could prove the video was doctored.

Unfortunately, there’s very little legislation on deepfakes. So far, only three states in the U.S. have passed laws to address them directly.

Even with these laws, the technology makes it difficult to track down the people who create and upload these deepfakes. And because most uploaders post on personal websites rather than on social media, no regulations or content moderation limits constrain what they can share.

Since tracking and prosecuting the individuals who make this kind of content is so challenging, the onus should be on the companies that make these tools to prevent them from being used for evil.

And in fairness, some of them are trying. Platforms like DALL-E and Midjourney have taken steps to prevent people from generating the likeness of a living person. Reddit is also working to improve its AI detection system and has already made considerable strides in prohibiting this content on its platform.
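We don't know exactly how each platform implements these guardrails, but at its simplest, one layer looks like prompt screening. Here's a toy sketch in Python; the keyword lists are invented stand-ins, and real systems rely on trained classifiers rather than string matching:

```python
# Hypothetical prompt screen: refuse generation requests that ask for
# explicit content or name real people. Real platforms use ML classifiers;
# this keyword version only shows the shape of the safeguard.

BLOCKED_TERMS = {"nude", "explicit", "nsfw"}
KNOWN_REAL_PEOPLE = {"some celebrity", "some streamer"}  # stand-in denylist

def allow_prompt(prompt: str) -> bool:
    text = prompt.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return False
    if any(name in text for name in KNOWN_REAL_PEOPLE):
        return False
    return True

print(allow_prompt("a watercolor landscape at dusk"))  # True
print(allow_prompt("nsfw photo of some celebrity"))    # False
```

Even this toy version hints at the weakness: screens like this only catch what they've been told to look for, which is why determined users keep slipping past them.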

These efforts are important, but I'm not sure they'll completely eliminate the threat of deepfakes. More generative AI tools keep coming onto the scene, and each will require new moderation efforts. And eventually, some platforms won't bother, especially if looking the other way gives them an edge over well-established competitors.

And then there's the sheer influx of uploaded content. In 2022, Pornhub received over 2 million video uploads to its site. That number will only grow as AI tools generate content without a camera ever rolling. How can any moderation system keep up with that kind of volume?
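For a sense of scale, here's the back-of-the-envelope math (a rough average that assumes uploads are spread evenly across the year, which they won't be):

```python
# Rough throughput a moderation system would face at Pornhub's
# reported 2022 upload volume.
uploads_per_year = 2_000_000
per_day = uploads_per_year / 365    # ~5,479 uploads per day
per_minute = per_day / (24 * 60)    # ~3.8 uploads per minute
print(f"{per_day:,.0f} uploads/day, {per_minute:.1f} uploads/minute")
```

That's roughly four new videos every minute, around the clock, and that's before AI makes producing them even cheaper.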

The worst thing about these deepfakes is that victims can't just log off the internet. Most careers and relationships now run through it, so going offline would put victims at an enormous disadvantage in both their professional and personal lives.

And expecting anyone to leave the internet to protect themselves isn't a reasonable ask. The onus is not on the victim to change; it's on the platforms and the government to build safeguards that prevent these things from happening so easily. If all the women being harassed went offline, the trolls would win, and their tactic would prove incredibly effective. They could silence critics, or anyone else they felt like attacking.

There's another problem with generative AI tools producing so much adult content: it bakes strong biases into the models about how women should be presented. Many women report that AI tools over-sexualize them when they try to generate images of themselves.

These biases are introduced by the source of the AI's training data: the internet. Although nudes and explicit images have been filtered out of some platforms' training sets, the biases persist. These platforms have to do more than let the open internet train their AI if they want to keep the overt sexualization of women from being their default output.
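To make that concrete, here's a toy sketch of why simply dropping explicit images doesn't fix the problem. Everything here is illustrative; the tiny dataset and the crude keyword check are stand-ins, not any platform's real pipeline:

```python
# Toy illustration: filtering explicit images out of a training set
# does not rebalance how the remaining images portray women.

dataset = [
    {"caption": "woman in swimsuit posing", "explicit": False},
    {"caption": "woman portrait, glamour shot", "explicit": False},
    {"caption": "explicit photo of woman", "explicit": True},
    {"caption": "man giving a conference talk", "explicit": False},
    {"caption": "man hiking a mountain trail", "explicit": False},
]

# Step 1: the common mitigation -- drop anything flagged explicit.
filtered = [ex for ex in dataset if not ex["explicit"]]

# Step 2: measure what survives. A crude stand-in for a context classifier:
def sexualized(caption: str) -> bool:
    return any(word in caption for word in ("swimsuit", "glamour"))

women = [ex for ex in filtered if ex["caption"].startswith("woman")]
share = sum(sexualized(ex["caption"]) for ex in women) / len(women)
print(f"Sexualized share of remaining 'woman' images: {share:.0%}")  # 100%
```

The filter removed the explicit image, but every remaining image of a woman still came from a sexualized context, and a model trained on that data learns exactly that association. Fixing it means actively curating and rebalancing the data, not just deleting the worst of it.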

Deepfakes may be making headlines now, but the truth is, they've been around in spirit for a long time. Before generative AI, people used tools like Photoshop and video editing software to superimpose celebrity heads onto the bodies of adult film actors. For the most part, those doctored videos weren't convincing, but things are very different now with AI.

We're careening dangerously close to a point where we can no longer discern the real from the fake. The French postmodernist philosopher Jean Baudrillard (bo-dree-aard) warned of a moment when we could no longer distinguish between reality and a simulation.

Humans use technology to navigate a complex reality. We invented maps to guide us through vast, intricate terrain. Eventually, we created mass media to help us understand the world around us and simplify its complexity.

But there'll be a point when we’ll lose track of reality. A point when we’ll spend more time looking at a simulation of the world on our phone than we will participating in the real world around us. And we're almost there.

With generative AI, our connection to reality frays even further. Because technology can convincingly replicate reality on our devices, we are less inclined to go outside and see what's real for ourselves.

This inability of human consciousness to distinguish between what is real and what is simulated is what Baudrillard called hyperreality: a state that leaves us vulnerable to malicious manipulation, from deepfakes getting people fired to propaganda costing millions of lives.

You may remember that a couple of years ago, there were numerous PSAs, often from celebrities, warning us to keep an eye out for deepfakes. They were annoying, but ultimately, they succeeded in making the public hyper-aware of fake videos.

But not so much with deepfake adult content. Maybe it's because the PSAs about deepfakes never mentioned pornography; they addressed fake speeches by presidents and famous people instead. Or maybe it's because those who consume this content don't care whether it's real. They're OK with the illusion.

One thing is true, though: if the general public were trained to recognize deepfake pornography, its potential for harm would be limited. By being more critical consumers of information, and by reporting these harmful videos when we see them, we might be able to curb the effects of this dangerous new medium.

It's not like we're strangers to being critical of what we see and read online. When Wikipedia was first introduced, the idea that it could be a legitimate source of information was laughable. It was mocked on sitcoms and late-night television. It symbolized the absurdity of believing what you read on the internet.

That perception changed with time, deservedly so in Wikipedia's case, but for a while we maintained a healthy skepticism toward user-generated internet platforms.

The question is, can we be critical and discerning toward deepfakes while acknowledging that some content is real? Will we lose track of what’s simulation and what’s reality and just distrust whatever we see online?

Or worse, will manipulators succeed in making deepfake-inflicted suffering an everyday occurrence, until we accept it as the cost of existing online?
And is there any hope of regulation stopping the constant assault of generative AI on our well-being?

One thing has become clear since ChatGPT and DALL-E started making headlines last year: AI will replace a lot of what humans currently do. It can already convincingly replicate human communication and human design. And our inability to distinguish between human output and AI output has created a laundry list of problems that will be challenging to address.

Soon, AI will replace many of our jobs. Already, businesses are using ChatGPT's writing capabilities in their marketing and sales departments. It's even possible that we'll be watching AI-written TV shows soon. For writers, that's your dream job and your day job both vanishing overnight.

And then there are all the opportunities for deception using AI. Imagine what phishing scams will look like in a year, when scammers can easily fabricate video and audio of anyone you know. People with ill intent can create content that causes others real harm. And right now, that harm is AI-generated adult videos inflicting pain on women.

If we're unable to tell what's real and what's an AI-generated fake, humanity has a tough road ahead, and I'm not so sure any of us are ready for it.