Victims of explicit deepfakes can now take legal action against people who create them

Well, this right here is the kind of thing we're talking about when it comes to deepfakes. This video was uploaded by McAfee, and it shows an example of a deepfake generated by AI. As you can see, it shows someone who looks like Taylor Swift giving away cookware. This is their example of the kind of video you just shouldn't trust, and the professor I spoke with at UMD is working on software that can help you tell the difference.

"The danger can range anywhere from fun to a serious issue, like manipulating opinions in a democracy. And those are the things that actually make us more worried."

Nirupam Roy is an assistant professor at the University of Maryland, College Park. He agrees with national security experts that deepfakes are a growing issue. A deepfake is the manipulation of a person's likeness in a video that can be used to spread malicious or false information. To combat that problem, Roy and his research and development team of four students are developing TalkLock, a cryptographic QR-code-based system that can verify whether content has been edited from its original form.

Here's an example of how TalkLock works. If I'm about to give a live speech, the app will generate a QR code like this demo right here. As I keep talking, the code keeps changing. The QR code is displayed next to the person who is speaking while the app records the speaker's audio.

"Anyone can trigger a verification. They can take content with our TalkLock QR code, and our server can verify whether the speech in the content and the QR code showing on the content match or not. It can authenticate whether the speech has been manipulated."

It also works for original content that a creator wants to make sure can't get manipulated.

"They can upload this to the TalkLock server, and the TalkLock server will process the data, create a dynamic QR code and embed it in every frame of that media content."

His team of students says we should be concerned about deepfakes, because creating one isn't as hard as you may think.

"These days, you can just download an app on your phone and then just tell it, 'OK, change this video and swap the face with this person.'"

He says creating this kind of software is important because deepfakes could lead to life-changing consequences in today's world. The professor told me he is looking to roll out the first free version of this for Android and iPhone by the beginning of the summer.
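For readers curious how a "code that keeps changing as you speak" could let a server catch edits, here is a minimal sketch in Python of one way such a scheme might hang together. It is an assumption-laden illustration, not TalkLock's actual protocol: the chunk size, key handling, genesis value and tag format are all invented for the demo. The key idea it shows is chaining, so that altering any chunk of audio invalidates every displayed code from that point onward.

```python
# Hypothetical sketch of a rolling, chained-MAC scheme in the spirit of
# TalkLock. Audio is split into short chunks; each chunk's tag depends on
# the previous tag, so the on-screen code changes as the speaker talks and
# any mid-stream edit breaks all subsequent tags.
import hmac
import hashlib

def rolling_tags(audio_chunks: list[bytes], key: bytes) -> list[str]:
    """Produce the sequence of payloads a live QR display would cycle through."""
    tags = []
    prev = b"talklock-demo-v0"  # invented genesis value for the chain
    for chunk in audio_chunks:
        digest = hmac.new(key, prev + chunk, hashlib.sha256).digest()
        tags.append(digest.hex()[:16])  # truncated so the payload stays scannable
        prev = digest
    return tags

def verify(audio_chunks: list[bytes], key: bytes, seen_tags: list[str]) -> bool:
    """Server-side check: recompute tags from the recorded audio and compare
    them against the codes that appeared on screen in the recording."""
    return rolling_tags(audio_chunks, key) == seen_tags

# Usage: tampering with one chunk makes verification fail.
key = b"shared-demo-key"  # assumed pre-shared; real key distribution is unknown
original = [b"chunk-0", b"chunk-1", b"chunk-2"]
tags = rolling_tags(original, key)
assert verify(original, key, tags)
assert not verify([b"chunk-0", b"EDITED!", b"chunk-2"], key, tags)
```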
Updated: 2:36 PM CDT May 19, 2025

In recent years, people ranging from Taylor Swift and Rep. Alexandria Ocasio-Cortez to high school girls around the country have been victims of non-consensual, explicit deepfakes — images where a person’s face is superimposed on a nude body using artificial intelligence.

Video above: Maryland professor working on software to identify deepfakes


Now, after months of outcry, a federal law criminalizing the sharing of those images is finally here.

President Donald Trump signed the Take It Down Act in a ceremony at the White House on Monday. In addition to making it illegal to share nonconsensual, explicit images online — real or computer-generated — the law also requires tech platforms to remove such images within 48 hours of being notified about them.

The law will boost protections for victims of revenge porn and nonconsensual, AI-generated sexual images, increase accountability for the tech platforms where the content is shared and provide law enforcement with clarity about how to prosecute such activity. Previously, federal law prohibited creating or sharing realistic, AI-generated explicit images of children. But laws protecting adult victims varied by state and didn’t exist nationwide.

The Take It Down Act also represents one of the first new U.S. federal laws aimed at addressing the potential harms from AI-generated content as the technology rapidly advances.

“AI is new to a lot of us and so I think we’re still figuring out what is helpful to society, what is harmful to society, but (non-consensual) intimate deepfakes are such a clear harm with no benefit,” said Ilana Beller, organizing manager at progressive advocacy group Public Citizen, which endorsed the legislation.

The law passed both chambers of Congress nearly unanimously, with only two House representatives dissenting, in a rare moment of bipartisan consensus. More than 100 organizations, including nonprofits and big tech companies such as Meta, TikTok and Google, also supported the legislation.

First lady Melania Trump threw her support behind the effort, too, lobbying House lawmakers in April to pass the legislation. And the president referenced the bill during his address to a joint session of Congress in March, during which the first lady hosted teenage victim Elliston Berry as one of her guests.

Texas Sen. Ted Cruz and Minnesota Sen. Amy Klobuchar first introduced the legislation last summer.

Months earlier, a classmate of Texas high schooler Berry shared on Snapchat an image of her that he’d taken from her Instagram and altered using AI to make it look like she was nude. Berry wasn’t alone — teen girls in New Jersey, California and elsewhere have also been subject to this form of harassment.

“Every day I’ve had to live with the fear of these photos getting brought up or resurfacing,” Berry told CNN last year, in an interview about her support for the Take It Down Act. “By this bill getting passed, I will no longer have to live in fear, knowing that whoever does bring these images up will be punished.”

Facing increased pressure over the issue, some major tech platforms had taken steps to make it easier for victims to have nonconsensual sexual images removed from their sites.

Some big tech platforms, including Google, Meta and Snapchat, already have forms where users can request the removal of explicit images. And others have partnered with the nonprofit organizations StopNCII.org and Take It Down, which facilitate the removal of such images across multiple platforms at once, although not all sites cooperate with the groups.

Apple and Google have also made efforts to remove AI services that convert clothed images into manipulated nude ones from their app stores and search results.

Still, bad actors will often seek out platforms that aren’t taking action to prevent harmful uses of their technology, underscoring the need for the kind of legal accountability that the Take It Down Act will provide.

“This legislation finally compels social media bros to do their jobs and protect women from highly intimate and invasive breaches of their rights,” Imran Ahmed, CEO of the non-profit Center for Countering Digital Hate, said in a statement to CNN. “While no legislation is a silver bullet, the status quo—where young women face horrific harms online—is unacceptable.”

Public Citizen’s Beller added that it’s also “important to signal as a society that this is unacceptable.”

“If our federal law is passing a law that says, this is unacceptable and here are the consequences, that sends a clear signal,” she said.