
AI-generated deepfakes are moving faster than policy can : NPR

An image from an ad by the Republican National Committee against President Biden shows images generated by artificial intelligence. The proliferation of AI-generated images, videos and audio poses a challenge to policymakers.

Republican National Committee

This week, the Republican National Committee used artificial intelligence to create a 30-second ad imagining what President Joe Biden’s second term might be like.

It depicts a series of fictional crises, from a Chinese invasion of Taiwan to the shutdown of the city of San Francisco, illustrated with fake images and news reports. A small disclaimer in the top left states that the video was “created with AI imagery”.


The ad is just the latest instance of AI blurring the line between real and fake. Fake images of former President Donald Trump tussling with police officers have circulated in recent weeks, as have an AI-generated image of Pope Francis in a stylish puffer coat and a fake song featuring cloned voices of the pop stars Drake and The Weeknd.

Artificial intelligence is getting better and better at mimicking reality, which raises big questions about how it can be regulated. And as tech companies give anyone the power to create fake images, synthetic audio and video, and text that sounds convincingly human, even experts admit they’re stumped.

“I look at these outputs several times a day and I have a very hard time telling them apart from the real thing. It’s going to be a tough road ahead,” said Irene Solaiman, safety and policy expert at the AI company Hugging Face.

Solaiman’s work focuses on making AI work better for everyone. That includes thinking a lot about how these technologies can be misused to generate political propaganda, manipulate elections, and create fake stories or videos of things that never happened.


Some of these risks are already here. For several years, AI has been used to digitally insert the faces of unwitting women into porn videos. These deepfakes sometimes target celebrities and are sometimes used to take revenge on private individuals.

That underscores that the risks of AI lie not only in what the technology can do, but also in how we as a society respond to these tools.

“One of my biggest frustrations, which I shout from the mountaintops in my field, is that a lot of the problems we’re seeing with AI are not technical problems,” Solaiman said.

Technical solutions are struggling to keep up

There is no silver bullet to distinguish AI-generated content from human-made content.

There are technical solutions, such as detection software that can identify AI output and AI tools that watermark the images or text they produce.
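To make the watermarking idea concrete, here is a minimal sketch that hides a short tag in the least significant bits of an image’s pixels and reads it back later. It is a toy in Python using the Pillow library; the MARK tag, the function names and the LSB scheme are illustrative assumptions, and the watermarks AI companies actually deploy are designed to be far more robust to cropping, compression and editing.

```python
# Toy watermark sketch: hide a short bit pattern in the least significant
# bits (LSBs) of the red channel, then read it back to check for the mark.
# Assumes: pip install Pillow. MARK and the whole scheme are illustrative.
from PIL import Image

MARK = "AI"  # hypothetical tag to embed


def embed_mark(src_path, dst_path, mark=MARK):
    """Write each bit of `mark` into the red-channel LSB of consecutive pixels."""
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    width = img.size[0]
    bits = [int(b) for byte in mark.encode() for b in f"{byte:08b}"]
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | bit, g, b)  # overwrite the red LSB
    img.save(dst_path, "PNG")  # lossless format, so the hidden bits survive


def read_mark(path, length=len(MARK)):
    """Recover `length` bytes from the red-channel LSBs."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    width = img.size[0]
    bits = [pixels[i % width, i // width][0] & 1 for i in range(length * 8)]
    return bytes(
        int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)
    ).decode(errors="replace")
```

A detector would call read_mark and compare the result to the expected tag. Note that simply re-saving this image as a JPEG would destroy the hidden bits, which is exactly why real watermarking schemes are much harder to build.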

Another approach goes by the clunky name of content provenance. The aim is to make it clear where digital media, both real and synthetic, come from.


The goal is to let people easily “identify what type of content it is,” said Jeff McGregor, CEO of Truepic, a company that works on digital content verification. “Was it made by a human? Was it made by a computer? When was it made? Where was it made?”
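Below is a toy illustration of that idea, in Python with the cryptography package: the creating device or tool signs a hash of the file at creation time, and anyone can later check the signature against the creator’s public key. The file name photo.png and the function names are hypothetical, and real provenance systems, such as the C2PA standard, attach much richer, structured claims than a bare signature.

```python
# Toy content-provenance sketch, not the real C2PA format: the creating
# device or tool signs a hash of the media file, and anyone can later
# verify that signature with the creator's public key.
# Assumes: pip install cryptography. File and function names are hypothetical.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_media(path, private_key):
    """Hash the file's bytes and sign the digest."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return private_key.sign(digest)


def verify_media(path, signature, public_key):
    """Return True only if the file is byte-identical to what was signed."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False  # file was edited after signing, or the key doesn't match


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()           # held by the camera or AI tool
    sig = sign_media("photo.png", key)           # attached as metadata on export
    print(verify_media("photo.png", sig, key.public_key()))  # -> True
```

The key property matches what McGregor describes: if even one byte of the file changes after signing, verification fails, so a valid signature ties the media back to who or what produced it.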

But all of these technical answers have shortcomings. There is still no universal standard for identifying real or fake content. Detectors don’t catch everything and need to be constantly updated as AI technology advances. And open-source AI models don’t have to include watermarks at all.

Laws, regulation, media literacy

For this reason, those working on AI policy and security say a mix of responses is needed.

Laws and regulations need to play a role in at least some of the highest-risk areas, said Matthew Ferraro, an attorney at WilmerHale and an expert on legal issues surrounding AI.

“Those are likely to be non-consensual deepfake pornography, or deepfakes of election candidates or campaign workers in very specific contexts,” he said.

Ten states already ban some types of deepfakes, mostly pornography. Texas and California have laws banning deepfakes targeting candidates for office.

Copyright is also an option in some cases. That is what Universal Music Group, the label for Drake and The Weeknd, invoked to get the song imitating their voices pulled from streaming platforms.

When it comes to regulation, the Biden administration and Congress have signaled their intention to act. But as with other technology policy issues, the European Union is leading the way with its forthcoming AI Act, a set of rules meant to put guardrails on how AI is used.

Meanwhile, technology companies are already making their AI tools available to billions of people and integrating them into the apps and software many of us use every day.

That means, for better or worse, sorting fact from AI fiction requires people to be savvier media consumers, though it doesn’t mean reinventing the wheel. Propaganda, medical misinformation and false claims about elections are problems that predate AI.

“We should look at the various ways to mitigate these risks that we already have and think about how we can adapt them to AI,” said Arvind Narayanan, a computer science professor at Princeton University.

This includes efforts like fact-checking and asking yourself whether what you’re seeing can be corroborated, which Solaiman calls “people literacy.”

“Just be skeptical, review anything that might have a major impact on your life or democratic processes,” she said.
