‘What you see is not real.’ Social media and internet platforms announce new AI safety measures for political ads

Artificial Intelligence warnings before 2024 elections

Seeing is believing — or is it?

Case in point: an AI-generated post featuring what looks like actor Morgan Freeman.

But it’s not.  

“What you see is not real,” the figure in the post says. “In fact, in contemporary terms, it is not.”

A 5 EYEWITNESS NEWS crew showed University of Minnesota students Brandon Jilek and Maggie Chiu our previous story about deepfakes and the political landscape.

One AI-generated post in the story shows former President Donald Trump struggling with police in New York City.

That never happened.

Another post shows President Biden singing ‘Baby Shark.’

That never happened either.

“It’s hard, I guess, to fact-check when artificial intelligence can make something that realistic,” Jilek says.

“Like there are ones that are super-silly, like the Baby Shark one,” Chiu adds. “I think most people would consider that impossible, or just like funny.”

This week, Meta, the parent company of Facebook and Instagram, said political ads running on its platform will need to disclose if they were created using AI.

Under the new policy, labels acknowledging the use of AI will appear on users’ screens when they click on ads.

The rule is set to take effect sometime after the New Year.

“Our democracy could be at risk if people are not able to distinguish between real and not real,” declares Manjeet Rege, the Director of the Center for Applied Artificial Intelligence at the University of St. Thomas.

Rege applauds the move and says Meta is going further, using AI algorithms to detect AI-generated content that wasn’t disclosed.

“They kind of detect, hey, this is probably AI-generated, but they never disclosed it,” he explains. “Then you get flagged for it. Eventually, if you repeat that, you will be banned from the platform.”

On Tuesday, Microsoft unveiled a tool that allows political campaigns to insert a digital watermark into their ads.

The watermarks are intended to help voters understand who created the ads while ensuring the ads can’t be digitally altered by others without leaving evidence.

“A lot of people will disregard that,” says David Schultz, a political science professor at Hamline University. “They’ll strip whatever disclaimers from it, and still push it out onto social media.”

Schultz says he worries that bad actors will find workarounds.

“Given how intense the competition, how polarized the country is, there’s no question we’re going to have an incredible problem regarding distorted or falsely created videos,” he explains.

The government is also looking into AI and political ads.

Legislation in the U.S. House would require candidates to label any ad created by AI that runs on any platform.

A second bill would require watermarks on synthetic images and make it a crime to create unlabeled deepfakes inciting violence or depicting sexual activity.

Jilek and Chiu say they hope these new steps will make a difference.

“I think alerting people is a good thing, it’s almost a must,” Jilek says. “We’re coming to that era. I feel like this is a new thing, not a lot of oversight with AI.”

“I feel like if you weren’t notified ahead of time, you might just easily be led to believe it’s real,” Chiu adds. “At least giving that warning ahead of time, the consumers aren’t exposed to the AI without being aware of that, without their consent, per se.”