Experts, lawmakers voice concerns on rise of deepfake AI technology
At the University of St. Thomas, Data Science Professor Chih Lai is using images of fish embryos to teach his students about artificial intelligence and deepfakes.
“So, the one on the left is the real one, the one on the right is the fake one,” he says, pointing to two enlarged images. “It is very interesting for many people to create something just for fun to begin with.”
The right-hand image was generated by an AI program the professor created himself.
Parts of it are slightly fuzzy; Lai calls the result "a mirror image of reality."
“AI tries to make something similar to the real,” he says. “But it is definitely not that real.”
It’s not hard to find deepfakes on the internet.
One deepfake post shows an AI-generated video of actor Morgan Freeman explaining what you're looking at.
“What you see is not real, at least in contemporary terms, it is not,” the avatar declares. “What if I were to tell you I’m not a human being, would you believe me?”
That old adage — ‘seeing is believing’ — turned on its head.
“When we can’t trust our own senses, what we see and what we hear — it could not be real, that is incredibly destabilizing for society,” says Senator Erin Maye Quade (DFL-Apple Valley).
Deepfakes are already part of the political landscape.
One post shows former President Donald Trump struggling with police in New York City — something that never happened.
Another, with President Biden singing ‘Baby Shark’ — also untrue.
"The technology ends up in the hands of bad actors," explains Manjeet Rege, director of the Center for Applied Artificial Intelligence at the University of St. Thomas. "There are fake images and then there are fake videos, and both end up having a similar impact on people. They are relying on information, and they believe that."
The race is on, in Minnesota and across the country, to put safeguards on artificial intelligence, the technology used to create deepfakes.
President Biden recently met with industry leaders, who agreed to voluntary restrictions, including security testing and transparency measures to identify AI-generated materials.
Rege fears that many voters are treating deepfakes as an accurate source of information.
“People have to make a decision about let’s say, two candidates running for office,” he says. “And they do not have enough time to basically come at the conclusion that this is not truthful information about one candidate.”
Recently, Governor Tim Walz signed a new law prohibiting the non-consensual sharing of sexual deepfake images and the use of deepfakes to interfere with an election within 60 days of polls opening.
“You can make any person appear to say anything. Imagine what a person with nefarious intentions could do,” Maye Quade, one of the sponsors of the measure, says. “I’m not even talking about a regular citizen in Minnesota, but an international country that doesn’t have good intentions, or make leaders say things they didn’t mean to say.”
Lai says there are concerns about deepfakes being used in political campaigns.
“I think everyone knows elections can generate or produce lots of damage to the political candidates,” he notes.
Violators of the Minnesota deepfake law could face up to five years in prison and a $10,000 fine.
Lai suggests if you are trying to spot a deepfake, slow the video down.
Fake avatars, he says, often don't blink at all, and if you look closely, their lip movements often don't match the audio.
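Lai's blink test can even be automated. A common heuristic in deepfake-detection research (not described in this article) is the "eye aspect ratio": from six landmark points around each eye in every video frame, compute the ratio of the eye's vertical opening to its horizontal width. The ratio stays roughly constant while the eye is open and dips toward zero during a blink, so a talking-head video whose ratio never dips is suspicious. The sketch below is illustrative only; it assumes the per-frame landmark coordinates have already been extracted by a face-landmark library, and the 0.21 threshold is a conventional starting point, not a calibrated value.

```python
import math

def eye_aspect_ratio(pts):
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks.

    pts[0] and pts[3] are the horizontal eye corners; pts[1], pts[2]
    are the upper-lid points and pts[5], pts[4] the lower-lid points.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(pts[1], pts[5]) + dist(pts[2], pts[4])
    horizontal = dist(pts[0], pts[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.21):
    """Count blinks in a per-frame EAR series.

    Each dip below the threshold (after having been above it) counts
    as one blink. Zero blinks over a long clip of a talking face is
    the red flag Lai describes.
    """
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    return blinks
```

In practice the landmark coordinates would come from a face-tracking library applied frame by frame; the counting logic above is the easy part, and real detectors combine it with many other cues.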
Lai says he believes restricting how deepfakes are used is a step in the right direction.
But he worries that as the technology gets more advanced, it will be harder to tell what’s real.
“Frankly, I don’t think there’s a good tool right now people can use to detect the deepfake,” Lai declares. “So, I think using the law is one way to discourage this kind of criminal behavior. I’m not saying you can completely prevent deepfakes from happening, but that is a step in the right direction.”