Biden executive order seeks to rein in AI, but much work remains
The good and bad of artificial intelligence are very much a concern at the White House.
“AI is all around us, much of it making our lives better,” President Biden told reporters. “AI devices are being used to deceive people. Deepfakes are used to generate video and audio, smear reputations, spread fake news, and commit fraud.”
The president on Monday issued an executive order calling for oversight of what ABC News describes as a nearly half-trillion-dollar industry used by some of the nation’s largest companies, including Google and Amazon.
“They are trying to reduce some risk by asking companies to share their testing data, to use watermarks on documents and images that are created by AI,” explains John Abraham, an engineering professor at the University of St. Thomas, who says he uses AI in his research.
The order demands that companies share that data with the federal government before the new capabilities become available to the public.
The watermarks are meant to alert consumers when they encounter AI-generated content, such as a deepfake.
“Something needed to be done,” declares Manjeet Rege, the director of the Center for Applied Artificial Intelligence at the University of St. Thomas. “Up until now, there were not any guardrails.”
Rege says he hopes the order will create a pathway for federal laws regulating AI.
“It is important that the Senate acts and creates laws,” he notes. “Once a law is in place, then things become illegal and legal. This is the first step towards that.”
Minnesota already has a ‘deepfake’ law, which prohibits the non-consensual sharing of sexual deepfake images and the use of deepfakes to interfere with an election within 60 days of polls opening.
Violators could face up to five years in prison and $10,000 in fines.
But what about Biden’s Executive Order?
Experts say the rules will function only as suggestions, not mandates, and companies can decide whether to follow them.
“There are no teeth in this,” Abraham explains. “These are recommendations. Companies can ignore them if they want. This is the government’s effort to try to stay ahead of the technology that’s being developed, and to ensure some safety as it is being developed and rolled out.”
Tony Chiappetta, the founder of CHIPS, a White Bear Township cybersecurity firm, says he believes AI guardrails should have been established a long time ago.
He worries about bad actors who won’t be concerned with government regulation.
“There’s already independent AI products out there in the dark web that hackers use to take advantage of things out there,” Chiappetta says.
Still, he says there are compelling reasons to get federal AI laws in place, ones that would cross state lines.
“Across state lines might really seem to cause a challenge, unless there’s some kind of federal jurisdiction, where it doesn’t matter which state you’re in,” Chiappetta explains. “Is the person that committed the crime, are they in California or Minnesota? Are the servers where these artificial intelligence instances, are they held in a data center in California or Minnesota?”
The Executive Order also calls for biotechnology firms to take appropriate precautions when using AI to create or manipulate biological material.
The safety tests the order asks of AI developers are intended to ensure that new products don’t pose a major threat to users or the wider public.
Abraham says he tells his students to get educated about AI and to learn to use it as a tool to help them become more productive.
Federal regulation or not, he says he and his colleagues worry about the future — and bad actors abroad.
“Their hope is that we develop good AI before bad people develop bad AI. There’s an arms race happening for AI right now, and let’s hope the good guys win,” Abraham declares. “Companies and countries are racing forward. There’s no incentive to stop. We’re crossing our fingers, and hoping it all works out in the end, but we’re really in uncharted territory.”