Government, industry leaders voice concerns about potential dangers of artificial intelligence
This is Clifton Burriss’ happy place — doing online research in a quiet classroom.
“It’s not some omniscient superbeing,” the University of St. Thomas data science grad student said. “It’s literally just a probability machine.”
Burriss, 33, is keeping close tabs on the world of artificial intelligence, and chatbot programs like ChatGPT.
“We’re kind of looking at misinformation, manipulation of different things,” they said. “You know, it would kind of make a lot of sense to kind of put the reins on some of the technology and make sure it doesn’t go into the wrong hands.”
That kind of thinking made recent headlines, when hundreds of researchers, executives and engineers signed a statement by the Center for AI Safety — a California non-profit — about the potential dangers of artificial intelligence.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.
Minnesota Congressman Dean Phillips said he added his name to the online declaration.
“We have a nascent industry coming to the U.S. Congress, essentially asking to be regulated,” Phillips said. “Right now, we have a dearth of understanding in the U.S. government and a lot of catching up to do. I think that was a profound statement that is a call to action.”
We first introduced you to ChatGPT shortly after it was unveiled last November.
RELATED: Cyber experts discuss artificial intelligence program ‘ChatGPT’
The program, trained on vast amounts of text gathered from across the web, generates summaries and answers for users.
“It’s effectively a system that’s predicting the outcome of a prompt,” David Nguyen, a Fellow at the U of M Technological Learning Institute told us at the time. “It’s regurgitating something new, so it can be perceived as original thought.”
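The “probability machine” idea Burriss and Nguyen describe can be shown with a toy sketch. This is not how ChatGPT is actually built — real systems use neural networks trained on enormous datasets — but the core mechanism, predicting the most likely next word from observed patterns, can be illustrated with a simple word-pair counter over a made-up sample sentence:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a small
# sample text, then predict the most likely continuation. Real chatbots
# use far more sophisticated models, but the underlying idea -- choosing
# the next token by probability -- is the same.
text = "the cat sat on the mat and the cat slept on the rug"
words = text.split()

# For each word, tally the words that immediately follow it.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" -- it follows "the" most often
```

Scaled up from one sentence to billions of documents, and from word pairs to deep statistical patterns, this is the sense in which a chatbot’s output “can be perceived as original thought” while remaining a prediction.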
ChatGPT can respond to a prompt with poetry, songs and essays.
But lawmakers and industry leaders are voicing concerns that AI programs could potentially be used to build military weapons, launch cyberattacks and flood the web with disinformation.
“Imagine a device that has all the intelligence from the beginning of time to this very moment, of every single human being — but lacking in a moral compass,” Phillips explained.
He’s calling for bipartisan oversight hearings as soon as possible.
Lawmakers are also discussing the notion of a new federal agency that would oversee digital platforms and AI products.
Other ideas include self-regulation, a slowdown in AI development, or an international regulator, much like the U.N.’s nuclear watchdog, the International Atomic Energy Agency.
The Center for AI Safety is urging global leaders to develop international safeguards that keep the technology out of the hands of bad actors.
One of the big concerns is that there are groups or individuals out there who are already ahead of the game — creating deepfakes, for example.
“AI recorded a Drake song — AI wrote a Drake song with his voice,” declared John Abraham, an engineering professor at the University of St. Thomas. “I’ve listened to it, and it’s indistinguishable to my 49-year-old ears from the actual.”
Abraham said the government should consider working with tech companies to develop what he calls ‘good AI.’
“We want people to be working on developing AI in a responsible manner such that AI is aligned with us,” Abraham said. “Hopefully that will set the stage for AI being a help and a partner with us and not be a competitor with us.”
Burriss hopes these efforts will stave off some of the scenarios about a dark future for AI — and the people using it.
“I have to imagine a world where humanity knows when to step up and prioritize themselves over the goals that they believe their technology can help them achieve,” Burriss said. “I think there are moments like that that have kept us alive to this very day.”