People need to change how they’re thinking about regulating artificial intelligence, according to a prominent expert in the field, who pushed back on an idea gaining traction among lawmakers to create a new government agency to regulate AI.
“Regulation is a really hard question,” Andres Sawicki, a professor of law and director of the business of innovation, law, and technology (BILT) concentration at the University of Miami, told Fox News Digital. “The topic of AI is too big to be handled in one big coherent manner.”
Rather than tackling AI in a sweeping, comprehensive way, Sawicki recommends a more pragmatic, piecemeal approach.
“Think about concrete things that AI is impacting — for example, copyright and patent issues,” he said. “Look specifically and concretely at effects the technology is having, the impact of AI on this or that issue. There shouldn’t be a Department of AI to handle this in one big swoop.”
DEMOCRATIC SENATOR PROPOSES NEW FEDERAL AGENCY TO REGULATE AI
Sawicki’s comments come as the idea of a new regulatory agency specifically for AI is gaining momentum on Capitol Hill. Last month, for example, Sen. Michael Bennet, D-Colo., proposed legislation that would create a new federal agency to regulate AI.
Days before Bennet’s proposal, OpenAI CEO Sam Altman testified to a Senate Judiciary Subcommittee on the need for government oversight of AI technologies. At the same hearing, multiple senators from both parties supported the idea of a federal AI agency to regulate the transformative technology.
One apparent reason for Sawicki’s hesitation about such an idea is that no one knows what’s coming next.
“If I had to use one word to describe this area, it’s uncertainty,” he said. “The technology is very impressive right now, but feels like we’re relatively early in terms of industrial organization and geopolitical implications. I would caution that how things look today is likely not how they’ll look in six months or a year, let alone five years. The leaders of AI today may not be the leaders tomorrow. Amid such uncertainty, the goal should be to foster openness and competitiveness.”
Sawicki echoed the concerns of other AI experts, such as DeepAI founder Kevin Baragona, who recently told Fox News Digital that he has doubts about the federal government’s ability to address AI and that insiders aren’t any better prepared for what’s coming than the average consumer.
One key question that has many observers concerned is whether AI will ultimately be a force that hurts or helps humanity. Last month, tech industry leaders, scientists and professors issued a new warning shared by the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
According to Sawicki, concerns about AI are legitimate — as is optimism about its potential benefits.
“You can imagine a ‘Terminator’ future of drones and robots deciding humans interfere with their goals and should be eliminated — that’s pretty unlikely or at least far off into the future, and it’s hard to imagine that scenario using the current state of the technology,” he said. “The technology also holds great promise across multiple arenas. For example, in the field of education, there’s the potential for people to have immense access to knowledge, such as studying with a live chatbot to answer questions in real life. We need to focus on those kinds of opportunities while keeping in mind the potential for bad outcomes.”
Sawicki added that humans creating and operating AI are at least as much a concern as AI itself, saying humans effectively serve as “mediators” between AI and the physical world and can act malevolently.
NEXT GENERATION ARMS RACE COULD CAUSE ‘EXTINCTION’ EVENT AKIN TO NUCLEAR WAR, PANDEMIC: TECH CHIEF
He also argued AI will cause economic disruptions but likely not a complete societal transformation.
“Phone operators were replaced by digital technology,” he explained. “In the future it’s plausible AI will replace some jobs, but it’s not going to replace humanity. The Industrial Revolution created job losses, but most people would say it was worth it.”
When asked about the importance of the U.S. achieving AI supremacy over competitors like China, Sawicki said it would be important but cautioned that the dynamics aren’t the same as those of a traditional arms race between rival countries.
“Being the leader of a powerful emerging technology would be to our advantage, but looking at the AI race, it’s not the best way to think about it like the race for nuclear weapons.”