Tesla and SpaceX CEO Elon Musk has been waging a battle for the last several months over what he called “woke” artificial intelligence, a fight that appears to have factored into his call for a six-month pause in the development of next generation AI systems.
Musk was one of several signatories to a letter this week that warned of advanced AI technology that could pose “profound risks to society and humanity.” The letter said one of those risks is that AI might be used to “flood our information channels with propaganda and untruth.”
The letter was signed by several notable technology experts, and it’s not clear who might have pushed for the inclusion of that specific phrase. But it jibes with the public fight Musk has been having since late last year over the ability of AI to constrain what people can say and read on digital platforms – a fight that involves a company Musk had a role in launching.
In 2015, Musk co-founded OpenAI, the company that released GPT-4 this month, a few weeks before the letter was published. GPT-4 is the latest version of the language model underlying the company’s ChatGPT tool, which takes written prompts and generates human-sounding responses.
Musk left OpenAI’s board in 2018 and has said one reason he departed was that the company was chasing profits instead of serving as an open-source “counterweight” to Google.
“Now it has become a closed source, maximum-profit company effectively controlled by Microsoft,” Musk tweeted in February. He was referring to the $10 billion OpenAI received from Microsoft, an infusion that OpenAI CEO Sam Altman has defended by noting that Microsoft does not sit on the company’s board and does not control it in any way.
But Musk’s opposition to OpenAI went beyond its funding model. Late last year, Musk made it clear he opposes the way OpenAI has been developing its AI chatbot.
In December, Altman defended rules designed to limit ChatGPT’s ability to produce controversial or insensitive outputs. “‘AI needs to do whatever I ask’ and ‘I asked the AI to be sexist and it was, look how awful!’ are incompatible positions,” Altman tweeted.
Musk tweeted in reply, “The danger of training AI to be woke – in other words, lie – is deadly.”
In February, Musk had a similar reaction when a Musk ally tweeted that ChatGPT lists former President Trump and Musk himself as “controversial” figures, while President Biden and Bill Gates are not.
Musk replied by tweeting, “!!”
Also in February, Musk replied to a tweet showing that ChatGPT declined to write a poem about the positive attributes of Donald Trump, saying it could not produce content that is biased or partisan, yet was willing to write one about President Biden. “It is a serious concern,” Musk replied.
OpenAI has a set of rules for using ChatGPT that get to the heart of Musk’s complaint about a “woke” AI system. According to the company, its tools can’t be used to generate “hateful, harassing, or violent content.” That includes content that “expresses, incites, or promotes hate based on identity,” “intends to harass, threaten, or bully” someone, or “promotes or glorifies violence or celebrates the suffering or humiliation of others.”
Just days after Musk tweeted “!!,” press reports said he was recruiting AI experts to build a rival chatbot of his own, one without what he considers “woke” restrictions.
A spokesperson for Musk at SpaceX declined to respond to a request for comment for this story. But one policy watcher in Washington agreed that Musk’s public battle against “woke” AI appears to be a significant factor in his call for a pause on AI development.
“Elon has been on the front line of the Twitter files, so he’s seen how bad the censorship can be,” said Jake Denton, research associate in the Heritage Foundation’s Tech Policy Center. “This is just so evident to everyone in this space… that [AI tech] is exceeding the pace of our ability to control it.”
Denton said that while AI will have countless applications in the future, the early applications most people are seeing today are tools such as ChatGPT.
“The consumer-known issue is the ChatGPT bias,” he said. “It’s obviously on a path to replace search. The average person will soon go to a chat-based AI system rather than a search bar.”
“And that means the response, the information that they get when they enter a search query, is going to be… a curated thing with the restrictions of the AI company reflected in that answer,” he added. “That’s a major danger.”
The letter signed by Musk and others called on governments to enforce a pause on AI research, a position that conservative groups such as the Heritage Foundation, which do not normally look to government for solutions, seem inclined to support given the evidence of bias in current AI systems.
“I think regulation is absolutely what’s needed, government intervention in some capacity,” Denton said. “I don’t think we should move forward without such a thing. Our future shouldn’t be decided by unelected elites in Silicon Valley.”