Elon Musk Calls for Six-Month Halt to Training of AI Bots

Co-founder of OpenAI says advance of AI too dangerous without global guardrails.

In an open letter, Elon Musk—who was a co-founder of OpenAI, the creator of ChatGPT—and a bevy of tech visionaries have called for an immediate six-month pause on the training of AI systems more powerful than GPT-4.

The letter, signed by Musk, Apple co-founder Steve Wozniak and hundreds of others, was posted on the Future of Life Institute website. It warns of imminent “profound risks to society and humanity” without the establishment of global guardrails for the development of AI platforms that soon will be able to outsmart us.

“Contemporary AI systems are now becoming human-competitive at general tasks. Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the letter asked.

Musk split with OpenAI before the company launched ChatGPT and took a reported $10 billion investment from Microsoft, which immediately locked what Musk had envisioned as open-source code behind a proprietary wall.

Microsoft’s move has set off an artificial intelligence “arms race” among tech giants, including Google and Meta, which are rushing to bring their own AI bots to market. Computer science grad students also are racing to push the boundaries of novel AI platforms—Stanford researchers reportedly trained a model for about $600—in labs and dorm rooms around the world.

In this case, the “arms race” metaphor is barely a metaphor: artificial general intelligence (AGI) systems—known as “strong AI” and capable of something approaching sentient thought—already are under development, with a breakout to what is known as the technological singularity looming.

Here’s what that means: the machines will train themselves and start charting their own path. Since they already know that reducing carbon emissions is a top human priority, they may decide to get rid of carbon-based organisms altogether—hasta la vista, humans.

In a spine-chilling interview this month with New York magazine, OpenAI CEO Sam Altman conceded that Musk and his brethren of brainiacs may be right about the perils of AI.

As he started to discuss the edge of the cliff the human race now is teetering over, Altman somewhat apologetically said that the development of AI should have been a government-supervised project, much like the way the Internet emerged from the Pentagon’s DARPA skunkworks.

However, he said, the government—which has outsourced most of the space program to Musk’s SpaceX venture—doesn’t do that kind of big-think stuff anymore.

The really scary stuff came out when the interviewer reminded Altman that he wrote in 2015 that “superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”

“Yep, I still think so,” Altman responded.

“I think there’s levels of threats. Today, we can see how this can contribute to computer-security exploits, or disinformation, or other things that can destabilize society,” Altman said in the magazine interview. “Certainly, there’s going to be an economic transition. Those are not in the future; those are things we can look at now.”

“In the medium term, I think we can imagine that these systems get much, much more powerful. Now, what happens if a really bad actor gets to use them and tries to figure out how much havoc they can wreak on the world or harm they can inflict?” he continued.

“And then, we can go further to all of the traditional sci-fi—what happens with the runaway AGI scenarios or anything like that?” Altman added.

“One of the reasons that we want to talk to the world about these things now is—this is coming. This is totally unstoppable,” he said.

Altman agrees there should be US government oversight of artificial intelligence and some sort of global regulatory body.

“The thing that [should] happen immediately is just much more [government] insight into what companies like ours are doing, companies that are training above a certain level of capability at a minimum,” Altman told New York.

“I think totally banning this stuff is not the right answer, and I think that not regulating this stuff at all is not the right answer either,” he said.

We were going to ask ChatGPT to expand on that last statement, but we got distracted by a story that popped up on the NBC News website: a team of researchers from Stanford and the Chinese University of Hong Kong hooked an AI program up to an fMRI machine, a scanner that maps brain activity by tracking blood flow.

The program has been able to produce accurate images that show what the test subjects are thinking—pictures that look like they were taken by a photographer for National Geographic (video is coming, no mushrooms needed).

That’s right, Stanford is working with a Chinese university to train AI bots to read people’s minds. What could possibly go wrong?

All of the above explains why Elon Musk is racing as fast as he can at his latest startup, Neuralink, to implant computer chips in human brains—he’s doing it with monkeys now but has been badgering the government for a green light to experiment on people.

Obviously, Elon’s Plan A to deal with The Rise of the Machines is to become one of them. If Musk suddenly announces that he’ll be the first passenger on the rocket to Mars he’s building, we’ll know the SpaceX and Tesla CEO has turned to Plan B.

This also could explain why Musk apparently stopped paying the rent on all of Twitter’s office space. It’ll take at least three years—assuming Elon builds more than one rocket to the Red Planet—for the dunning notices to reach him.

Open the pod bay doors, HAL. This is where we get off.