Published: 2023-04-04

Leading figures in artificial intelligence are calling for a halt to the development of powerful AI systems amid fears that humanity could be at risk.

They say the race to build AI systems is out of control, and have signed an open letter warning of the possible consequences.

Elon Musk, the CEO of Twitter, is among those calling for the training of AIs above a certain capability to be halted for at least six months.

Apple co-founder Steve Wozniak and several researchers at DeepMind also signed.

OpenAI, the developer of ChatGPT, recently unveiled GPT-4, a state-of-the-art technology that has impressed observers with its ability to do tasks such as identifying objects in photos.

The letter, from the Future of Life Institute and signed by the luminaries, asks for a temporary pause in development at that stage, and warns of the dangers that potential future, more sophisticated systems might pose.

The letter warns that AI systems with human-competitive intelligence pose profound risks to societies and humanity.

The Future of Life Institute is a non-profit organization that says its mission is to "steer transformative technology away from extreme, large-scale risks and towards benefitting life."

Mr. Musk, who owns Twitter and is CEO of the carmaker Tesla, is listed as an external adviser to the organization.

The letter says advanced AIs should be developed with great care, but that instead, "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control."

The letter warns that AI could automate jobs and flood information channels with misinformation.

Is the world ready for the impending storm of AI?
The letter follows a recent report by the investment bank Goldman Sachs, which estimated that while AI was likely to increase productivity, it could also automate millions of jobs.

However, some experts told the BBC it was very hard to predict the effect AI would have on the labor market.

More hypothetically, the letter asks: "Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us?"

"AI systems pose significant risks to democracy through weaponised disinformation, to employment through displacement of human skills, and to education through plagiarism and demotivation," said Stuart Russell, a computer science professor at the University of California, Berkeley and a signatory to the letter, to BBC News.

In the future, advanced AIs might also pose a "broader danger to human control over our society".

Prof. Russell said: "In the long term, adopting reasonable safeguards is a minor price to pay to reduce these dangers."

However, Arvind Narayanan, a professor of computer science at Princeton, said that the letter focused on "speculative, futuristic danger, neglecting the version of the problem that is actually impacting people."

In a recent blog post, quoted in the letter, OpenAI warned of the risks if an artificial general intelligence (AGI) were developed recklessly: "A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that, too."

"Coordination among AGI efforts to slow down at critical junctures will likely be important," the company wrote.

OpenAI has not commented publicly on the letter. The BBC has asked the company whether it backs the call.

Mr. Musk was a co-founder of OpenAI, though he left its board several years ago and has criticized its current direction.

The autonomous-driving features developed by his carmaker Tesla, like most similar systems, rely on AI technology.

The letter calls on all AI labs "to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4".

Governments should intervene and impose a moratorium if such a delay cannot be swiftly implemented, it says.

It also says "new and capable regulatory authorities dedicated to AI" would need to be developed.

Various proposals for the regulation of technology have recently been put forward in the US, UK, and EU. However, the UK has ruled out a dedicated AI regulator.