Tech celebrities call for a temporary halt to ‘risky’ AI development

Advances in artificial intelligence (AI) are moving so fast that a pause is needed. That is the message of an open letter signed by many prominent figures from science and the tech industry. Among the more than 1,100 signatories are tech celebrities such as Elon Musk and Steve Wozniak, as well as numerous academics.

The development pause should last at least six months and apply to any AI more powerful than GPT-4, the latest model behind the well-known text generator ChatGPT. According to the signatories, the pause should be used to create safety measures for dealing with increasingly complex and sophisticated artificial intelligence.

The open letter describes AI systems that can already compete with humans at certain tasks. The authors question whether we should be developing non-human systems that could eventually be more intelligent than humans and replace them.

In their view, such decisions should be made by politicians, not by the unelected leaders of tech giants.

“Guardrails needed”

Bias or misinformation can creep into the systems of such advanced AI, which can lead to discrimination, for example, explains Catelijne Muller. Muller is president of ALLAI, an organization that promotes the responsible use of artificial intelligence.

That calls for oversight not only of future systems, but also of existing ones. Muller: “There are already systems like ChatGPT that give wrong answers to questions in an authoritative voice. People take them to be true.”

She says this is because people tend to rely on automation. “It comes from a computer, so it must be right. That’s why guardrails need to be installed.”

Claes de Vreese, Professor of AI and Society at the University of Amsterdam, also points to the potential dangers of AI. “Consider the use of AI in armed conflicts, in wars. An armed drone, for example, can be programmed to decide for itself when to strike.”

He also mentions phishing emails (fake emails aimed at defrauding people or companies) generated by programs like ChatGPT. “These could look exactly like fundraising emails sent during the US election campaign, for example.”

European legislation

Catelijne Muller believes the open letter from tech celebrities could have an “enormous impact” on the AI debate. “The momentum is right,” she says, referring, among other things, to the European regulations for artificial intelligence that are currently being drafted.

Until now, Muller says, there was little attention for general-purpose AI like ChatGPT, AI developed without a specific purpose in mind. “This pops up abruptly in the middle of the legislative process, which makes it a critical issue.”

Frits Vaandrager is head of Software Science at Radboud University and one of the signatories of the open letter. He says recent reports show that ChatGPT-like models exhibit remarkably intelligent behavior.

“I am concerned about the billions of dollars invested by Microsoft, which is rushing to develop tools for commercial use, partly because people want to make money. There is no way to control this technology,” says Vaandrager.

“We don’t know exactly what the systems are trained on or how they are built,” adds Catelijne Muller. “All this happens behind the closed doors of commercial companies. That is worrying.”

Vaandrager had no illusions that the letter would have much effect, but signed it anyway out of concern: “I’m not an expert, I’m just a worried scientist.”

De Vreese also doesn’t think a temporary halt to AI development is a silver bullet. “What matters is that we really talk about this now. We are on the eve of risky scenarios; perhaps the evening has already begun.”

Source: NOS
