Why do we need to rethink AI now? It’s not a disaster, but beware

Experts led by Turing Award winner Yoshua Bengio, a professor at the University of Montreal in Canada, and Stuart Russell of the University of California, Berkeley, in the United States, have written an open letter asking laboratories to suspend, for at least six months, the training of artificial intelligence (AI) systems more powerful than GPT-4, OpenAI's latest generative AI model.

Many Spaniards are also among the signatories. Carles Sierra, director of the Artificial Intelligence Research Institute of the Spanish National Research Council (CSIC), and Pablo Jarillo-Herrero of the Massachusetts Institute of Technology (MIT) agree that this AI has been released for mass public use without "the necessary action" having been taken.

In statements to EFE, Sierra acknowledges "a growing concern about this kind of arms race" among technology companies: not only OpenAI develops generative AI models, but also Google with Bard and Meta with LLaMA.


IT IS NOT A DISASTER

It is not about catastrophism, Sierra says, but "there are companies that invest a lot of money, they want to make money from what they do, and they fight over who gets the biggest slice of the pie. They act recklessly in the process."

There is a lack of evaluation, and without evaluation we cannot know what consequences this AI could have, the CSIC expert confirms, comparing it to the process of researching and approving a drug: regulators take years to approve one, and only after it passes three phases of clinical trials (and a fourth phase of pharmacovigilance).

"Companies release versions every month (OpenAI is already working on GPT-5), making new models available to everyone, not just to industry," he laments.

Jarillo-Herrero is also concerned about the speed at which this AI is advancing. He recalls that some time ago there was interest in a moratorium on the use of the CRISPR gene-editing technique, which was advancing much faster than humanity could digest, "and some applications could get out of hand."

“With such disruptive technologies, it is appropriate to understand and anticipate the potential consequences of using them, and to regulate them,” he told EFE.

Both experts agree that artificial intelligence, including generative AI, can bring advantages, but as Sierra warns, these systems are designed to produce results that are plausible, not necessarily true, and that sound as if a human had said them; herein lies the risk.

Based on machine learning, and raising concerns about privacy and the use of personal data, these systems learn from millions of texts, images and videos posted on the Internet, and developers store data from users' thousands of "chats" to improve subsequent models.

BIGGEST FEAR, DISINFORMATION

Jarillo-Herrero, a professor of physics at MIT, focuses his concerns on disinformation. Hyper-realistic images of Pope Francis in a white puffer coat or of Donald Trump resisting arrest are two of the examples doing the rounds today.

"There has always been a lot of misinformation, but it was fairly easy for an educated person to spot and discern it. Now, with the use of AI, it is much easier to publish and distribute information that appears true at first glance but is actually false," says Jarillo-Herrero.

He also notes that "the information and text used to train this AI contains a lot of bias, just as with humans," which results in all kinds of false clichés appearing in the generated responses.

The researcher argues that humanity in general has never been very effective at preventing unwanted scientific and technological progress; many countries, for example, have developed nuclear bombs.

"But there is a big difference between artificial intelligence and other dangerous developments such as nuclear bombs: the latter require very complex tools and materials that are not readily available, even to many governments."

By contrast, he points out, anyone with a few computers can use AI. "For example, hackers are surely planning thousands of attacks using artificial intelligence, which can easily solve verification puzzles that previously required a human."

"If realised, this six-month moratorium could help governments better understand the potential negative impact of AI and regulate its use. Perhaps the most advanced companies could also stop and think about how to counteract these negative effects," the MIT scientist concludes.

Sierra warns of the danger of putting a system that can produce false information in the hands of young people. He also addresses the regulation of its use, recalling that sovereignty always belongs to the people: "I do not agree with banning it, but I do agree with regulating it."


ETHICS

The expert, who is also president of the European Association for Artificial Intelligence (EurAI), advocates the development of a clear code of ethics. Good-practice documents from the European Union, the OECD, and companies and institutions in the US and UK provide a precedent, but what is needed now is a robust and transparent global code.

With that in mind, he explains that he has been in contact with Stuart Russell, one of the letter's promoters, to explore how to channel the initiative.

At the European level, AI legislation must be brought back to the table to identify risks and set limits on generative AI.

The law, which is expected to come into force this year, bans applications such as facial recognition in open spaces and classifies artificial intelligence applications as high, medium or low risk.

It also identifies sectors, such as education, where special measures and controls should be applied (using AI to evaluate students or assign them to schools is considered high risk).

But when the law was drafted, generative AI systems were still in development; now that they are so advanced, legislators need to go back and update the standard, Sierra concludes.

Source: EFE
