This week, OpenAI celebrated the one-year anniversary of ChatGPT. But for a while last month, a major crisis made it look as though the company’s future was in jeopardy. The episode revealed an internal struggle in which the last word has not yet been spoken.
Think of it as a battle between speeding up and slowing down the development of artificial intelligence (AI). “I think there are a lot of people who believe that the faster the technology develops, the quicker we will solve some of the big problems,” says Jelle Prins, co-founder of an artificial intelligence startup. “You shouldn’t delay that too long.”
Wim Nuijten, director of the Artificial Intelligence Institute at TU Eindhoven, is critical of the events at OpenAI. “Money appears to have triumphed over the responsible handling of technology that could have an enormous impact.”
Smarter than humans
To understand this conflict and why it matters, it helps to first know OpenAI’s ultimate goal. The company wants to build what is known as “AGI” (artificial general intelligence): computers smarter than humans.
If that succeeds (opinions differ greatly on whether it will), it could transform our entire society. Many believe it would bring great benefits to humanity. But there are also concerns. The biggest fear is that AI could decide what is best for us and, at least in theory, even choose to destroy humanity.
This raises the question of whether OpenAI is working on something that could pose an existential threat. “Will it destroy all of humanity? I’m not afraid of that,” says Antske Fokkens, professor of language technology at Vrije Universiteit Amsterdam. She is more concerned about dangers that already exist, such as the spread of misinformation.
OpenAI CEO Sam Altman, who was abruptly fired last month and reinstated days later, believes the positive impact will outweigh the negative. But not everyone at OpenAI agrees.
The face of the “counter-movement” is Ilya Sutskever, one of the world’s leading artificial intelligence researchers and a co-founder of the company. An article in The Atlantic describes how Sutskever became increasingly convinced of the power of the technology OpenAI is developing, and how that strengthened his belief that its dangers are great. He was also the one who, on behalf of the board, told Altman he was fired, a decision he later said he regretted.
“There is a very important third perspective missing from this story,” says Jelle Zuidema, associate professor of explainable artificial intelligence at the University of Amsterdam. “That is the group that doesn’t want to talk about an existential threat, because they think it is exaggerated, but that is worried about the impact the technology is having right now.”
An “Artificial Intelligence Breakthrough”
There is also much speculation about whether OpenAI made a technological “breakthrough” this year that could make its systems even more powerful. The speculation stems from reports by The Information and Reuters about a project called “Q*” (pronounced Q-star). Among other things, it is said to be able to solve mathematical problems that other AI models find too difficult.
The alleged development is said to have raised concerns among employees that adequate safety measures were not being taken, concerns that researchers reportedly put to the board in a letter and that may have contributed to Altman’s firing. The technology newsletter Platformer, however, reported that the board never received such a letter.
According to TU Eindhoven’s Nuijten, this would be a breakthrough in the way AI reasons, something researchers have been working toward for decades, although he finds it hard to judge whether OpenAI has really gotten there. “Such a system could then prove certain theorems and make new inventions based on the knowledge and rules we have about logic and the physical world.”
Zuidema points out that such news about technological breakthroughs could also serve another purpose, namely to convince the world that OpenAI is ahead of the competition.
Discussion about new rules
All three experts agree that tech companies cannot be trusted to develop AI without oversight: there must be regulation. Europe, among others, is working on this, but that is a challenge in itself.
Member of the European Parliament Kim van Sparrentak (GroenLinks) is taking part in the negotiations and says the outcome remains uncertain. Germany, France and Italy do not want very strict rules, for fear of hindering European companies. Van Sparrentak notes that technology companies, including OpenAI, are now lobbying hard.
Source: NOS