OpenAI researchers warned the board of an AI breakthrough that “could threaten humanity”: Reuters

Before OpenAI CEO Sam Altman’s four days away from the company, several staff researchers sent the board a letter warning of the discovery of a powerful artificial intelligence that they believed could threaten humanity, two people familiar with the matter told Reuters.

The letter and the previously unreported AI algorithm were key developments before the board ousted Altman, the face of generative AI, the two sources said. Before his triumphant return on Tuesday evening, more than 700 employees had threatened to quit in solidarity and join him at Microsoft.

The sources cited the letter as one factor in a longer list of board grievances that led to Altman’s firing. Reuters was unable to review a copy of the letter. The researchers who wrote it did not immediately respond to requests for comment.

According to one of the sources, longtime executive Mira Murati told employees about the Q* project on Wednesday and said a letter had been sent to the board ahead of this weekend’s events.

After the news was published, an OpenAI spokeswoman said Murati had told employees what the media was about to report, but did not comment on the accuracy of the information.

The creator of ChatGPT has made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the quest for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.

The new model, given vast computing resources, is capable of solving certain mathematical problems, said the person, who spoke on condition of anonymity because they were not authorized to speak on behalf of the company. Although it only solves math at the level of primary-school students, passing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify Q*’s reported capabilities.

SUPERINTELLIGENCE

Researchers see mathematics as a frontier that must be overcome in the development of artificial intelligence.

Today, generative AI is good at writing and translating languages by statistically predicting the next word, and its answers to the same question can vary wildly. Gaining the ability to perform mathematical calculations – where there is only one correct answer – implies that AI would have greater reasoning skills, resembling human intelligence. This could be applied, for example, to novel scientific research, AI researchers believe.

Unlike a calculator, which can solve only a limited set of operations, AGI can generalize, learn, and understand.

In their letter to the board, the researchers emphasized the power of AI and its potential risks, the sources said, without identifying the exact safety concerns raised in the letter. Computer scientists have long debated the danger posed by superintelligent machines, such as whether they might decide that destroying humanity is in their interest.

Against this backdrop, Altman led the effort to make ChatGPT one of the fastest-growing software applications in history, attracting from Microsoft the investment – and computing resources – needed to move toward superintelligence (AGI).

Altman not only announced a series of new tools at a demonstration this month, but also signaled last week, at a gathering of world leaders in San Francisco, his confidence that AGI was within reach.

“Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime,” he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

*Reuters. Anna Tong, Jeffrey Dastin, Krystal Hu.

Source: La Neta Neta
