The firing of OpenAI CEO Sam Altman has gained new context: an artificial intelligence project called “Q*” (pronounced “Q Star”) that could “put humanity at risk.” The disclosures come from sources within the company behind ChatGPT and are reported by Reuters. If confirmed, the information would help explain the unexpected dismissal of Altman, who was reinstated following a letter signed by some 700 employees calling for his return and thanks to the intervention of Microsoft, which also holds 49 percent of OpenAI’s shares.
Researchers’ letter and warning regarding the Q* project
It all happened four days before Altman was fired. Several researchers reportedly wrote a letter to the board warning about a new type of artificial intelligence that they believe has the potential to threaten humanity.
According to Reuters’ sources, Q* could be a breakthrough in the startup’s quest for what is known as artificial general intelligence (AGI). OpenAI defines AGI as autonomous systems that outperform humans at the “most economically valuable” tasks. The researchers mentioned the project in their letter, but Reuters stated that it could not review a copy of it, and the staff members who wrote it did not respond to requests for comment.
The new model can reportedly solve some math problems, with large margins for future improvement. Today’s generative AI is effective at writing and translation because it statistically predicts the next word, which is why its answers to the same question can vary greatly.
Mastering mathematics, however, where there is only one truly correct answer, would bring artificial intelligence’s reasoning abilities closer to human intelligence.
Q* project: what we know about the “dangers”
New findings from the Q* project could be applied, for example, to novel scientific research. Unlike a calculator, which can solve only a limited set of operations, an AGI can generalize, learn and understand.
In their letter to the board, the researchers also flagged the potential danger, though they did not specify the exact safety concerns. Computer scientists have long debated the risk posed by highly intelligent machines, for instance whether they might decide that destroying humanity is in their interest.
The researchers also cited the work of a team of “AI scientists,” whose existence has been confirmed by multiple sources. The group, formed by merging the earlier “Code Gen” and “Math Gen” teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the sources told Reuters.
Source: Today IT
Karen Clayton is a seasoned journalist and author at The Nation Update, with a focus on world news and current events. She has a background in international relations, which gives her a deep understanding of the political, economic and social factors that shape the global landscape. She writes about a wide range of topics, including conflicts, political upheavals and economic trends, as well as humanitarian crises and human rights issues.