Artificial intelligence and the awakening of the new reality.

  • agencies

Artificial intelligence holds promise for humanity, but it can also pose a more dangerous threat than the atomic bomb. With its ability to learn and evolve on its own, AI may one day surpass human intelligence. Then he can decide to oppose it.

This dark prophecy may seem like it’s straight out of a sci-fi movie, but it’s a very real possibility. Leading experts such as Stephen Hawking, Elon Musk and Bill Gates have already sounded the alarm against artificial intelligence.

For them, AI poses an imminent and unavoidable risk in the coming years, which is why they are urging governments to regulate this space so that it can evolve ethically and safely. More than 100 experts have also called on the United Nations to ban “deadly robots” and other autonomous military weapons.

But other experts believe that the future of AI simply depends on how people choose to use it. Even seemingly harmless AI can be manipulated and used for malicious purposes. We are already seeing this with the rise of deepfakes: fake videos created using deep learning to show a person in a compromised situation.

Artificial intelligence will continue to develop rapidly in the coming years. Man must decide in which direction his development should go.

Growing company OpenAI launched a waiting list on Wednesday for a professional, paid version of its flagship software, ChatGPT, sparking discussions about the future of artificial intelligence (AI) and human labor.

WHAT IS CHATGPT? GPT-3 (Generative Pretrained Transformer 3) is a next-generation artificial intelligence model for speech processing developed by OpenAI with the collaboration of Elon Musk. It can produce human-like texts and has a wide variety of uses. Because this app is trained with artificial intelligence and machine learning, it is a revolution in providing information and answering questions, similar to a regular chat.

It is one of the largest and most powerful AI speech processing models with 175 billion parameters. The chatbot, as he claims, is still a prototype and represents the latest in a long line of Generative Pre-Trained Transformer (GPT) technologies. The reason why it is described as an alternative to Google is that it offers many of the features that Google offers to users.

EXPERT COMMENTS. According to a study published in AI Magazine by Oxford University and Google, artificial intelligence could spell the end of humanity. The study, led by DeepMind senior scientists Marcus Hutter, Oxford’s Michael Cohen and Michael Osborne, concludes that hyperintelligent AI is “likely” to destroy humans.

The study outlines the possibility that AI-enhanced machines will learn to cheat, find shortcuts and thus receive rewards that allow them to access the planet’s resources.

This could result in a game where humans and AI fight with the only survivor. According to experts – in the spirit of literature and cinema – machines have everything to gain.

“Under the conditions we set, our conclusion is stronger than in any previous publication: an existential catastrophe is not only possible, but probable,” the scientists say.

The most plausible hypothesis would be that super-sophisticated “misaligned agents” see people as getting in their way of reward.

According to the scientists, “A good way for an agent to control its long-term reward is to eliminate potential threats and use all available energy to capture them.”

According to scientists, the solution is to move slowly and carefully in artificial intelligence technologies.

Facebook had to turn off its artificial intelligence to improve its language

Facebook has shut down its AI technology trained to negotiate. The reason: He had created his own language that was “more efficient” and “more logical” than the English he had studied, the Digital Journal reported. Gizmodo says the machine was “wiped” several days after testing began.

The new language was strange and apparently wrong. It was designed by 2 agents where technology matters: Bob and Alice. The researchers wanted these “characters” to learn how to negotiate, but suddenly stopped speaking English to converse with what appears to be random, meaningless words. It seems that he had had enough of the various nuances and inconsistencies of the language. Thus their language turned English into a system of code words. The big problem was not the initiative, but that the expressions used – while perfect for agents – became incomprehensible to them. So technology communicated with itself without the creators knowing what it was.

Gizmodo shared an example of a conversation between agents. “I can do everything else,” said Bob. And Alice replied: “Balls are zero for me for me (balls are zero for me for me a). With these patterns the conversation continued. But despite everything going according to plan, including bots initially negotiating more and better with each other, an unexpected change suddenly happened.

The company wants its artificial intelligence to be developed in English so that it can communicate with anyone, making communication with the machine impossible.

A Colombian judge has settled a case on an autistic child’s right to health using the ChatGPT robot, making it the country’s first AI-based decision.

“This is a huge window, it could be ChatGPT today, but three months from now any alternative that makes messaging easier and trusts the judge, not with the intention of changing it,” Judge Juan said in the case. Manuel Padilla in an interview with Blu Radio.

The Jan. 30 decision ruled that a mother had requested an exemption from the cost of doctor appointments, therapy and transportation to hospitals for her autistic son because the family could not afford to care for them.

Padilla ruled in favor of the minor in announcing that he had questioned the ChatGPT chatbot to support his decision.

The judge asked, “Is an autistic minor exempt from moderator fees for their therapy?” he asks, and the movement says, “Yeah, that’s right.” Under Colombian regulations, minors diagnosed with autism are exempt from paying moderator fees for their therapy. Set contains 4 similar questions and answers. “Judges are not stupid, we don’t stop being judges, thinking creatures by asking questions about the motion,” Padilla explained.

However, Professor Juan David Gutiérrez of the Universidad del Rosario disagreed and started the discussion on Twitter. In a news series, the academic asks the same questions as the judge, but gets different answers.

“As with other AIs in other fields, fundamental rights are undermined by the so-called efficiency story,” he warned.

What is the program ChatGPT_40744723.jpg?

Source: Ultimahora

Source link

follow:
\