Antonio Salgado Borge
“The godfather of artificial intelligence leaves Google and warns of the dangers that await us.” That is how The New York Times headlined its interview with Geoffrey Hinton, one of the pioneers of artificial intelligence and the creator of fundamental aspects of this technology. Dr. Hinton told the American newspaper that he decided to leave Google precisely in order to speak freely about the great risks that lie ahead.
The interview with the scientist went around the world (in Mexico it was reported by Aristegui Noticias). But it is not only Geoffrey Hinton who is worried. Elon Musk has taken a similarly alarmed position. According to a report published by The Wall Street Journal last Monday, the billionaire has decided to accelerate his investment in AI to avoid the risks of leaving it in the hands of third parties.
We live in a world rife with conspiracy theories, so it is wise to treat any doomsday scenario with a degree of skepticism. The warnings of Dr. Hinton and Elon Musk, however, deserve special attention, because both men have privileged, first-hand access to information.
Faced with these anxieties, it is worth pausing to ask what kind of risk we are talking about. To try to answer this question, let us start by noting that, in terms of time, AI risks can be divided into two main categories.
In the near term, the most imminent risk is a flood of misinformation and fakes that, in the extreme case, would make it impossible to tell truth from lies; this is the issue I wrote about here two weeks ago. Another serious risk is massive job losses. IBM, for example, has announced that over the next five years it plans to replace with AI up to 30% of its back-office roles, such as those in human resources.
In the long term, the main risk that has been discussed is that AI becomes a real threat to humanity: by growing more intelligent than we are, acquiring consciousness and self-awareness, or learning to program itself, it could come to regard humans as an obstacle or, at best, as irrelevant.
Keeping the distinction between short-term and long-term risks in mind, it is easy to see that neither Dr. Hinton nor Elon Musk is worried about the former: misinformation and job losses do not seem to be their priorities. If they were, the former would have left Google ten years ago, and the latter would not have turned Twitter into a dunghill. But neither does it make sense to assume that they are thinking of long-term threats: if their concern were the distant future, why the urgency?
To find the answer, we need to consider one more distinction: the difference between generative artificial intelligence and general artificial intelligence.
Generative AI is the technology behind large language models such as ChatGPT. Although generative AI may appear to be as intelligent as a human, or even more so, this technology in fact replicates only one of our many cognitive functions.
Language models ingest information in order to recognize patterns in it. Then, on the basis of that information and those patterns, they predict or fill in gaps. Although ChatGPT and other language models take this process to an impressive level, for example when ChatGPT renders a story by Julio Cortázar as if it were a song written by Bad Bunny, the structure is very similar to that of the text predictors on our mobile phones. Nothing more, nothing less.
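To make that point concrete, here is a minimal sketch of the predict-the-next-word structure: a toy predictor that learns, from a small sample text, which word tends to follow which, and then fills in the gap. The corpus and words are invented for illustration only; real language models use neural networks trained on vastly more data, but the task, predicting what comes next, is structurally the same.

```python
# Toy next-word predictor: count, in a tiny invented corpus, which word
# follows which, then predict the most frequent follower. A crude stand-in
# for what phone keyboards and, at enormous scale, language models do.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Learn patterns: for each word, count the words observed right after it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Fill in the gap: return the most frequent follower of `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat" (seen twice after "the")
print(predict_next("on"))   # -> "the"
```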
With this in mind, it is easy to see that generative AI is limited to one of the many human cognitive functions. This technology, for instance, is not conscious; that is, it has no subjective experiences. As the philosopher Thomas Nagel put it, there is something it is like to be a bat, and it is different from what it is like to be a dolphin or a human. At least for now, there is nothing it is like to be a generative AI. To this we must add that generative AI does not have the ability to plan or scheme. Nor does it learn from its surroundings: all the information it works with has, at least for now, come from the Internet, so it has no opportunity to know the world first-hand.
The existential threat of the destruction of our species does not come from ChatGPT and other versions of generative AI. It comes from the possibility of general AI. Although there is no standard definition, general AI is usually understood as an artificial intelligence capable of performing a wide range of cognitive functions, and of performing them better than humans. And we know for certain that the main AI developers are actively working on this type of technology.
Although he did not address it directly, Geoffrey Hinton hinted at something like this in his interview with The New York Times: “Some people believed that these things could actually become smarter than people. But most thought that was far off. I thought it was far off too; I thought it was 30 to 50 years away, or even longer. Obviously, I no longer think that.”
A similar conclusion can be drawn in Musk's case. The Tesla CEO warned then-Google chief executive Larry Page against developing an AI with the intelligence of a deity. According to Musk, Page dismissed the warning and ended up calling him a “speciesist”; that is, someone who believes that humans occupy some special or important place that must be protected. I never thought I would say this, but on this point I sympathize with the position of the owner of Twitter.
Let us recap. Geoffrey Hinton and Elon Musk, whatever one thinks of them, are insiders with first-hand knowledge, and they are worried not about the risks associated with ChatGPT but about general AI. It is important to note, however, that their approaches to the problem are very different. Musk favors a libertarian approach, free of government intervention, much like his friend Peter Thiel. Hinton, by contrast, believes this risk cannot be addressed without strong and determined government regulation, the approach taken by the European Union; this includes the idea, increasingly popular among some scientists, of creating a center for AI similar to CERN.
We will have occasion to discuss both options in detail in future installments of this column. For the purposes of this analysis, what matters is that the main AI developers are moving rapidly toward general AI, that this may entail an existential risk, and that we have been slow to react.
According to a survey conducted three years ago among prominent researchers in artificial intelligence and AI safety, virtually all of them believed at the time that there was at least a 10 percent chance that this technology posed an existential risk to humanity.
One might object that 10 percent is not that much. There are two broad replies. First, given how quickly AI has evolved in the three years since, and bearing in mind attitudes such as Musk's, we can conclude that this risk has grown rather than shrunk.
The second reply is to put that figure in perspective with an analogy offered by the well-known technology journalist Casey Newton: if you were told that some people are able and willing to open a portal to another dimension, something wonderful and astonishing, but that there is at least a 10 percent chance that a demon will come through that portal and end the world, would you sit back indifferent, arms crossed, waiting to find out whether it happens?
If there is one thing I am sure of, it is that I would not.
Source: Aristegui Noticias
