Antonio Salgado Borge
Something important has to happen for the US Senate to call people to testify before its members. Thus, the appearance of the CEO of OpenAI – the company behind the ChatGPT language model – alongside an IBM representative and a well-known professor to address the inevitable risks of artificial intelligence is no trivial matter.
This event was generally well received, and for good reason: it sent several positive signals that deserve attention.
The first and most obvious is that the Senate has openly acknowledged the risks of artificial intelligence (AI), as well as its interest in understanding them with appropriate seriousness.
Much has been said about how lawmakers seem to be showing signs of having learned an important lesson. A couple of decades ago, the phenomenon of social networks was practically ignored. By the time lawmakers tried to react, it was already too late. That story has a sad and well-known ending.
Another positive sign concerns the timing of the hearing. The US Senate did not wait for the Internet to be saturated with false content created by artificial intelligence, for thousands or millions of jobs to be eliminated, or for an intelligence capable of thinking as well as or better than a person to appear. Instead, this institution has chosen to act while the risks of AI are just starting to materialize.
To the two previous signs it must be added that, unlike what happened in the hearings with the leaders of Facebook or Google, in this case the senators chose not to embarrass themselves. Let’s remember that a few years ago they gave us gems such as a senator complaining to the CEO of Google about problems with his iPhone – an Apple product – or asking the CEO of Facebook how his company makes money while offering a “free” service. Last week, the Senate avoided such embarrassments and was at least able to ask relevant questions.
Finally, the stance taken by all sides during the hearing was also read as a positive sign. Sam Altman, CEO of OpenAI, openly called on senators to regulate artificial intelligence to contain its risks. In turn, the members of the Senate committee solicited proposals from Altman in a friendly and measured tone. This indicates, at least in principle, openness on the part of all participants.
The positive signs we have considered deserve attention. However, they give us an incomplete picture of what happened. On closer examination, the hearing also leaves a bittersweet and troubling aftertaste.
Let’s start by acknowledging that while members of the Senate committee demonstrated that they understand that AI involves inevitable risks, and that they sense the ways in which those risks could materialize, it is also true that their questions were far from sophisticated and lacked significant technical grounding.
It is well known that the workings of AI are incredibly complex. Even experts admit that there are “black boxes” in the operation of this technology; that is, there are aspects and results that they do not fully understand. The soft and shallow questions put to Sam Altman may be due, at least in part, to the Senate’s lack of resources for addressing this technical complexity with the necessary depth.
Two things are true. In terms of knowledge, the relationship between tech companies and the US Congress remains deeply asymmetric. And this asymmetry is no minor matter when one considers that the relevant knowledge lies not in the hands of the overseer, but in the hands of those being overseen.
Another negative reading of the hearing is that the open call by OpenAI’s CEO to regulate AI does not automatically mean that this executive or this company really wants to be regulated, or that it will be regulated in the short or long term.
First, even if we take Sam Altman’s words at face value, there are other companies that may be more skeptical of and resistant to regulation than his own. To this we must add that in the past, big tech companies like Facebook have sought to polish their image by openly advocating regulation and then, behind the scenes, lobbying legislators to gut it.
That’s not all. Even assuming that OpenAI understands the risks of its technology and is genuinely concerned, this clearly has not been, and will not be, enough to slow its drive to expand one iota.
It is well documented that the public race for mass-market AI was started by this company. Google, for example, had held back the release of several products based on this technology, at least in part because it was risky to release them without sufficient testing or safeguards. Everything changed when OpenAI brought ChatGPT to market. Just a few days ago, Google announced the integration of its AI into several of its products. Microsoft soon responded in kind. It is easy to see that this race will tend to accelerate.
This fierce competition suggests that even if the US Congress immediately starts debating how to minimize the risks associated with AI and formulating appropriate legislation, the speed of its reaction is unlikely to match the speed of the technology that worries it. In any case, every day that passes without a well-thought-out framework matters and can be decisive.
A final negative sign to take into account is that while few people in their right mind believe that the best approach to managing AI is self-regulation, the question of what type of regulation is needed, and which institutions should exercise oversight, remains open.
The problem is that, given the legitimate concerns that AI generates, initiatives and proposals, including those of the UN and the OECD, are beginning to form a cacophony from which it is difficult to extract anything specific. As things stand, probably the best model we have is the strict framework being developed by the European Union. An added benefit is that implementing or copying this model could be relatively fast. But it remains to be seen whether the United States, with its historically lax approach, or countries such as China or Russia, with their authoritarian approaches, will be willing to adopt such measures.
With these signals on the table, it’s time to take stock. We have seen that the appearance of the CEO of OpenAI and others before the US Senate has positive aspects that deserve attention. Senators seem to have taken a proactive approach and knew how to ask relevant questions. It is also true that all parties showed openness: the Senate, with its willingness to understand and act on expert advice, and OpenAI, by openly asking for regulation.
But we have also seen that this appearance leaves a bittersweet aftertaste. In terms of knowledge, the relationship between those who make the laws and the people or companies they seek to regulate remains deeply asymmetric. We know that big tech companies often call for regulation publicly and then fight it privately. And there is still no consensus, in the United States or around the world, on how to contain the risks of this rapidly advancing technology.
Source: Aristegui Noticias
