Chomsky praises advances in artificial intelligence and warns of dangers

In a joint opinion column published this Wednesday in The New York Times, the three intellectuals acknowledge the problem-solving abilities of ChatGPT (OpenAI), Bard (Google), and Bing (Microsoft), but voice their “worries,” characterizing these systems by their “immorality, ignorance and linguistic inadequacy.”

“We fear that machine learning, the most popular and fashionable variant of AI, will demean our science and ethics by implanting a fundamentally flawed understanding of language and information into our technology,” they wrote.

While these programs are hailed as the “first glimmers on the horizon” of an artificial intelligence in which the machine mind surpasses the human brain in quantity and quality, intellectually, artistically and creatively, the authors say that point has not yet been reached.

According to Noam Chomsky, Ian Roberts and Jeffrey Watumull, programs like ChatGPT can be useful in some “limited” areas, such as computer programming or “suggesting rhymes for light verse,” but their profound differences from humans in reasoning and in the use of language impose “significant limitations on what these programs can do.”

According to these scientists, “These programs are stuck in a prehuman or nonhuman phase of cognitive evolution.”

Their greatest shortcoming, they say, is the absence of the most critical capacity of any intelligence: to say not only what is, what was and what will be — that is, to describe and predict — but also what is not and what could and could not be. They affirm that these are the components of an explanation, “the mark of real intelligence.”


Unlike machines, they say, “human reasoning relies on possible explanations and error correction, a process that progressively narrows the possibilities that can be rationally evaluated.”

In that sense, they cite Sherlock Holmes’s remark to Dr. Watson: “Once you’ve eliminated the impossible, whatever remains, however improbable, must be the truth.”

“However, ChatGPT and similar programs are by design unlimited in what they can ‘learn’ (that is, memorize), yet incapable of distinguishing the possible from the impossible,” they say. They therefore argue that the predictions of machine learning systems will always be superficial and doubtful.


Finally, they argue that true intelligence is capable of moral reasoning, while chatbots have struggled, and will continue to struggle, to balance content creation with distancing themselves from what is morally objectionable.

As an example, they cited a predecessor of ChatGPT, Tay, released by Microsoft in 2016, which “filled the internet with racist and misogynistic content.”

As such, they affirm that because the program cannot reason from moral principles, ChatGPT’s creators have severely restricted it from entering into new or controversial discussions.

“They either overgenerate (presenting truths and falsehoods alike, endorsing both ethical and unethical decisions) or undergenerate (showing a lack of commitment to any decision and indifference to consequences),” they conclude.

Source: EFE.

Source: Ultimahora
