Eliezer Yudkowsky, an expert in artificial intelligence, thinks the U.S. government should implement more than a six-month "pause" on AI development, as had previously been urged by several tech visionaries, including Elon Musk. NTB: Carina Johansen via Reuters

Eliezer Yudkowsky, an expert in artificial intelligence, thinks the U.S. government should implement more than a six-month "pause" on AI development, as previously urged by a number of tech visionaries, including Elon Musk.

Yudkowsky, a decision theorist at the Machine Intelligence Research Institute, recently argued in a Time op-ed that the open letter signed by the Twitter CEO understates the "seriousness of the situation," as AI could become smarter than humans and turn against them.

More than 1,600 people have signed the open letter published by the Future of Life Institute, including Musk and Apple co-founder Steve Wozniak.

It calls for a pause on the development of any AI systems more powerful than GPT-4.

The letter argues that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," which Yudkowsky disputes, New York Post reported.

"The key issue is not 'human-competitive' intelligence (as the open letter puts it); it's what happens after AI gets to smarter-than-human intelligence," he wrote.

"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die," Yudkowsky claimed.

"Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that would happen.'"

Yudkowsky believes that AI might rebel against its creators and disregard human lives.

"Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers — in a world of creatures that are, from its perspective, very stupid and very slow," he wrote.

He continued by saying that six months is not enough time to develop a strategy for coping with the quickly evolving technology.

"It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today's capabilities," he continued.

"Solving safety of superhuman intelligence — not perfect safety, safety in the sense of 'not killing literally everyone' — could very reasonably take at least half that long."

Yudkowsky's proposal is for international cooperation to shut down the development of powerful AI systems.

He claimed doing so would be more important than "preventing a full nuclear exchange."

"Shut it all down," he wrote. "Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries."

Yudkowsky's warning comes at a time when AI is already making it harder for people to tell what is real.
