90 seconds to midnight
According to the Bulletin of the Atomic Scientists, humanity is at its closest to global catastrophe for the second year in a row

Humanity is as close to "global catastrophe" as it's ever been for the second year in a row, according to the Bulletin of the Atomic Scientists.

The group revealed its Doomsday Clock forecast for 2024, keeping it at 90 seconds to midnight, citing a range of dangers that, it says, pose existential threats to humanity.

Among the main reasons are: the nuclear threat in Russia's invasion of Ukraine; Hamas' attack on Israel on October 7 and the ensuing war, which has spilled over into the region; a worsening climate crisis and its related disasters; and the potential dangers of generative artificial intelligence (AI).

"Last year we expressed amplified concern by moving the clock to 90 seconds to midnight, the closest to global catastrophe it has ever been," said Rachel Bronson, CEO of the Bulletin group.

"Leaders and citizens around the world should take this statement as a stark warning and respond urgently, as if today were the most dangerous moment in modern history. Because it may well be," adds Bronson's statement.

The Bulletin of the Atomic Scientists, founded in 1945, created the Doomsday Clock to bring visibility to the human-made crises that threaten to end civilization. Albert Einstein, J. Robert Oppenheimer and scientists from the University of Chicago who helped develop the first atomic weapons in the Manhattan Project were among those involved. In recent years the clock switched from counting minutes to seconds to reflect how rapidly events are evolving.

Regarding the atomic threat, the Bulletin said that, aside from the war, "China, Russia, and the United States are all spending huge sums to expand or modernize their nuclear arsenals, adding to the ever-present danger of nuclear war through mistake or miscalculation."

The Bulletin also addressed the dangers of AI, saying that its rapid progress has led "some respected experts to express concern about existential risks arising from further rapid advancements."

The document focuses on AI's potential to "magnify disinformation and corrupt the information environment on which democracy depends," something that could become a factor preventing the world from "dealing effectively with nuclear risks, pandemics, and climate change."

It also delves into military uses of AI, which it says are already occurring in "intelligence, surveillance, reconnaissance, simulation, and training." "Decisions to put AI in control of important physical systems—in particular, nuclear weapons—could indeed pose a direct existential threat to humanity," the Bulletin warns.

The statement recognizes that countries have seen the importance of regulating AI "and are beginning to take steps to reduce the potential for harm," specifically highlighting U.S. efforts and the formation of a new UN advisory body.

The Bulletin has some advice on how to turn back the clock. It focuses on the need for leaders and nations to work together in "the shared belief that common threats demand common action."

"As the first step, and despite their profound disagreements, three of the world's leading powers—the United States, China, and Russia—should commence serious dialogue about each of the global threats outlined here. At the highest levels, these three countries need to take responsibility for the existential danger the world now faces. They have the capacity to pull the world back from the brink of catastrophe. They should do so, with clarity and courage, and without delay. It's 90 seconds to midnight," the report concludes.

© 2024 Latin Times. All rights reserved. Do not reproduce without permission.