This article was originally published at:

A joint statement issued by high-profile AI industry leaders, scholars, and even celebrities on Tuesday voiced concerns about the potential existential threat posed by artificial intelligence. They urged global prioritization of this risk, arguing its possible impacts could be on par with pandemics and nuclear warfare. This message was released by the Center for AI Safety.

The statement was endorsed by prominent figures in the tech industry and beyond, such as Sam Altman, CEO of OpenAI, AI pioneer Geoffrey Hinton, executives and researchers from Google DeepMind and Anthropic, Microsoft’s Chief Technology Officer Kevin Scott, renowned internet security expert Bruce Schneier, climate activist Bill McKibben, and musician Grimes, to name a few.

Unveiling the Possible Risks of AI

The document emphasizes the potential harm of unchecked artificial intelligence. Experts in the field concur that we are far from creating an artificial general intelligence of the kind depicted in science fiction; today's advanced chatbots largely reproduce patterns derived from their training data rather than exhibit independent thought. However, the surge of excitement and investment around AI has prompted calls for preemptive regulation before catastrophic failures can occur.

This call for vigilance follows the notable success of OpenAI’s ChatGPT, which has intensified the tech industry’s competition around artificial intelligence. Subsequently, an increasing number of policymakers, advocacy groups, and tech insiders have raised concerns about the potential of the new generation of AI chatbots to spread misinformation and lead to job displacement.

Geoffrey Hinton, whose groundbreaking work helped shape modern AI systems, told CNN that he had left his role at Google so he could speak critically about the technology, warning that AI is becoming more intelligent than humans.

Not a Single-Threat Focus

According to Dan Hendrycks, the Center for AI Safety’s director, Tuesday’s statement, initially proposed by David Krueger, an AI professor at the University of Cambridge, does not suggest that society should focus solely on the threat of AI extinction. It also encompasses other risks associated with AI, such as algorithmic bias and misinformation.

Hendrycks likened the statement to atomic scientists' warnings about the dangers of the technologies they created. In a tweet, he added that societies can manage multiple threats at once, and that both present and future hazards should be addressed responsibly.
