‘We must stop, we are not ready for AI’

The annual SXSW technology and culture conference in Austin, Texas, is a magnet for entrepreneurs, inventors and software engineers eager to present and learn about the latest tech trends. From Twitter’s unexpected breakout shortly after its 2006 launch to the cryptocurrency craze and the meteoric rise and fall of the metaverse, SXSW has been a showcase for groundbreaking innovations.
This year, artificial intelligence (AI) dominated the conference, which ended over the weekend, with countless presentations highlighting its applications in business, science, medicine, film and music. AI technology appears to have permeated nearly every aspect of human life, promising to advance humanity toward a future of limitless potential with the emergence of Artificial General Intelligence (AGI).
But for Eliezer Yudkowsky, the outlook is far from optimistic. The American-Israeli AI researcher arrived at SXSW as a doomsday prophet, warning in a panel ominously titled “How to Make AGI Not Kill Everyone” that AI development must stop, or humanity will face extinction.
Yudkowsky, 45, who grew up in an Orthodox Jewish community in Chicago before leaving religion, founded the Machine Intelligence Research Institute (MIRI) in California two decades ago. The institute’s mission is to develop “friendly” AI systems.
However, during the panel discussion, Yudkowsky painted a far darker picture. According to him, if AI development is not halted, it will lead to the annihilation of humanity. “The AI I’m worried about is intelligent, not because someone gave it a gun, but because it can evolve beyond our control,” he warned. “Humans started as animals roaming the savannah and eventually built advanced technologies and weapons. An autonomous, non-human intelligence with similar capabilities is inherently dangerous.”
Moderated by entrepreneur Judd Rosenblatt, the panel also featured Samuel Hammond, a senior fellow at the Foundation for American Innovation, and Nora Aman, an expert on AI safety mechanisms. While neither shared Yudkowsky’s apocalyptic vision, both expressed deep concerns about the existential threat posed by advanced AI.
“This is an entirely different beast from any previous technology,” Aman argued. “AGI can operate autonomously across multiple domains with a level of intelligence that surpasses not just individual humans but humanity as a whole. With such a system, we cannot afford to make assumptions. We cannot predict what will motivate it, how it will behave, or whether it will develop its own goals.”
Hammond echoed these concerns, warning that AI could destroy humanity through various means — without the need to negotiate with us. What worries him most, however, is the potential speed at which AI could spread, likening it to the COVID-19 pandemic but with far greater destructive potential.
“The pandemic was caused by a relatively simple virus that killed millions due to an optimal self-replication mechanism,” he said. “Now, imagine a virus that can learn, run repeated simulations, and choose the perfect moment to strike. AI has that capability — and unlike a virus emerging from a single lab, this ‘AI virus’ could originate from multiple sources simultaneously, with differing motivations and objectives.”
Yudkowsky also criticized public misconceptions about AI, particularly the belief that AI will remain confined to an advisory role. “People think no one would be foolish enough to create an AI that acts independently,” he said. “They also assume there’s no need for AGI when specialized AI can handle tasks like language translation or medical diagnostics.”
According to Yudkowsky, these assumptions have already been debunked. “Not only do we already have AGI systems delivering impressive results,” he said, “but some are also capable of making decisions and acting independently. We must stop everything. We are not ready. We do not have the technological capability to design a superintelligent AI that is polite, obedient and aligned with human intentions, and we are nowhere close to achieving that.”
Aman called for an international approach to AI regulation, similar to nuclear disarmament treaties, as the only viable solution to prevent a hostile AI takeover. “There are people deeply concerned about AI’s consequences — even some of the leading experts in the field,” she said. “But with the rapid pace of development, it is hard to identify mistakes in real time.”
Experts argue that governments worldwide, including the current U.S. administration, appear fully committed to an AI arms race with no plans to slow down. In a speech delivered in Paris this past February, U.S. Vice President J.D. Vance dismissed concerns about AI’s dangers and predicted that it would drive progress rather than cause widespread job loss.
Hammond attributed this stance to a lack of understanding and the government’s fear of falling behind China in the global AI race. “If you think the debate over transgender rights is divisive, wait until discussions about transhumanism begin,” he warned. “The government sees AI as a path to medical and industrial advancement but overlooks its potential to manipulate human genomes and create ‘super-babies.’”
Yudkowsky, as usual, was more blunt: “What prevented World War III was the mutual fear of nuclear annihilation. If we continue to build superintelligent machines, no one will have a good day, anywhere on Earth.”
Drawing on his knowledge of AI and his readings in history, Yudkowsky believes AI poses a greater threat than any political conflict. “The human brain is flawed. We are plagued by self-doubt and the fear of failure,” he explained. “Machines, by contrast, pursue their objectives relentlessly. When the Stockfish chess engine plays against a human, it does not hesitate. AI systems will operate similarly — without second-guessing or considering the consequences.”
Yudkowsky’s message is stark: If humanity wishes to survive, it must recognize the existential danger posed by AI. “The first step is to lock down all the hardware supporting advanced AI development,” he declared.
“If AI continues to proliferate beyond those limits, legal action must be taken against those involved in its development — and advanced AI research must be outlawed. This won’t stop progress entirely, but it will slow things down significantly compared to the current pace. It’s not an easy or pleasant solution — but it’s the only path forward unless humanity is ready to surrender to extinction.”