
Between Panic and Acceleration – Between Money and Narratives
Three years after the introduction of ChatGPT, the discussion about artificial intelligence (AI) has shifted from an academic fringe topic to a central societal and political debate. Prominent researchers such as Geoffrey Hinton and Yoshua Bengio have warned of the potentially catastrophic consequences of unregulated AI development. Dystopian scenarios painted by former OpenAI employees prophesy the end of humanity at the hands of a superintelligence as early as 2030. The contrasting utopia promises a humanity that has everything it desires and explores space to build data centers on distant planets. Questions about tech company monopolies and the political order go unasked in either scenario. Yet when we will reach Artificial General Intelligence (AGI), and how it would function, remains purely speculative. So where does the panic about an uncontrolled AI takeover come from?
The Genesis Phase – Follow the Money
The discussion about existential risks did not arise organically; it was shaped to a significant degree by a well-funded network closely connected to the Effective Altruism (EA) movement. Billionaires such as Dustin Moskovitz, Jaan Tallinn, and the now-convicted Sam Bankman-Fried have invested hundreds of millions of dollars in organizations researching existential risks from AI. This funding has influenced not only the research agenda but also public discourse and political decisions; California's Senate Bill SB 1047 is perhaps the clearest example. Yet the ideological foundations of the EA and existential-risk movement do not reflect a societal consensus: transhumanism, total utilitarianism, and longtermism all set aside present needs and well-being in favor of a hypothetical future benefit.
The public panic about existential AI risks often does not align with the reality of current technological developments. Instead, it distracts from urgent, present-day issues such as algorithmic bias, data privacy risks, and societal participation in the use of AI. The continued absence of a dystopian superintelligence has led to a backlash in public debate. Instead of an AI Safety Summit like the one held in London in 2023, Paris invited participants to an AI Action Summit at the beginning of 2025. This marks a shift from a risk-based to an action-oriented perspective.
The Race Against Autocratic Superintelligence
In the US, the new administration has clearly signaled that AI safety belongs to the past and that the systemic conflict with China takes precedence. The release of DeepSeek's R1 chatbot in January 2025, which matched the capabilities of the major American models and was open source as well, added fuel to the fire. It was trained for considerably less money and on inferior chips compared to the Silicon Valley models. This not only sent the stock prices of key companies such as Nvidia tumbling but also triggered a Sputnik-like moment of panic. Suddenly the major AI companies were arguing against regulation and, for example, in favor of fair use of copyrighted material, all in the name of systemic competition. Nothing matters more than creating a democratic AGI before China catches up and an autocratic superintelligence takes over.
This "Zoomer" mood has also spilled over into the EU. Only a year after the adoption of the EU AI Act, legislators and institutions are already backtracking. Henna Virkkunen, the European Commission's Executive Vice-President for Tech Sovereignty, has promised, for example, that the Commission will examine which administrative burdens and reporting requirements it could reduce. The Commission also plans to work with technology companies to identify where regulatory uncertainty is hindering the adoption and development of AI, in order to "roll back a set of digital regulations."
From the Wave of Panic to a Democratic Vision
One thing is clear: riding the wave of public panic does not necessarily lead to sustainable and effective regulation. We must take care that the debate, currently dominated by the Anglo-American sphere, does not seep into the European space unnoticed. It must therefore become more transparent where research funds for existential risks come from, who is advising policymakers, and whether the panic-selling narratives rest on nothing more than a speculative future. What is needed is a democratic vision that does not put dreamed-up cyborgs at its center, but focuses on people in the here and now and in the future.
The Foundation for Freedom in Germany and the World
Based on the principles of liberalism, the Friedrich Naumann Foundation for Freedom offers political education in Germany and abroad. With our events and publications, we help people to become actively involved in political affairs. We support talented young students with scholarships. Since 2007, the addition "for Freedom" has been an established part of our foundation's name. After all, freedom isn't exactly in fashion these days. This makes it all the more important to campaign for freedom and to take on the responsibility that goes hand in hand with it. We have been doing this since our founding on May 19, 1958. Our headquarters is in Potsdam, and we maintain offices throughout Germany and in over 60 countries around the world.