The Paradox of AI Safety in an AI Landscape Dominated by Techno-liberalism and Techno-nationalism
In his article “The adolescence of AI” (January 2026), Dario Amodei states that “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.” But what does he mean by “humanity”? Does it include only people who happen to live in the so-called democratic world? Is the third world included in this “humanity”? What unimaginable power will a factory worker in Bangladesh, a homeless person in L.A., or the millions upon millions of people living below the poverty line gain from these technological advances?
Amodei’s techno-liberal discussion is typical of tech people and academics who have gradually realized that they are capable of creating a powerful technology with far-reaching consequences, yet without fully understanding their own role. At its heart, techno-liberalism wants to be in control of the solutions, opposing government control and regulation, at least in the early stages of the technology’s development. Amodei’s view of AI risks, which are widely recognized within the AI community, is closer to a traditionally American conservative view of governance. In one passage, he argues that AI risks have to be addressed surgically, by companies and volunteers, i.e. within the tech ecosystem, and only to some extent by governments: “government actions will also be required to some extent, but these interventions are different in character because they can potentially destroy economic value or coerce unwilling actors who are skeptical of these risks (and there is some chance they are right).”
Furthermore, and in full agreement with techno-liberal ideology, he advocates for technology over the law, in the sense that he believes engineers are capable of dealing with governance issues. He claims that “it is common for government regulations to backfire or worsen the problem they are intended to solve” and that “the most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones.” In other words, governance of AI risks is, in his view, primarily a responsibility and a privilege granted to engineers rather than to democratic institutions.
The article also sends mixed messages. At times government control of AI is undesirable, yet he acknowledges the need for collective constraints on what he calls “the single most serious national security threat we’ve faced in a century”, which would require government attention. Similarly, he diverges from the usual techno-liberal globalist ideology in a way that is somewhat reminiscent of the Cold War era’s traditional good-versus-evil discourse, describing AI risks in geopolitical terms. He asks how one could be safe in a world where there is a “country of geniuses” in a data center, i.e. describing AI as a new superpower, and adds that democracies must make sure it is aligned with them.
Although never stated explicitly, Amodei’s description of AI risks shifts from “risks to humanity” to “risks to national security”, resonating with the familiar Western ideology that the West, because it consists of democracies, or because it is powerful, should also be in charge and in control of “powerful AI” and of humanity. In other words, by framing “powerful AI” as a geopolitical threat, he no longer speaks to humanity but to a generic Western or American defence/military/war complex that is factually responsible for multiple senseless conflicts and illegal, manufactured wars against particular countries across the world, almost all in violation of international law. In effect, he seems to identify “humanity” with US national interests.
Inadvertently or not, Amodei’s framing of the dangers of “powerful AI” conjures images reminiscent of the US/Western illegal war on Iraq (to pick just one example), where threats were manufactured in order to motivate an attack that resulted only in disastrous consequences. Note that our point is not that “powerful AI” poses no threat to humanity. Rather, Amodei’s description of the whole issue in geopolitical and West-centric terms resonates with the Western colonialist approach to the rest of the world. What is the guarantee that the democracies will not use powerful AI technology against “the others”? And if they do (they do), why are those people excluded from the notions of “humanity” and “civilisation”?
Anthropic and the Department of War
In order to understand the stability of Anthropic’s views on AI safety and humanity, let us examine the interactions between Anthropic and the US Department of War. Amodei’s article was published in January 2026, followed by an article in The Guardian [1] and elsewhere. There, in regard to the threats of powerful AI, Amodei speaks of “a rite of passage” which “will test who we are as a species”, but he is also hopeful that “if we act decisively and carefully, the risks can be overcome” and that “the odds are good… if we understand that this is a serious civilisational challenge”.
In February 2026, Le Monde [2] published an article regarding Anthropic’s refusal to give the US government unrestricted use of AI despite the government’s threat of forced compliance under federal emergency powers. Anthropic and other companies (OpenAI, Musk’s Grok, Google) had been contracted ($200m) to supply models for military applications. However, while the others were cleared by the Pentagon to use their products in “classified settings”, i.e. without safeguards, Anthropic acknowledged that it is not AI companies but the Department of War that makes military decisions, while maintaining that “in a narrow set of cases… AI can undermine, rather than defend, democratic values”.
This created friction with the US Department of War’s expectation to use AI in a broader set of circumstances (mass surveillance, autonomous weapons, propaganda systems). By March, the conflict was widely publicized, followed by both internal and public debates on how much companies should cooperate with the Pentagon. Although the issues do not seem to have been resolved (at least not publicly), it looks as though both sides can claim victory: the Department of War will continue using highly advanced AI products, and Anthropic can continue differentiating itself from other companies by claiming the “moral high ground”.
Was this a matter of principle for the Department of War or for Anthropic?
A look at other major AI companies shows that the Department of War did not really need Anthropic’s AI products for anything the other companies were not already providing. While Anthropic supposedly broke away from OpenAI to focus on AI safety, both continued with their own brand of techno-liberalism, accelerating and scaling up AI (with or without the risks) and opposing early interventions while talking about some sort of regulation that would be needed at some point. Other companies (DeepMind, Meta, Mistral, xAI) are in principle variations on these two, oscillating between techno-liberalism and techno-nationalism, never losing sight of their profit-driven goals. All of them may express safety concerns (who doesn’t?) despite having signed defence/war contracts, providing the Western military complex with more or less unrestricted AI technology to be used on a part of humanity that is no longer deemed human, or human enough. In short, if the major AI actors were techno-liberal from the start, requiring minimal control, their projected public image is now multi-faceted: while they still express concerns about AI safety, they have become involved in the geopolitics of technology (externally) and infrastructure control (internally).
Our perspective
The question that must be asked is whether this latest conflict between Anthropic and the US Department of War is indicative of the stability of AI companies’ nearly total monopoly on creating safeguards. Is techno-liberalism alone capable of creating genuine and reassuring safeguards for AI safety in a multi- or bi-polar power landscape? Is this perhaps what we will see in terms of power and control over AI, powerful AI, or superintelligent machines? Does this brief conflict signal the emergence of a culture of negotiation that will lead to some kind of meaningful settlement of AI safety rules, regulated by democratic countries and their institutions?
It seems that neither the techno-liberal companies nor the techno-nationalist tendencies of states or companies can guarantee meaningful, long-lasting and equitable AI safeguards. The existing safeguards, put in place by techno-liberal companies, exist either for reasons of product viability and function (not necessarily based on ethical considerations) or because of the minimal regulations imposed on AI companies. But even if safeguards really were created to protect humanity from the negative impacts of AI technology, most, if not all, safety-enthusiastic companies seem ready to remove them at the request of the state (e.g. the Department of War), allowing the technology to be used for mass surveillance, mass killings, and so on.
It then becomes quite obvious that, since AI risk issues are handled within a tightly connected web of major AI companies that jointly control and build the infrastructure, models, and access pathways, we cannot safely assume that they alone can guarantee to mitigate AI risks for all. Worse, these companies not only define what the safety issues related to loss of control are, but also what information (knowledge, relevance, truth) is, the labor that leads to it, the conditions of reasoning and decision-making, and indeed what constitutes intelligence. Power remains opaque, privately governed, and concentrated in a small group of AI actors who have given themselves the authority to structure the intelligibility of the world. How can we be safe if safeguards can be removed depending on who has the power, or who is the highest bidder?
Footnotes & References
- [1] The Guardian, “Wake up to the risks of AI, they are almost here, Anthropic boss says”, 2026.
- [2] Le Monde, “Anthropic’s refusal to give the US government unrestricted use of AI”, 2026.