The European AI Act, widely described as the world’s first comprehensive AI regulation, was first proposed on April 21, 2021 and entered into force in August 2024, with phased implementation (most obligations take effect in 2026-2027). The Act uses a risk-based framework: the higher the potential for harm, the stricter the rules. Some AI systems (emotion recognition in workplaces, predictive policing, mass surveillance) are banned as posing “unacceptable risk”; others are classified as high-risk (AI in critical sectors such as healthcare or transport), limited-risk (chatbots, deepfakes), or minimal-risk (video games). Furthermore, developers of generative AI systems (such as GPT or Claude) are required to disclose summaries of their training data and mitigate systemic risks, and high-impact foundation models face stricter testing and documentation requirements. Penalties can reach €35 million or 7% of global annual turnover, whichever is higher.
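The “whichever is higher” penalty rule is simply a maximum over two terms: a fixed floor of €35 million and a turnover-proportional component. A minimal sketch (the function name and example turnover figures are illustrative, not from the Act):

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Top-tier AI Act penalty: EUR 35 million or 7% of global
    annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a firm with EUR 1 billion turnover, 7% (EUR 70m) exceeds the floor:
print(max_ai_act_fine(1_000_000_000))  # 70000000.0
# For a firm with EUR 100 million turnover, the EUR 35m floor applies:
print(max_ai_act_fine(100_000_000))    # 35000000.0
```

The fixed floor means the penalty remains significant even for companies whose turnover is small relative to the cap.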
Although the AI Act’s major obligations for “high-risk” AI systems were not due to apply until 2026 or later, the European Commission is already contemplating delaying parts of the Act under mounting pressure from influential tech companies and the US administration.1,2,3,4 The proposed changes include a one-year “grace period” for generative AI firms and delayed enforcement of transparency rules until 2027, ostensibly to ease the regulatory burden on industry. Critics warn that this could undermine legal certainty and delay protections for people.
The one-year grace period would be granted to companies that breach the rules on the highest-risk generative AI systems and that have already flooded the market with text- or image-based products. It is meant to give these companies a reasonable time to adapt their practices without “disturbing the market,”5 and it would also delay the imposition of fines for violations of the AI transparency rules.
One reason for the proposed delays and relaxed rules is the US administration’s seemingly protectionist threats to impose tariffs on countries whose tech regulations or digital taxes are deemed harmful or discriminatory towards American technology. Another is lobbying groups6 acting on behalf of tech giants like Meta, which claims that “Europe is heading down the wrong path on AI” and that the EU AI Act introduces “legal uncertainties” and measures exceeding the Act’s own provisions; on these grounds, Meta declined to sign the Commission’s Code of Practice for general-purpose models.
The European Commission has argued that the demands to “stop the clock” are not solely due to pressure from the US administration or US companies. Rather, it argues that the current US administration’s signalling of a rollback on sustainability policies emboldened the European factions and industry leaders who already opposed sustainability legislation. In other words, the claim is that the US administration was an indirect cause of the calls to delay the EU AI Act. Whether cause or mere correlation, it should be noted that on October 6, 2025, 46 large European companies (source) sent a letter, co-authored by the CEOs of TotalEnergies and Siemens, to the European Commission arguing that the EU’s Corporate Sustainability Due Diligence Directive (CSDDD) “harms competitiveness”. The companies also request further deregulation and reforms of competition rules to allow more strategic mergers. The two-year pause is described as an indicator that Europe is “serious about its simplification and competitive agenda” (see letter).
The consequences of further delays
It seems quite clear that the proposed delay in the implementation of the Act is harmful, if not self-defeating. Firstly, the pause calls into question the credibility of the EU’s stated ambition to claim global leadership on frontier-model accountability. Secondly, it undermines the Commission’s core goals of protecting the rights and safety of EU citizens and its promise of legal certainty. For instance, under the Act’s own classification of harm, from unacceptable to high, limited and minimal, a deferral extends citizens’ exposure to AI systems already deemed unacceptable, since it allows them to operate before the bans take effect. This is in principle a complete violation of the Act’s Article 5, which operationalizes Articles 1 and 2 of the EU Charter on human dignity, integrity and privacy.7
Furthermore, the Act’s obligations for high-risk AI systems require risk-management systems, data and data-governance standards, transparency information for users, and human oversight, to mention a few. The delay will allow high-impact, high-risk AI systems in fields such as healthcare, education, law enforcement, and employment to be deployed without these full safeguards and to remain unassessed for longer, thereby contravening the protections that Articles 9-15 of the Act were designed to guarantee for EU citizens.7
Naturally, even the Act’s current guidance for general-purpose AI (GPAI) systems, which had a built-in grace window extending compliance to August 2, 2027, will be affected by further delays, eroding the EU’s own standard-setting goals of making frontier models transparent and accountable. Simply put, the EU’s transparency and systemic-risk controls on the most powerful models will be postponed even further, removing any hope of proper regulation until it is too late; that is, until sensitive citizen data has been collected and used in unforeseen ways that harm EU citizens’ integrity and privacy.
Clearly, the list of ways the delay violates the principles of the EU’s AI Act, claimed to be the first comprehensive human-centered protection, can unfortunately be made very long. The bottom line is that removing human oversight, relaxing the requirements meant to guarantee a human-centered approach to AI, and postponing sanctions against actors who violate the rules from the outset will weaken deterrence and hand non-compliant actors a competitive advantage. The delay will also give member states and companies, many of whom were divided on the matter, an opportunity to push for softer interpretations of the rules and regulations, particularly if the EU itself relaxes the implementation deadline.
Practical advantages gained by AI developers due to the grace period
Notwithstanding the fact that a delay in the implementation of the Act does not suspend the General Data Protection Regulation (GDPR) or other EU laws, the “advantage window” mainly concerns copyright-governed web mining, opacity around training disclosures, and the timing of GPAI systemic-risk obligations in the form of documentation, public summaries, adversarial testing and penalties.
Looser Data Limits: The Act imposes multiple requirements related to text and data mining (TDM): technical documentation linked to a copyright policy; “machine-readable” notices for data owners who wish to opt out; and disclosure of testing, training processes and results, including public training-data summaries. Article 53 of the AI Act explicitly requires “a summary of the content used for training” and a “copyright compliance policy.” During the delay, companies can continue to crawl and process web datasets under Article 4 of the Digital Single Market (DSM) Directive as long as they log that they did not encounter opt-out flags. They can also mirror and pre-process corpora where rights-holders have not reserved their TDM rights, and reuse earlier crawls made before such reservations were put in place. The DSM’s TDM exception still applies even after the Act is enforced, but the “proof of compliance” the Act introduces will now be postponed. In short, developers can continue ingesting and re-mixing data before the disclosure and documentation duties of Articles 52-53 of the AI Act apply.

Minimal Public Auditing: A delay in enforcement gives AI developers an opportunity to complete one or more large-scale pretraining cycles without the public disclosures stipulated by Articles 54-55 (systemic-risk GPAI models), and before systemic-risk duties for large models, such as adversarial testing, incident reporting and continuous risk mitigation, become compulsory (Article 55).

Minimal Transparency: AI developers can expand modalities to include text, image, and audio, and generate synthetic corpora aimed at improving reasoning or tool use, without revealing corpus composition as would be expected once the transparency obligations of Articles 52-53 take effect. Another effect of the delay is the postponement of watermarking and labelling of AI-generated content (Article 50).
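Neither the Act nor the DSM Directive prescribes a single format for the “machine-readable” opt-out; robots.txt is one channel rights-holders commonly use. A minimal illustrative sketch, with hypothetical crawler and site names, of how a developer might log whether a crawl encountered an opt-out flag:

```python
import urllib.robotparser

# Hypothetical robots.txt content reserving TDM rights against one crawler.
ROBOTS_TXT = """\
User-agent: ExampleAICrawler
Disallow: /
"""

def tdm_opt_out(robots_txt: str, user_agent: str, url: str) -> bool:
    """Treat a robots.txt disallow for our crawler as a machine-readable
    TDM rights reservation (one plausible signal; the Directive does not
    standardize the format). Returns True if an opt-out was encountered."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return not rp.can_fetch(user_agent, url)

print(tdm_opt_out(ROBOTS_TXT, "ExampleAICrawler", "https://example.org/p"))  # True
print(tdm_opt_out(ROBOTS_TXT, "OtherBot", "https://example.org/p"))          # False
```

The point of the “advantage window” is precisely that such logging is, for now, the main proof a developer needs: the Act’s fuller documentation duties are what the delay postpones.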
In conclusion, no immediate public training-data summaries are required for LLM/GPAI pre-training.

Minimal Reporting: Under Article 55 of the Act, the largest models are required to comply with the EU’s safety and accountability framework. With the delay, high-impact model developers can continue building, testing and deploying their models without external visibility, incident reporting, or standardized risk-mitigation obligations, thereby weakening the transparency and accountability mechanisms the EU originally designed. The delay also defers the establishment of the EU AI Office’s supervisory authority, so the enforcement infrastructure itself lags.

Ecosystem Advantage: Since the delay slows enforcement and oversight, large AI developers can expand APIs, SDKs, and integrations ahead of competitors. This allows them to collect more real-world data for alignment and fine-tuning, thereby strengthening their technical moat before conformity assessments and risk controls apply.

Technological Rationality of Domination
The delay in enforcing the EU AI Act reflects the deeper entanglement of law, power and capital in modern technocratic societies. From the perspective of the Frankfurt School, laws and regulations are not neutral rational instruments but a site where instrumental reason (the tendency to subordinate thought to technical and economic ends) comes to dominate. Regulation, timely or delayed, becomes part of the machinery that reproduces the very power it was meant to restrain.
This is succinctly demonstrated by the delay in the implementation of the EU AI Act. The Commission’s willingness to postpone regulation after geopolitical pressure from the US and lobbying by major technology firms inside and outside the EU illustrates that rationalization in the service of capital supersedes the emancipatory intent of legal order. The EU’s stated goal of ensuring human-centric AI has thus been absorbed into the logic of global competition and innovation races. Legal frameworks serve not as a moral limit on domination but as the administrative management of risk.
One could argue that this is a clear case of what Habermas and Marcuse viewed as the colonization of the political sphere by economic imperatives. The public justification that the delay will “give industry time to adapt” masks something far more troubling: a power asymmetry in which multinational AI corporations, thanks to their command over data, computational power and infrastructure, have greater discursive leverage than the democratic institutions supposedly put in place to regulate them.
If laws are meant to safeguard communicative rationality in a free society, giving citizens the ability to deliberate about technology, then the official framing of the delay masks the democratic institutions’ blatant yielding to technocratic bargaining: Europe’s professed human-centric approach giving way to Silicon Valley’s single-minded focus on economic gain at the cost of citizens’ integrity and safety.
Therefore, the deferral of the EU AI Act is not a bureaucratic glitch but a symptom of a broader issue: capitalism’s drive for innovation and financial gain continuously outruns the normative frameworks designed to harness it.
Footnotes & References
- Rankin, J. EU could water down AI Act amid pressure from Trump and big tech. The Guardian 2025.
- Nolan, B. EU considers weakening landmark AI Act amid pressure from Trump and U.S. tech giants, news report says. Fortune 2025.
- Reuters. EU weighs pausing parts of landmark AI act in face of US and big tech pressure, FT reports. Reuters 2025.
- Chee, F. Y. and Mukherjee, S. Code of practice to help firms comply with AI rules may apply end 2025, EU says. Reuters 2025.
- Financial Times. EU set to water down landmark AI act after Big Tech pressure. Financial Times 2025.
- Reuters. Tech lobby group urges EU leaders to pause AI Act. Reuters 2025.
- European Commission. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance). EC 2024.