Technology is a two-sided phenomenon: on the one hand the operator, on the other the object. Where both operator and object are human beings, technical action is an exercise of power. Where, further, society is organized around technology, technological power is the principal form of power in the society.
Today, Big Tech is delivering new technologies at an ever-accelerating rate, motivated by a race for the best AI, the most performant model, and, arguably, monopoly over the market. Indeed, as they exist at the moment, digital ecosystems create self-perpetuating cycles of escalating benefits for Big Tech, ensuring these companies' continued growth while making independent AI development ever more difficult. While smaller companies and academic institutions lack the financial means or the computational power to develop technologies with competitive performance, larger bodies that started out as philanthropic initiatives have usually evolved, or been co-opted, back into the for-profit paradigm.1 The sector is therefore dominated by a handful of influential actors who share a narrow, homogeneous set of objectives that arguably center on dominance over the market. And what follows from wanting control over the market is the desire for control over the policy that governs, or facilitates, that dominance.
Although this may seem common in today's world (described either as natural selection or as the market's need to dictate and form its own rules), social and democratic control over the development of high-tech tools, over what they are intended for, and in particular over their very nature is dangerously lagging behind. Traditionally, legislation is supposed to form the foundation on which social harmony rests and society operates, and its timeliness can be a central factor in whether it succeeds at doing so. Today, legislation around technology faces unprecedented challenges: legal progress is at once too slow to keep up with AI advancements, too rapid for the general public to follow, and perhaps too strongly influenced by the very developments it attempts to govern. And if the general public cannot comprehend what is happening to the protection of their private data and their civil and individual rights, and if the information the public receives is already dominated by tech companies able to direct certain information to certain groups or to manipulate that information, participatory democracy will be directly compromised. While this phenomenon originates principally in Western countries, it has international impact, unfairly affecting developing countries and systemically disempowered groups.
The Policy Influence Challenge
One major obstacle to proper AI regulation, as legal experts have pointed out, is the unprecedented pace of technological progress. Another is that progress in the field is confined to and dominated by the main actors of Big Tech: a select few of the most influential companies, generally based in the U.S., which, thanks to their position, exert international influence on the deployment, regulation, or deregulation of AI.
It is therefore important to take a closer look at the interactions between tech companies and the process of AI policymaking. An important framework for understanding policymaking, the Multiple Streams Framework (MSF) developed by John Kingdon,2 lays out three principal streams that constantly shift and are intrinsically linked, and which together set the ground for policy change. In this framework, regulation is determined by (1) policy problems (societal issues that have come to the attention of policymakers), (2) policy solutions (policy ideas that policy entrepreneurs develop and advocate for to address such problems), and (3) political conditions (the most ambiguous of the three streams, and probably the one that carries the most weight, since political will determines whether policymaking efforts will go towards addressing a given problem). In short, policymaking is a complex process, but viewed through these three streams it is largely the result of an issue and its accompanying solutions falling within the scope of what both lawmakers and political actors want to accomplish. When these conditions are fulfilled, considerable action is likely to follow.
This framework, which can help clarify the source of danger in Big Tech's expansive role today, reveals a fundamental flaw: in their current position, these companies occupy central roles in each of these streams.3 By exerting their influence in all three, they become key actors capable of reshaping the policy landscape. We have already seen numerous, quite public, attempts by Big Tech to interfere with policymaking. For instance, when the EU was drafting the EU AI Act, the first comprehensive AI law worldwide, and its follow-up guidelines on General-Purpose AI, tech giants advocated for delaying their implementation and for watering down the protections they provided.4,5,6 Similarly, in Canada, Big Tech companies have historically used intimidation and subversion tactics to evade regulation and prevent accountability.8 In the U.S., Big Tech's lobbying habits are only growing stronger under the new administration.9,10,11,12,13,14 With this influence on government, the companies now have a solid track record of molding state rules and federal initiatives,15,16,14 and they are beginning to take on political roles at the international level.17,18,19 Some of these interferences are also enabled by the tech giants' direct presence in government, exemplified by Elon Musk's former role as head of the Department of Government Efficiency (DOGE), in which he sought to dismantle agencies and departments that had affected or restricted his own companies' power and influence.20,21 Such instances naturally raise questions about conflicts of interest and undermine public trust in the impartiality and integrity of governmental decision-making bodies, especially when governments prioritize corporate interests over the people's.
The Path Forward
Throughout these episodes, independent organisations and civil society groups have been vocal in their opposition to Big Tech's central role and influence on policy.22,23,24,25,7 Without stronger and more widespread AI literacy, however, we are missing out on the far greater impact that broad public action could offer. Technology is a two-sided phenomenon: on the one hand the operator, on the other the object. But when the operator alone seizes the power yielded by this relation, the resulting asymmetric power dynamic can become dangerous. The public, the object here, must become aware of just how much can be done to improve all of our conditions and protections, and must demand a more equitable share of technological agency.
Footnotes & References
1. See OpenAI's current model compared to its start as a non-profit, or Anthropic's start compared to its situation today, with Amazon and Google among its principal investors.
2. Hoefer R. The Multiple Streams Framework: Understanding and Applying the Problems, Policies, and Politics Approach. J of Pol Practice & Research. 2022;3(1):1–5. doi: 10.1007/s42972-022-00049-2. Epub 2022 Feb 22.
3. Khanal S, Zhang H, Taeihagh A. Why and how is the power of Big Tech increasing in the policy process? The case of generative AI. Policy Soc. 2025;44(1):52–69. doi: 10.1093/polsoc/puae012. Epub 2025 Jan.
4. EU AI Champions. Stop the Clock - Open Letter. 2025 Jun.
5. Kroet, C. US tech giants ask European Commission for 'simplest possible' AI code. Euronews. 2025 May.
6. Stengg, W., on behalf of the European Commission, meeting with BM, Microsoft, Amazon, Meta, Google, and OpenAI representatives. Meeting minutes. 2025 May.
7. EDRi, Access Now & The European Consumer Organisation. EU legislators must close dangerous loophole in AI Act. 2023 Sep.
8. Fry, H. (Chair). Tech Giants' Intimidation and Subversion Tactics to Evade Regulation in Canada and Globally: Report of the Standing Committee on Canadian Heritage. House of Commons, Canada, 44th Parliament, 1st Session. 2024 Nov.
9. Common Cause. Big Tech is Donating Millions to Trump's Inauguration. 2025 Jan.
10. Minkin, A. Big Tech Cozies Up to New Administration After Spending Record Sums on Lobbying Last Year. Issue One. 2025 Jan.
11. Open Secrets. Internet Summary. 2024.
12. America PAC - Financial Summary, 2023-2024 period. Federal Election Commission of the United States of America.
13. Feiner, L. Meta had its biggest lobbying quarter ever. The Verge. 2024 Apr.
14. Minkin, A. Big Tech Lobbies for a Seat at the Table as the 119th Congress Sets Its Tech Policy Priorities. Issue One. 2025 Apr.
15. Feathers, T. and Ng, A. Tech Industry Groups Are Watering Down Attempts at Privacy Regulation, One State at a Time. The Markup. 2022 May.
16. Merican, D. Elon Musk's PAC spent an estimated $200 million to help elect Trump, AP source says. AP News. 2024 Nov.
17. Laforge, G. Big Tech's Foreign Policy Takeover. Tech Policy Press. 2025 May.
18. Shalal, A. and Roulette, J. Exclusive: US could cut Ukraine's access to Starlink internet services over minerals, say sources. Reuters. 2025 Feb.
19. Lambert, J. and Tanis, F. DOGE dismantling foreign aid agency started by George W. Bush. NPR. 2025 Apr.
20. Allyn, B. Elon Musk's DOGE takes aim at agency that had plans of regulating X. NPR. 2025 Feb.
21. Darmiento, L. These departments investigating Elon Musk have been cut by DOGE and the Trump administration. Los Angeles Times. 2025 May.
22. Friends of Canadian Media. Open Letter: Canada cannot afford to concede more to foreign tech giants. 2025 Jul.
23. EDRi and partner associations. Open Letter calling the EU to resist Big Tech bullying. 2025 Jan.
24. European Center for Not-for-Profit Law. Open Letter Against Attempts to Delay or Reopen the AI Act. 2025 Jul.
25. European Center for Not-for-Profit Law. Open Letter: the AI Act Must Protect the Rule of Law. 2023 Sep.