A battle over regulating artificial intelligence is taking place in the EU, and it seems like big tech is winning.
MEPs and civil society organisations are raising the alarm over lobbying efforts to weaken the AI act. I interviewed MEP Kim van Sparrentak about the act and the efforts to weaken it.
**This article is written in more of a news-piece / feature-interview style than the other posts on this blog so far. Just trying some new things out - feedback is always welcome!**
Update: A deal was reached on the AI act on December 8th - check out the summary by the European Digital Rights network here.
“It’s ridiculous not to regulate foundation models” - MEP Kim van Sparrentak
At the beginning of her presidency of the European Commission, Ursula von der Leyen expressed the ambition of creating the world’s first and most ambitious regulation of artificial intelligence. Right now, the AI act is in the final stages of debate, and the text likely to emerge from these deliberations looks very different from how it did just a few months ago.

I interviewed Dutch MEP Kim van Sparrentak from the Greens/European Free Alliance, who explained to me that key parts of the regulation are being changed under pressure from corporations such as Microsoft and Google.
The main negotiations centre around the regulation of “foundation models”, such as the model behind OpenAI’s ChatGPT, released in late 2022. Foundation models present particular risks because they can be applied in such a diversity of ways. Corporate Europe Observatory (CEO) notes that the risks include reproducing existing social prejudices, biases, and inequalities as the models are deployed. And given the novelty and potential power of these technologies, it’s unlikely we know all of the risks they might present.
Currently, the AI act takes a tiered approach: its requirements depend on the potential risks of applying the technology, rather than on the underlying model itself. Under this approach, foundation models would be subject to regulation only when they are highly capable and applied to high-risk applications. Van Sparrentak instead argued that since “foundation models can do anything, you can’t say if they’re high risk or low risk. Instead, we should call them general purpose and regulate them”.
Van Sparrentak said that without regulation, foundation models “could potentially be used in very dangerous ways. For example, right now there are very few rules for cybersecurity and companies won’t know if the models go rogue”.
Big Tech’s intense lobbying
Van Sparrentak noted that big tech companies such as Google and Microsoft (which has a 49% stake in OpenAI) have been stepping up their lobbying efforts in recent weeks. CEO reports that this year, 86% of meetings on AI with high-level Commission officials were with industry representatives.
CEO’s “Byte by Byte” report reveals a number of internal documents highlighting big tech’s attempts to soften the legislation.
According to LobbyFacts, which tracks firms’ lobbying of the EU institutions, Microsoft spent between €7 million and €8 million on lobbying between June 2022 and June 2023.
The AI act initially contained requirements for external controls and vetting by independent auditors to assess the potential risks of highly developed foundation models. “We’re really not asking them for much - we just want them to have their systems tested and to make sure they’re secure”, van Sparrentak said.
The documents retrieved by CEO through freedom of information requests show that Microsoft lobbied against external auditing requirements even before the intent to regulate foundation models was publicly known. Rather than external auditing, tech companies are arguing for self-regulation.
Europe’s own companies are influencing negotiations as well.
Earlier this year, dozens of CEOs from large European corporations signed a letter arguing that the AI act would hamper EU competitiveness and innovation. However, the letter may present a conflict of interest. Euractiv reports that one of the letter’s main writers previously served as France’s digital state secretary and now works as chief lobbyist for Mistral, a leading French AI company. This raises concerns about a revolving door between those tasked with regulating these technologies and those profiting handsomely from them.
In November it was revealed that Germany, France, and Italy stonewalled discussions on the AI act to prevent regulation of foundation models.
This may have to do with the fact that Sam Altman, CEO of OpenAI, has been openly flirting with the idea of establishing a headquarters in Paris. Altman himself is a rather contradictory figure when it comes to regulating AI. Outwardly, he has called for a careful and regulated approach to the development of AI and has been very open about its potential dangers. Behind the scenes, however, it was revealed that he has been lobbying for weaker regulations in Europe.
Like many developers in the AI space, Altman has been concerned primarily with the risk of AI systems that could potentially overthrow the human race. What is side-lined in these discussions is how these technologies might deepen inequality, erode labour rights, or consolidate capital in the hands of an ever-smaller group of corporations.
Let’s not forget that OpenAI is seeking a market valuation of $86 billion while Microsoft’s current market capitalisation is estimated to be over $2.75 trillion - the second largest in the world.
Calls for regulation from civil society
Various civil society stakeholders have been calling for the regulation of foundation models over the past months. In an open letter signed by academics and experts on AI safety and ethics, the signatories state that big tech companies should not be allowed to regulate themselves, and describe legislators’ reliance on self-regulation as “misguided”.
Agnès Callamard, Secretary General of Amnesty International, responded to Germany, Italy, and France’s slowing of negotiations by stating that “the EU must not falter at this final hurdle, and EU Member States, such as France, Germany and Italy, must not undermine the AI Act by bowing to the tech industry’s claims”.