
Negotiations among European Union lawmakers on a risk-based framework for regulating artificial intelligence are at a precarious juncture, with the actors at the table weighing safeguards for fundamental rights against the goal of enabling innovation. At a roundtable yesterday organized by the European Center for Not-For-Profit Law and the civil society network EDRi, Brando Benifei, a Member of the European Parliament and one of the Parliament’s co-rapporteurs for the AI legislation, described the negotiations on the AI Act as being in a “complicated” and “difficult” phase. The roundtable highlighted the closed-door trilogue format—the main method through which EU law is negotiated among the Parliament, the Council, and the Commission—and underscored the friction points shaping any eventual consensus. The core disputes revolve around a short list of prohibited AI practices (Article 5), the scope and depth of fundamental rights impact assessments (FRIAs), and the breadth of exemptions for national security uses. Benifei indicated that Parliament has red lines on these issues and expects the Council to offer meaningful movement; without it, a timely conclusion would be unlikely.

The trilogue landscape: actors, process, and the path to compromise

Trilogues, the three-way negotiations that bring together the European Parliament, the Council of the European Union (representing the member states), and the European Commission, are traditionally where the bulk of EU law is forged. They are notorious for their complexity and, at times, their opacity, with progress often driven by a mix of strategic concessions and behind-the-scenes bargaining. In the current AI Act process, three main actors define the contours of possible outcomes: the Parliament’s negotiating team, led by co-rapporteurs including Benifei; the Council of member-state governments; and the Commission, which drafted the original legislative text.

The ongoing talks come at a moment of heightened attention to artificial intelligence across Europe and beyond, with generative AI models drawing unprecedented public and political focus. The discussions touch on the balance between risk-based regulation and maintaining an environment conducive to innovation, research, and industry competitiveness. The Parliament argues for strong safeguards that protect fundamental rights and democratic freedoms, while the Council emphasizes national sovereignty considerations and the need to safeguard security and public interests. The Commission sits in the middle, attempting to reconcile the text with its own regulatory ambitions and its mandate to present a workable draft. The roundtable yesterday reflected this tension, with Benifei noting that the file cannot be concluded within the current timeline unless there is meaningful compromise on core issues.

Civil society groups, represented by organizations like EDRi, have offered a robust critique of the current state of play. They argue that several core protections are being rebuffed by member states, raising concerns about the risk of AI-enabled harms and the erosion of fundamental rights. The process’s lack of transparency in trilogues has also been a recurring theme, with critics warning that industry lobbyists are exerting disproportionate influence in shaping the legislative outcome. The conversation around foundational models has intensified the pressure, as both European startups and large US tech companies push for carve-outs or more permissive rules that could dilute the act’s intended protections.

Core sticking points: prohibitions, FRIA, and security exemptions

Three central tensions dominate the negotiations: the list of prohibited AI practices (the Article 5 bans), the design and scope of fundamental rights impact assessments, and the carve-outs sought by member states for national security. Each of these domains comes with sub-issues that are fueling a high-stakes stalemate.

Prohibitions on AI practices (Article 5) and the rights-at-risk calculus

The debate over Article 5 centers on a brief but powerful list of AI uses that would be prohibited or tightly restricted. Parliamentarians insist on strong prohibitions where AI could meaningfully threaten fundamental rights or public safety. They argue that without firm prohibitions, certain dangerous practices could proliferate with insufficient guardrails. The Council, however, seeks to narrow or calibrate these prohibitions to avoid overreaching bans that could stifle legitimate applications and innovation. Benifei’s position is clear: there must be a firm boundary around protections for citizens’ fundamental rights, and any drift toward loosening those protections would compromise the very foundation of the act. He warned that relaxing prohibitions too far would undermine essential safeguards, and he stressed that concessions on these points would jeopardize the act’s ability to be concluded in a timely manner.

Fundamental rights impact assessments (FRIAs)

FRIAs are central to the moral and legal architecture of the AI Act, designed to require developers and deployers to anticipate the impact of AI applications on fundamental rights and democratic freedoms before deployment. Parliament has proposed robust FRIA requirements as part of a broader package of reforms to strengthen protections for fundamental rights. Civil society groups view FRIAs as a practical and proactive tool to prevent rights violations—an instrument to compel risk mitigation early in the development cycle. The Council’s current stance on FRIAs has been described as less protective by civil society representatives, who warn that insufficient clarity and overly permissive requirements could render the assessments symbolic rather than functional.

Benifei himself expressed optimism that the fundamental rights impact assessment framework could emerge in a form close to Parliament’s original proposals, but he also warned that the law enforcement exemptions embedded in the text demand more robust movement from the Council. He argued that continuing to tolerate positions that would limit the effectiveness of these assessments or undermine their enforcement would risk rendering the AI Act incomplete or ineffective in protecting rights. At the roundtable, the importance of maintaining pressure from civil society to prevent dilution of FRIA provisions was underlined as a decisive factor in achieving a workable compromise.

National security exemptions and the scope of carve-outs

Another sticking point concerns exemptions for national security practices. Lawmakers in Parliament have expressed unease about broad exemptions that would allow certain security-related uses of AI by governments to operate with minimal oversight. Civil society voices have warned that expansive exemptions could erode the protective safeguards intended by the act, undermining the policy’s ability to safeguard civil liberties and public accountability. This is a particularly sensitive area because it touches on the balancing act between national security needs and individual rights, with potential implications for the oversight environment within the EU.

Related domains: biometrics, risk classification, and export controls

Adding to the complexity are specific technical and procedural questions raised in relation to other aspects of the act. For example, member states currently oppose a full ban on remote biometric identification in public spaces. There is also no consensus on requiring registration of high-risk AI systems used by law enforcement and immigration authorities, nor on creating a loophole-proof risk classification process for AI systems. The absence of agreement on restricting the export of prohibited AI systems outside the EU is another area of ongoing disagreement. In parallel, there is considerable discussion around whether bans on biometric categorization and emotion recognition should be extended to avoid unintended harms and rights violations.

These issues illustrate how the AI Act is not a single instrument but a mosaic of interconnected rules, each with technical implications and political sensitivities. The interplay between prohibitions, risk assessment, governance, enforcement, and export considerations creates a dense web of negotiations that require careful calibration to ensure a comprehensive yet workable regulatory regime.

Civil society’s assessment and proposed safeguards

Civil society organizations have remained vocal about the need to place fundamental rights and democratic freedoms at the core of the AI Act. They argue that the current state of play in the trilogues risks watering down essential protections through concessions that would tilt the balance toward government discretion or commercial interests at the expense of privacy, equality, non-discrimination, and other civil liberties.

Key civil society recommendations and persistent roadblocks

Sarah Chander, senior policy adviser at EDRi, outlined a lengthy set of core recommendations aimed at safeguarding fundamental rights from AI overreach. She pointed to areas where member states appear to be resisting what civil society sees as necessary safeguards, such as a comprehensive ban on remote biometric identification in public settings, a clear framework for registering high-risk AI deployments by law enforcement and border authorities, and a robust, loophole-proof risk classification framework for AI systems. Chander also flagged the lack of agreement on measures to limit exports of prohibited systems outside the European Union, indicating that while some protections exist on paper, practical enforcement and international reach remain unresolved.

Benifei echoed the need for a strong FRIA framework and emphasized that the agreement requires a careful balance: it is essential to protect fundamental rights, yet it is equally important to avoid an overly rigid or impractical regime that could hinder legitimate uses of AI. He underscored the importance of not allowing exemptions from the prohibitions to become a license for unrestrained government action on sensitive issues. The civil society participants stressed the importance of a “real” FRIA—one that proactively influences design and deployment choices to protect democracy and human rights.

Lidiya Simova, policy advisor to MEP Petar Vitanov, highlighted that FRIAs have faced substantial pushback from the private sector, which fears an undue burden on companies. While she noted that trilogues have not yet fully tackled this issue, she suggested that parliamentarians expect further resistance in this area, including potential attempts to exempt private firms entirely from conducting FRIAs. However, she viewed the prospect of further watering down FRIA obligations as a “longer shot,” arguing that meaningful obligations should translate into consequences if not met.

Simova also emphasized that the core challenge goes beyond individual provisions and reflects a larger structural question: the EU is attempting to safeguard fundamental rights through a framework designed for product safety. This has been a longstanding critique of the EU approach, and she warned that reconciling rights protections with a product-safety framework is not straightforward. The long history of amendments, revisions, and diverging visions on the AI file indicates deep-seated disagreements about how best to reconcile the protection of fundamental rights with a dynamic and rapidly evolving technology landscape.

Foundational models, industry lobbying, and the regulatory debate

The regulation of foundational models—large, multi-use AI models that underpin many downstream applications—has become a focal point in the lobbying battle shaping the AI Act. Benifei stressed that the question of how to regulate generative AI and foundational models is a major source of disagreement, partly driven by intense lobbying from industry. He observed that governments themselves are under pressure from various interest groups and that while this lobbying is legitimate, it must not derail the ambition to regulate responsibly.

A recent Euractiv report highlighted a pivotal moment in the negotiations: a meeting of a technical body within the Council broke down after France and Germany pushed back against Parliament’s proposals for a tiered approach to regulating foundational models. The report named French AI startup Mistral AI and German startup Aleph Alpha as some of the players lobbying for carve-outs that would shield foundational models from stringent regulation. The Corporate Europe Observatory, a not-for-profit watchdog on EU lobbying, noted the scale of industry lobbying on the AI Act and pointed to the involvement of European companies such as Mistral AI and Aleph Alpha in Brussels, where they have established offices to advocate for looser rules for foundational models. It argued that the push for carve-outs risks derailing the AI Act and undermining safeguards meant to protect human rights.

Mistral AI’s CEO Arthur Mensch acknowledged that there has been lobbying against upstream regulatory obligations on base models. He argued that the focus should be on regulating applications, not the underlying infrastructure, and he expressed optimism that regulators are recognizing this perspective. He also suggested a framework for accountability: downstream deployers should be able to verify a model’s behavior in their specific use case, with the provider offering evaluation, monitoring, and guardrails to assist in verification. Mensch asserted support for AI application regulation while criticizing the prevailing approach to foundational model regulation as overly burdensome and biased toward large, US-based firms.

Aleph Alpha, the other company named in the lobbying reports, did not respond to requests for comment by the time of reporting. Open discussion of lobbying’s influence reflects a broader concern: if foundational models are shielded from scrutiny, downstream harms could persist even as the market grows.

AI ethics and safety advocates, including Max Tegmark, president of the Future of Life Institute, warned against regulatory capture. Tegmark argued that efforts by major tech players to secure carve-outs for foundational models could render the AI Act ineffective or even a global misstep, diminishing Europe’s potential leadership in responsible AI governance. He urged lawmakers to stand firm in defending the act’s safeguards and to prevent corporate interests from compromising the protection of thousands of European companies and individuals.

The precise trajectory of foundational-model regulation remains unclear. The French and German positions could end up hardening or softening the Council’s stance, depending on whether Parliament maintains its insistence on upstream accountability for model creators. An EU source familiar with Council deliberations characterized the issues as “tough points” with little flexibility, though stopped short of declaring them absolute red lines. The same source indicated there remains a possibility of a conclusive trilogue on December 6, as preparatory discussions continue and member states seek to refresh their mandate to the Spanish presidency. Technical teams from both the Council and Parliament continue to explore “landing zones” that could pave the way to a provisional agreement, while acknowledging that many sticking points remain highly sensitive for both sides.

Downstream implications and the governance question

Is the debate over foundational models about protecting upstream developers, or about ensuring accountability for downstream deployments? The arguments presented by Mensch, Tegmark, and others reveal a core tension: should regulation focus on the risk and accountability of the applications that users actually employ, or should it also impose meaningful duties on the creators and providers of the underlying foundational models? Regulators face the challenge of setting rules that are precise enough to be enforceable, yet flexible enough to adapt to rapid technological change. The current discussions suggest a preference in Parliament for robust governance that can hold model developers to account for potential biases, harm, and misuse by requiring transparency around training data, model capabilities, and risk-management tools, while ensuring that downstream operators have access to the verification tools they need to maintain compliance.

Timeline dynamics, risks, and potential outcomes

The timing of the AI Act negotiations is crucial. The next trilogue is scheduled for December, and there is a sense of urgency given the EU’s political calendar, including European elections next year and potential reconfiguration of the Commission and Parliament thereafter. If an agreement cannot be reached in the coming weeks, there might be a compressed window to finalize the act before the Parliament dissolves for elections, or the negotiations could spill into a new legislative cycle under a different political configuration. An unresolved stalemate could therefore delay the EU’s ability to present a coherent, harmonized regulatory framework for AI at a time when global attention to AI governance is at a peak.

A broader geopolitical element underpins the negotiations as well. The EU has positioned itself as a “rule maker” in the global AI governance landscape, aiming to provide a robust, rights-based framework that could influence other jurisdictions’ approaches to AI regulation. The possibility that the United States could move quicker on some AI governance aspects, or that regulatory carve-outs could enable competitive advantages for certain firms, adds an extra layer of pressure to reach a solid compromise. The possibility that the Council might find ways to accommodate foundational-model carve-outs without undermining core protections remains a dynamic and contested part of the conversation.

Within the EU’s internal political dynamics, the Spanish presidency and the Council’s preparatory work are central to shaping the eventual mandate for the December negotiations. Even as technical teams search for those landing zones, observers acknowledge that the path to a final text will likely require concessions on multiple fronts, with each concession potentially triggering downstream political repercussions across member states.

What could tip the balance?

Several factors could influence the negotiating outcome in the near term. First, a more robust acceptance of fundamental rights impact assessments by the Council could unlock progress on a range of other FRIA-related questions and enforcement mechanisms. Second, renewed political will to limit or narrow national-security exemptions could create momentum for a broader set of protections across the act. Third, a meaningful, enforceable approach to prohibitions on high-risk uses could help align the various stakeholders around a shared risk framework. Fourth, clear, transparent guidelines on foundational-model governance that balance safety with innovation could reduce anxiety among member states concerned about over-regulation stifling European competitiveness.

A potential breakthrough would involve concrete language on the scope and depth of FRIA requirements, a more precise definition of national-security exemptions that preserves the act’s protective aims, and a well-calibrated approach to regulating foundational models that resists carve-outs that would undermine the framework’s integrity. If negotiators can achieve such clarity, the AI Act could emerge as a credible governance instrument, shaping Europe’s AI future while preserving its distinctive commitments to human rights and democratic norms.

Broader implications: Europe’s leadership in AI governance and the global stage

The AI Act represents more than a regulatory instrument for a single technology sector; it embodies the European Union’s broader ambition to be a global standard-setter in AI governance. The negotiations reflect a deep commitment to aligning rapid technological progress with safeguarding fundamental rights, human dignity, privacy, and equality. The outcome will influence how European companies, as well as international firms operating in Europe, approach AI development and deployment. A robust framework that is both principled and practical could set a template for responsible AI governance that resonates beyond the EU’s borders.

At the same time, the negotiations highlight the tension between ambitious regulatory aims and the pragmatic needs of a technologically dynamic economy. Critics ask whether a risk-based framework can remain flexible enough to accommodate innovations while still delivering the predictability and clarity that businesses require. Proponents argue that a strong, rights-forward framework is not only a protective measure but also a competitive differentiator that could spur trust and adoption of AI technologies across European markets.

The conversation also intersects with other EU digital and data policies, illustrating how AI regulation fits into a broader ecosystem of rules governing data privacy, consumer protection, platform liability, and digital market competition. The interconnectedness of these policies means that progress—or the lack thereof on the AI Act—could have ripple effects across multiple domains of EU policy, influencing research funding, industrial strategies, and the global competitiveness of European tech sectors.

The human element: stakeholders, safeguards, and the future of rights in an automated world

Beyond the mechanics of negotiation and the technicalities of the regulatory framework lies a central human concern: how to ensure that the rapid deployment of AI technologies does not erode fundamental rights or undermine democratic values. The participating voices—parliamentarians, government representatives, civil society advocates, and industry players—each articulate a different emphasis on what constitutes responsible innovation. Parliament’s insistence on stringent FRIA provisions, robust prohibitions, and accountable governance reflects a commitment to protecting individuals and communities from potential harms, biases, and abuses that AI systems could perpetuate.

Industry voices stress the importance of a predictable regulatory environment that encourages investment and innovation, warning against regulatory approaches that could impose undue burdens or stifle small and medium-sized enterprises. Civil society advocates emphasize the need for enforceable commitments and real-world safeguards that translate into meaningful protections for people’s liberties, privacy, and autonomy. The tension among these perspectives is not merely a policy dispute; it is a negotiation about how European society will integrate powerful AI technologies into everyday life while maintaining a strong commitment to human rights and democratic accountability.

The outcome of these negotiations will set a precedent for how the EU manages risk, drives innovation, and protects citizens in a world increasingly shaped by intelligent systems. The December trilogue could become a watershed moment—either paving the way for a robust, rights-focused AI Act that can serve as a global template or risking a stalemate that delays progress and weakens Europe’s leadership on AI governance.

Conclusion

In sum, EU lawmakers find themselves navigating a high-stakes, multi-faceted negotiation over the AI Act, with three principal battlegrounds—prohibitions on certain AI uses, the depth and enforceability of fundamental rights impact assessments, and national security exemptions—shaping the path forward. Civil society advocates push for strong, enforceable safeguards that prioritize citizens’ rights and democratic freedoms, while industry stakeholders press for clarity and practical regulation that supports innovation, competition, and market confidence. The handling of foundational models has added another layer of intensity, with powerful industry players pushing for carve-outs that could reframe the regulatory landscape.

As the next trilogue approaches in December, the stakes are high: a well-calibrated compromise could position Europe as a global leader in responsible AI governance, setting standards that others may follow. Conversely, failure to achieve a balanced agreement could delay timely regulation, weaken existing protections, and invite renewed uncertainty for developers, deployers, and the public alike. The fate of the AI Act—whether it emerges as a robust, rights-respecting framework or slides into a protracted impasse—will hinge on whether all sides can translate broad aspirational goals into precise, enforceable, and adaptable rules that keep pace with a rapidly evolving technology landscape while safeguarding the fundamental rights at the heart of European values.