Aligned Yet Different: A Deep Dive into the EU AI Act's Rules for High-Risk and General-Purpose Models¶
The European Union’s Artificial Intelligence Act is not another piece of tech regulation; it is a foundational legal framework poised to define the development and deployment of AI across the continent and beyond. Specific requirements came into force in February 2025, while it is scheduled for 2nd August.

Just the preamble of the Act is 44 pages long! It defines the purpose of the legislation (hold on to your breath when reading out loud):
The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Interesting to see the “free movement” of AI goods and AI Services mentioned. I have not seen "free movement" of privacy related data in the EU GDPR regulation.
As an EU and UK citizen, I am intrigued by the impact this legislation will have, and the diverging plans of the UK and EU governments.
For business leaders, developers, and innovators, understanding this new legal framework is important. The Act avoids a simplistic, one-size-fits-all approach, instead opting for a risk-based system by defining specific rules for the high-risk AI systems and the general-purpose AI models respectively.
One set of rules is designed to govern specific, high-stakes applications, while the other aims to manage the foundational technology that will power an entire ecosystem of future innovations. I explore where these regulatory paths converge and where they split.
Notable Definitions¶
High-Risk AI Systems:¶
AI systems are classified as high-risk if they are used as a safety component of a product or are themselves a product covered by specific Union harmonisation legislation.
AI systems in certain pre-defined areas are automatically considered high-risk, including those related to:
- Biometrics
- Critical infrastructure
- Education and vocational training
- Employment and workers’ management
- Access to essential services and benefits
- Law enforcement
- Migration, asylum, and border control management
- Administration of justice and democratic processes
Providers of high-risk AI systems must establish a risk management system, ensure data quality, provide technical documentation, ensure human oversight, and meet requirements for accuracy, robustness, and cybersecurity.
General-Purpose AI Models:¶
These are those models not classified as high-risk, naturally.
‘High-Impact capabilities’¶
Hold on to you butts to understand this definition:
means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models;
’Systemic risk’¶
means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain;
The Shared Foundation: Core Principles of Trustworthy AI¶
The AI Act establishes a set of shared principles that apply to both high-risk systems and the most general-purpose models, creating a consistent foundation for “trustworthy” AI.
The first shared principle is proactive risk management: For high-risk AI systems, providers must establish, implement, and maintain a continuous risk management system throughout the system’s entire lifecycle, covering foreseeable risks to health, safety, or fundamental rights when the system is used as intended. Similarly, providers of general-purpose AI models must also assess and mitigate possible systemic risks that could stem from the model’s development or use and could have a significant negative impact on public health, security, or society as a whole – that is a rather wide definition. In both cases, the regulation demands that developers think critically about potential harm from the outset and take concrete steps to prevent it.
A second area of alignment is the mandate for comprehensive documentation:
-
High-risk AI systems must be accompanied by detailed technical documentation before they are placed on the market.
-
The providers of all general-purpose AI models are required to draw up and maintain technical documentation, including details about the model’s training and testing process and the results of its evaluation. They must prepare separate documentation for the downstream providers.
Finally, the Act imposes a common imperative for technical robustness and reliability. High-risk AI systems must be designed to achieve an appropriate level of accuracy, robustness, and cybersecurity, performing consistently throughout their lifecycle. They must be resilient against errors or inconsistencies and protected from attempts by malicious third parties to alter their use or performance.
The providers of general-purpose AI models with systemic risk must perform thorough model evaluations, which include conducting adversarial testing to identify and mitigate risks. Red team exercises for AI? Im in. They are also required to ensure an adequate level of cybersecurity protection for the model and its physical infrastructure. Call me sceptic, but this may take some big fines to drive the culture change. I wonder why the EU AI Act does not directly reference EU NIS2 regulation.
Divergent Paths: Tailored Regulations for Distinct Challenges¶
While built on a shared foundation, the specific regulatory obligations for high-risk systems and general-purpose models diverge significantly. These differences reflect a nuanced understanding that a final product used in a critical context poses different challenges than a foundational model with vast but undefined potential.
The most critical distinction lies in the core regulatory focus. The rules for high-risk AI are fundamentally vertical and application-specific. They target AI systems intended for use in a number of pre-defined areas where the stakes for individuals are high, such as in education, employment, critical infrastructure, and law enforcement. The regulation is concerned with the direct impact of these systems on people’s lives.
In contrast, the regulations for general-purpose AI models are horizontal. They address the foundational technology itself, recognising that a single model can be integrated into a vast variety of downstream systems or applications. The concern here is more about the systemic, widespread impact a powerful and widely used model could have, from copyright implications to unforeseen societal risks.
For high-risk systems, the requirements are specific and prescriptive. Providers must adhere to strict data governance practices for their training, validation, and testing datasets. They must design their systems to allow for effective human oversight, providing tools that enable a human to intervene or override the system’s output.
The obligations for general-purpose AI model providers are structured differently, focusing more on enabling a safe ecosystem. A key and unique requirement is the obligation to put in place a policy to comply with Union copyright law. This includes making publicly available a detailed summary of the content used for training the model. The content creators in the EU should be happy with this provision.
Finally, the regulations differ in their point of intervention and enforcement. For a high-risk AI system, compliance is policed through a conformity assessment that must be completed before the system is placed on the market or put into service. For general-purpose AI models, the obligations for providers begin as soon as the model is placed on the market. For models with systemic risks, these duties are ongoing and are supervised directly at the Union level by the newly established AI Office within EU Commission.
Cybersecurity¶
The EU AI Act elevates cybersecurity from a best practice to a core legal requirement for regulated AI systems. For high-risk AI, providers must design and develop systems that are resilient against attempts by unauthorised third parties to exploit vulnerabilities and alter their use or performance throughout their lifecycle. This mandate specifically includes protecting against AI-centric threats like data poisoning, model poisoning, and adversarial attacks.
Similarly, providers of general-purpose AI models with systemic risk must ensure an adequate level of cybersecurity protection for the model itself and its physical infrastructure. To streamline compliance, the Act allows providers to demonstrate conformity through existing certifications under the EU’s Cybersecurity Act.
The Global Reaction: Concern, Competition, and Cooperation¶
The AI Act’s detailed approach has not gone unnoticed on the global stage. Its implementation is being closely watched by tech companies, other governments, and international bodies, each with their own perspective on its potential impact.
The Tech Industry’s Cautious Welcome¶
The reaction from the technology sector has been a combination of cautious support and significant concern. While there is a broad consensus on the need for AI regulation, many companies worry about the practical implementation of the Act.
Lobby groups representing major firms like Google and Meta have argued that the compliance framework, including voluntary codes of practice, creates a “disproportionate burden on AI providers”. One of the most significant points of contention is the transparency requirement for providers to publish summaries of their training data. Companies fear this could expose them to a flood of copyright lawsuits and intense scrutiny from data protection authorities. I personally agree with the requirements in the legislation for transparency. High-tech, have for some time benefited from copyrighted content. See the US court battles between publishers and Meta or Anthropic.
Despite these concerns, companies are actively engaging with the new rules. The European Commission’s voluntary codes of practice are seen as a path to “legal certainty and reduced administrative burden,” and some firms, like France’s Mistral, have quickly signalled their intent to sign on, while others remain in a review phase.
The UK’s “Pro-Innovation” Counterpoint¶
In contrast to the EU’s legal approach, the United Kingdom is charting a deliberately different course. This likely stems from fundamentally different underpinnings of the legal system— freedom to do what you want (precedence-based UK), compared to EU stricter legislation alignment.
The UK’s strategy is explicitly “pro-innovation,” prioritising flexibility by relying on existing laws and regulators rather than creating a single, overarching AI law. This approach is structured around five cross-sectoral principles: safety, transparency, fairness, accountability, and redress.
For now, the UK government has chosen a non-statutory path, setting it apart from the EU’s legally binding Act. The focus remains firmly on economic growth, as outlined in the “AI Opportunities Action Plan” from January 2025, which aims to make the UK a premier destination for AI companies through investment in infrastructure and talent. Underscoring its independent path, the UK has also signed a bilateral agreement with the United States on AI safety testing, distinct from the broader European framework.
The World Economic Forum’s Call for Global Governance¶
International bodies like the World Economic Forum (WEF) view the EU AI Act as a pivotal moment for global governance. The WEF has suggested the Act could become a global benchmark for AI regulation, much as the GDPR did for data privacy. This pioneering effort by the EU is seen as a crucial first step in preventing the unchecked development of AI.
The WEF’s own initiatives, such as the AI Governance Alliance, are closely aligned with the Act’s goals, seeking to unite industry and government to foster the “responsible global design and release of transparent and inclusive AI systems”. The Forum has praised the Act’s emphasis on building trust through human oversight, data quality, transparency, and accountability. It frames the legislation not as an isolated European effort, but as a key part of a wider global movement toward responsible AI, which includes the UN’s recent adoption of its first global resolution on the topic.
Conclusion: A New Era of AI Accountability¶
The EU AI Act ushers in a new era of accountability for artificial intelligence. The global reaction underscores the Act’s significance; it is simultaneously a source of concern for some in the tech industry, a competitive counterpoint for nations like the UK pursuing a different strategy, and a celebrated milestone for international bodies advocating for global governance.
Only time will tell if this new AI Act will stifle or accelerate innovation, protect privacy and mental health, protect the rights of copyright holders, and in general make our society better; whatever “better” means.