The EU AI Act: Legislative Journey & Key Elements

A brief overview of the EU's sweeping legislation on artificial intelligence

[Image: a scale of justice balancing a classic book of law against a glowing cybernetic brain, with a half-human, half-machine figure in the background, symbolizing the EU's effort to balance AI innovation with ethical considerations and fundamental rights.]

FuturePoint Digital is a research-based consultancy positioned at the intersection of artificial intelligence and humanity. We employ a human-centric, interdisciplinary, and outcomes-based approach to augmenting human and machine capabilities to create superintelligence. Our evidence-based white papers can be found on FuturePoint White Papers, while FuturePoint Conversations aims to raise awareness of fast-breaking topics in AI in a less formal format. Follow us at: www.futurepointdigital.com.

The European Union is poised to lead the way in the regulation of artificial intelligence (AI) with its groundbreaking Artificial Intelligence Act (AI Act). Adopted after extensive negotiations, this comprehensive legislation aims to harmonize rules across the EU, balancing the promotion of innovation with the imperative to safeguard health, safety, and fundamental rights.

FuturePoint Digital is tracking this and related AI legislation around the globe in an effort to anticipate the general direction and implications of such initiatives. In this post, we briefly examine the AI Act's journey from proposal to political agreement, exploring the key components of the compromise that will shape the future of AI development and deployment. With its focus on high-risk applications, prohibited practices, and the establishment of a robust governance structure, the AI Act sets a precedent for responsible AI use that could serve as a model for regulators worldwide.

The main elements of the compromise on the proposed Regulation for harmonized rules on artificial intelligence within the EU (known as the AI Act) are summarized below. (The full version of the AI Act can be accessed via the following link: https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf).

Development of the AI Act

  • The European Commission proposed the AI Act on April 21, 2021.

  • The Council adopted its General Approach on December 6, 2022, and the European Parliament confirmed its position on June 14, 2023.

  • Several political trilogues took place throughout 2023, focusing on less controversial parts of the proposal, measures supporting innovation, classification of high-risk AI systems, and more contentious issues like general-purpose AI models and systems, governance, and the prohibitions and law enforcement package.

  • The final trilogue occurred from December 6 to 8, 2023, where all political issues were agreed upon, concluding the inter-institutional negotiations.

Main Elements of the Compromise

  • Subject Matter and Scope: The AI Act aims to ensure a high level of protection for health, safety, and fundamental rights, with national security explicitly excluded.

  • Definition of an AI System: Adjusted to align with the work of international organizations and clarified to exclude traditional software systems.

  • Prohibited AI Practices: Includes prohibitions like real-time biometric identification in public spaces with certain exceptions, untargeted scraping of facial images, and limited bans on emotion recognition and predictive policing.

  • High-Risk AI Systems: Post-remote biometric identification and other specified systems are listed as high-risk, subject to additional safeguards.

  • General Purpose AI Models (GPAI Models): Introduces horizontal obligations for GPAI models, including documentation, risk assessments, and mitigation measures, with compliance facilitated through codes of practice or standards.

  • Governance and Enforcement: Establishes a new governance structure, including the AI Office and an enhanced role for the AI Board, with specific tasks and advisory bodies for technical advice and stakeholder input.

  • Penalties: Sets fines for non-compliance, with specific amounts for various infringements, including a grace period for providers of GPAI models.

  • Entry into Application: Specifies timelines for the regulation's application, ranging from 6 to 36 months for different provisions.

The AI Act represents a significant step toward regulating AI, ensuring safety, protecting fundamental rights, and fostering innovation while addressing the risks associated with AI technologies. The compromise text reflects a balanced approach, aiming both to protect citizens and to encourage the development of AI within a clear legal framework.

FuturePoint Digital is tracking these and related legislative and regulatory developments and will provide more in-depth analysis of their implications.

How might FuturePoint Digital help your organization reimagine the art of the possible with respect to new ways of working, doing, thinking, and communicating via emerging technology?

Follow us at: www.futurepointdigital.com