The AI Act is a groundbreaking piece of European Union legislation that establishes a comprehensive governance and regulatory framework for artificial intelligence within the EU. Technical negotiations concluded in January 2024, yet considerable uncertainty remains about who is responsible for implementing and enforcing the Act at both the EU and national levels. This article and the accompanying document aim to clarify these responsibilities and timelines, making the AI Act more accessible to all stakeholders.
EU Member States face numerous obligations under the new AI Act. Between November 2024 and August 2026, they must establish an AI governance system by completing 18 specified tasks. They may also need to introduce new national laws and secondary legislation, some with deadlines already set and others with more flexible timelines. The document further outlines 55 categories of enforcement activities, beginning as early as February 2025, and eight tasks for conducting ex-post evaluations between 2025 and 2031.
This detailed breakdown of the AI Act's requirements is especially useful for civil society, academics, and small and medium-sized enterprises (SMEs) that may lack the resources to monitor the legislation's implementation and enforcement on their own. By laying out key priorities and their respective deadlines, the document helps these groups focus their scrutiny on the most critical areas. The list will be updated regularly to account for any changes or oversights, ensuring that it remains a reliable resource for all interested parties.