EU Will Stand Up Office to Enforce AI Act, Says EU Lawmaker
Dragos Tudorache Says the Agency Will Be a "Powerful Enforcer" of AI Act
The European Union will soon set up a dedicated office to oversee the implementation of the AI Act, especially by big-tech companies such as OpenAI, said a key European lawmaker.
The European Parliament in June approved regulations intended to mitigate AI's potential for negative effects on society. The AI Act entered final negotiations this month between parliamentary members and a committee of member state representatives. The proposal is set to come into force in early 2026.
Dragoş Tudorache, a Romanian politician and the co-rapporteur of the AI Act, said negotiators have agreed in principle on creation of an "EU AI Office."
"We may end up calling it differently, but there will be an EU entity," Tudorache told Information Security Media Group. "We'll see whether it will stand as a separate body or if it will be an autonomous entity within the commission."
The new office, proposed in the European Council's and European Commission's draft versions of the legislation, would act as a centralized agency, with national subsidiaries responsible for hiring talent and building expertise.
In addition to promoting coordination among member states, Tudorache said, such an entity will be critical for monitoring the activities of big-tech companies with a global presence, such as OpenAI and Meta.
"No member state alone would be able to properly handle big companies as these are very powerful actors with global presence. This has been clear in the way we have implemented the General Data Protection Regulation. So for these very powerful actors, you need a very powerful enforcer."
Although a political agreement has been reached on the creation of an EU-level entity, Tudorache said discussions are currently underway to determine the functions of the new office, how it will be financed, and what "such governance at the European level looks like."
EU nations that have already laid the groundwork for such agencies include Spain, which in August announced the creation of its first-ever AI regulatory agency (see: Spain to Launch Europe's First AI Regulatory Agency).
The AI Act primarily intends to mitigate societal risks and ban a slew of applications, such as biometric recognition in public places, that are deemed to be a high risk to society.
Concerns raised against the legislation include its two-year enforcement gap, which the Dutch data regulator warned will allow more high-risk AI systems to enter the market before the law takes effect (see: EU Artificial Intelligence Act Not a Panacea for AI Risk).
"Politically, we all think that two years is too long. But we cannot make it too short either, because both member states as well as companies will need a bit of time to prepare for compliance," Tudorache said, adding that a "realistic" timeline for enforcement is between 12 and 16 months.
The Dutch agency and privacy rights groups have raised concerns about industry self-assessment of high-risk AI systems, which they argued could result in AI developers assessing their systems as low-risk, allowing them to skip security checks put in place for high-risk AI systems.
Tudorache said lawmakers are working to create "clearer and more precise" criteria for high-risk AI systems. He declined to comment on a proposed blanket ban on live AI surveillance systems, calling it a "delicate point of negotiation." Among the EU nations contesting a blanket ban on AI facial recognition systems is France, which in recent months has adopted a more pro-facial recognition stance.
AI Act co-rapporteur Brando Benifei is among the backers of a more flexible implementation of the proposed legislation, Reuters reported recently.
The EU is likely to reach a political agreement on the proposed AI Act toward the end of this month, and the legislation is likely to be voted into law by early next year, Tudorache said.