The agreement among the leading groups in the European Parliament on the AI regulation is dead, opening the door for amendments from both sides of the aisle.
The AI Act is a landmark piece of legislation to regulate Artificial Intelligence based on its potential to cause harm. The European Parliament is set to vote on the legislative proposal on 14 June, as the deadline for tabling amendments passed on Wednesday (7 June).
At the end of April, the four main political parties agreed that they would not table alternative amendments, with the partial exception of the European People’s Party (EPP), which was granted some flexibility on the issue of remote biometric identification.
However, in the eyes of the other main groups, the EPP abused this flexibility by tabling separate amendments as a group rather than requesting a split vote, which would see a provision from the compromise text voted on separately.
“The EPP broke the deal. They need to take responsibility for that. The agreement was no group amendments,” said a European Parliament official, adding that now “anything is possible” as the other groups might now decide to support other amendments.
A parliamentary official from the centre-right group called the accusation “fake news”, arguing that “we have it in the minutes that EPP was granted flexibility on RBI [remote biometric identification] in plenary. It was always up to our group to decide how to do it.”
Remote biometric identification
The issue of remote biometric identification in public spaces has been a hot topic in parliamentary discussions, as lawmakers were conflicted between the need to ensure security and the risk of mass surveillance.
The compromise was to ban the real-time use of this technology whilst allowing it ex-post to investigate serious crimes, after approval from a judicial authority.
MEP Jeroen Lenaers coordinated the EPP’s amendments, which essentially revert to the original text allowing the real-time use of these systems in three exceptional cases: to find a missing person, prevent a terrorist attack or locate the suspect of a serious crime.
The text specifies that these specific cases would require prior approval by a judicial authority or an independent administrative authority and are subject to safeguards in terms of necessity and proportionality.
By contrast, the Left has tabled an amendment that would entirely ban remote biometric identification technology. This proposal might find support among the political groups that feel wronged by the EPP.
The AI regulation bans applications that are deemed to pose an unacceptable risk to society. At the initiative of centre-left MEPs, this list has been extended to applications such as predictive policing and emotion recognition.
Individual lawmakers tabled a few amendments that were not part of the political deal and would further extend the list of banned AI applications.
A cross-party coalition including Pirate MEP Patrick Breyer, social democrat Birgit Sippel and liberal Karen Melchior tabled an amendment to ban any AI system from detecting, monitoring and analysing the behaviour of people in publicly available spaces.
“Behavioural analysis is a form of mass surveillance of public spaces which would automatically alert authorities to ‘abnormal behaviour’. Such practices educate to conformist behaviour,” reads the justification.
Sippel also tabled two separate amendments with MEP Sylvie Guillaume forbidding AI-powered tools from profiling or assessing whether migrants might pose a threat based on known or predicted personal data, and from forecasting individual or collective movements related to border crossings.
Both amendments have the support of leftist MEPs, who have also proposed banning any system that might be used to detect a person’s presence in workplaces, educational settings and border surveillance.
Two political groups that did not support the deal were the Left and the European Conservatives and Reformists, which tabled 16 and 9 amendments, respectively.
Key amendments relate to the classification of AI systems as high-risk, as in these cases the developers would have to comply with stricter obligations in risk management, data governance and technical documentation.
In the original text, the AI systems that fall under a list of critical areas or use cases would automatically qualify as high-risk. This automatism was removed as a concession to the centre-right, but the leftists are now trying to reintroduce it.
The compromise text significantly refined and expanded this list of high-risk areas and use cases, including recommender systems used by social media platforms designated as having systemic relevance under the Digital Services Act (DSA).
An amendment from the conservative group would restrict this category to only those platforms that do not comply with the DSA, explaining that “there is no need to duplicate obligations”.
For foundation models, particularly for generative AI like ChatGPT, the MEPs agreed to introduce specific obligations, including the fact that they would have to publish a sufficiently detailed summary of the training data covered by copyright law.
Conservative MEPs are proposing to introduce a caveat that such a summary be subject to the exclusion of any data that is protected by intellectual property rights or constitutes a trade secret.
The Left’s amendments would also introduce the right for a person affected by an AI system to request an explanation of the decision-making process, and to ask a public interest organisation to lodge a complaint with the competent national authorities if they consider that an AI system infringes the AI Act.
[Edited by Nathalie Weatherald]