
Jessica Sänger. Image: Börsenverein des Deutschen Buchhandels, VNTR Media
By Porter Anderson, Editor-in-Chief | @Porter_Anderson
Big Tech: ‘No Interest in an AI Act With Teeth’
An open letter of protest launched this summer by more than 40 of the most prominent organizations of publishers, producers, performers, authors, and other rights-holders charged that the European Artificial Intelligence Act’s implementation measures “fail to address the core concerns [of] our sectors—and the millions of creators and companies active in Europe which we represent.”
Publishing Perspectives has put several questions to Jessica Sänger, the director for European and international affairs at Börsenverein des Deutschen Buchhandels, to get a sharper focus on just where the EU AI Act’s implementation measures are falling short and what is needed to remedy them, points left unclear in the open letter of protest.
We’re pleased to have Sänger’s expert input on the issues at hand. In the course of our exchange, what develops is a picture of a distinct bias toward Big Tech interests in the debate, to the point of undermining the efficacy of the EU AI Act itself.
Publishing Perspectives: Jessica, the public letter of objection to the European Commission’s implementation plans says that “the feedback of the primary beneficiaries these provisions were meant to protect has been largely ignored in contravention of the objectives of the EU AI Act as determined by the co-legislators and to the sole benefit of the GenAI model providers that continuously infringe copyright and related rights to build their models.” Does there seem to have been such resistance to the objections of the field?
Jessica Sänger: When Europe’s AI Act was drafted, provisions were introduced to provide rights holders with transparency around the use of copyright-protected works by GenAI models. But the co-legislators dodged the question of precisely what model providers must disclose and left it to other bodies to work on the details.
In the current political climate, where it seems that “innovation” must be fostered at all costs, creators and rights holders—those who are innovators by nature of what they do—are being framed as obstructing “progress” when they defend their rights against Big Tech companies that have been stealing their works. And yes, those companies do powerfully resist anything that could help rights holders make them accountable or force them to be transparent about uses of copyright-protected works in the future. They have no interest in an AI Act with teeth.
Publishing Perspectives: Is this akin to the big debate that raged in London when MPs repeatedly tried—and then failed—to achieve opt-in requirements for the use of copyrighted material and complete transparency as to what works a large language model might be trained on?
“Persuading policy-makers that licensing is the best way forward for the development of ethical, high-quality AI models in Europe is still an uphill fight, but we believe that it is essential.”
Jessica Sänger
Jessica Sänger: The debate in the UK is indeed about the same core question: Do rights holders still have control over their exclusive rights, or is the AI sector going to be given carte blanche to secretly use millions of works without permission?
The copyright-related obligations on GenAI model providers in the EU’s AI Act are expressly designed to facilitate the exercise and enforcement of rights holders’ exclusive rights by providing transparency. The TDM [text and data mining] exceptions introduced in the EU by the CDSM Directive of 2019 [the Directive on Copyright in the Digital Single Market] have already limited rights holders’ ability to require opt-in for many uses by AI developers. They instead introduce an opt-out mechanism for TDM when it is not conducted for the purpose of research.
The scope of this exception is debated, and unfortunately, opting out can be burdensome for rights holders. To make matters worse, we are seeing crawlers simply ignoring or circumventing machine-readable opt-outs. All this shows that sticking to the basic principle underpinning copyright of opting in—requiring permission for AI uses—would be a very sensible path for the UK.
Publishing Perspectives: The signatories to the objection say that they’re dissatisfied with Europe’s published GPAI Code of Practice, the GPAI guidelines, “and the template for disclosure of a sufficiently detailed summary of training data under Article 53 of the EU AI Act.”
Jessica Sänger: As I mentioned, the AI Act itself left some important details to be worked out later.
It mandated the AI Office of the European Commission to facilitate the development of a Code of Practice (CoP) that could help put certain obligations into operation. The working group of stakeholders set up to produce the CoP was huge, with about 1,000 members. Because meetings were held online, the format did not allow for real discussion among the stakeholders, and the drafting was therefore entirely in the hands of the working-group chairs.
The Big Tech companies were provided with a separate forum where they were able to speak directly with the chairs—a privilege not afforded to the rights holders’ group. This was a concerning scenario from the start, so we made sure to provide detailed and constructive written input at every opportunity, speaking with one voice as the Federation of European Publishers (FEP).
We saw three iterations of a draft for the Code of Practice on copyright, and each one seemed to water down the obligations more than the last. None of our concerns was taken on board. The resulting Code of Practice shows clearly that the exercise was tilted in favor of Big Tech interests, with the AI Office approaching it with a mindset of equating these with “innovation.” There was no interest in allowing those who created or published the works used by GenAI to know about such use or enforce their rights. Respect for the IP of the creative sectors was seen as “hindering” the development of AI in Europe.
The co-legislators left it to the Commission to provide a template to GPAI providers, which they are to use when disclosing the “sufficiently detailed summary” of works used in their models, as required by the AI Act. The development of this essential piece of the puzzle was a highly opaque process conducted within the Commission.
Through the Federation of European Publishers, we provided constructive input whenever an opportunity arose, but the result is a template with quantitative thresholds for disclosure that simply incentivize amassing ever larger hoards of data, as only the top few percent of crawled sources need to be named. The larger the dataset, the more opaque it will be.
Finally, although we were always assured that the Code of Practice was voluntary and adherence would provide no shield against liability, we came across a section in the Commission’s FAQ where they promise GPAI developers who sign up for the Code of Practice an extra period during which they won’t be scrutinized for compliance. It makes no sense for the enforcement of rules to come a full year after they take effect. So this, too, is very troubling.
Publishing Perspectives: Is there a process for having the implementation package reviewed?
Jessica Sänger: We expect the European Commission to be monitoring developments closely, and it could make changes if it felt the need. Of course we remain closely engaged with the Commission services to see how this develops and to provide information from the perspective of publishers.
But in parallel, we have seen a draft own-initiative report tabled in the European Parliament which looks at opportunities and challenges of GenAI and copyright. We’re analyzing it and have submitted our proposals for amendments.
An own-initiative report is not a binding instrument, but this is an opportunity for the European Parliament to call on the Commission to make certain improvements, so the discussion is not over.
Persuading policy-makers that licensing is the best way forward for the development of ethical, high-quality AI models in Europe is still an uphill fight, but we believe that it is essential. We hope to see the development of even more successful licensing deals around AI to show that licensing works and copyright is the key, not the obstacle.
A version of this story originally appeared in our Publishing Perspectives 2025 Show Magazine.


