
Feed the robot; Starve the copyright owners?

There’s little doubt that AI has proven to be a powerful tool, with user numbers growing at a phenomenal rate.

But the very thing that makes it so useful is creating major headaches in the world of copyright law, as lawmakers try to strike a balance between the massive datasets AI needs to function, learn and grow, and the very real rights of copyright holders to their creations.

Should AI be allowed to roam at will?

Should there be some form of licensing or fair rights agreement, and how would that even work?

Weiken Yau takes a look at the efforts so far and discusses future options.

The UK proposal

In 2022, the UK government launched a consultation on its proposal to carve out a new copyright exception for “text and data mining”, which, if adopted, would allow AI models to be fed with copyright-protected work without the need for a licence from copyright owners.

In the UK, as in Hong Kong, all creative works are protected by copyright from inception. So in effect the proposal would remove this protection, essentially leaving all creative work open for digestion and analysis by AI models.

Good news for AI developers, but rights owners were outraged, protesting that the proposal was hugely unfair, as it would allow AI models to access and use copyright-protected work without paying licence fees.

Having met with a wall of opposition from rights owners, it is understood the proposal has now been shelved.

The default position

What the rights owners in the UK apparently favoured was a default position under the law, whereby no person, including AI models or anyone using AI models, would be allowed to use any copyrighted work without the consent of the rights owner.

This is the current legal position in several jurisdictions including Hong Kong, where copying is not allowed without the consent of the rights owner, except in a few statutory “fair dealing” areas, for example research, private study, reviews, satire, reporting and commenting.

While rights owners may be pleased with the default position (and rightly so), it would likely hinder the development of AI technology by making the building of the huge databases needed to train AI models prohibitively expensive.

A compromise? The EU proposal

Current EU law allows AI platforms to use for text and data mining purposes any works that are lawfully accessible by the platforms, for example freely available on the internet, where the copyright owner has not expressly reserved their rights.

Unsurprisingly, this rather loose and vague regulation has attracted a fair amount of criticism.

To catch up with the rapid development of AI, some members of the European Parliament have reportedly shown support for a new draft AI Act.

If adopted, this would require generative AI tools to disclose to the public a summary of all copyrighted materials used to develop their systems.

It is hoped the transparency requirement mandated by the new Act, a public summary of materials, will strike the right balance between the legitimate interests of the AI developers and the rights owners.

But will it?

Rights owners would have to periodically scan the summaries provided by each AI platform and then either negotiate a licensing agreement or demand that their work be removed if no agreement can be reached.

The first problem is that the EU proposal doesn’t provide clear guidance on what details these summaries would need to include or how often they would need to be updated.

And secondly, given AI’s enormous appetite for data and the speed with which it can digest and integrate that data, the proposal is likely to ramp up litigation risks for AI platforms and multiply compliance costs for both AI platforms and rights owners.

All of which would very likely have a heavy impact on AI development.

Common ground?

Finding a common position is difficult.

For a society which rewards creativity and protects property rights, it would not be fair to deprive rights owners of their entitlement to licence fees.

At the same time, the development of AI will continue one way or another.

Common ground may lie in the nature of future licence agreements.

To allow AI models reasonable access to creative works, AI developers should be entitled to a licence on terms fair to both the rights owners and the AI developers. Rights owners should not be allowed to withhold a licence; in return, they should be guaranteed fair compensation.

This can be addressed by way of a new class of statutory licence, with its terms and licence fees explicitly spelt out in the legislation.

With mounting concerns over AI going rogue and calls for governmental supervision, the law can require that only the AI platforms meeting certain regulatory requirements can benefit from the statutory licence.

The UK government seems to be moving in this direction after its setback in amending the law last year, and is reportedly drafting a code of practice which promises that “an AI firm which commits to the code of practice can expect to be able to have a reasonable licence offered by a rights holder in return”.

That sounds fair enough, although what the code of practice entails and what a “reasonable licence” means remain to be seen.

Disclaimer: This article is intended for information purposes only and is not intended as legal advice. If you need to speak to a lawyer, contact us for help.
