When Big Tech Clashes with Big Creativity

A drawing of a robot wearing a robber mask working as a music technician – by Microsoft Designer

The past week has been a bruising one for technology and creativity in the UK, with musicians in particular pushing back against a perceived raid on their intellectual property orchestrated by builders of generative AI models and enabled by a permissive new approach to copyright proposed by the British government. It is the latest chapter in an ongoing war between the creative industries and the well-resourced corporations that build data-hungry technology capable of imitating human creativity. Earlier skirmishes have included strikes by Hollywood actors fearful they might one day be replaced by virtual characters, along with an ongoing anti-AI lobbying effort by book publishers in the UK and abroad.

Here’s a quick rundown of what’s at stake. It’s no secret that generative AI models are built using tremendous computing and data resources. While model developers have become increasingly cagey over the past couple of years about the exact nature of their training datasets, if we extrapolate from the figure of some hundreds of billions of words’ worth of textual data used to build a large language model like GPT-3, we can infer that subsequent models like GPT-4, GPT-4.5, and the perennially forthcoming GPT-5 will use just about everything they can find. And while it may be tricky to definitively demonstrate that a model has been trained on a particular instance of copyright-protected intellectual property, it is easy to find such data on the web. If it’s easy for us to find, it’s reasonable to conclude that model builders are finding this data in their sweeping excursions across the internet as well.
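To put that scale in rough perspective, here is a back-of-envelope sketch, assuming the widely reported figure of roughly 300 billion training tokens for GPT-3 and an average of about four characters per English token – both are approximations for illustration, not claims about any specific dataset:

```python
# Back-of-envelope estimate of the raw text behind a GPT-3-scale model.
# Both constants below are rough assumptions, not published dataset specs.
TOKENS = 300e9           # ~300 billion training tokens (widely reported for GPT-3)
CHARS_PER_TOKEN = 4      # rough average for English text
BYTES_PER_CHAR = 1       # treating the text as mostly single-byte characters

corpus_bytes = TOKENS * CHARS_PER_TOKEN * BYTES_PER_CHAR
corpus_terabytes = corpus_bytes / 1e12

print(f"~{corpus_terabytes:.1f} TB of raw text")  # roughly 1.2 TB
```

Even at this crude level of approximation, the appetite for text is clear: on the order of a terabyte of prose dwarfs any individual author’s or publisher’s catalogue, which goes some way towards explaining why sweeping collection, rather than title-by-title licensing, has been the default.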

The problem here seems to be about convenience as much as it is about outright unscrupulous intent. Intellectual property is so deeply embedded in the substance of the web – be it in the form of permitted uses such as official YouTube videos, or less permitted uses one might find by googling a sentence from a popular novel – that any measure that saves model developers the trouble of separating public domain content from licensable content has to be worth something. But the solution that has been proposed by the UK government, which would involve allowing tech companies to use publicly accessible data for training unless a copyright holder has explicitly instructed them not to, has not been viewed favourably by the creative industries. It seems artists feel they should be given something more, and more proactively, for their contributions to a technology which, at least by some accounts, will go some way towards displacing their own livelihoods.

A mutually acceptable price point remains elusive. In November 2024, publisher HarperCollins offered some of its non-fiction authors a deal that would have licensed their existing works as model training data for $2,500 per title, with the publisher itself taking another $2,500. The reaction to this proposition was not positive, with the Authors Guild in the US declaring that the 50-50 payment split "gives far too much to the publisher". A perhaps more telling response came directly from author Daniel Kibblesmith, who said he would take the deal for "a billion dollars", his argument being: "I’d do it for a sum of money that wouldn’t require me to work anymore, since that’s the ultimate goal of this technology".

Kibblesmith’s quip gets to the heart of the matter, which has more to do with the threat to the future livelihoods of content creators than with reward for existing work. As such, the cases against allowing AI to have relatively free access to licensable content end up mainly revolving around one core idea: established creators want to share in the profits they imagine tech companies will make off of their work. Here are a few different ways this has been expressed by creators in the past several days.

“The album consists of recordings of empty studios and performance spaces, representing the impact we expect the government’s proposals would have on musicians’ livelihoods.”

– 1,000 Musicians

1,000 outstanding and well-intentioned musicians have come together to release an album consisting of recordings of silent – or at least empty – music studios. In their words, "the album consists of recordings of empty studios and performance spaces, representing the impact we expect the government’s proposals would have on musicians’ livelihoods". The idea is that we would no longer enjoy output from these artists if AI could offer a passable imitation of that output. This might be construed as a clever publicity stunt, but also as one which arguably defeats its own purpose. The point is that these artists would stop producing music because the rest of us had found an alternative to listening to the music they make – so while their studios may be silent, our sound systems will not be. This is not to disregard the value of music and art made by humans. But by the same token, if part of what an audience finds valuable about creative content is its humanity, then surely the audience will continue to pay for content that comes, at least in some form, from humans working in studios. It might just be the case that access to AI enables a broader group of humans to work creatively from a more general idea of what counts as a "studio", and so a particular type of artist – one already in a position to access the resources for producing content – may be somewhat displaced.

“Nobody will be able to afford to make music from here on in.”

– Brian May

Queen’s sensational guitarist Brian May, meanwhile, has raised concerns about a future where "nobody will be able to afford to make music from here on in". What May seems to be saying is that if established musicians don’t continue to profit, they won’t be able to afford – or, reading between the lines, won’t be motivated – to go into the studio and make music to sell to the rest of us. That may be the case, but the phrasing, and in particular the word "nobody", doesn’t seem quite right here. In fact it would seem that generative AI could allow many more people to afford to make music; they probably just won’t get paid much for it once it’s made, and they will be sharing any profit they do make, on some level anyway, with the providers of the technology that enables them to make the music in the first place. The music might not be all that good, but here, as with the value placed on authentically human output, there is an opportunity for really talented musicians to maintain some of their market share.

“AI can replicate patterns, but it does not create. If left unregulated, it will not just be a creative crisis, but an economic failure in the making. AI will flood the market with machine-generated imitations, undercutting human creativity and destroying industries that drive jobs, tourism and Britain’s cultural identity.”

– Andrew Lloyd Webber and Alastair Webber

Meanwhile, legendary musical composer (and free market enthusiast) Andrew Lloyd Webber and his son Alastair had this to say: "AI can replicate patterns, but it does not create. If left unregulated, it will not just be a creative crisis, but an economic failure in the making. AI will flood the market with machine-generated imitations, undercutting human creativity and destroying industries that drive jobs, tourism and Britain’s cultural identity."

Here we have an appeal to the emotional and qualitative side of our engagement with human creativity. The suggestion from Webber seems to be that, regardless of the surface form it takes, there is something essential missing from content generated by AI. This touches on the ineluctably human quality of being creative, and also on the way that creativity, in various forms, pervades all aspects of our lives, even in less grandiose endeavours than composing musicals or putting together albums. But Webber’s stance assumes an unnecessary antagonism between technology and human creativity. He positions the AI as the exclusive agent of its output, which is a misunderstanding of what this technology does. It acts not as a replicator of human creativity but as a kind of lens, allowing a human collaborator to gather up perspectives on vast swaths of what Webber equates with "cultural identity" and project them onto something novel and still imbued with a vital humanness.

“Big tech companies should not be able to generate and profit from music without permission or payment. How can we justify taking money away from British musicians and handing it to tech firms for free?”

– Anneliese Midgley

And finally, a group of earnestly concerned MPs have pushed back against the proposal that model developers should have access to content by default. Anneliese Midgley, the recently elected Labour MP for the Merseyside constituency of Knowsley, put it like this: "Big tech companies should not be able to generate and profit from music without permission or payment. How can we justify taking money away from British musicians and handing it to tech firms for free?" There is a bit more nuance here. Midgley’s objection suggests that the concern many of us instinctively feel is not just that creators are not going to get paid, but that someone else, someone less desirable – specifically "big tech companies" – is going to be making profits that would otherwise have gone to musicians. And, yes, this is probably broadly in line with general public sentiment, because who wants to see a musician displaced by a tech worker?

But who exactly do we have in mind when we imagine this archetypal musician being put out by AI, their livelihood transferred to someone who writes code for a living? It’s interesting that Midgley went on to say this: "Unless they are at the very top, making a good living as a musician in this country is becoming nearly impossible. Even those who can sell out venues of a couple of thousand people across our towns and cities are barely scraping by." I would speculate that the jobbing musicians the MP has in mind – those not "at the very top" – are not the ones who would be displaced in a market of digital content produced in collaboration with algorithms, not the ones who would profit from a regime that aggressively sought payment for training data, and not the ones engaged in a high-profile effort to stigmatise generative AI.

No one likes Big Tech – not even Big Tech, apparently, if all the energy tech leaders put into attacking one another is anything to go by. These companies take our data, analyse our lives, and then hand them back to us repackaged for their own profit. Their objective goes beyond mere profit; these are institutions designed to seek control of every aspect of how we exist, on every scale. Then again, no one really likes Big Pharma or Big Agriculture either, but we still need well-researched medicine and an affordable food supply chain. We don’t, and shouldn’t, live in a world where principle always or even usually overrides pragmatism.

How do we feel about Big Creativity? Musicians, actors, artists, writers, and all sorts of creators with established platforms have long enjoyed a kind of virtuous cycle: because of their position as successful creators and communicators of content, their amplified opinions are automatically granted a degree of credence, and when those opinions about how the creative industries should be run are ones that maintain their status within those industries, they are accepted. This industrial self-determination has allowed them to position themselves as champions of progressive, pro-human causes – a stance also traditionally taken by technological innovators, because of the forward-looking nature of technology itself. But the status quo is necessarily exclusive: creators who enjoy the privilege of an audience do so because of the barriers that prevent most of the rest of us from competing for that audience, and so from being heard. This arrangement, and the inherent alliance between the creative and technological economies, now seems to be threatened by a technological movement that offers a way to expand access to the resources for creating and disseminating content. And just like any other Big industry, Big Creativity is a system built above all else to sustain its own existence.

Now let’s talk about little creativity instead. The creativity I have in mind here is that aspect of being human that shines through even everyday activities like making a plan to go out, or making a to-do list, or choosing the right clothes for a trip. There is a continuum from dealing with a broken shoelace to performing to a packed stadium, because both of those activities involve doing a thing for a reason, and success arises from a combination of acquired skills and novel application of established knowledge based on the context of what’s available in the world at the moment of taking an action. What humans will always bring to these activities – which are always in some way technologically mediated – is that spark of intentionality.

What generative AI could offer all of us little creators is an opportunity to engage with the richness of what humans have collectively been capable of creating. We still need to bring our own vision and context, our own sense of purpose; the technology is, in the end, just a tool for elevating that purpose. And yes, it seems fair that creators get some kind of payment when their work is used as part of the substance on which this technology is built, but it doesn’t seem quite right to extend a sense of ownership or entitlement into all the future creativity that this technology is going to enable. And yes, we need to be ready to hold Big Tech to account over how the technology is provided, and over who really benefits, but this goal is not accomplished by thwarting the basis for the technology or by transferring reward from one well-positioned party to another. If the emerging technology is part of a paradigm shift in the way creative content gets framed and disseminated – and how true that is remains to be seen – the shift needs to be one towards inclusiveness: towards a situation where creativity is more distributed, where the lines between creator and consumer, and the corresponding economic flows, become more blurred. That means we’re going to have to find ways to share reward and credit, and maybe shift our ideas about what counts as "stealing" from what should really be the shared wealth of our living, growing collective creative accomplishments.

