Saturday 13 December 2025, afternoon
A few weeks ago I was on a panel hosted by Jaya Chakrabarti at the Communicate conference in Bristol, along with Chris Preist. There's a full video here, and Jaya has written up her take-aways, but (mostly for my own benefit) here's a transcript of my bits:
Hi folks. So as Jaya said originally, I work at Watershed which is, if you're not from Bristol, just kind of over there, like two doors across. We're an arts and media center. We have cinemas, a gallery space, a cafe bar—recommend the nachos if you're looking for food later—but we also host the space where I work, which is the Pervasive Media Studio.
We're a space that hosts a community of artists, technologists, people who are doing stuff with art and technology. Often that means that they don't fit neatly into any particular category. So really, we are seeing how this looks from the coalface of people who are working with this stuff, from quite a range of different perspectives. We have illustrators, voiceover artists, and people who've been working in AI since before it was even called AI.
When I first interviewed to work at Watershed, one of the interview questions was, "Which technology do you think is going to be a big deal in the years to come?" The first thing that popped into my head was, "I think this machine learning AI stuff is kind of janky right now, but it might be something soon." That was about 2019, so I feel like I was right, but I didn't quite get the scale of it right.
We really started to see AI when the hype around the Metaverse wore off and AI took the next hype cycle. That was 2021-22ish, coming out of the peak pandemic years, when people were starting to get really excited about things like ChatGPT.
AI is what technology studies people call a "charismatic technology," which means that it seems to provoke ideas in people. When people think about AI, it's usually not in terms of the material reality of what it actually is. Instead people spin off into a million different ideas about what AI could be. Some of the ideas they have are perfectly reasonable things you could do with AI. Some of the ideas are just bananas pie-in-the-sky impossibilities. Some of them are totally possible and just wouldn't involve AI at all.
From an artistic perspective, it's a really exciting provocation to think about AI, to think about "what does intelligence other than our own mean?" or "what would it mean if we were able to look at ourselves from this external perspective?" People get very excited. We ended up funding quite a lot of interesting artistic projects that work through that.
Then, hot on the heels of that wave of excitement, came the terrifying existential dread. We work with a lot of artists whose livelihoods are genuinely being affected by this. If you are an illustrator today, a voiceover artist, or someone who composes music, you are finding that a lot of what used to be your bread-and-butter jobs — making the 30 seconds of music that goes in the background of a podcast or doing an illustration for the cover of a corporate report; not the exciting stuff, but the stuff that would pay your rent in a quiet month — that work is drying up. That work is moving to AI.
Reasonably so, right? We are all in a massive cost-of-living crisis. We are in an economic crunch. If you can get that work for free — probably at a much lower quality, but for free — then you are going to do that rather than spend money on a freelancer. We know the creative industries are full of freelancers. It is not an industry that has job security. Because of that, those are the first people to go when there's any kind of economic crunch. You think, "Maybe we just don't get the freelancer in this time. We'll see if we can do it in-house. Someone found a cool website, let's type a prompt into that. It's not good, but it's good enough to go on page seven of the report. No one's going to look at it anyway."
So that came in. Because of that, I don't think it's a surprise to hear that the arts are quite negative on AI these days. Most people working in the creative sector are feeling pretty bad about this stuff. That's because it's affecting their livelihoods. But also there's a cultural clash with the tech industry, which is not doing itself any favors with its rhetoric around issues like copyright. People feel like their work has been stolen and used to train these things. And, with people in the arts often coming from quite a politically progressive perspective, the environmental and supply-chain concerns come into play too. There are so many different issues, and people are just feeling really negative about the whole thing.
That was the second wave. The third wave, which has really come in mostly in the last year, has been that this is starting to slip into everything. New buttons are popping up in Microsoft Teams that do AI stuff. I'm having Zoom meetings where someone just turns up and they've got their AI note-taker alongside them. There are all of these things where someone is saying, "Well, we can just put it into this website and it'll do some AI." And that is becoming a part of normal, mundane business activity.
So we're trying to think about how that should work, how we need to adapt to a world that has these things in it. This is really difficult because the people who are doing that are still being affected by that charismatic technology idea. They're using it for mundane business stuff, but with an imagination that it is going to do magical stuff in the future. This is where the real challenge comes in. This stuff is psychologically powerful. It has an effect on people, both as an idea and as a reality.
I often compare it to when I was 19 and went to Las Vegas for the first time. I remember standing in Caesar's Palace, looking out at these seas of machines. I am not a gambling addict; I do not have a problem. But I felt the pull. I studied math as an undergrad. I think I had a probability textbook in my bag. I knew exactly how much money I would lose if I played on those machines, but I still kind of wanted to do it. There is a psychological pull that is real, even if you know it's there, even if you don't like it.
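(An aside, since the probability textbook came up: here is a minimal sketch of the expected-value arithmetic I mean. The house edge, bet size, and pace of play are illustrative assumptions, not real casino figures.)

```python
# Expected loss from repeated plays of a negative-expectation game.
# All three figures below are illustrative assumptions, not real casino odds.

house_edge = 0.08      # assume the machine keeps ~8% of each wager on average
bet = 1.00             # dollars wagered per spin
spins_per_hour = 600   # slot machines play fast

expected_loss_per_hour = house_edge * bet * spins_per_hour
print(f"Expected loss: ${expected_loss_per_hour:.2f} per hour")  # $48.00
# Knowing this number is the easy part; it doesn't switch off the pull.
```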
It's important to see AI in that same way. It is compelling. We see people who have AI girlfriends now, AI therapists. I know someone who was very nearly recruited into an AI cult. These things are real. They are having quite severe psychological impacts on some people. I think we would be foolish to imagine that we are all completely immune to that. Maybe we're not going home to plan a wedding with our AI girlfriend, but we are still feeling something when we play with these things. I know I feel a weird little compulsion like, "Ooh, that was interesting. Let's refresh the thing." There is an almost gambling-like element to this which makes it compelling to use.
From the perspective of understanding how we should think about these things in that mundane business context, we really need to think: How do we do this in a way where we're not being affected by these psychological compulsions? Not being affected by these charismatic dreams of what AI might do tomorrow? How do we think about these things in a way where we're able to focus on the concrete reality of what's going on today? And I think I'll leave it there.
Question from the audience about video production
I think what we're seeing is a lot of people will play around with this stuff. They'll say, "Oh yeah, we'll go to a proper filmmaker in the end, but for now let's type something in the AI, get something in 30 seconds, and use that as a stand-in until we get the real thing or until we decide if we want the real thing." And then, of course, that ends up being the final thing because that's how business works. So there's definitely that aspect to it.
I think there are two other aspects to video production which are really important. One is truth in communication. Video is an area where we expect to see the truth. We have an idea that it is more acceptable to lie in text, whereas faking a video feels like a greater moral crime. It feels like something that is more convincing to people. If you see something in video, you are more likely to believe that it's real than if someone just wrote it down. Getting into a world where we think it is morally acceptable that most video is generated by a machine rather than recorded in the real world — there are some really troubling issues there.
The other thing is that it takes about three kilowatt-hours to render 30 seconds of video. It's really easy to believe you're only going to do that once. You're not. You're going to see what it comes out with and you're going to go, "Oh it's almost there. It's not quite right. I'll just press the button again." And before you know it, you are one of those people in Las Vegas. It is natural human instinct. You see how it's nearly there and you think, "If I do this again, it's going to be right this time." That is a gambler's way of thinking.
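(To put a rough number on that retry loop: a back-of-envelope sketch that takes the three-kilowatt-hours figure at face value. The retry count and the household-usage comparison are my own assumptions for illustration.)

```python
# Back-of-envelope energy cost of "I'll just press the button again".
# kwh_per_generation comes from the talk; retries and the household
# comparison figure are assumptions for illustration.

kwh_per_generation = 3.0     # ~3 kWh per 30 seconds of generated video
retries = 12                 # "it's almost there" a dozen times
household_kwh_per_day = 7.5  # rough daily electricity use of a UK household

total_kwh = kwh_per_generation * retries
print(f"{retries} attempts = {total_kwh:.0f} kWh")
print(f"That is about {total_kwh / household_kwh_per_day:.1f} days of "
      f"household electricity for one 30-second clip.")
# -> 12 attempts = 36 kWh, about 4.8 days of household electricity
```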
Question from the audience about AI note-takers in meetings
It is difficult. As with any new technology, there are questions of etiquette. People develop patterns of working and we are all going to have to negotiate this together. We did it when Zoom calls went mainstream; we are going to have to do it with AI note-takers.
Because I also do not enjoy it when people bring their AI note-taker, I find it useful to treat it like this: Would I be okay with someone demanding a recording of this conversation? Would I be okay with that recording being transmitted to, and processed by, a company that I haven't vetted? The answer is usually no. So it helps to be able to say, "Well, actually, it's our organizational policy that if you want to record a meeting, it needs to go to a GDPR-vetted supplier," and so on.
How true that is may vary, but being able to say, "Look, we have a policy on this. If you want to do this, everyone needs to have consent in advance. The place that it's being stored needs to have a solid data retention policy. It needs to be vetted for GDPR compliance." Usually, when you say that to people, they go, "Oh. No."
But I think we do also need to practice saying no. It's really hard. You start a meeting, someone presses some buttons, things happen, and you're like, "Oh, I'm not sure I'm comfortable with that." And now you're being recorded saying you're not comfortable with that. That is tricky. I think we are all going to have to learn to put our foot down.
Question from audience on how to prepare for the future
I think part of this is that "charismatic idea", because I feel that too. I think some of it is that we're reacting to the possibilities of AI. AI has moved reasonably quickly, but I don't think it has moved as quickly as it might seem when Sam Altman stands up on stage and says it's going to take over the world in the next five years. I think there's a lot of hype, a lot of speculation, a lot of "what will it be able to do next year?"
What really helps me, at least, is to focus on what it can and does actually do now, and to ask: is that actually that much different to what we were doing before? Is the AI note-taker that much different to recording a Zoom call? In some ways yes, in a lot of ways no. Is using ChatGPT to answer a question that much different to using Google? In some ways yes, in some ways no. Thinking in terms of what is actually different here helps. Often it's quite small compared to what was going on before. It might be that it's now more convenient, cheaper, or more effective at some of these jobs than it was before, but it's not necessarily entirely qualitatively different. And it's something that we can think about in the same terms.
Question from audience: I can see the value of it in medicine and engineering, but I'm failing to understand the value of employing it in the creative industries. All it's doing is taking the fun bits away from humans. We are naturally creative, curious, complex beings. If we take those things away, what are we left with? I keep coming back to the term "enshittification." Is that the way we're heading?
What I would say is that, yes, it would be great if we were all making art for fun and personal enjoyment, but some people are making art for a living. The solution to this is "fund the arts" (brilliant, yes), but we are in a massive economic crunch. The creative industries do not have the money that they used to, and in many cases they are being run as profit-making businesses.
If you are in control of your entire workflow, then yes, of course, you want to use humans, you want to work with people and have ideas. If what you need to do is deliver three minutes of polished video by Wednesday, you're going to look for the quick and cheap way of doing it. Those kinds of compromises are part of a tension that has always existed in the creative industries. There have always been ways of doing things that are a bit crap, but they're cheap and fast and will get the job done and please the client. To some degree, this is just the same thing happening at a slightly larger scale.
So I think what we need to do as a society, as people who feel this way, is to express that we are not happy seeing AI-generated creative outputs.
We put out a poster at Watershed advertising our nomination for Independent Cinema of the Year. It was not AI-generated. It was generated by Tony Styles, who is an actual human graphic designer. But some people thought it looked kind of AI-generated, and we got a lot of quite negative feedback about that. We weren't happy about it, but that reaction is healthy. If, as an audience, we don't want to see this happening, we need to say something when we do see it happening. Which means that we also need to demand transparency, because more and more we're not able to see where it's happening.
Final question on one thing people should take away
I think what I would say is that these things aren't magic. These things aren't sci-fi creations. They are websites you type words into. They are run by human beings; they are run by Silicon Valley companies. They are normal parts of technology. You probably already have policies for dealing with this. You probably have data retention policies, environmental impact policies, and policies about how you're going to vet your supply chain.
Apply those. Look at where they fall short in this new instance. But really, you probably don't need to be thinking about this as a new, exciting, weird thing. You probably need to be thinking about: How does this fit into the world that I already know how to work with? How does my accounting department think about this? How does my legal department think about this? Most of that should be relatively straightforward, and you can identify the points where it isn't; that's where you need to focus your effort.
Want to leave a comment? I'd prefer an email at [email protected]