AI in Production: Who’s Actually Responsible for the Risks?

A room full of underwriters, lawyers, and production executives sat down to talk about AI risk. The consensus? Most studios can't answer the basic questions their insurers are about to start asking.

If you’re a studio head trying to work out what your insurer actually expects from you on AI, you’re not alone. A recent roundtable, co-hosted by Paul Hillier of Tysers and AIMICI’s leadership, brought together underwriters, lawyers, risk managers, and production executives from across the media and entertainment sector — and one thing was immediately clear: almost nobody feels confident about where they stand.

But here’s the encouraging news. The gap between what insurers want and what studios are doing is smaller than you might think. And the steps to close it are practical, not theoretical. This article distils the key themes from that conversation into what actually matters for your production workflows — and your next renewal.

The Red Herring: Copyright Claims

If your AI anxiety is mostly about copyright, you’re focused on the wrong thing. That was one of the roundtable’s sharpest consensus points. As Andy Moseby, Partner at Lee & Thompson LLP, put it: “the copyright system is broken, but AI didn’t break it.” Rather, Andy argued, it was already “broken” by the realities of modern digital distribution, where every piece of content is endlessly copied and repurposed.

That doesn’t mean copyright is irrelevant. Insurers at the table were clear that they’re still expecting copyright-related claims to come through, and any policy with IP coverage will be the first port of call when they do. But the roundtable pushed back on the idea that copyright should be the lens through which productions evaluate their AI risk.

The real issue, as several participants noted, is that the industry has been consumed by the question of training inputs — what data was fed into the model — when in practice, the claims will land on the outputs. If an AI-generated asset ends up in your final delivery and something’s wrong with it, nobody is going to sue the model. They’re going to sue you. The focus, then, needs to shift from where the AI learned to what the AI produced, and whether you had the rights, the checks, and the audit trail to back it up.

Jack Jones, Partner at Sheridans, summed it up: “I think the opt-out opt-in discussion [around use of copyrighted AI training data] is pretty much dead – image rights will be the new thing, and it’s a societal issue rather than a creative issue.”

The Real Battleground: Accountability

So if copyright is a distraction, what should studios actually be worried about? The answer from the room was unanimous: accountability. Specifically, can you demonstrate that someone was responsible for overseeing AI use in your production, and that reasonable steps were taken?

This is where many productions fall short. As one underwriter noted, the biggest problem is that nobody is responsible. There’s no AI officer. No single point of ownership. Compliance teams, legal departments, and risk managers all assume someone else is handling it. The result is a gap that insurers are noticing — and one that some organisations are beginning to address by bringing in external AI governance support, such as AIMICI’s Fractional AI Officer services, to provide that missing layer of oversight.

From a claims perspective, what insurers want to see isn’t perfection — it’s process. Claire Templeton, an experienced M&E underwriter at Beazley, described her approach: “I have a set of AI questions that I ask.” Those questions go beyond whether AI has been used at all; they probe key details such as the distinction between “AI inputs” and “AI outputs”, and where the human review checkpoints sit. “Most important for me is the human check process – I absolutely need to make sure that something isn’t just being pumped into an AI model and then just the outputs being used without any normal checks and balances.”

The insurers and underwriters agreed: if a claim comes in and the production’s response is “we have no idea if we used AI”, that’s a problem. But if they can point to an error in an otherwise reasonable process, that’s exactly what insurance is for. The distinction matters enormously.

Andrea Forder, Head of Group Insurance at ITV, also offered a view on the complexity of trying to establish a single precedent: “We have robust AI policies… they’re not only in the UK – we’ve got production labels all around the world. Therefore they need to be bespoke to each territory from a legal and data privacy point of view.”

However, Andrea and others acknowledged the perennial challenge: ensuring policies are understood and followed across every production label and entity, at scale.

The Standoff Problem

One of the most striking dynamics at the roundtable was the circular dependency at the heart of the industry’s approach to AI risk. Studios want guidance from insurers. Insurers want clarity from lawyers. Lawyers look to legislation and case law — but there isn’t any yet. And so everyone looks at each other, as one participant memorably described it, like headless chickens.

The insurance market is currently underwriting AI risk on a case-by-case basis, relying on clients to self-report their use. And as one insurer admitted, half the time clients can’t answer even the basic questions on an AI questionnaire. On the other side, a lawyer at the table retorted: “we don’t always have the answers either!”

This standoff is compounded by the absence of precedent. As experienced film financier and AIMICI advisor Jon Chadwick noted: “There hasn’t been the ‘Exxon Valdez’ of the entertainment AI space yet” — no landmark claim, no test case, no regulatory intervention that forces a market-wide response. Several participants noted the parallel with cyber insurance, where policies were “silent” on cyber risk for years until Lloyd’s mandated that every policy had to either affirm or exclude it. The room broadly expected AI to follow the same trajectory, though estimates ranged from three to ten years away.

In the meantime, production companies — especially independents without the legal budgets of a major studio — are left navigating this on their own, often signing broad warranties and indemnities they don’t fully understand, using tools whose terms of service they haven’t read, and warranting ownership of outputs they may not actually own.

The Solution? Industry-Led AI Standards

If government regulation isn’t coming soon — and the room was near-unanimous that it isn’t — then the pressure has to come from within the industry itself. Several participants argued that insurers and bonding companies are the natural drivers of change, because they already have the mechanisms to require standards as a condition of coverage.

The logic is straightforward. When Lloyd’s decided that cyber risk couldn’t stay silent, it forced every insurer in the market to take a position. The LMA then developed standardised endorsements and wordings that became the baseline. The same could happen with AI. If Lloyd’s or the LMA moves to require AI-specific wordings, production companies will have no choice but to get their house in order.

But waiting for that moment is risky in itself. As one participant pointed out, the VFX industry pushed back hard when the government tried to exclude AI from tax credit calculations, arguing they’d been using it for years. The technology moves faster than any regulatory or market framework can track. Productions that get ahead of it now will be in a far stronger position when the rules do crystallise.

There was also a strong view in the room that, in the grand scheme of things, media and entertainment isn’t considered a “high-risk” sector for AI in the way that medical devices or financial services are. Nobody dies if a visual effect is AI-generated. But what’s at stake is the industry itself — its creative integrity, its economic model, and the livelihoods of the people who work in it. That means the standards need to come from people who understand production, not from legislators drafting broad-brush rules from the outside.

The First AI Standard for the Industry: Training and Education

So what can you do now? The roundtable kept circling back to the same practical starting point: training and education. Not AI training in the technical sense, but AI training for humans in production-related businesses.

The gap that every participant identified — insurers, lawyers, and producers alike — was a basic lack of understanding. Productions don’t always know what tools they’re using, what the terms of those tools allow, or whether they own the outputs. Some can’t even accurately describe what AI is or isn’t doing in their pipeline. One participant shared an anecdote about a production that submitted AI-generated financial projections — projections that didn’t even get the name of the relevant tax credit right.

Before you can manage risk, you need to understand it. And before your insurer can cover you effectively, they need you to be able to answer their questions. That starts with making sure your teams — from development through post-production — can identify where AI is being used, what data is going in, what’s coming out, and who reviewed it before it went into the final product.

This is exactly the gap that AIMICI’s new AI training courses are designed to close. Built in direct collaboration with production workers, and supported by the UK Government’s BridgeAI initiative, they align with the government’s cross-industry AI Skills Framework. The courses give everyone on a production team a structured, practical grounding in what they need to know: how to identify AI use in their workflows, how to assess realistic risks, and how to document their processes in a way that satisfies both internal governance and external scrutiny from insurers and commissioners. The roundtable considered this a credible foundation for the first wave of industry-standardised AI competency training.

The Bottom Line

The roundtable painted a picture of an industry in transition. AI is already embedded in production workflows — from script development to post-production to distribution — and it’s not going away. The question isn’t whether to use it. It’s whether you can demonstrate that you’re using it responsibly.

Your insurer isn’t expecting you to have all the answers. Nobody does yet. But they are expecting you to have a process: someone accountable, a human review step, an understanding of your tools, and documentation you can point to when questions arise. Get those basics right, and you’re ahead of most of the market.

Ignore them, and you may find that when the claim eventually comes — and it will come — you’re on the wrong side of a very expensive conversation.

Note: This article is based on an exclusive roundtable discussion, hosted by Tysers and AIMICI, and moderated by Muki Kulhan, a BAFTA-nominated Chief Innovation Officer and leader of the IBC Accelerator Media Innovation Programme.

Participants included representatives from Beazley, Markel, ITV, Netflix, Tokio Marine HCC, and others. The views expressed are those of the individuals and do not represent their organisations.
