A shifting landscape
The Venice Film Festival closed last weekend, offering a glimpse into how the industry is still grappling with AI. On one of cinema’s most prestigious stages, there were no official guidelines and no clear position on whether AI-generated or AI-assisted work should be allowed, how it should be judged, or what “responsible use” even means. Yet, at the same time, Venice played host to two side events dedicated entirely to AI: the Reply AI Film Festival and the AI Film & Ads Awards. That contrast captures the mood of the festival world right now: split, confused, and moving in several directions at once.
Last year: A clear divide
In 2024 the picture was simpler, at least on the surface. Some festivals positioned themselves firmly against AI. Credo 23, founded by Justine Bateman, banned AI entirely and promoted itself as a space for purely human creativity.
Meanwhile, AI-only festivals were on the rise. Runway’s AIFF saw submissions leap from just a few hundred in 2023 to several thousand in 2024. But not every response was celebratory. At IMAX screenings of AI-generated shorts, audiences and critics questioned whether such work belonged on the big screen at all. The choice appeared binary: AI or no AI.
This year: Hybridisation begins
In 2025, as AI becomes more deeply embedded in production workflows and pipelines, the industry is starting to move past the hype of the AI “bubble” moment. The lines are blurring. Some festivals have softened their stance. The Oscars, for example, now allow AI as long as human authorship is proven. Other events, such as the Cambridge Film Festival, have introduced basic disclosure rules, though their guidance remains vague and inconsistent.
The Oscars’ shift was immediately tested with The Brutalist. Editor Dávid Jancsó revealed that AI (a tool called Respeecher) had been used to refine Adrien Brody and Felicity Jones’s Hungarian pronunciation. Director Brady Corbet defended the choice as a minor enhancement, but the reaction was polarising. Critics called for disqualification, while defenders like David Cronenberg dismissed the outrage as Oscar politics.
This kind of backlash highlights the real problem: hybridisation creates uncertainty. For filmmakers, it is unclear what level of AI use is acceptable or how to disclose it. For audiences, it fuels mistrust and heated debate. Instead of calming tensions, partial rules often end up creating more drama on both sides.
AI bans: Will they continue?
As examples such as Credo 23 show, some festivals have planted a flag firmly in the ground by declaring themselves “AI-free zones.” These events position themselves as guardians of tradition, appealing to filmmakers and audiences who see AI as a threat to the craft of cinema.
The question is whether this approach can hold. As AI tools become embedded in everything from storyboarding to VFX, enforcing a total ban may get harder for festivals and regulators alike. It risks excluding films where AI is only used in small, technical ways rather than as a creative driver. The bigger challenge may not be to stop AI altogether, but to define what kind of AI use is acceptable, and how transparent filmmakers need to be about it.
The real problem: lack of clarity
What emerges across all of this is confusion. Very few festivals explain what they actually consider “problematic” AI use. Most rely on simple disclosure rules without asking how extensively AI was used, or in what part of the process. Filmmakers who already rely on AI for tasks such as storyboarding, editing or VFX are left guessing how they should report that use, and whether doing so could disqualify them.
Why clarity matters
For filmmakers, the absence of clear rules is frustrating. They need to know whether their work will be judged fairly or rejected outright. For festivals, transparency is a way to maintain credibility and give juries the tools to evaluate work on its artistic merit rather than its technical origins. And for audiences, disclosure helps frame the conversation around authenticity, innovation and creative intent.
Moving forward
The festival circuit in 2025 is no longer neatly divided between “AI” and “non-AI.” It is evolving into something more complex, and that makes the need for clarity urgent. Simply banning films outright for using AI is too blunt an instrument for this landscape.
What matters most is transparency: explaining how AI was used, to what extent, and for what purpose. That information should be made available to juries and audiences so they can decide for themselves what deserves recognition.
Without it, festivals risk drifting further into uncertainty. So the real question is: will they lead with transparency, or keep filmmakers and audiences in the dark?
–
At AIMICI, we are examining how upskilling, governance, and real-world workflows can turn AI from a headline into a practical asset for producers, crews, and studios alike.
For festivals & industry bodies, this means helping organisers define clear, transparent policies that protect creative integrity without shutting out innovation. For filmmakers, it means understanding how to disclose and document AI use in ways that meet those standards.
If you are considering how to integrate AI into your work, or how to prepare for festivals where rules are still evolving, get in touch to explore how AIMICI can help you adopt AI strategically and sustainably.