Truth in the Age of AI: Upholding Journalistic Integrity in Documentary Filmmaking

In an era of information crisis, where distinguishing the real from the fake grows ever harder, the mission to capture and convey reality has never been more vital. Many would say that documentaries are not just entertainment; they are also engaging archives, capturing the essence of human experience, societal issues, and historical events. Yet as AI-generated content becomes more believable, and the attention economy reshapes the entertainment industry, documentary makers face unique challenges that threaten to permanently damage the integrity of their profession.

Documentary film, as Bill Nichols describes it, is “a discourse of sobriety that claims to describe the real and to tell the truth”. Yet the genre often straddles the boundaries of fact and fiction, art and documentation, entertainment and knowledge, allowing documentary filmmakers a degree of creative interpretation. G. Roy Levin supports this view in Documentary Explorations, noting that “perhaps no other art form brings art and reality in any closer juxtaposition than the documentary film”.

But recent technological advances have turned this balancing act of fact and fiction from harmony into headache. The rise of easily accessible generative image and video tools has created new ways to tell stories, and new ways to fabricate reality. In an age where the line between authentic and fabricated content grows ever blurrier, knowing what is real and what isn’t becomes a critical challenge, for audiences and creators alike. As we grow accustomed to seeing reality and fabrication mixed in our content, one pressing question arises: how much can we still trust documentaries?

 

Adobe’s ‘Generative Fill’ AI feature used to expand the field of view in existing footage
Source: YouTube, Matti Haapoja

 

 

Google’s “Add Me” feature, enabling people who were not originally present to be seamlessly added to photographs.
Source: Google

 

Historical photographs converted to moving images using LumaAI. Realistic results could be shared to falsely imply that a film camera was present at an event, or that events unfolded in a certain way.

Source: YouTube, CreativeAI Magic

 

The good, bad & ugly of AI in documentaries

Earlier this year, Netflix came under fire for its true-crime documentary “What Jennifer Did“, which allegedly used generative AI to adjust and reconstruct photorealistic images of people, places, and objects to better fit the film’s narrative, raising ethical concerns about authenticity. Viewers spotted anomalies in the visuals, such as distorted hands and unusual artefacts, that suggested manipulation. The practice not only calls Netflix’s editorial choices into question but also has broader implications for documentary filmmaking, especially when representing real events and active court cases.

Another example of GenAI in documentaries is “Welcome to Chechnya” by David France, where AI-powered facial replacement technology was used to protect the identities of LGBTQ individuals facing persecution in Chechnya. The interviewees’ faces were digitally altered, with GenAI overlaying new facial features onto their real ones. This allowed the filmmakers to preserve their subjects’ anonymity while still conveying raw, authentic emotion. Although the technique was used for safety, it prompts a broader discussion about when and how digital alterations should be disclosed.

The documentary maker’s conundrum: ethics vs exposure

In addition to the rise of new generative tools, today’s filmmakers are also grappling with the pressures of the attention economy, where content value hinges on its ability to capture and retain audience attention. Streaming services prioritize content that generates clicks, views, and shares, often favoring sensationalism and emotional engagement over thoughtful storytelling. 

In this highly competitive marketplace, captivating visuals and provocative narratives become essential for cutting through the noise. AI-generated imagery can help elevate stories for these purposes, but if overused it can easily overshadow research-led, fact-based reporting. And as storytellers are pushed to produce content faster and cheaper to stay competitive, time and resources are squeezed, making journalistic rigor and ethical responsibility harder to maintain. The result is a tension between entertainment value and factual accuracy that often leaves filmmakers feeling forced to sensationalize their stories to capture the fleeting attention of modern viewers.

As Richard Curson Smith aptly puts it: “As a documentary filmmaker using these technologies, you must be careful tonally, as it can feel like you’re defending this citadel of truth and veracity against an onslaught of GenAI material.”

The fact is that documentary makers exist to be truth-tellers, and theirs is a profession that should remain highly respected, especially in our increasingly post-truth world. By upholding high standards of truth and authenticity, documentary makers play a vital role in shaping public knowledge and opinion, and in keeping real human stories in our eyes, hearts and minds.

A practical guide to using AI responsibly in documentaries

So, where can documentary makers begin if they want to use AI responsibly? Ultimately, the answer is transparency: internally within the production team and externally with the audience. Practical recommendations for achieving this can be found in the Archival Producers Alliance’s Best Practices for Use of Generative AI in Documentaries, co-signed by over 50 filmmakers, including Ken Burns, Rory Kennedy, and Michael Moore.

  1. Internally, production teams should maintain open communication and carefully track the use of GenAI materials through cue sheets, documenting, among other things: the prompts used, software versions and terms, creation date, reference materials and their copyright status, and descriptions and timecodes for where GenAI appears in the production.
  2. Externally, creators should clearly inform audiences when GenAI is used, employing watermarks, visual cues, or narration. Creators are also encouraged to collaborate with specialised GenAI attorneys on issues including IP, union requirements, and rights of publicity, to avoid potential legal and insurance risks. Extra care is advised when using GenAI to simulate people, alter real events, or create fictional historical scenes, as each adjustment may risk misleading the audience.
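For production teams that track these cue sheets digitally, the fields above map naturally onto a simple structured record. The sketch below is purely illustrative: the class and field names are assumptions for this example, not an official schema from the Archival Producers Alliance.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical cue-sheet entry for logging GenAI use in a production,
# loosely following the fields suggested by the APA best practices:
# prompts, software version, creation date, references and their
# copyright status, plus timecodes for where the material appears.
@dataclass
class GenAICueSheetEntry:
    timecode_in: str       # where the GenAI material first appears
    timecode_out: str      # where it ends
    description: str       # what the generated material depicts
    prompt: str            # the prompt used to generate it
    software: str          # tool name and version
    created: str           # creation date (ISO 8601)
    reference_materials: list = field(default_factory=list)
    copyright_status: str = "unknown"    # status of the references
    disclosed_to_audience: bool = False  # watermark / narration / cue?

entry = GenAICueSheetEntry(
    timecode_in="00:12:04:00",
    timecode_out="00:12:09:12",
    description="Generated establishing shot of a street at night",
    prompt="rainy city street, 1990s, handheld 16mm look",
    software="ExampleGen v2.1",
    created="2024-05-10",
    reference_materials=["archive_photo_0042.tif"],
    copyright_status="licensed",
    disclosed_to_audience=True,
)
print(asdict(entry)["software"])  # → ExampleGen v2.1
```

A list of such entries can be exported with `asdict` to CSV or JSON, giving productions (and, later, legal reviewers) a single auditable record of every generated shot.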

 

That may all sound quite daunting to an already-stretched production team, leading many to avoid GenAI altogether and leaving documentary making in the dark ages. However, businesses like AIMICI aim to support documentary makers through this transition. “We believe these stories deserve to be told, and that there are ways to do it safely and ethically, without tying yourself in knots,” says AIMICI’s CEO Kathryn Webb. “We offer services to help support better tracking of AI use in production. We also pre-vet AI tools and work closely with content makers to navigate legal and ethical concerns.”

Overall, while GenAI offers documentary filmmakers clear benefits, it also poses serious risks to the integrity of the craft. Guidelines give us all a good start, but only through practical implementation will we see GenAI use enter the documentary mainstream.
