Should Journalism Ever Use AI-Altered Video? CTV Did – and It’s Complicated
There’s a reason “deepfake” has become a dirty word.
It’s synonymous with deception, with digitally altered images used to mislead or manipulate. In an era of growing media distrust, where misinformation travels faster than facts, it’s a word that sends a shiver down the spine of most journalists.
So when CTV’s W5, one of Canada’s most well-known investigative programs, used AI-generated visuals to help tell a story of sexual violence, I didn’t know how to feel.
The episode, “Sleeping with the Enemy,” features survivors of abuse—women whose identities need protection, but whose stories deserve to be heard. Newsrooms traditionally handle this by filming in silhouette, pixelating faces, digitally altering voices or using voice actors. These techniques are effective, but sometimes they remove more than a survivor’s recognizability. They strip away the emotional weight of the person speaking. The result can feel distracting, distancing and clinical, the impact dulled by production choices made out of necessity.
This time, CTV took a different approach. W5 used artificial intelligence to generate "computer-created synthetic" faces mapped to the survivors’ expressions and synced with their voices. The result was powerful. The women were not visually identifiable, but you could see their emotions. You could feel their pain. Their humanity.
In an article explaining the decision, the network said:
“Using AI, we were able to create new, fictional faces, mapped to their real expressions. It brings survivors out of the shadows.”
It does. But it also opens a debate.
While this may be one of the most justifiable uses of AI in journalism, it risks blurring the line between what’s real and what’s digitally constructed. And at a time when public trust in media is on shaky ground (if not in crisis), that line matters.
If audiences begin to question what’s real in our storytelling and how accurate its reenactments are—especially in serious investigative work—the consequences are profound.
Transparency helps, but it’s not enough on its own. What W5 did was unusual, maybe even necessary. But where might it lead? Could AI-generated faces be used to replace courtroom sketches and show testimony in cases of public interest? Missing persons? Whistleblowers? Protesters fearing state violence?
“I don't see an issue with using deepfake technology to mask the identity of someone who's been abused if it means their story is going to be told,” says Emmanuelle Saliba, Chief Investigative Officer at GetReal, a US-based cybersecurity company specializing in the detection and mitigation of malicious generative AI threats.
But she advocates for the creation of industry standards around disclosure and labelling.
"From start to finish, there needs to be a coherent and consistent way of letting the audience know what you've done and why you've done it. If there is not that transparency, that's where you lose trust."
CTV says its decision wasn't "made lightly."
"We chose not to use the digitally altered faces in shorter news pieces that were broadcast on CTV National News over the last two weeks. But we believe it was the right technology outside of the news division."
Other Canadian networks are taking a more cautious approach.
CBC News General Manager & Editor in Chief Brodie Fenlon says, "The question of using generative AI to mask or alter the appearance of confidential sources has come up at CBC News, but we have not allowed it to date."
“We are taking an extremely conservative approach with this technology for fear any misstep or misunderstanding could jeopardize public trust in our journalism.”
I’ve worked in newsrooms for most of my career. I know how hard it is to balance ethics with impact. To protect sources while doing justice to their stories. I can’t say definitively whether CTV made the right call, but I do know they were transparent. They made their methods clear to viewers. That’s a start.
What we need now is more than transparency. Perhaps we need shared editorial standards and open conversations across our industry about whether AI tools can serve journalism rather than undermine it. We need to recognize that sometimes technology isn’t just a threat; it’s a test of who we are and what we stand for. What do you think?