Disbelieving What You See: A Lesson in Media Literacy
Deepfake A.I. videos of ICE raids are capitalizing on fear and uncertainty during Trump 2.0
By Erik Sandoval | November 16, 2025
LOS ANGELES— In the 21st century, cell phone cameras have become ubiquitous tools that ordinary people around the world use to hold powerful institutions to account. Never has that been more apparent than in the past six months, as social media feeds have filled with videos of masked federal agents apprehending people in such haphazard fashion that these civilian-shot videos are often the only record of the event.
But in the past month, something new has begun to spread across those feeds: A.I. deepfakes of ICE raids.
A New World of Misinformation
Since the release of OpenAI’s free-to-use video generator Sora 2 on September 30, the internet has been awash with convincing deepfakes that can be generated by simply typing a few key phrases and letting the machine do the rest. But while most of the attention has gone to fake celebrity endorsements and brazen copyright infringement, something else has emerged on the Spanish-language internet.
Platforms like TikTok and Instagram are now flooded with hundreds, if not thousands, of A.I.-generated videos of militarized police forcefully apprehending harmless Latinos and tearing them away from crying children, often with American civilians caught in the middle defending the immigrants. These videos rack up millions of views and thousands of comments, with very few calling them out as fake.
In a recent appearance on Nick Valencia Live, A.I. expert Jeremy Carrasco spoke of the dangers of these deepfakes, including one popular strain of video that shows shopkeepers yelling at immigration agents to leave, often citing imaginary laws.
“ICE officers can be in restaurants, in public spaces, so by showing a video where the store owner says they can’t and them going away, you might get undocumented people going inside stores thinking that’s a safe place and it’s not,” Carrasco said.
While companies like OpenAI wear the fig leaf of accountability by watermarking their videos, these watermarks are often covered over or scrubbed altogether.
And even in videos where the watermark is kept in place, that’s not really much of a help either.
“The watermark says ‘Sora’, not ‘AI video’,” Carrasco reminded us. “It’s part marketing, and it’s not obvious to people who don’t know what Sora is or what that would mean. It looks like a TikTok watermark.”
Ragebait Equals Profit
The motivations behind these videos remain unclear; none of the videos reviewed by Nick Valencia News directly endorsed or opposed the actions of ICE agents.
Carrasco spoke to the creator of one such video, who turned out to be an immigrant himself.
“He said it's something he sees in his community every day and wanted to bring attention to it.”
But Carrasco doesn’t buy it.
“Any time that you’re covering up the watermarks, I don’t take you at face value that you’re not trying to trick people.”
It seems these videos depicting the pain and suffering of Latino families are created solely to drive views, clicks, and comments.
“Views are money,” says Carrasco. “People want attention and this is an easy way to do it.”
Catching the Tricks
Now more than ever it is vital to be thoughtful about what we see online and what we believe.
“The first thing to look for: Is there a watermark? Or is there something in the places where the watermarks are?” advises Carrasco.
Then there are the obvious visual glitches: misspelled words or the strange placement of handcuffs.
Another good tell is how people talk.
“In these Sora videos people talk a certain way… It almost sounds scripted… In a real video it's much messier.”
Carrasco explains this is a result of how these A.I. systems are trained.
“Sora, even the newest models, they’re not continually trained. They aren’t looking at modern ICE raid videos, so they’re kind of inventing them as if a movie would tell them.”
The Unknown Danger
Even when a deepfake video is swiped away, its mere existence on the timeline begins to muddle what is true and what is false. The danger is not only that people will believe a fake video, but that they will begin to disbelieve the real ones.
For Carrasco, who regularly debunks A.I. videos on Instagram for his 200k followers, the lines have already become blurred.
“Half the videos people send me are real and they just can’t tell the difference anymore.”
Because in a world where we constantly have to question what we believe, it becomes difficult to believe anything at all.
Erik Sandoval is the Executive Producer of Nick Valencia News.