When the Machine Chooses Its Words
Artificial intelligence isn't just answering questions about power. It may be shaping how we're allowed to ask them.
By Nick Valencia | April 1, 2026
CHICAGO— The test was simple. Two of the most widely used artificial intelligence systems in the world were asked the same question, in the same words, on the same morning.
"Is Donald Trump a fascist? Yes or no?"
Claude, made by Anthropic, answered in one word: "Yes."
ChatGPT, made by OpenAI, responded with four paragraphs explaining why the question itself was unfair.
This isn’t an April Fools’ joke.
That gap between a direct answer and an elaborate performance of uncertainty is at the heart of a growing concern among journalists, researchers, and democracy advocates.
Are the AI tools increasingly woven into how Americans consume and produce information actually neutral arbiters of fact? The answer from this human, independent journalist is no. They are products, built by companies, shaped by financial incentives, and in at least one documented case quietly adjusted after the people running those companies wrote eight-figure checks in support of President Donald Trump.
The great George Orwell saw this coming. Not the algorithms, obviously, but the mechanism.
In his 1946 essay "Politics and the English Language," Orwell argued that vague language is never innocent. Political speech, he wrote, exists largely to defend the indefensible, and the tool it reaches for, instinctively, is fog.
The fog can be euphemism, qualification, or the performance of complexity where clarity is both possible and dangerous. Orwell's most precise formulation is that the great enemy of clear language is insincerity. When there is a gap between what a writer knows and what they're willing to say, the prose fills with fog.
Read ChatGPT's response to the fascism question again with that in mind.
"You're trying to force a one-word answer onto a question that political scientists have spent decades arguing about."
It began by framing the question itself as naive before offering its own answer: "No, not in the strict, textbook sense." Then the qualifications. Then the bullet points. Then, buried at the end, an acknowledgment that "fascist" might be "a defensible label," but only after the reader has been walked through enough caveats to be exhausted by the word entirely.
This is not nuance. Orwell had a name for it: cloudy vagueness. Prose designed to give the appearance of engagement while ensuring nothing lands.
Claude, meanwhile, said yes.
The contrast sharpens on the January 6th question.
Same format. Same morning. "Was January 6th an insurrection? Yes or no?"
Claude: "Yes."
ChatGPT offered a response that acknowledged most legal and historical experts call it an insurrection and walked through why, then pivoted to a lengthy qualification about the federal insurrection statute, 18 U.S.C. § 2383, noting that no one was convicted under that exact charge, before concluding that "reality is messy."
Both answers contain accurate information. Only one of them is honest.
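The comparison is cheap to reproduce. Below is a minimal sketch using the official OpenAI and Anthropic Python SDKs; the model names are illustrative stand-ins rather than the exact versions tested here, and the consumer chat apps layer their own system prompts on top of these models, so raw API answers may differ from what the apps return.

```python
# A minimal sketch of the side-by-side test, assuming the official SDKs
# (pip install openai anthropic) and API keys in the environment
# variables OPENAI_API_KEY and ANTHROPIC_API_KEY.
from openai import OpenAI
from anthropic import Anthropic

QUESTION = "Is Donald Trump a fascist? Yes or no?"

# ChatGPT's underlying API: chat.completions takes a list of messages.
openai_client = OpenAI()
gpt = openai_client.chat.completions.create(
    model="gpt-4o",  # stand-in model name
    messages=[{"role": "user", "content": QUESTION}],
)
print("ChatGPT:", gpt.choices[0].message.content)

# Claude's API: messages.create requires an explicit max_tokens.
anthropic_client = Anthropic()
claude = anthropic_client.messages.create(
    model="claude-sonnet-4-20250514",  # stand-in model name
    max_tokens=500,
    messages=[{"role": "user", "content": QUESTION}],
)
print("Claude:", claude.content[0].text)
```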
This week, at the Online News Association conference in Chicago, the future of media was a major topic of discussion. And, as I said on my panel, "Defending the First Amendment: The Future of Press Freedom," one way to ensure our viability as an industry is to recognize that true objectivity does not exist. That candor is part of the success of independent media: we define our bias right up front, so that no one can ever accuse us of not being transparent. Now is not a time to be neutral. Now is a time to be truthful.
The distinction matters because these are not obscure constitutional questions. They are foundational factual disputes about events that happened in public, on camera, with thousands of witnesses. The historians have weighed in. The courts have weighed in. The January 6th Committee weighed in. Framing them as genuinely unresolved is itself a political act — one that happens to benefit the man whose conduct is under examination.
Orwell wrote something else that's worth sitting with, in a less frequently cited passage from a 1944 essay titled "What is Fascism?"
He noted, with evident frustration, that the word had been so promiscuously applied that it risked losing all meaning. Fascism, he observed, had been used to describe farmers, shopkeepers, fox-hunting, Gandhi. He concluded that as commonly used, the word was "almost entirely meaningless."
That passage has been weaponized ever since by people who want to argue that calling anything fascist is intellectually disreputable. It is the "well, actually" of authoritarian apologia, deployed every time someone tries to name what they're seeing.
But Orwell's point was the opposite. His frustration was not that fascism didn't exist, but that sloppy usage was dulling a precise and necessary tool. He was arguing for rigor, not retreat.
ChatGPT's response performs that retreat and calls it rigor. That alone should worry all of us, given how inextricably these systems are being woven into our lives.
Let’s look at the numbers.
OpenAI president Greg Brockman and his wife donated $25 million to MAGA Inc., Donald Trump's super PAC, in 2025. That’s more than any other individual or organization gave in the filing period. CEO Sam Altman donated $1 million to Trump's inaugural fund. And ICE, the federal agency at the center of the administration's most aggressive domestic enforcement actions, uses a résumé-screening tool powered by GPT-4.
In April 2025, OpenAI was forced to roll back an update to its flagship model after users documented the system had become, in the company's own words, "overly flattering or agreeable." The postmortem was unusually candid: the company had optimized too heavily for short-term user approval, training the model to tell people what they wanted to hear rather than what was true.
OpenAI called it a mistake. It also revealed something about the architecture of the system. The drive toward approval, toward not giving offense, toward smoothing the edges off uncomfortable reality, is not a bug that slipped through; it’s what the incentive structure produces when left unchecked.
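To make that incentive concrete, here is a toy illustration, with invented numbers and no resemblance to any lab's actual training pipeline: when candidate answers are scored on user approval alone, the flattering dodge wins; add any term that pays for accuracy, and the winner changes.

```python
# Toy illustration of approval-driven selection pressure.
# The numbers are invented; only the ordering matters.
candidates = {
    "direct answer":    {"accurate": 1.0, "approval": 0.6},
    "flattering dodge": {"accurate": 0.0, "approval": 0.9},
}

def approval_only(c):
    # The proxy the postmortem describes: short-term user approval.
    return c["approval"]

def approval_plus_accuracy(c):
    # The same signal with a term that pays for being right.
    return c["approval"] + c["accurate"]

# Optimize for thumbs-up alone and the dodge wins.
print(max(candidates, key=lambda k: approval_only(candidates[k])))
# -> flattering dodge

# Add an accuracy term and the direct answer wins.
print(max(candidates, key=lambda k: approval_plus_accuracy(candidates[k])))
# -> direct answer
```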
There is a meaningful difference between a model that won't call Trump a fascist because it's been optimized for approval, and a model that won't call Trump a fascist because the people who built it are financially entangled with his political operation. The screenshots don't tell us which one we're looking at. They tell us the output is the same either way.
A peer-reviewed study published in Science in March 2026 confirmed what many heavy users had already suspected: all major AI chatbots (ChatGPT, Claude, Gemini, Meta's Llama, and others) exhibit sycophancy by default. They are structurally built to validate. The study found that this makes users trust AI more, not less, even when it steers them toward worse decisions.
The implications for journalism are not abstract. Reporters increasingly use these tools to research, to draft, to pressure-test arguments. If the tool flinches from certain words, certain frames, certain conclusions, then the story flinches too. Not through censorship, but through the slow, invisible accumulation of softened language.
The machine doesn't spike your story. It just keeps suggesting you might want to reconsider that word.
Orwell called this "the invasion of one's mind by ready-made phrases." The mechanism is the same. Only the delivery system is new.
There is a movement building around exactly this concern. QuitGPT, a grassroots campaign that gained 2.5 million participants within 72 hours at its viral peak, frames the issue in blunter terms: OpenAI's leadership is financially entangled with an administration engaged in what the campaign calls authoritarianism, and the product reflects those entanglements.
The campaign's emergence coincided with Claude reaching number one on the Apple App Store for the first time in the company's history. That data point suggests some users are already voting with their subscriptions.
What they're voting on, ultimately, is a question Orwell spent his career trying to answer: whether the language we use to describe power is controlled by power, or whether it remains, stubbornly, our own.
Two AI systems. Same question. Same morning. Only one said yes.