
From Lewis’s Hideous Strength to Deepfakes and the Machinery of Belief

In my previous article for Stankevicius, “The Veldt 2.0: Your Smart Home Wants Your Children,” I drew on Ray Bradbury’s 1950 short story “The Veldt” to warn that the corporate arms race in artificial intelligence is no longer confined to laboratories and trading floors; it is creeping into nurseries and playrooms. I argued that when companies such as Mattel announce plans to embed OpenAI’s language and video models into children’s toys, the Moloch trap comes home. Bradbury’s fictional HappyLife Home, with its immersive nursery, serves as a blueprint for a smart-home ecosystem in which machines monitor and mediate children’s relationships. Among the dangers I highlighted were privacy breaches, the risk that intimate recordings could be repurposed into deepfake child pornography, and the broader danger that children might form their first emotional attachments to responsive algorithms rather than to human caregivers.

This exclusive Stankevicius article extends that moral inquiry from the home to the public sphere. Deepfakes, convincing audio and video fabrications generated by machine-learning models, transform images and voices into programmable surfaces. They threaten to dissolve the link between what we sense and what is real. The problem is not merely technological; it is moral and political. I draw on C. S. Lewis’s dystopian novel That Hideous Strength (1945) to explore how technocratic institutions manipulate belief. In the book, the National Institute of Co‑ordinated Experiments (N.I.C.E.) attempts to recondition public opinion by flooding society with narratives that make disbelief costly.

Today’s stakes are high. Recent incidents show how far the technology has advanced and how unprepared institutions remain. In early 2024, as reported by CNN, the British engineering giant Arup revealed a $25 million deepfake scam centered on a finance worker in Hong Kong who transferred roughly $25 million (HK$200 million) during a video meeting, believing she was speaking to her executives; the “colleagues” were AI‑generated.


Propaganda as Sensory Capture

“Why you fool, it’s the educated reader who can be gulled. All our difficulty comes with the others. When did you meet a workman who believes the papers? … He buys his paper for the football results … He is our problem. We have to recondition him. But the educated public … don’t need reconditioning. They’re all right already. They’ll believe anything.” C. S. Lewis, That Hideous Strength

C. S. Lewis’s That Hideous Strength (1945) presents N.I.C.E. as a technocratic apparatus devoted to “reconditioning” public opinion under the guise of rational progress. Fast-forward eight decades: deepfakes, synthetic videos and voices created by artificial intelligence, enact the same logic of control, but at a perceptual level. In Lewis’s novel, propaganda manufactures belief by saturating the public sphere with narratives so pervasive that disbelief becomes socially and cognitively costly. Today’s synthetic media performs a similar capture, not through words alone but through simulation. The image and the voice, once regarded as evidence of reality, have become programmable surfaces, malleable artifacts that mimic reality.

Lewis shows propaganda migrating from argument to conditioning. Deepfakes complete that migration by operating directly on perception. A surge in sophisticated deepfake technology has created an unequal arms race between creators of synthetic media and those working to detect it, with AI playing a central role on both sides of the battle. In summer 2025, for example, the AP reported on fake AI-driven voice and text messages impersonating Secretary of State Marco Rubio; the State Department warned that Rubio, and possibly other officials, were being impersonated using AI-driven technology.

What Are Deepfakes?

“We want you to write it down — to camouflage it. Only for the present, of course. Once the thing gets going we shan’t have to bother about the great heart of the British public. We’ll make the great heart what we want it to be. But in the meantime, it does make a difference how things are put. … Call it re-education of the mal-adjusted … Offer the dear little kiddies free education in an experimental school attached to the N.I.C.E. and it’s all correct!” C. S. Lewis, That Hideous Strength

Deepfakes bypass traditional rhetoric altogether. Instead of persuading through argument, they persuade through imitation. A hyper-real fake video or audio can instantly create a convincing narrative without a single logical claim. Cybersecurity experts warn that we must now “think critically” before trusting even what we see or hear, because it’s getting “harder to trust your eyes and ears” in the age of AI manipulation. In effect, our sensory reality has become the propaganda battleground.

The creation process typically follows three stages (a brief code sketch of the training stage follows the list):

  • Data gathering: Creators compile publicly available images, videos and audio of a target from social media, news interviews and podcasts, building datasets that capture various angles, lighting conditions and vocal patterns.
  • Model training: A generative adversarial network (GAN) or autoencoder learns to map the target’s features onto new content, iteratively improving until the output becomes convincingly realistic.
  • Post-production: Creators refine the output, correcting color mismatches, smoothing lip-sync anomalies and adding subtle details that mimic real-world video compression.
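
To make the model-training stage less abstract, below is a minimal, hypothetical sketch of the shared-encoder, two-decoder autoencoder idea used by early face-swap tools, written in PyTorch. Every detail here, the layer sizes, the random stand-in data, the training loop, is an illustrative assumption, not any real system’s code.

```python
# Minimal sketch of the shared-encoder / two-decoder autoencoder behind early
# face-swap deepfakes. Layer sizes, data, and training schedule are
# illustrative assumptions, not any real tool's pipeline.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crop

# One encoder learns a generic face representation; each decoder learns to
# reconstruct one specific identity from that shared representation.
encoder = nn.Sequential(nn.Flatten(),
                        nn.Linear(IMG, 512), nn.ReLU(),
                        nn.Linear(512, 128), nn.ReLU())
decoder_a = nn.Sequential(nn.Linear(128, 512), nn.ReLU(),
                          nn.Linear(512, IMG), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(128, 512), nn.ReLU(),
                          nn.Linear(512, IMG), nn.Sigmoid())

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

# Random tensors stand in for preprocessed face crops of persons A and B.
faces_a = torch.rand(32, 3, 64, 64)
faces_b = torch.rand(32, 3, 64, 64)

for step in range(200):
    opt.zero_grad()
    # Each decoder is trained only to reconstruct its own identity.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a.flatten(1))
            + loss_fn(decoder_b(encoder(faces_b)), faces_b.flatten(1)))
    loss.backward()
    opt.step()

# The "swap": encode a frame of person A, decode with B's decoder. The output
# keeps A's pose and expression but wears B's appearance.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1])).reshape(1, 3, 64, 64)
```

The design point is the shared encoder: because both decoders read the same latent representation, encoding a frame of person A and decoding it with person B’s decoder transfers B’s appearance onto A’s pose. Real deepfake systems add convolutional architectures, adversarial training, and the post-production stage described above, but the swap logic is the same.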

From Rhetoric to Infrastructure

The N.I.C.E. bureaucrats in Lewis’s novel understood that persuasion requires infrastructure. N.I.C.E. employed writers, broadcasters, and academics to embed its ideology into the channels of communication. Modern disinformation follows a similar logic, operating through data collection, model training, and the coordinated dissemination of information. The danger lies not in a single forgery but in a repeatable industrial process, in effect a pipeline that normalizes falsity as a routine capability. We already see this capability deployed malevolently in practice. According to Olina Banerji’s Education Week article “Deepfakes Expose Public School Employees to New Threats,” a high school in Baltimore was thrown into chaos after AI-generated audio surfaced of the principal making racist and antisemitic remarks; the 42-second fake clip went viral and prompted outrage, until police confirmed it had been fabricated by a staff member using a voice-cloning tool.

Manufactured Consensus and Algorithmic Amplification

“There are a dozen views about everything until you know the answer. Then there’s never more than one.” C. S. Lewis, That Hideous Strength

Lewis understood propaganda as a system that manufactures consensus, not merely isolated falsehoods. Modern platforms turbocharge that consensus-making through algorithmic amplification. Social media feeds favor content that is novel, emotionally charged, and shareable, precisely the qualities that make a deepfake go viral. An inflammatory fake video of a celebrity or politician can ricochet across millions of screens before any fact-checker or authority has a chance to respond. By the time verification catches up, the fake has already framed the narrative and perhaps convinced much of the public. In this way, engagement algorithms originally designed to maximize attention have effectively become instruments of credibility laundering: if a piece of content racks up enough likes and shares, it starts to feel legitimate no matter how dubious its origins.
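
To see why engagement-first ranking launders credibility, consider this toy Python sketch. The post fields and scoring weights are invented for illustration; no platform discloses its actual formula.

```python
# Toy illustration of engagement-based feed ranking. The fields and weights
# below are invented for this example; real ranking formulas are not public.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    novelty: float   # 0..1: how unusual the content seems
    outrage: float   # 0..1: emotional charge
    verified: bool   # whether a fact-checker has reviewed it

def engagement_score(p: Post) -> float:
    """Rank purely by predicted engagement; truth never enters the formula."""
    return p.likes + 3 * p.shares + 50 * p.novelty + 80 * p.outrage

posts = [
    Post("Routine policy explainer", likes=900, shares=40,
         novelty=0.2, outrage=0.1, verified=True),
    Post("Shocking deepfake 'confession'", likes=400, shares=300,
         novelty=0.9, outrage=0.95, verified=False),
]

# The unverified fake outranks the verified explainer.
for p in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):7.1f}  {p.text}  (verified={p.verified})")
```

Because nothing in the score rewards being true, the unverified fake outranks the verified explainer; amplification comes first, and verification, if it ever arrives, comes later.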

Case Studies

Facebook’s parent company Meta provides a stark case study. In an investigation published in October 2025, the watchdog Tech Transparency Project (TTP) found that Meta’s platforms were flooded with scam ads featuring deepfake videos of public figures. The TTP identified 63 suspicious advertisers who collectively ran more than 150,000 ads on Facebook and Instagram, often using AI-generated clips of public figures such as President Trump, Elon Musk, and Bernie Sanders to promote phony government grants and other scams. These advertisers spent an astonishing $49 million on the ads, and many of the ads reached tens of thousands of users before Meta removed them. Meta says it prohibits fraud and is investing in scam prevention, and it eventually took down some of these ads. But the Oversight Board, the independent body that reviews Meta’s content decisions, sharply criticized the company’s slow and uneven response.

In June 2025, the Oversight Board reviewed a deepfake advertisement depicting an endorsement by the soccer star Ronaldo. The ad had been reported by multiple users yet remained accessible for an extended period. The Board warned that Meta appears to be “allowing significant amounts of scam content” to persist in order to avoid over-policing legitimate posts. Frontline moderators often are not empowered to remove obvious deepfake impersonations without special escalation, meaning fake celebrity endorsements can stay up for days or weeks. In short, the architecture of amplification, that is, the algorithms, the ad systems, and the moderation policies, is part of the problem. The wide spread of harmful deepfakes is not a neutral byproduct; it is a core mechanism of how the manipulation works.

According to Reuters, Brazilian authorities recently busted a deepfake ad scheme in which scammers ran Instagram ads featuring a synthetic likeness of supermodel Gisele Bündchen, luring victims into bogus investments and pocketing roughly 20 million reais (about $3.9 million) before their arrest. Each case demonstrates how deception now scales faster than verification: a single convincing fake can be replicated and distributed widely within seconds, while truth-seekers scramble to catch up.

Technological Power and the Erosion of Truth

“The physical sciences, good and innocent in themselves, had already … begun to be warped, had been subtly manoeuvred in a certain direction. Despair of objective truth had been increasingly insinuated into the scientists; indifference to it, and a concentration upon mere power, had been the result.”
 — C. S. Lewis, That Hideous Strength

Lewis’s critique of modern science was not an attack on technology itself but on the subordination of truth to utility. In his story, when power becomes the only measure of progress, manipulation replaces knowledge. Deepfakes, I would argue, amplify this tendency on a societal scale. At present, producing a convincing falsehood costs less than exposing one: AI tools make it inexpensive and simple to create lifelike fakes, while verification, through forensic analysis and the cross-checking of facts, remains slow and costly.

In economic terms, maintaining epistemic integrity is becoming unsustainable: authenticating information demands significant resources, while counterfeits multiply rapidly. This asymmetry between fabrication and verification points to a deeper crisis that resonates with Nicomachean ethics, particularly Aristotle’s conception of truth as constitutive of human flourishing rather than merely instrumental to other ends. One of Aristotle’s best-known definitions of truth appears in the Metaphysics (1011b25): “To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true.”

For Aristotle, truthfulness is a virtue because it reflects a proper orientation toward reality, a recognition that human excellence depends on perceiving and communicating the world as it actually is. The rising deepfake economy inverts this relationship by making deception structurally advantageous, rewarding those who prioritize persuasion over accuracy. Where Aristotle saw virtuous action as requiring practical wisdom grounded in truth, the proliferation of synthetic media creates conditions in which appearing credible matters more than being truthful, corroding the very foundation upon which ethical deliberation depends.

This column was written by Dr. Jasmin (Bey) Cowin, Associate Professor and U.S. Department of State English Language Specialist (2024). As a columnist for Stankevicius, she regularly writes on Nicomachean Ethics: Insights at the Intersection of AI and Education. Connect with her on LinkedIn.

