Opinion|Articles|January 27, 2026

Chatbot-Generated Propaganda Threatens Democracy


Key Takeaways

  • Edward Bernays pioneered public relations by applying psychological techniques to influence public opinion, a practice that has evolved with technological advancements.
  • AI chatbots are now a potent tool for propaganda, capable of producing disinformation on a massive scale and mimicking human interaction.

Edward Bernays, Sigmund Freud's nephew, turned out to be almost as influential as his famous uncle. Having grown up in the United States, he went on to invent the field of public relations and is still regarded as its father. Working for dozens of corporate clients, Bernays cleverly combined psychological techniques derived from psychoanalysis, behaviorism, and group psychology into a strategy of subliminal influence that is still remarkably successful in manipulating consumers.

Bernays quickly grasped that selling political ideology to a gullible public is no more difficult than selling commercial products. His 1928 book, Propaganda, reads like a magician revealing the secrets behind his tricks. It contains this extremely timely warning: "The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in a democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. We are governed, our minds are molded, our tastes formed, and our ideas suggested, largely by men we have never heard of. It is they who pull the wires that control the public mind."

Bernays wrote Propaganda hoping it would become a useful guide to protecting democracy. Instead (to his chagrin and the world's misfortune), Joseph Goebbels was his most avid reader and the brainwashing achieved in Nazi Germany his most tragic legacy.

I have invited Joseph Pierre, MD, to explore how AI chatbots provide a new propaganda tool with psychological impacts, powerful beyond the wildest imaginings of Bernays and the furthest reach of Goebbels. Pierre is an expert on why people are so vulnerable to overvalued ideas, conspiracy theories, and delusions. He is author of a wonderful book: False: How Mistrust, Disinformation, and Motivated Reasoning Make Us Believe Things That Aren’t True.

Allen Frances, MD: How do you define propaganda and what is its purpose?

Joseph Pierre, MD: Propaganda is misrepresenting the truth for political purposes, deliberately producing and disseminating disinformation to deceive public opinion and to manipulate behavior in the service of a particular agenda. The term tends to be used as a pejorative designating a social evil. Consequently, debates about what should or should not count as “propaganda” are inevitably colored by bias. For example, both MSNBC and Fox News have been accused of being propaganda machines by their detractors.

Dr Frances: How have propaganda techniques evolved over time?

Dr Pierre: When you’re in the business of mass persuasion, any technology that can speed up or further the spread of misinformation is an irresistible propaganda tool. Propaganda took giant steps forward with the invention of the printing press, radio, television, and film, and more recently, with the internet. Its reach recently expanded further with “troll farms” producing bogus online personas and messages on social media platforms with the intent of spreading “fake news,” sowing division, and influencing political elections.

Dr Frances: What are the ways that chatbots are used for propaganda?

Dr Pierre: Chatbots can produce propaganda on a massive scale. They are scary good at mimicking humans—the technology has advanced to the point that users cannot tell the difference between AI-chatbot generated and human-generated output. Propagandists no longer need an army of humans running fairly primitive troll farms. It can now all be done by chatbots—cheaply, much more efficiently, and ever so much more effectively.

Vanderbilt University professors Brett Goldstein and Brett Benson have warned that “AI driven propaganda is no longer a hypothetical future threat. It is operational, sophisticated and already reshaping how public opinion can be manipulated on a large scale.”1 Chatbots can be used to generate “deepfake” videos depicting convincingly realistic images of real-life people doing things or saying things that they never actually did or said.

Russia has used chatbots to spread disinformation about the war in Ukraine, and China has used them to sway the 2024 elections in Taiwan. Robert F. Kennedy Jr's Make America Healthy Again (MAHA) commission report, which called into question the safety and efficacy of vaccines, contained fake citations almost certainly generated by chatbots.2 Over the past year, the Trump administration has circulated at least 14 AI-generated images, including a recent photograph of a woman altered with AI to make it look like she was crying during her arrest by US Immigration and Customs Enforcement. When asked for comment, the White House responded that "the memes will continue."3

Such memes and fake videos can be profoundly influential. The "illusory truth effect" describes how repeated exposure to such disinformation increases its credibility, even when it is known to be fraudulent.

Dr Frances: How is chatbot propaganda more powerful and potentially psychologically harmful than previous methods?

Dr Pierre: Part of the power and danger of chatbot propaganda is its vast scale and reach. Another is the integration of generative AI with individual profiles (based on personal data mined from social media), so that propaganda can be far more targeted than ever before. And users tend to accept as gospel whatever chatbots say. I call this "deification" in the setting of AI-induced psychosis, and the same process operates when AI fuels nondelusional false beliefs among the mass public.4 A recent Pew poll found that when using Google to search for information, most users simply accept the AI-distilled answer rather than clicking through to the hits or the source material.5

This is particularly disturbing because chatbots are not reliable sources of objective information. They aren’t even designed to be. And they can also be purposely trained for bias (aka “LLM grooming”). Grok, X’s chatbot, is notorious for its right-wing, neo-Nazi, and antisemitic content.6

Chatbots suffer from what has been called a “garbage-in, garbage-out” effect. They can dilute objective truth and generate false noise in what I call the “flea market of opinion.” It has been argued that a sense of shared facts is essential for a functioning democracy. But under the influence of ever more ubiquitous and effective propaganda, we are moving ever further away from collective consensus facts. Hannah Arendt put it best: “If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer… And a people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please.”7

Dr Frances: Is it realistic to think we can protect ourselves from chatbot propaganda? How?

Dr Pierre: As clinicians, patients, and individuals, we should become better and more skeptical consumers of information. And we can teach ourselves to be more literate about how chatbots really work and their associated risks. I often recommend thebullshitmachines.com, a website run by University of Washington professors Carl Bergstrom and Jevin West, as a good starting point.

Beyond what we can do as individual users, we can support advocacy that demands regulation of the AI industry and calls for greater transparency and scrutiny of use of AI in everything from targeted advertising to military decision-making.

Dr Frances: How do you think this will work out?

Dr Pierre: Moral panics tend to follow new technologies—so on one hand, I am mindful of the risk of being a "Chicken Little," telling people that the sky is falling. There is also no doubt that, like the internet, AI has significant potential for good.

But, on the other hand, there are great risks. The White House's new AI Action Plan has called for deregulation of the AI industry in order to win the "race to achieve global dominance in artificial intelligence."8 Meanwhile, the Global Engagement Center, a State Department agency tasked with countering foreign propaganda, was shut down based on concerns about censorship and claims that it was a waste of taxpayer dollars. Based on the risks known to date, that seems like exactly the wrong direction if we are to safeguard against the potential dangers of AI. Without regulation and pushback against the unbridled hype and dangers of AI, I am not particularly hopeful that things are going to end well.

Concluding Thoughts

Mark Twain observed: "A lie can travel halfway around the world while the truth is still putting on its shoes." And that was a century before the internet.

The converse of Thomas Jefferson's “A well informed citizenry is the best defense against tyranny” is that a badly misinformed and brainwashed citizenry is a set-up for tyranny.

George Orwell's 1984 provided a nightmarish description of the brainwashing potential of a totalitarian state propaganda machine. But the tools of persuasion in 1984, powerful though they were, are pathetically primitive compared to what can be achieved by chatbot deepfakes. There is a tragic vicious cycle—as propaganda tools become more technologically sophisticated, they are more successful in fooling people. These misinformative technologies can have dramatic psychiatric consequences for vulnerable individuals.

I am even more pessimistic than Dr Pierre about our ability to resist the coming avalanche of increasingly sophisticated and fast moving chatbot propaganda. Truth is a fragile thing that we must cherish, protect, and proclaim—now more than ever because it is in such short and decreasing supply.

The opinions expressed are those of the authors and do not necessarily reflect the opinions of Psychiatric Times.

Dr Frances is professor and chair emeritus in the department of psychiatry at Duke University.

Dr Pierre is a professor in the Department of Psychiatry and Behavioral Sciences at the University of California, San Francisco.

References

1. Goldstein BJ, Benson BV. The era of A.I. propaganda has arrived, and America must act. New York Times. August 5, 2025. Accessed January 26, 2026. https://www.nytimes.com/2025/08/05/opinion/china-ai-propaganda.html

2. Gilbert C, Wright E, Nirappil F, et al. The MAHA Report’s AI fingerprints, annotated. The Washington Post. May 30, 2025. Accessed January 26, 2026. https://www.washingtonpost.com/health/2025/05/30/maha-report-ai-white-house/

3. Kornfield M. White House posts an altered photo of Minnesota protester’s arrest to make it look like she was crying. CBS News. January 24, 2026. Accessed January 26, 2026. https://www.cbsnews.com/news/white-house-photo-minnesota-protester-arrest-altered-crying/

4. Pierre J. Can AI chatbots validate delusional thinking? BMJ. 2025;391:r2229.

5. Chapekis A, Lieb A. Google users are less likely to click on links when an AI summary appears in the results. Pew Research Center. July 22, 2025. Accessed January 26, 2026. https://www.pewresearch.org/short-reads/2025/07/22/gogle-users-are-less-likely-to-click-on-links-when-an-ai-summary-appears-in-the-results/

6. Hagen L, Jingnan H, Nguyen A. Elon Musk's AI chatbot, Grok, started calling itself 'MechaHitler'. NPR. July 9, 2025. Accessed January 26, 2026. https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content

7. Arendt H. Hannah Arendt: From An Interview. The New York Review; 1978. https://www.nybooks.com/articles/1978/10/26/hannah-arendt-from-an-interview/

8. Winning the race: America’s AI action plan. The White House. July 2025. Accessed January 26, 2026. https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf
