Commentary | December 9, 2025

A Computer Expert's Advice On Protecting Chatbot Privacy


Explore the challenges of protecting privacy in AI chatbots and discover expert insights on navigating data security in a profit-driven landscape.

Can we realistically expect to protect our privacy in the face of profit-driven Big AI, a lack of government regulation, vulnerable users, bad actors, and tempting data sets?

Luckily, we have Jeff Pennington to the rescue. He is a computer scientist who wrote the book You Teach the Machines: AI on Your Terms, which I have already read twice and refer to almost every day. Jeff has a great deal of real-life experience protecting privacy in clinical settings and a deep understanding of Big AI and government fault lines.

Allen Frances, MD: What, if anything, can a chatbot user do to meaningfully protect privacy?

Jeff Pennington:

First off, privacy is an ethical, not legal, concept to me. If something is “creepy,” then it is a violation of my privacy, even if it is strictly legal. With that out of the way, 3 things immediately come to mind:

1) Only share with a chatbot what you would in front of a sold-out crowd at your favorite mega-stadium! Seriously, those are the stakes.

2) Use multiple chatbot services to spread out your footprint. Compartmentalize what you share so that no one chatbot “knows” everything about you. Your behavior and your information are being harvested. The fewer opportunities any single chatbot has to observe you, the less the company behind it will know about you and the less it can use (or share). The downside is that chatbots use context when they answer any specific question. The closer your overall interaction gets to filling the chatbot’s available context window, the more potentially useful the interaction is to you (and to the chatbot company).

3) If you live in the US, then in your personal life do not put anything private or confidential into digital services such as Gmail, consumer Office 365, social media apps, navigation apps, or meeting transcription services. Chatbots are trained with data collected from the entirety of your digital life.

Frances: What do you personally do when the fine print in a user agreement requires you to sign away your privacy rights?

Pennington: I fully expect to see anything I put into the service appear in the response from a chatbot, including everything I put into my consumer subscription to Microsoft Word, every Gmail message, and all my Google Photos.

Frances: If you were CEO of a Big AI company, what reforms would you introduce to protect your users?

Pennington: I would be radically transparent about how my company does and does not use individual and collective data. By doing this, my company would gain a massive competitive advantage, similar to or even outstripping what OpenAI achieved by releasing ChatGPT to the public with no consideration of social, economic, or health consequences. Except this time, my company would also own the “trusted” element of its brand.

In my research while working in the biomedical industry, we found that education plus transparency equals trust when it comes to the use of patient data. In my consumer life, Apple currently comes the closest to gaining my trust. They do more than most to teach us what is happening with our data when we use their products. Europe does a much better job than the US in protecting privacy.

Frances: Is there a snowball’s chance they will do any of this to protect privacy?

Pennington: No.

Frances: What should government be doing?

Pennington: Government? What government? Two bipartisan federal privacy protection bills in a row died in committee, the most recent in the summer of 2024. Citizens United means that industry controls the issue with money. Unless and until enough voters demand action, nothing will happen. Unfortunately, that is likely to happen only after something very bad occurs, and maybe not even then. We are on our own, relying on each other and on collective “vote with your wallet” actions.

Improved privacy protections will come only after a privacy debacle comes to light, potentially one that has already happened. Humans tend to learn from mistakes. Big AI does not understand the internal workings of what it has created, though it is trying hard to. Meanwhile, unfortunately, the harm from the mass incorporation of our most intimate thoughts and ideas into black box chatbots will become apparent only after the fact.

Frances: How can clinicians and clinical systems help or harm preserving the privacy of their patients?

Pennington: First and foremost, take responsibility for educating patients on how, when, and why their data are being incorporated into AI, even when you do not have to. Demand that your organization go beyond the minimal legal and compliance requirements in being transparent about its use of AI. HIPAA may be meaningless as a regulatory protection in the age of AI; health systems increasingly view patient data as something to be monetized, and they are actually monetizing those data. Did anyone explain what was happening to the 118 million patients whose data were used by Epic and Yale to train AI?

I am incredibly optimistic about the potential for chatbots and other AI to contribute to improvements in human health. But Big AI, hospitals, and even some clinicians seem to be doing their best to keep us all in the dark.

Frances: Any final words of wisdom?

Pennington: Big AI, AI-pundit clinicians, and hospital systems are all reaching toward a hot stove. Their patients and the citizenry are kept unaware, by design. A wise friend recently shared the maxim: trust is gained by the drop but lost by the bucket.

Frances: Thanks so much, Jeff.

Perhaps the 4 clearest messages on chatbot privacy are: 1) it is currently essentially nonexistent, 2) Big AI will do all in its considerable power to keep it that way, 3) privacy advocacy and perhaps a major privacy disaster are the best hopes for stronger protections, and 4) users, be warned.

Dr Frances is professor and chair emeritus in the Department of Psychiatry at Duke University.

Mr Pennington is the associate vice president and chief research informatics officer in the Department of Biomedical and Health Informatics at the Children’s Hospital of Philadelphia and author of the book You Teach the Machines: AI on Your Terms.
