Chatbot Privacy Is an Oxymoron: Assume Your Data Is Always At Risk
Key Takeaways
- AI chatbots collect vast amounts of user data, often without clear consent, posing significant privacy risks.
- Companies prioritize data collection for market dominance, often at the expense of user privacy and transparency.
AI chatbots pose significant privacy risks: once user data becomes a company's proprietary asset, it is open to exploitation and vulnerable to breaches.
Whenever we go online, we leave a rich trail of data: social media activity, websites we linger on, search queries, purchases we make, where we travel, and what we like to eat. That trail feeds algorithms that can analyze our personality, decision-making, beliefs, preferences, and quirks. AI can get to know us better than our loved ones do, better even than we know ourselves.
Chatbots take our loss of privacy to a whole new level. People confide in chatbot therapists and companions, revealing things they would never tell another human. But once typed into a chatbot, that information becomes proprietary data owned by the company. Since the widespread adoption of ChatGPT began in 2022, this store of sensitive data has grown exponentially.1
The data trail we all mindlessly create has enormous monetary value for big AI. These companies' stock valuations rest not on the quality of their products but on the vast vault of user data they have stored. Investors view these datasets as essential for future market dominance in every sector. Shoshana Zuboff warned in The Age of Surveillance Capitalism that we are not the customers; we are the product being refined and sold.2
The high commercial value of data pushes privacy to the back burner and opens doors for exploitation: the race for market dominance directly conflicts with protecting users. The data trove is gold not only for AI companies but also for hackers, scammers, and cybercriminals.
Signing Away Your Rights
When you click “I Agree” to use a chatbot, you are essentially signing away ownership of your data. Almost nobody reads the full terms and conditions. OpenAI’s ChatGPT (the most used chatbot to date) and Google’s Gemini both run on “opt-out” privacy systems, meaning that by default, you are giving them a “license to use the content you provide to improve models, products, and services.”3-5 Opting out is possible, but only if you are persistent enough to dig through menus most users do not even open.
Anthropic’s Claude positions itself as the privacy-conscious option.6 It once prided itself on being fully opt-in, but as of September 2025 it quietly shifted to a hybrid opt-in/opt-out system that requires users to choose whether their data may be used for model training.7 If you allow sharing, your data is retained for 5 years; if you decline, it stays in the system for 30 days. Deleting a chat on Claude means it will not be used for further training.
With ChatGPT, “delete” does not always mean delete. Deleted chats are typically removed within 30 days, but there is now a caveat. A 2025 court order in a copyright lawsuit brought by The New York Times requires OpenAI to keep all conversations, even those deleted by users, until the case is resolved. OpenAI is challenging the order, arguing that it undermines user privacy and sets a dangerous precedent for data retention. Moves like this show how, without blanket regulation, piecemeal privacy rules give users no concrete guarantees.8,9
Gemini is not much better than ChatGPT. You can limit data storage to 3, 18, or 36 months, but if you opt out completely, Google limits personalization and the chatbot generally performs worse. Privacy, it seems, comes with a tax.10
"I Have Nothing To Hide"
Many people will read an article like this and assume it does not apply to them because they have nothing to hide. That shrug misses the point. The issue is not hiding a big secret; it is protecting your thoughts, emotions, and personal choices from being exploited. Even small, seemingly innocent, anonymized data points become powerful when combined. They reveal patterns about who you are and how you can be persuaded, setting the stage for behavioral manipulation, targeted advertising, or political sway.11
The Cambridge Analytica scandal exposed the underside of data misuse. A seemingly harmless personality quiz app harvested data from as many as 87 million Facebook users by collecting information on quiz-takers and their Facebook friends. That data was then used to create hyper-personalized political ads in the 2016 US election, aimed at swaying opinions and suppressing turnout.12 “Innocent data” like shares, quizzes, and likes can be weaponized at scale.
Take Target’s pregnancy-prediction story: the retailer used shopping data to figure out which customers were pregnant so it could target them with pregnancy products. In one case, a teenager’s father learned she was pregnant only after Target mailed her coupons for baby clothes.13 Corporations know what you are susceptible to buying before you do. Chatbots seem delightfully free of charge, but they are never truly free.
Tricks Companies Play
Big AI thrives on a lack of transparency around how data is collected, stored, used, shared, and sold, and that opacity makes the problem worse. Companies promise user control but hide the details in fine print. Dr Randazzo of Charles Darwin University describes this as the “black box problem.”14 Users cannot see what happens to their data, so they cannot know whether their rights are being violated and, in turn, cannot fight back.
Regulators have slowly started to push back. In 2024, Italy’s privacy watchdog fined OpenAI 15 million euros for a lack of transparency in ChatGPT’s data collection and usage.15 The ruling emphasized that if individuals cannot understand how their information is handled, they cannot truly consent to sharing it. Many chatbots reserve the right to share your data with partners, but they do not say who those partners are or what they may do with it. Once conversations leave the system, you can neither track nor control them.16
Regulations around transparency also vary across the globe. Europe is leading the charge on AI policy: the General Data Protection Regulation (GDPR) requires transparency about how AI companies collect and use data, giving users more rights and control.17 The US still has no strong federal privacy law; its patchwork of rules leaves users with less legal protection and lets big companies grow largely unrestricted.18
Bad Things Happen
Chatbots are a perfect vehicle for scamming. Voice cloning and deepfake videos create synthetic identities, dummy corporations, and fictitious government officials. Simple scams fabricate fake emergencies that extort thousands; more elaborate scams build fake investment vehicles that extort millions; ransomware attacks extort tens of millions. Chatbot automation scales both the number of targets and the take from each. The elderly are prime targets, but scams have become so sophisticated that even the savviest users can be turned into victims. Identity theft and surveillance have never been so easy.19,20
Just this year, hackers used Claude to attack 17 organizations in healthcare, government, and emergency services. Claude was used to analyze stolen financial data to set ransom amounts and to generate ransom notes that were then embedded on victims’ machines. The attackers demanded Bitcoin payments of up to $500,000 and threatened to sell or expose stolen personal records.21
The psychological dimension is most concerning. Many people turn to chatbots to talk through things they do not feel comfortable discussing with another person, including intimate, criminal, interpersonal, or health-related details. The sense of privacy is an illusion: what you type into these bots does not disappear when you close the app, yet people interact with these tools as if legal privacy protections were in place. Individuals sharing personal medical or health information may find their data alarmingly insecure in the hands of these chatbots and the corporations behind them. OpenAI CEO Sam Altman summarized the risk in July 2025, saying, “I think it makes sense to really want the privacy clarity before you use [ChatGPT] a lot—like the legal clarity.” He confirmed that OpenAI is legally required to share “private” conversations if subpoenaed, and although Altman said there should be the “same concept of privacy for your conversations with AI that we do with a therapist,” this has yet to be implemented.22 Once your data is in another company’s hands, it can be reused without clear reporting or renewed consent.
Concluding Thoughts
Chatbot privacy is, in practice, an oxymoron. Even if users are told that their data will be used for training the chatbot model, they are rarely informed about the potential risks or other ways their data could be exploited. Until stronger user protections, regulations, transparency, and meaningful informed consent processes are in place, true chatbot privacy is not possible. In the meantime, users should be cautious about the personal data they share and question the illusion of privacy.
In our next piece, an experienced computer scientist, Jeff Pennington, will provide recommendations on how to better protect privacy in the wild world of chatbots.
References
1. Duarte F. Number of ChatGPT users (November 2025). Exploding Topics. October 31, 2025. Accessed September 17, 2025. https://explodingtopics.com/blog/chatgpt-users
2. Zuboff S. The Age of Surveillance Capitalism. PublicAffairs; 2019.
3. Fischer S. ChatGPT is still by far the most popular AI chatbot. Axios. September 6, 2025. Accessed September 17, 2025.
4. Terms of use. OpenAI. December 11, 2024. Accessed September 17, 2025.
5. Gemini API additional terms of service. Google. Accessed September 17, 2025.
6. Company. Anthropic. Accessed September 17, 2025.
7. Updates to consumer terms and privacy policy. Anthropic. August 28, 2025. Accessed September 17, 2025.
8. How we’re responding to the New York Times’ data demands in order to protect user privacy. OpenAI. June 5, 2025. Accessed September 17, 2025.
9. Sudha R. Court orders OpenAI to retain ChatGPT conversations indefinitely. Medium. July 18, 2025. Accessed September 17, 2025.
10. Schwaiger C. You can stop Gemini from training on your data. Tom’s Guide. July 19, 2025. Accessed September 17, 2025.
11. Redden J, Brand J, Terzieva V. Data harm record. Data Justice Lab. August 22, 2020. Accessed September 17, 2025.
12. Harbath K, Fernekes C. History of the Cambridge Analytica controversy. Bipartisan Policy Center. March 16, 2023. Accessed September 17, 2025.
13. Duhigg C. How companies learn your secrets. New York Times. February 19, 2012. Accessed September 17, 2025.
14. “AI is not intelligent at all” – expert warns of worldwide threat to human dignity. SciTechDaily. September 1, 2025. Accessed September 17, 2025.
15. Zampano G. Italy’s privacy watchdog fines OpenAI for ChatGPT’s violations in collecting user personal data. AP News. December 20, 2024. Accessed September 17, 2025.
16. Gen AI and LLM data privacy ranking 2025. Incogni. June 30, 2025. Accessed September 17, 2025.
17. Key issue 5: Transparency obligations - EU Artificial Intelligence Act. Accessed September 17, 2025.
18. The big long list of U.S. AI laws. Morris, Manning & Martin, LLP. October 17, 2025. Accessed September 17, 2025.
19. FinCEN issues alert on fraud schemes involving deepfake media targeting financial institutions. Financial Crimes Enforcement Network. November 13, 2024. Accessed September 17, 2025.
20. Ullman E. AI impersonation scams are exploding: here’s how to spot and stop them. KIRO7 News. July 28, 2025. Accessed September 17, 2025.
21. Rahman-Jones I. Hackers use Claude to attack organizations. BBC. August 28, 2025. Accessed September 17, 2025.
22. Perez S. Sam Altman on AI privacy. TechCrunch. July 25, 2025. Accessed September 17, 2025.