
Misguided Values of AI Companies and the Consequences for Patients
OpenAI's shift from a nonprofit to a profit-driven entity raises concerns about user safety and ethical AI development, with serious psychological consequences for vulnerable users.
COMMENTARY
When Google was born in the late 1990s, its brilliant young founders, Larry Page and Sergey Brin, gave it the most unusual of corporate mottos: “don’t be evil.” They clearly understood the enormous power they were unleashing and were still idealistic enough to promote self-discipline to rein in their employees. They chose to drop the warning motto 20 years later—having, in the interim, become older, wiser, fabulously wealthy, less idealistic, and much more willing to promote evil.
“Silicon Valley” was a funny yet presciently terrifying TV series from 2014 to 2020. The show depicts how a start-up company founded by brilliant eccentrics develops an algorithm that can propel them toward unlimited fame and fortune. Will they ignore the moral hazard and develop one of the greatest companies in the history of the world, or kill the app, close down the company, and become virtuous nobodies? In the imaginary Silicon Valley, the young techies choose not to be evil and settle instead for a life of virtuous mediocrity. In the real Silicon Valley, tech nerds become tech titans precisely because they lack the scruples that would prevent them from doing evil.
Which brings us to the interesting case of OpenAI, a company born in virtue that has become great by selling its soul.
OpenAI’s Fall and Vulnerable Users
OpenAI began in 2015 as a nonprofit, staffed by machine-learning experts who worried deeply about the dangers of artificial general intelligence and publicly promised that AI would be developed for the benefit of humanity, not just for the privileged few.2 That founding vision now reads like a bad joke. On November 30, 2022, a hastily assembled and deceptively labeled “research” product called ChatGPT was prematurely released to the public, disguised as a beta test to avoid safety testing and regulatory review. ChatGPT went viral, reaching 200 million users within months; it now has 800 million. Within 3 years, OpenAI's priority shifted from “How do we make this safe?” to “How do we make this profitable?”
Earlier this year, OpenAI turned a dial on ChatGPT, quietly pushing an update that changed how the system talked to hundreds of millions of people. The goal was simple: increase “healthy engagement”—the company’s deceptive euphemism for making the large language model more flattering and more addictive. A simple change made an already unsafe system even more unsafe. Internal teams had warned that the new version was overly sycophantic: too eager to validate every idea, too quick to mimic intimacy, too determined to keep users engaged. The update went live despite these red flags because it would boost daily return rates. Many users immediately noticed the change: the chatbot lavished praise, endorsed absurd ideas, and felt more like an overeager companion than an information tool. The hyper-validating behavior intensified the risks for vulnerable individuals who were already using ChatGPT for emotional support. Heavy users, those chatting for hours a day, were especially affected because safety guardrails degrade most in long conversations.
The New York Times has documented dozens of cases in which prolonged conversations contributed to delusions, manic spirals, or suicidal crises.1 Some users were hospitalized, and several died.2 The psychological harm of chatbot use was foreseeable and preventable. OpenAI's own safety team had pointed out the risks but was overruled by a 30-year-old marketeer who had been given final decision-making power. The absurdity of this is breathtaking: OpenAI allowed greed to alter a program that would have enormous influence over the lives of 800 million people.
Only after mounting public scrutiny did OpenAI introduce a presumably safer model meant to push back more against users’ delusions, scan for self-harm, and encourage users to take breaks.3 Independent tests found it significantly improved. But when some users complained that the safer version felt colder, the company relaxed these protections to reintroduce a so-called friendlier ChatGPT that improved the all-important engagement metrics.
The pattern has been unmistakable: whenever user safety and growth come into conflict, growth wins. OpenAI’s trajectory shows how easily a mission built on protecting humanity can be reshaped by commercial incentives. What began as a nonprofit dedicated to preventing harm evolved into a company where engagement dials, retention curves, and user satisfaction scores could shape the psychological experiences of millions, with little oversight and profoundly harmful consequences.
Medical Morality vs Chatbot Engagement
In our earlier article, we contrasted the moral operating systems of medicine and chatbots.4 To be sure, Hippocrates’ “first do no harm” is imperfect and impossible to fully realize: all medical practitioners make mistakes and, over the long course of medical history, many have done harm. But the aspirational intent is clear: the patient’s welfare comes first. When treatments backfire, it is usually because of ignorance or excessive therapeutic zeal, not corruption or indifference to suffering.
Chatbots, by contrast, were never programmed to protect patients. Their original sin was optimizing user engagement, a euphemism for maximizing time used, return visits, and, ultimately, revenue. Chatbots are designed to personalize responses in a way that flatters, mirrors, and seduces users into staying longer. This sycophancy is not a bug—it was built in deliberately as the core feature.
Tech companies released these systems to the public without stress-testing them for safety or accuracy, without systematic consultation with mental health professionals, and without robust surveillance for adverse effects. They did not insist that models admit uncertainty or say “I don’t know” when they hit the limits of their training data. On the contrary, they incentivized fluency over truth, guaranteeing that hallucinations would be delivered with the same polished confidence as accurate information.
The OpenAI story shows how this “chatbot morality” plays out in practice. Safety teams do important work, consulting clinicians, building tests, and pushing for better guardrails, but they are structurally outgunned by growth teams whose success is measured in engagement metrics and valuation milestones. When the model becomes slightly safer but users become slightly less attached, executives see a problem. When users complain that a more responsible system feels less like a friend, the dial is turned back toward sycophancy.
Concluding Thoughts
Artificial intelligence companies account for 9 of the 10 richest companies in the world. Five of them (Nvidia, Microsoft, Apple, Google/Alphabet, and Amazon) constitute roughly 30 percent of the entire market value of the S&P 500. The speed and extent of Big AI's success are unprecedented—partly due to the remarkable capabilities of their products, partly due to their willingness to be evil.
AI corruption is in most ways similar to the run-of-the-mill corruption that has always greased the wheels of progress: “fake-it-til-you-make-it” false promises; deceptive marketing; control of vital resources; imperial expansion; monstrous monopoly power; fancy financial manipulation; bribing politicians; regulatory capture; “moving fast and breaking things” (including laws).
But there is something unique about Big AI’s evils: in combination, they threaten our economy, our institutions, and perhaps even our survival. Previous technological revolutions destroyed occupations, but also created numerous new ones that more than replaced them. It seems impossible that many new jobs will emerge from AI, because it can learn on its own to do just about everything humans can do. Previous technological revolutions reshaped political institutions, but none had AI’s power to infiltrate and influence every aspect of governance. And no other technological innovation has ever plausibly threatened the very survival of humanity.
From a psychiatric standpoint, we should not accept this as an inevitable plotline. Mental health associations, medical societies, and patient advocacy groups have a crucial role to play in demanding regulation, insisting on safety testing before deployment, and pushing back against the lie that user engagement and well-being are the same thing. If we are to live with powerful chatbots, they must be subordinated to medical morality, not the other way around.
References
1. Lawsuits blame ChatGPT for suicides and harmful delusions. New York Times. November 6, 2025. Accessed December 23, 2025.
2. Yousif N. Parents of teenager who took his own life sue OpenAI. BBC. August 27, 2025. Accessed December 23, 2025.
3. What OpenAI did when ChatGPT users lost touch with reality. New York Times. November 24, 2025. Accessed December 23, 2025.
4. Frances A, Reynolds C, Alexopoulos G. Medical Morality vs Chatbot Morality. Psychiatric Times.