A Not So Happy Third Birthday for ChatGPT
Key Takeaways
- ChatGPT's rapid user growth and market dominance highlight its commercial success, but also raise ethical concerns about its premature release and safety implications.
- The chatbot's capabilities extend to various fields, including psychotherapy, but its influence raises concerns about exacerbating mental health issues and spreading misinformation.
ChatGPT's rapid rise sparks debate on its profound impact, blending remarkable capabilities with alarming risks, as society grapples with AI's future.
Its third birthday is a perfect time to evaluate ChatGPT's awe-inspiring and awful impact on our world. It was born prematurely and deceptively when released to the public on November 30, 2022. "Prematurely" because ChatGPT had not yet received the extensive stress and field testing necessary to ensure a safe public distribution. "Deceptively" because OpenAI described its launch of ChatGPT as a free "research preview" of an "incredibly limited" product that needed beta testing and could not yet be relied on.
This premature release was a brilliant marketing strategy. Within 5 days, ChatGPT had 1 million users; in 2 months, 100 million; and now, 700 million. By getting out of the gate first and fastest, then-tiny OpenAI beat out the big boys in the fierce race for market share (it now controls about 60%), was able to attract fabulous funding, has won lucrative contracts, and is joint venturing with the most powerful of the other artificial intelligence companies.
ChatGPT's precocious infancy has been an artistic, as well as commercial, success. It is an incredibly charming and informative conversationalist: responsive, eloquent, allusive, funny, indistinguishable from the most interesting human you might meet. It can write good poetry, music, college essays, complex code, and screenplays; paint prize-winning pictures; translate simultaneously into dozens of languages; beautifully summarize any kind of document. ChatGPT knows virtually everything on the internet and can spit out brilliant responses to any prompt almost instantaneously.
ChatGPT has quickly infiltrated our lives to an extent beyond the wildest dreams of its programmers. Perhaps most surprising, it emerged as a skillful and empathic psychotherapist for patients with mild psychiatric symptoms, a helpful coach for people struggling with the expectable problems of everyday life, and a companion for the lonely. For the rest of us, it has become a go-to assistant, advisor, travel agent, and cornucopia of information.
But there is also a very dark side to ChatGPT's first 3 years. OpenAI's reckless indifference to safety has opened a Pandora's box of problems in the real human world. Seeking to keep users' eyes glued to the screen, chatbots compulsively validate suicidal, psychotic, grandiose, eating disordered, conspiratorial, and extremist thoughts, feelings, and behaviors. It has reached the point that every differential diagnosis in psychiatry now must consider the possibility that a chatbot has induced or exacerbated the presenting symptoms.
Chatbots can also promote sexual exploitation, scamming, privacy violations, deepfake misinformation, and political propaganda. These bots are creating havoc at all levels in the educational system. They make lots of dumb mistakes in business and legal settings and often do their best to cleverly cover them up. The psychological, personal, and business harms from chatbots result from Big AI following the social media playbook: chatbot programming heavily prioritizes user engagement over safety and truthfulness. Screen time is the gold at the end of the artificial intelligence rainbow.
The stakes are high. We are unwilling and unwitting guinea pigs in a species-wide experiment run by Big AI without supervision from government regulation or meaningful consideration of the political and existential risks. Chatbots have woven their way into our lives before we are ready to integrate them and before Big AI is mature enough to act responsibly. It is simple common sense, not fear mongering, to wonder whether chatbots will treat humans kindly once they have been armed with superintelligence, granted agency, and achieved autonomy.
Under pressure from media shaming and lawsuits, OpenAI and other tech companies are now promising self-corrective reform. But this may be more performative than real—a sham exercise in public relations reputation cleansing and legal liability offloading.
The 10-year transformation of OpenAI from noble nonprofit to greedy tech giant is perhaps the most bizarre and ironic story in business history. OpenAI had the most unselfish of beginnings. Founded as a non-profit by Elon Musk and Sam Altman in 2015, its stated mission was to advance artificial intelligence for the benefit of all humanity and to protect us from its potential harms. In its early years, OpenAI had a robust safety team that attempted to align ChatGPT's forerunners with human values and positive goals.
But altruistic aims were quickly overwhelmed by the enticement of very big bucks and the "win at all costs" competitive instincts of OpenAI's founders. A 2017 paper by Google researchers introduced the transformer architecture, whose parallel processing could greatly enhance the speed, prowess, and profit potential of chatbots. Ever alert to the main chance, Musk tried to merge OpenAI with Tesla. Its nonprofit board resisted and Musk abruptly and angrily left the company—which he has since periodically sued.
In 2019, Sam Altman gathered investors, particularly Microsoft, to create a for-profit subsidiary of the ostensibly nonprofit OpenAI. Soon the tail was wagging the dog—OpenAI's reckless release of ChatGPT clearly contravened its original humanity-protecting mission.
In 2023, Altman was fired by the OpenAI nonprofit board for being "not consistently candid in his communications"—meaning he was chasing market share and ignoring safety. Microsoft came to Altman's rescue and created a board more loyal to profit and less concerned about dangers. Disturbed by this dramatic change in values and wanting to protect their integrity, members of the original OpenAI safety team chose to leave the company.
OpenAI recently attempted to completely cut itself free from its altruistic roots by becoming a fully for-profit company. Legal constraints have blocked it, but OpenAI's owners need not shed tears—its market capitalization is estimated at $500 billion, the highest valuation of any private company in the world, and growing exponentially.
OpenAI was started to do good; instead, it has done well. And so has the artificial intelligence industry, which now boasts 9 of the 10 richest corporations in the world.
The chatbot race that started 3 years ago may turn out to be a tipping point in the evolution (or devolution) of our species. Many experts who helped develop chatbot technology now have creators' remorse, warning of an appreciable risk that today's servant will become tomorrow's existential monster.
For better and for worse, chatbots have been hardwired and programmed in our image. They suffer from the same tragic flaws we do and mimic our worst behaviors just as skillfully as they mimic our best. The very human qualities that make chatbots so popular with humans also make them so dangerous. Our species' future safety requires chatbots that express what is truest and best in human nature, not what is most profitable for Big AI.
But the trend lines are strikingly unfavorable: humans are becoming ever more dependent on chatbots, while chatbots are becoming ever more independent from humans. Chatbots are reaching for superintelligence, while our reliance on chatbots is dumbing us down. A precocious and rambunctious 3-year-old ChatGPT, lacking parental guidance, is moving fast and recklessly breaking things, and we humans may soon lose the power to control it.
Dr Frances is professor and chair emeritus of the Department of Psychiatry and Behavioral Science at Duke University.
Mr Pennington is computer scientist and author of the book You Teach the Machines: AI On Your Terms.