Can a Class Action Lawsuit Force Big AI to Make Chatbots Safer?
Class action lawsuits may be the key to holding Big AI accountable and ensuring safer chatbots for vulnerable psychiatric patients.
Our purpose here is to discuss class action suits: the mechanism that helped tame Big Tobacco and Big Pharma and might be the only means of pressuring Big AI to make safer chatbots.
We are lucky to have the perfect guide. Dov Grunschlag is an attorney in San Francisco whose practice focuses on labor and employment law and whose experience includes class action lawsuits.
Frances: What is a class action lawsuit?
Grunschlag: An action brought by someone on behalf of a number of people—a “class”—who claim to have been harmed by the defendant’s conduct, and who are “ascertainable” (eg, who can be shown by objective evidence—digital footprints—to have communicated with a chatbot).
Frances: When is a class action lawsuit appropriate?
Grunschlag: When the harm is caused by the same or substantially similar conduct on the part of the defendant, and it makes sense to resolve the dispute in a single lawsuit rather than many individual ones. The class must be “numerous,” but the number is not a fixed one and can range from several dozens to many thousands. The same legal and factual issues in the case must impact the members of the class in substantially the same way.
Frances: Chatbots have been around for only 3 years, and media reports suggest many people have been hurt. But is it too soon for a class action lawsuit?
Grunschlag: The number of people affected might be large enough to justify a class action already.
Frances: OpenAI misleadingly introduced ChatGPT as a research tool for beta testing, then marketed like crazy, so there are now 700 million users. Does that increase their potential liability?
Grunschlag: There has been some litigation in this area, but the issue of legal liability has yet to be definitively resolved—and unless there is a comprehensive federal law on the matter, it would be governed by state law, which can vary from state to state. Speaking generally, however, the failure to adequately test the “product,” the failure to obtain input from mental health practitioners, and the failure to report adverse consequences—all weigh in favor of liability on theories such as negligence, failure to warn, strict liability for design defects, deceptive or false advertising, and misrepresentation.
Frances: What would Big AI's defense be?
Grunschlag: Tech companies can be expected to dispute the claims on the merits, but—for present purposes—also to contend that a class action is not an appropriate vehicle. The defense is likely to argue that the facts vary widely from person to person—that the interactions with the chatbot vary, that the results of the interactions vary, that the states of mind and mental health of the individuals vary—and that the damages, if any, cannot be determined in a class-wide formula but rather need to be adjudicated on a case-by-case basis.
Frances: Are cases like these usually settled or do they go to court?
Grunschlag: The vast majority of all cases settle before, during, or after trial. It is also not uncommon to take an individual case to trial, and the result will shape the defendant’s stance in the cases as a whole—embolden it to defend further, or guide it towards settlement.
Frances: What kind of firms take on class action lawsuits?
Grunschlag: There are firms that specialize in class actions. Class actions typically take a long time—it can be years before a court even certifies the case as appropriate for a class action. They are labor-intensive and require technology and administrative infrastructure. A firm needs to have not only the legal expertise to navigate the issues, but also the financial resources to sustain this effort.
Frances: If tech companies were now to proactively work hard to improve chatbot safety, would that increase their liability (an admission of previous guilt) or decrease it (we are doing our best)?
Grunschlag: I do not think such efforts would increase their risk of liability—in product liability cases, evidence of “subsequent remedial measures” is generally excluded, so as not to discourage companies from taking such measures. And, if the safety efforts are successful, they would reduce the company's liability risk going forward.
Frances: How effective have class action suits been in improving corporate behavior?
Grunschlag: I have a sense that class actions have had a positive impact, but it is just a sense.
Frances: Do you think a class action suit would lead to safer artificial intelligence?
Grunschlag: A well-researched, well-crafted class action brought by reputable counsel could well spur AI to invest more heavily in safety.
Concluding Thoughts
Nine of the 10 richest companies in the United States are in the artificial intelligence business. Economic power buys political power. A presidential executive order recently prohibited federal regulation of Big AI—and even strongly discouraged state regulation. Companies routinely threaten to jump ship whenever any state threatens to appropriately regulate or tax them.
Big AI has been sensitive to the numerous media stories dramatically depicting how chatbots have promoted suicide, psychosis, grandiosity, eating disorders, and conspiracy theories. But public shaming in the media mostly results in public relations reputation laundering, not substantive change.
Individual lawsuits are a nuisance to AI companies, but not a strong motivator for reform; the cases can be drawn out endlessly, and the settlement payouts are chump change to trillion-dollar corporations. Branding based on safety (the Volvo method) might have some impact, but the one company selling itself as safer (Anthropic) is still pretty reckless.
A class action lawsuit may be the only way we will ever have safer chatbots. The deep pockets of these companies should make them an inviting target. Thirty years ago, Big Tobacco seemed too powerful to control, until it was tamed by a class action suit. Twenty years ago, Big Pharma seemed invulnerable. It is not too early to consider legal action as society's best protection against Big AI.
Dr Frances is professor and chair emeritus in the department of psychiatry at Duke University.
Mr Grunschlag is a lawyer in labor and employment law, with experience in class action lawsuits.