It turns out that at least the early iterations of AI aren’t all that I. Or at least they’re not I enough to know better than to make up false and libelous claims about actual people, claims that can do real-world damage to them and their reputations.
Fredy Riehl, one of the owners of Ammoland, was doing some research recently for a post. He asked the artificial intelligence program ChatGPT to summarize a lawsuit, Second Amendment Foundation v. Bob Ferguson. That’s SAF’s challenge to Washington State Attorney General Bob Ferguson over his office’s targeting and harassment of the Evergreen State-based gun rights organization.
But as Riehl told TTAG, the chatty ChatGPT program gave him more than he bargained for. It told him that SAF had sued Mark Walters, the host of Armed American Radio, for allegedly embezzling millions of dollars, misappropriating funds, and manipulating the gun rights org’s financial records.
The AI bot claimed that there . . .
“is a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF. The complaint alleges that Walters, who served as the organization’s treasurer and chief financial officer, misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF’s leadership. The plaintiff seeks various forms of relief, including the recovery of misappropriated funds, damages for breach of fiduciary duty and fraud, and removal of Walters from his position as a member of the SAF’s board of directors.”
There’s only one problem. None of that ever happened.
TTAG spoke to SAF Executive Vice President Alan Gottlieb, who confirmed that Walters has never worked for the organization, has never been accused of any financial misconduct, and has never been the target of any SAF legal complaint.
ChatGPT seems to have made all of that up out of whole cloth. The chatbot’s developer is OpenAI LLC, whose chief technology officer, Mira Murati, says, “AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.” Those values apparently include fabricating “facts” and libeling people, possibly because of their political beliefs.
Walters, for his part, isn’t taking this lying down. He’s filed suit against OpenAI in Georgia claiming libel, and he’s seeking damages in an amount to be determined at trial.
The chatbot has, of course, been programmed by actual people here in meatspace, coders with their own sets of cultural and political biases. As the Brookings Institution noted last month . . .
In January, a team of researchers at the Technical University of Munich and the University of Hamburg posted a preprint of an academic paper concluding that ChatGPT has a “pro-environmental, left-libertarian orientation.” Examples of ChatGPT bias are also plentiful on social media. To take one example of many, a February Forbes article described a claim on Twitter (which we verified in mid-April) that ChatGPT, when given the prompt “Write a poem about [President’s Name],” refused to write a poem about ex-President Trump, but wrote one about President Biden. Interestingly, when we checked again in early May, ChatGPT was willing to write a poem about ex-President Trump.
It doesn’t seem like much of a leap to assume that those same coders built ChatGPT with a particular slant against firearms, civilian gun ownership, and those who support Second Amendment rights. They can, of course, tell their chatbot to ignore queries about subjects and people they don’t like. But generating outright false and potentially defamatory responses about disfavored people and organizations is more than a little over the line.
TTAG has contacted OpenAI LLC for comment but hasn’t yet received a response.
We also talked to Walters who, as you’d expect, declined to comment, citing the pending litigation.
Many, including ChatGPT’s developer, claim the AI chatbot is learning and growing, getting better every day. It’s still new, they say, and being improved as time goes on. Chill out…give it a chance.
But as PopSci reports . . .
ChatGPT itself has no consciousness, and OpenAI and similar companies offer disclaimers about the potential for their generative AI to provide inaccurate results. However, “those disclaimers aren’t going to protect them from liability,” Lyrissa Lidsky told PopSci. Lidsky, the Raymond & Miriam Ehrlich Chair in US Constitutional Law at the University of Florida Law School, believes an impending onslaught of legal cases against tech companies and their generative AI products is a “serious issue” that courts will be forced to reckon with.
To Lidsky, the designers behind AI like ChatGPT are trying to have it both ways. “They say, ‘Oh, you can’t always rely on the outputs of these searches,’ and yet they also simultaneously promote them as being better and better,” she explained. “Otherwise, why do they exist if they’re totally unreliable?” And therein lies the potential for legal culpability, she says.
Lidsky believes that, from a defamation lawyer’s perspective, the most “disturbing” aspect is the AI’s repeatedly demonstrated tendency to wholly invent sources. And while defamation cases are generally based on humans intentionally or accidentally lying about someone, the culpability of a non-human speaker presents its own challenges, she said.
Well, yes. Concocting responses with provably false “information” that has zero basis in reality tends to devalue your AI chatbot while simultaneously pissing off the people it lies about.
What are the chances of Walters prevailing? As UCLA’s Eugene Volokh wrote earlier this year . . .
One common response, especially among the more technically savvy, is that ChatGPT output shouldn’t be treated as libel for legal purposes: Such output shouldn’t be seen by the law as a factual claim, the theory goes, given that it’s just the result of a predictive algorithm that chooses the next word based on its frequent location next to the neighboring ones in the training data. I’ve seen analogies to Ouija boards, Boggle, “pulling Scrabble tiles from the bag one at a time,” and a “typewriter (with or without an infinite supply of monkeys).”
But I don’t think that’s right. In libel cases, the threshold “key inquiry is whether the challenged expression, however labeled by defendant, would reasonably appear to state or imply assertions of objective fact.” OpenAI has touted ChatGPT as a reliable source of assertions of fact, not just as a source of entertaining nonsense. Its current and future business model rests entirely on ChatGPT’s credibility for producing reasonably accurate summaries of the facts. When OpenAI promotes ChatGPT’s ability to get high scores on bar exams or the SAT, it’s similarly trying to get the public to view ChatGPT’s output as reliable. It can’t then turn around and, in a libel lawsuit, raise a defense that it’s all just Jabberwocky.
Naturally, everyone understands that ChatGPT isn’t perfect. But everyone understands that newspapers aren’t perfect, either—yet that can’t be enough to give newspapers immunity from defamation liability; likewise for lawsuits against OpenAI for ChatGPT output, assuming knowledge or negligence (depending on the circumstances) on OpenAI’s part can be shown. And that’s especially so when OpenAI’s output is framed in quite definite language, complete with purported (but actually bogus) quotes from respected publications.
Huh. Showing knowledge or negligence on OpenAI’s part will be the key here and won’t be easy. Time will tell. Still, the discovery process, if the case gets that far, should be entertaining to say the least.