
No filter — Will ChatGPT’s hallucinations be allowed to ruin your life? Earliest lawsuits reveal how AI giants likely plan to dodge defamation claims.

Ashley Belanger – Oct 23, 2023 4:46 pm UTC

Bribery. Embezzlement. Terrorism.

What if an AI chatbot accused you of doing something terrible? When bots make mistakes, the false claims can ruin lives, and the legal questions around these issues remain murky.

That’s according to several people suing the biggest AI companies. But chatbot makers hope to avoid liability, and a string of legal threats has revealed how easy it might be for companies to wriggle out of responsibility for allegedly defamatory chatbot responses.

Earlier this year, an Australian regional mayor, Brian Hood, made headlines by becoming the first person to accuse ChatGPT’s maker, OpenAI, of defamation. Few seemed to notice when Hood resolved his would-be landmark AI defamation case out of court this spring, but the quiet conclusion to this much-covered legal threat offered a glimpse of what could become a go-to strategy for AI companies seeking to avoid defamation lawsuits.

It was mid-March when Hood first discovered that OpenAI’s ChatGPT was spitting out false responses to user prompts, wrongly claiming that Hood had gone to prison for bribery. Hood was alarmed. He had built his political career as a whistleblower exposing corporate misconduct, but ChatGPT had seemingly garbled the facts, fingering Hood as a criminal. He worried that the longer ChatGPT was allowed to repeat these false claims, the more likely it was that the chatbot could ruin his reputation with voters.

Hood asked his lawyer to give OpenAI an ultimatum: Remove the confabulations from ChatGPT within 28 days or face a lawsuit that could become the first to prove that ChatGPT’s mistakes, often called “hallucinations” in the AI field, are capable of causing significant harms.

We now know that OpenAI chose the first option. By the end of April, the company had filtered the false statements about Hood from ChatGPT. Hood’s lawyers told Ars that Hood was satisfied, dropping his legal challenge and considering the matter settled.

AI companies watching this case play out might think they can get by doing as OpenAI did. Rather than building perfect chatbots that never defame users, they could simply warn users that content may be inaccurate, wait for content takedown requests, and then filter out any false information, ideally before any lawsuits are filed.

The only problem with that strategy is the time it takes between a person first discovering defamatory statements and the moment when tech companies filter out the damaging information, if the companies take action at all.

For Hood, it took a month, but for others, the timeline has stretched on much longer. That has allegedly left some users in situations so career-threatening that they’re now demanding thousands or even millions of dollars in damages from two of today’s AI giants.

OpenAI and Microsoft both sued for defamation

In July, Maryland-based Jeffery Battle sued Microsoft, alleging that he lost millions and that his reputation was ruined after he discovered that Bing search and Bing Chat were falsely labeling him a convicted terrorist. That same month, Georgia-based Armed American Radio host Mark Walters sued OpenAI, claiming that ChatGPT falsely stated that he had been charged with embezzlement.

Unlike Hood, Walters made no attempt to get ChatGPT’s allegedly libelous claims about him removed before filing his lawsuit. In Walters’ view, libel laws don’t require ChatGPT users to take any extra steps to inform OpenAI about defamatory content before filing claims. Walters is hoping the court will agree that if OpenAI knows its product is generating responses that defame people, it should be held liable for publishing defamatory statements.
