OpenAI faces the first known defamation lawsuit over content generated by artificial intelligence (AI), after allegedly defamatory claims made by its chatbot, ChatGPT. Georgia-based radio host Mark Walters filed the lawsuit against OpenAI this week, asserting that the company should be held responsible for the statements made by ChatGPT. The case stems from an incident in which Fred Riehl, Editor-in-Chief of AmmoLand, asked ChatGPT for a summary of the Washington court case Second Amendment Foundation v. Ferguson, which he was researching for an article.
However, instead of providing a summary of the case, the chatbot responded with false information about Walters, claiming that he was the foundation's Treasurer and Chief Financial Officer and had engaged in fraudulent activities, including embezzling funds. The lawsuit claims that ChatGPT stated Walters had manipulated financial records, made unauthorised personal expenses, and failed to submit accurate or timely financial reports and disclosures. None of this was true: Walters had never worked for, nor been affiliated with, the Second Amendment Foundation, and the case did not even involve financial fraud. Walters is seeking punitive damages over the AI's "hallucinations", which ChatGPT reportedly repeated even when Riehl asked it to confirm the information.
Although Walters is the first to bring AI hallucinations to court, similar cases may follow as AI systems continue to manufacture false information. In April, an Australian mayor threatened to sue OpenAI after ChatGPT claimed he was a criminal convicted in a bribery scandal, when in reality he had been the whistleblower in the case.
In another incident, a lawyer who used ChatGPT for a legal filing discovered that the chatbot had cited nonexistent cases it simply made up.