
OpenAI sued for defamation after ChatGPT created bogus complaint accusing man of embezzlement


A Georgia man has sued OpenAI, the creator of ChatGPT, alleging that the popular chatbot produced a fake legal brief accusing him of fraud and embezzlement through a phenomenon AI experts call “artificial hallucinations.” It is the first defamation lawsuit filed against the maker of the generative AI tool.


  • According to Bloomberg Law, the case was filed in Georgia state court by radio host Mark Walters, who alleges that ChatGPT provided details of a bogus complaint to a reporter who sought information about an ongoing real trial.
  • The real lawsuit was filed by the Second Amendment Foundation against Washington State Attorney General Bob Ferguson, a case Mr. Walters is not involved in.
  • The radio host claims the chatbot responded to his inquiry about this real-life court case with a summary of an entirely fictional case, in which the founder of the Second Amendment Foundation supposedly sued Mr. Walters for “fraud and embezzlement” against the organization.
  • The report adds that Mr. Walters, host of the Armed American Radio show, is not involved in the Washington lawsuit and has never worked for the Second Amendment Foundation.
  • Forbes has contacted OpenAI for comment on the lawsuit.


The fake legal brief likely resulted from a relatively common problem with generative AI known as “artificial hallucinations”. This phenomenon occurs when a language model generates entirely false information without any warning, sometimes in the middle of otherwise accurate text. Hallucinated content can appear convincing, since it superficially resembles real information and may include fictitious quotes and made-up sources. ChatGPT states on its homepage that it may “sometimes produce incorrect information” or “produce harmful instructions or biased content”. When asked what AI hallucinations are, ChatGPT responds with a lengthy description of the problem, ending with: “It is important to note that AI hallucinations are not actual perceptions experienced by the AI system itself… These hallucinations refer to content generated by the AI system that may resemble human perceptions, but is generated entirely by the AI’s computational processes.”

Key background

Both OpenAI and competitors like Google have acknowledged concerns about artificial hallucinations, an issue some experts say could worsen the problem of misinformation online. When announcing its latest large language model, GPT-4, in March this year, OpenAI said it had “similar limitations” to previous models and warned: “It is still not entirely reliable (it ‘hallucinates’ facts and makes reasoning errors). Great care must be taken when using language model outputs, especially in high-stakes contexts, by following careful protocols (such as human review, grounding with additional context, or outright avoidance of high-risk uses).” Last month, OpenAI said it was working on a new AI training method aimed at solving the problem of artificial hallucinations.

Surprising fact

Last month, a Manhattan attorney sparked controversy after using ChatGPT to generate a legal brief for a personal injury case and submitting it to court. The AI-generated document, however, cited several cases that do not exist.

Article translated from the American magazine Forbes – Author: Siladitya Ray

