First lawsuit against ChatGPT: I am offended by what the robot said, Your Honour

A radio talk show host in the United States has sued artificial intelligence company OpenAI for defamation over statements generated by its chatbot, ChatGPT. This is the first defamation lawsuit arising from ChatGPT's output and may prove a landmark moment, with defamation law being applied to a new frontier of artificial intelligence and publication. It is a reminder of the risks associated with artificial intelligence and may be indicative of future cases in this space: a regional Australian mayor has previously said that he may commence proceedings against OpenAI if it does not correct ChatGPT's false claims about him.

What is the claim about?

On 5 June 2023, Mark Walters commenced a claim in the Superior Court in the State of Georgia against OpenAI, alleging that its chatbot, ChatGPT, defamed him by generating “false and malicious” accusations.

The claim related to the use of ChatGPT by journalist Fred Riehl. Mr Riehl asked ChatGPT to summarise a Washington federal court case (Second Amendment Foundation v. Robert Ferguson) and provided ChatGPT with a link to the lawsuit. Mr Walters was not named in the lawsuit.

ChatGPT responded by creating a convincing and detailed summary of the proceedings. The summary stated that Mr Walters had defrauded funds from the Second Amendment Foundation, a not-for-profit organisation, and that the foundation had accused him of manipulating bank statements and omitting proper financial disclosures to the foundation’s leadership in order to conceal his theft of assets. None of this was correct: Mr Walters was not, and never had been, involved with the foundation.

ChatGPT can ‘hallucinate’

Artificial intelligence can ‘hallucinate’. That is, it may give a confident, fluent response that is factually incorrect, because the model generates text that is statistically plausible rather than verified against a reliable source. These hallucinations often arise because the model lacks real-world understanding and has no built-in constraint requiring its output to be accurate.
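For readers curious about the mechanics, the sketch below is a deliberately simplified illustration (not OpenAI’s actual implementation, and the word probabilities are invented) of why a language model can state falsehoods so fluently: it repeatedly samples the next word from a probability distribution learned from text, and no step in the process checks the output against the facts.

```python
import random

# Toy next-word probabilities, standing in for patterns a model might have
# learned from training text. The values here are hypothetical; a real large
# language model works over an enormous vocabulary, but the principle is the
# same: it chooses plausible continuations, not verified ones.
model = {
    "Mr Walters": [("was", 0.6), ("has", 0.4)],
    "was": [("accused", 0.5), ("treasurer", 0.5)],
    "accused": [("of", 1.0)],
    "of": [("fraud.", 0.7), ("embezzlement.", 0.3)],
}

def generate(prompt: str, max_words: int = 5) -> str:
    """Sample one word at a time from the learned distribution.

    Note that there is no fact-checking step anywhere: the model emits
    whatever continuation is statistically plausible, which is why fluent
    but false statements ('hallucinations') can appear.
    """
    words = [prompt]
    current = prompt
    for _ in range(max_words):
        options = model.get(current)
        if not options:
            break
        choices, weights = zip(*options)
        current = random.choices(choices, weights=weights)[0]
        words.append(current)
    return " ".join(words)

print(generate("Mr Walters"))
# Possible output: "Mr Walters was accused of fraud."
# Plausible-sounding, yet grounded in nothing but word statistics.
```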

In Mr Walters’ case, the hallucination has been attributed to his online commentary on gun rights in the United States, which appears to have led ChatGPT to associate him with the foundation and to produce a factually incorrect summary.

Although the parties accept that the information produced by ChatGPT is incorrect, it may nonetheless be difficult for Mr Walters to prove that ChatGPT defamed him.

It is unclear how Australian courts would approach ChatGPT publications in defamation claims. Questions of jurisdiction, and of who has sufficient control over and responsibility for a defamatory publication in this context, are novel and challenging issues. Legislative reform may be required before Australian courts can respond adequately to advances in artificial intelligence.

Lavan previously published the article ‘Defamation 101 – What You Need to Know’, which summarises the law of defamation in Western Australia and how a claim may be brought or defended. The article also provides useful guidance on how to minimise the risk of publishing a defamatory communication.

Lavan comment

Similar cases may emerge in Australia in the near future. In April 2023, an Australian regional mayor publicly commented that he had asked OpenAI to remove ChatGPT’s false claims that he had served time in prison for bribery. In reality, the mayor was never charged with a crime; he was, in fact, a whistle-blower who helped uncover an international bribery scandal in 2000.

It will be interesting to see how Australian courts apply defamation law to artificial intelligence, particularly in light of the upcoming 2024 reforms to the Model Defamation Provisions.

In the meantime, these examples are a timely reminder to be aware of the risks associated with using (and relying on) artificial intelligence, particularly if you use ChatGPT in a professional context.

To read more about ChatGPT, its potential for illegal or unethical use and the implications for the legal landscape, please see Lavan’s article ‘Artificial Intelligence is rapidly changing the way we live and work, and its applications are wide-ranging and diverse’.

If you have any concerns or questions in relation to the use of an artificial intelligence system, or artificial intelligence generally, please contact Iain Freeman, Cinzia Donald or Millie Richmond-Scott.

Disclaimer – the information contained in this publication does not constitute legal advice and should not be relied upon as such. You should seek legal advice in relation to any particular matter you may have before relying or acting on this information. The Lavan team are here to assist.