Legal ethics in question as lawyer uses AI to create fake evidence

Image source: © Getty | NurPhoto

9:18 AM EST, February 8, 2024

The year 2023 was marked by the rapid development of artificial intelligence, including ChatGPT. Sam Altman, CEO of OpenAI, the company behind the tool, was nominated for Time's Person of the Year title, which was ultimately awarded to Taylor Swift. Although new technologies are supposed to help, sometimes they can also lead to big problems, as happened to a lawyer from New York.

Recently, Reuters reported that New York lawyer Jae Lee is facing disciplinary proceedings after it turned out that she had cited in court a case that never existed and had been invented for her by OpenAI's ChatGPT.

Lee represented a client in a lawsuit against a Queens doctor who had allegedly botched an abortion. In appealing her client's case, the lawyer cited a non-existent court decision. When it turned out that the cited case had never taken place and had only been generated by AI, the 2nd U.S. Circuit Court of Appeals referred Lee to its grievance panel. In its disciplinary review, the court stated that her approach "falls well below the basic obligations of counsel."

Lee addressed the matter in an e-mail quoted by Reuters, admitting that she had added the court decision suggested by OpenAI's ChatGPT to her client's appeal. However, she underlined that there was "no bad faith, willfulness, or prejudice towards the opposing party or the judicial system."

Sources: Fox News, Reuters