Hawaiʻi is seeing a rising number of complaints against lawyers accused of misusing artificial intelligence (AI) tools in preparing legal documents, yet the state court system has yet to adopt concrete measures to address the problem.
The issue came to a head when a lawyer at one of Hawaiʻi’s oldest and most respected law firms admitted he had used an AI tool to research and draft a brief filed in Maui Circuit Court. The admission came after opposing counsel flagged “a disturbing number of fabricated and misrepresented” case citations in the document.
Honolulu attorney Kaʻōnohiokalā Aukai IV asked Judge Kelsey Kawano to disregard all six cases cited in the brief. Two of them turned out to be entirely fabricated, likely the product of what are known as “AI hallucinations.”
In a declaration, Aukai apologized for the error and pledged to verify the accuracy of case citations in future court filings. The ruling ultimately went in his favor: Judge Kawano declined to impose sanctions, even though the Hawaiʻi Rules of Civil Procedure allow them for submitting erroneous citations. Aukai did not return calls for comment.
Despite that outcome, the incident has stirred debate among Honolulu legal professionals over AI tools that can significantly boost productivity but are also prone to generating serious errors. The credibility of the legal system is at stake, and Ray Kong, the state’s chief disciplinary counsel, says misuse of AI by Hawaiʻi lawyers is a small but growing problem.
While federal courts in Hawaiʻi have taken a firm stance against such misuse, the state judiciary is still navigating how to handle these challenges. A Committee on Artificial Intelligence and the Courts was established in April 2024 by Chief Justice Mark Recktenwald, chaired by Supreme Court Justice Vladimir Devens and First Circuit Court Judge John Tonaki, to investigate the matter, with a final report expected in December.
Meanwhile, an interim report originally planned for December 2024 is still considered a “work in progress,” according to judiciary spokesman Brooks Baehr. Asked about the delay, Baehr said the report is a working document and not available for public release.
For now, judges are handling AI-related errors case by case, and the judiciary keeps no statistics on how often they occur.
The initial guidelines issued by the chief justice emphasize existing ethics rules requiring honesty in representations made to the courts. Attorneys making false statements can face sanctions under the Hawaiʻi Rules of Civil Procedure, a point reiterated in guidance provided to lawyers by Recktenwald. These guidelines indicate that the ethical obligations of lawyers remain unchanged in relation to the availability of AI.
The experience of Case Lombardi, the firm involved in the Maui brief, illustrates the complexities of sanctioning attorneys who submit flawed legal documents. Local firms appear to be taking precautions, adopting internal guidelines for AI use, particularly in producing court filings and client memos.
Paul Alston, a partner in the Honolulu office of Dentons, the world’s largest law firm, cautioned against professional use of such AI tools, calling them “a disaster.”
National experts share those concerns. Nancy Rappaport, a law professor at the University of Nevada Las Vegas, has noted that many practitioners, especially younger lawyers, place excessive trust in these tools and overlook their capacity to significantly distort existing case law.
Conversely, some attorneys argue that AI can vastly improve the efficiency of legal work. Mark M. Murakami, president of the Hawaiʻi State Bar Association and a member of the AI committee, emphasized that AI can significantly reduce the time required for specific tasks, allowing lawyers to serve more clients.
For example, Murakami used AI to prepare questions for a trial witness by analyzing relevant transcripts, compressing a one-hour task to just eight minutes. But growing reliance on AI for documents filed with courts raises new challenges: researchers such as Damien Charlotin, who tracks AI error cases, have compiled a database of more than 230 instances worldwide in which courts detected fake citations and arguments.
Charlotin has found 141 of those cases in the U.S. alone, and he suggests the true frequency is higher, since the database excludes a broader range of filings containing AI-generated inaccuracies.
In contrast to Hawaiʻi, other jurisdictions have responded with strict sanctions. In one notable California case, a federal court imposed $31,100 in sanctions against two law firms for submitting a brief replete with fabricated, AI-generated citations. Such penalties remain relatively rare, however, according to Charlotin, who said courts have tended to be lenient with professionals caught in similar predicaments.
Sanctions have typically been imposed when parties refuse to acknowledge errors, lie about them, or assign blame to someone else, highlighting a troubling tendency among certain practitioners.
In Hawaiʻi, only one documented case exists in which a lawyer was sanctioned for using a fictitious citation likely produced by an AI tool. The lawyer promptly admitted the error, apologized and accepted a $100 fine, even though the relevant rule of civil procedure did not authorize sanctions at that stage of the proceedings.
Calls for stricter measures are beginning to emerge within the legal community. Alston urged the courts to take a firmer stance to safeguard the judicial system’s integrity, asserting that the consequences for submitting fabricated law should be severe.
Ken Lawson, who teaches professional responsibility at the University of Hawaiʻi, added that misrepresenting case law is a substantial ethical violation, especially since attorneys relying on AI-generated citations may never have read the underlying cases. He argued these lapses also carry consequences for the fees clients are charged for work that lacks any substantiated legal backing.
The Office of Disciplinary Counsel (ODC) is responsible for investigating these issues, with Ray Kong asserting that while the number of complaints is not vast, it is on the rise.
Most complaints relate to fictitious citations or incorrectly interpreted legal cases. Kong stated, “Even if it’s unintentional, you’re still misrepresenting a case.”
Lawson and Alston also raised the question of proper supervision by senior lawyers in firms. Although Aukai accepted responsibility for the flawed brief, they argued that his supervising partner, Michael Lam, ultimately bears responsibility for the lapse. Had Lam properly reviewed the brief, they contended, he would have caught the problems before it was submitted to the court under his name. Alston emphasized that partners have an ethical obligation to adequately supervise their staff to uphold the integrity of the legal profession.
Lam declined to comment, saying the court record adequately reflects the situation.