Prestigious Law Firm Apologizes for AI Errors in Bankruptcy Filing
Sullivan & Cromwell, a leading Wall Street law firm, has issued an apology to a federal bankruptcy judge over significant AI-caused errors in a court document. In a letter dated April 18, 2026, Andrew Dietderich, co-head of the firm's global restructuring division, expressed regret for inaccuracies in an emergency motion filed in the Chapter 15 bankruptcy proceedings of Prince Global Holdings. The firm acknowledged that the filing was marred by AI-generated "hallucinations," including fictitious case names, fabricated quotes, incorrect summaries of judicial decisions, and erroneous references to sections of the US Bankruptcy Code.
Dietderich stated, “We sincerely regret the errors in the Motion and the burden they have imposed on the Court and the parties. I apologise on behalf of our entire team.” He noted that Sullivan & Cromwell has stringent internal guidelines for AI usage, instructing lawyers to “trust nothing and verify everything.” In this instance, however, the firm admitted those protocols were not properly followed.
The inaccuracies were flagged by opposing counsel from Boies Schiller Flexner, prompting the firm to submit a revised motion along with the apology. Sullivan & Cromwell represents liquidators from the British Virgin Islands managing the fallout from Prince Global Holdings, part of the Cambodia-based Prince Group. The case is tied to serious allegations against the group's founder, Chen Zhi, who faces US charges of wire fraud and money laundering linked to alleged forced-labor operations in Cambodia.
The firm has not identified the specific AI tool involved, though it reportedly holds an enterprise license for OpenAI's ChatGPT. Given that Sullivan & Cromwell partners typically charge over $2,000 per hour for high-stakes bankruptcy work, the incident is particularly damaging. It underscores the persistent risks of AI in legal practice: even elite firms with formal safeguards can stumble when their lawyers fail to meticulously verify AI-generated content.
