AI Chat Logs Are Now Court Evidence
A February 2026 federal ruling found AI chatbot logs admissible as evidence. Law firms are now rewriting client contracts to warn that those conversations may waive attorney-client privilege.

What to Know
- United States v. Heppner — a February 2026 federal ruling found that 31 documents generated using Anthropic's Claude were not protected by attorney-client privilege
- More than a dozen major U.S. law firms have since issued client advisories warning that AI chatbot conversations carry no legal protection in court
- New York firm Sher Tremonte became one of the first to add explicit AI disclosure warnings to client engagement contracts
- The pattern emerging from courts: represented parties who use consumer AI on their own are exposed; self-represented litigants in civil cases may have more protection
Attorney-client privilege does not extend to your ChatGPT conversations — and U.S. courts are starting to make that crystal clear. A federal ruling handed down in February 2026 confirmed that AI chatbot logs can be seized, subpoenaed, and used against a defendant in court. The legal profession is now in overdrive trying to figure out what that means for every client who has ever typed a legal question into Claude or GPT.
What the Heppner Ruling Actually Said
The case that sparked all this is United States v. Heppner, decided in February 2026 by Judge Jed Rakoff of the Southern District of New York. Bradley Heppner, former chair of bankrupt financial services company GWG Holdings, had been indicted on five federal counts including securities fraud and wire fraud. After receiving a grand jury subpoena, he turned to Anthropic's Claude on his own to start mapping out his defense. He generated 31 documents. The FBI later seized them from his home.
Rakoff ruled those documents were not shielded by attorney-client privilege for three reasons. First, Claude is not an attorney. Second, Anthropic's privacy policy explicitly reserves the right to share user data with third parties, including government regulators. Third, and critically, Heppner acted entirely on his own without his lawyers directing him to do so. No attorney-client relationship 'could exist,' Rakoff wrote, 'between an AI user and a platform such as Claude.'
This was a first-of-its-kind written judicial opinion in the United States on the question of AI and attorney-client privilege. The ruling did more than answer one defendant's motion — it handed prosecutors a roadmap and rattled the entire legal profession.
Law Firms Are Rewriting Contracts in Real Time
Within weeks of the ruling, more than a dozen major U.S. law firms had issued formal client advisories. Some sent emails. Others went further — embedding warnings directly into the engagement contracts clients sign before representation even begins.
New York firm Sher Tremonte, which regularly handles white-collar criminal defense, added language to a March 2026 engagement agreement stating that 'disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege.' That contract language is believed to be among the first instances of a court ruling being translated into a formal contractual obligation for clients.
Other firms are pushing clients toward so-called 'closed' enterprise-grade AI systems, though even that comes with a caveat. O'Melveny & Myers and other firms have acknowledged that enterprise AI remains largely untested in court on this question. The guidance is essentially: be careful, use vetted tools, and understand that no one really knows where the lines are yet.
The Kovel Doctrine Lifeline — and How Firms Are Using It
Rakoff himself left a door open during the Heppner hearing. He noted that had counsel actually directed Heppner to use Claude, the AI 'might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of the attorney-client privilege.' That language is now something lawyers are actively building protocols around.
The reference is to the Kovel doctrine, a legal principle that can extend attorney-client privilege to non-lawyers working as an attorney's agent. Debevoise & Plimpton turned this into concrete client advice: if a lawyer instructs a client to use an AI tool for research, the client should write that fact inside the chatbot prompt itself. The firm suggested language like: 'I am doing this research at the direction of counsel for X litigation.' Whether that will hold up in court remains untested, but it is the kind of tactical maneuvering that this ruling has forced.
Does Any AI Use Get Privilege Protection?
The short answer: sometimes, depending entirely on who you are and how you used the tool. In Warner v. Gilbarco, a court ruled that a self-represented plaintiff's ChatGPT conversations were protected as work product. The reasoning: AI tools are 'tools, not persons,' and sharing information with software is not the same as disclosing it to an adversary. A Colorado court reinforced that logic on March 30, 2026, in Morgan v. V2X, also protecting a pro se litigant's AI work product. That ruling went further, though: the court ordered the plaintiff to disclose which AI tool he used and banned confidential discovery materials from being fed into platforms that allow data training.
The pattern that's emerging is fairly stark. If you're a represented party who decided on your own to use a consumer AI chatbot, you're exposed. If you're representing yourself in a civil case, you may have more cover. That difference is now one of the sharpest fault lines in U.S. evidence law, and courts are drawing the line case by case.
Justin Ellis of MoloLamken told reporters that more rulings will eventually clarify when AI chats can be used as evidence. Until then, the guidance coming from law firms is the clearest signal available — and it sounds a lot like: watch what you type.
AI Is Entering Courts From Both Sides
There's an uncomfortable irony in all of this. The same technology that just burned a fraud defendant is now being piloted by Los Angeles Superior Court judges to help manage their caseloads. A tool called Learned Hand summarizes filings, organizes evidence, and generates draft rulings in civil cases. The goal is to cut down on administrative work so judges can focus on the parts that require actual legal judgment.
So AI is entering the courtroom from both sides simultaneously. Defendants are using it to prep their cases. Judges are using it to process them. And the rules governing any of this are being written in real time, one ruling at a time.
The legal profession's version of clarity on all this is showing up in engagement letters, client emails, and advice that would have seemed bizarre two years ago: be very deliberate about what you type into a chatbot, because a judge may end up reading it.
Frequently Asked Questions
Can AI chatbot conversations be used as evidence in court?
Yes. A February 2026 federal ruling in United States v. Heppner confirmed that conversations with Anthropic's Claude were admissible as evidence. The court found no attorney-client privilege applied because the defendant used AI independently, without direction from his lawyers, and because Anthropic's privacy policy allows data sharing with third parties.
Does attorney-client privilege protect AI chat logs?
Generally no, if you used a consumer AI chatbot on your own without your lawyer's direction. The Heppner ruling established that no attorney-client relationship can exist between a user and an AI platform. However, if an attorney explicitly directs a client to use AI for legal research, courts may treat it differently under the Kovel doctrine.
What are law firms advising clients about AI chatbots in 2026?
More than a dozen major U.S. firms have issued formal advisories warning clients that AI chatbot conversations carry no legal protection. Sher Tremonte added explicit warnings to client contracts in March 2026. Other firms recommend enterprise-grade AI systems and advise clients to note in their prompts when use is directed by counsel.
What is the Kovel doctrine and how does it relate to AI?
The Kovel doctrine allows attorney-client privilege to extend to non-lawyers working as an attorney's agent. Judge Rakoff suggested in the Heppner case that if a lawyer directed a defendant to use AI, it might qualify for Kovel protection. Debevoise & Plimpton now advises clients to state in their AI prompts when research is done at counsel's direction.