Dear Counsel, Meet Exhibit A: Your Client’s ChatGPT History
- By Sara H. Jodka, Jeffrey H. Kass, Gregory L. Ewing, and Katie D. Franklin
- Industry Alerts
Privilege and Work Product in the Age of AI
Depending on which court you ask, your latest prompt to an artificial intelligence (“AI”) chatbot is either a protected private thought or a voluntary disclosure to someone other than your lawyer. In a legal landscape struggling to keep pace with technology, two ‘first-of-their-kind’ rulings—United States v. Heppner and Warner v. Gilbarco—have created a high-stakes divide over whether AI is a mere tool or a third party that renders communications discoverable by the other side.
When Bradley Heppner turned to the AI ‘Claude’ to help synthesize his defense strategy, he thought he was using a modern drafting tool; instead, a New York judge ruled he was effectively shouting his secrets in a “crowded public elevator.” Meanwhile, a Michigan court recently reached the opposite conclusion, shielding AI interactions as protected work product and declaring that AI is a “tool, not a person.”
These are not truly “AI good” vs. “AI bad” rulings. They are a reminder that privilege and work product protections rise and fall on old-school doctrine: confidentiality, purpose, and lawyer involvement. But the practical effect is new school: clients’ prompts and outputs may become the next hot category of discoverable ESI.
Shouting in an elevator: Judge Rakoff’s Decision in United States v. Heppner
In the landmark ruling of United States v. Heppner, Judge Jed S. Rakoff of the Southern District of New York held that communications between a defendant and the generative AI platform Claude are protected by neither the attorney-client privilege nor the work product doctrine. In applying the standard elements (communication between client and attorney, intended and kept confidential, for the purpose of obtaining/providing legal advice), the court determined that attorney-client privilege was inapplicable because an AI platform is not a licensed professional and cannot establish the "trusting human relationship" required for the privilege to attach. Furthermore, the court found no reasonable expectation of confidentiality in the exchange; because the defendant used a consumer-grade version of the tool, its terms of service permitted the provider to review prompts for training purposes and disclose data to government authorities. The court reminded parties that what matters is whether the person intended to obtain legal advice from Claude at the time—not whether the output was later shared with counsel. And as the court noted, Claude disclaimed providing legal advice and suggested consulting a qualified attorney.
The court also rejected the work product doctrine claim because the defendant had used the AI tool of his "own volition" rather than “by or at the behest of counsel,” and the exchanges did not reflect counsel’s strategy at the time. Consequently, the generated reports reflected the defendant’s own independent research and defense theories rather than the protected mental processes or legal strategies of his attorneys. While the decision serves as a cautionary tale for the use of “unsecured public AI tools,” Judge Rakoff noted that the outcome “might arguably” differ if an attorney specifically directed a client to use a platform, or if the parties used enterprise-grade tools that provide contractual guarantees of confidentiality and prohibit model training on user inputs.
Still, it remains a warning to litigants. If a party uploads discovery materials, pleadings, and other documents to an AI platform and asks it to provide legal strategies, generate ideas, or identify strengths and weaknesses without the involvement of counsel, those inputs and AI outputs could be discoverable. That risk also makes AI use a powerful discovery tool for the other side, which may learn in a deposition that a witness undertook such activities without counsel’s involvement.
AI is just another tool: Judge Patti’s Decision in Warner v. Gilbarco, Inc.
In Warner v. Gilbarco, Inc., Magistrate Judge Anthony P. Patti of the Eastern District of Michigan denied the defendants’ motion to compel production of all documents and information regarding the plaintiff’s use of third-party generative AI tools, such as ChatGPT, for litigation preparation. The court ruled that such information can be protected from discovery under the work-product doctrine and that the request lacked the necessary relevance and proportionality to the merits of the case.
The court’s decision rested on several key legal conclusions. The court ruled that using generative AI tools like ChatGPT does not waive work-product protection because waiver requires disclosure “to an adversary or in a way likely to get in an adversary’s hand.” As the court bluntly put it: “ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background.” Judge Patti emphasized that an individual’s AI inputs and software-reformatted outputs reflect protected “mental impressions” and internal drafting processes, which are shielded from discovery. Ultimately, the court concluded that it would “uphold the protections afforded the thought processes and litigation strategies of both sides” and dismissed the defendants’ attempt to obtain AI-usage data, classifying it as an irrelevant and disproportionate “fishing expedition” that distracts from the actual merits of the case.
On their face, the Heppner and Warner decisions appear to reach opposite conclusions about the same kind of materials. But in Warner, unlike in Heppner, the information uploaded to ChatGPT was already work product. The facts differ materially, and the two decisions are not necessarily in conflict.
Key Takeaways
- Attorney-Client Privileged Information Cannot Be Disclosed and Requires a Lawyer
- Terms of service and privacy policies are key. Entering your confidential information into a public, consumer-grade AI platform is likely to constitute disclosure to a third party, which waives attorney-client privilege. Anthropic’s privacy policy allows the company to review prompts for safety, train its models on user data, and disclose information to government authorities. This policy was critical to the Heppner court’s conclusion that the defendant could have no "reasonable expectation of confidentiality."
- AI is a tool, not a lawyer. Because attorney-client privilege (and likely other means of protecting confidences) requires a “trusting human relationship” with a licensed professional, a conversation with an AI platform like Claude cannot be privileged. An AI is not an attorney and owes no fiduciary duties.
- Work Product Protection Requires Attorney Direction
- If you want to protect information as work product, make sure the AI conversations are “at the direction” of counsel. In Heppner, work product protection failed primarily because the defendant acted “of his own volition” and not at the direction of counsel. As the court noted, the work product doctrine is designed to protect a lawyer’s mental processes and strategies. Because the defendant used the AI tool independently to synthesize his own defense theories, the results were viewed as a layperson’s independent research rather than protected legal work.
Enterprise AI and Supervised Use Offer a Path Forward
What does this mean for lawyers and clients?
- Do not assume “I used AI” equals waiver. Warner will be cited for the proposition that work product is not automatically waived by using a generative AI tool, and that demanding AI prompts/outputs can be an improper attempt to force disclosure of mental impressions.
- Do not assume “I used AI” equals privilege. Heppner will be cited for the proposition that direct communications with a public AI platform are not attorney-client communications and may not be confidential—especially when the user acts without counsel’s direction and under consumer terms the court views as non-confidential.
- Do expect privilege fights to become platform fights. The “what tool, what tier, what settings, what contract” questions are going to matter more than ever. The factual record around AI product configuration is quickly becoming the new metadata.
Our advice? Now is the time to implement technical guardrails and an AI usage policy. Here are key components to consider:
- Enterprise vs. Consumer Tools: Judge Rakoff suggested that Heppner might have come out differently had the user employed enterprise-grade AI tools that contractually guarantee data privacy and prohibit model training on user inputs. Make sure that any tool you use for confidential conversations will not share those conversations.
- The Kovel Doctrine: Judge Rakoff suggested that if an attorney specifically directs a client to use an AI tool, it could arguably function as a "lawyer’s agent" under the Kovel doctrine, potentially preserving privilege. Before putting your confidential legal information into an AI tool, be sure to talk with your counsel.
- Institutional Discipline: For companies with a legal department or outside counsel, implement a “Lawyer-in-the-Loop” framework in which all AI use for legal tasks is supervised by the legal department to ensure a sound basis for asserting privilege. Just as a company would not let its lawyers loose on its software’s code without supervision, keep engineers out of independent legal research.
Best Practices
As for what attorneys should be telling clients:
- For any threatened or actual litigation, do not paste facts, timelines, witness summaries, termination rationales, or attorney advice into consumer AI chatbots. Treat consumer AI like a third-party inbox you do not control.
- If AI will be used, make it counsel-directed and controlled: counsel selects and approves the tool, directs the use, documents the purpose, and limits use to enterprise software that contractually preserves confidentiality, restricts retention, and prohibits training on client inputs.
As for what attorneys should be doing:
- Update client engagement letters/litigation hold notices to identify what tools are approved, what categories of information must never be entered into unapproved tools, and that prompts/outputs may be discoverable and must be preserved during a hold. Assume opposing counsel will request AI prompts, outputs, chatlogs, and metadata, so build your litigation hold and collection plan accordingly.
- When negotiating a protective order or similar confidentiality stipulation, consider defining “Authorized AI Tools” for permitted use, prohibiting the uploading of protected materials into consumer/public AI tools, and addressing whether vetted, private AI use is deemed a waiver.
AI Use as a Sword
On the flip side of all the above, lawyers questioning adverse witnesses about their use of AI in litigation may well uncover unprotected uses (including prompts and outputs) that are discoverable and provide a leg up in prosecuting or defending an action.
Bottom line. If you are waiting for a clean bright-line rule like “AI always waives privilege” or “AI is just Word with opinions,” you are going to be waiting a while. For now, the practical rule is simpler: assume your AI workflow will be judged later by someone who does not care how convenient it was. Build the record today: confidentiality, counsel direction, and tool choice. Courts will litigate the details of how the tool was used, not just whether it was used.