AI Is Now the Witness in Litigation

SUMMARY Federal courts now treat AI prompts, outputs and decision logs as discoverable evidence, applying standard discovery rules to a category of information most businesses have never thought to govern. Two landmark decisions issued in late 2025 and early 2026, including United States v. Heppner, in which a criminal defendant lost privilege protection over 31 Claude-generated documents, signal that unsupervised AI use creates serious legal exposure. Governance policies, retention protocols and attorney oversight are no longer optional.

How AI Outputs, Prompts and Decision Logs Are Reshaping Discovery

A company deploys an AI tool to analyze vendor contracts. The sales department uses a chatbot to draft customer proposals. Finance runs AI-generated forecasts that land in board presentations. All of this qualifies as smart, modern business practice. But almost no one stops to ask what happens when litigation follows.

The answer is increasingly clear. AI-generated outputs, the prompts employees typed, the decision logs the system maintained and the automated analyses the tool produced may all surface as discoverable evidence in litigation. Courts are no longer treating AI interactions as categorically different from email or text messages. They are applying familiar discovery principles to unfamiliar technology, and the results are creating significant new exposure for businesses that have not thought carefully about their AI governance.

This is not a theoretical concern. It is happening now, in federal courts across the country, in disputes ranging from criminal investigations to M&A earnout litigation to copyright infringement cases. Startups and growing businesses that have adopted AI tools without documented governance policies face the sharpest vulnerability.

The Legal Framework: Discovery Has Not Changed, but What Is Discoverable Has

Federal Rule of Civil Procedure 26(b)(1) permits parties in civil litigation to obtain discovery of any nonprivileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case. That standard has not changed. What has changed is the volume and variety of potentially relevant information that AI tools generate.

Courts are beginning to treat AI prompts and outputs the way they treat internet search histories or internal emails: as windows into what a party knew, when they knew it and what they intended to do about it. The logic is direct. If an employee typed a question into an AI tool asking how to void a contract with a particular vendor, and that vendor later sues for breach, the prompt and the AI’s response may go directly to the issues in dispute.

Arnold & Porter’s litigation practice has noted that discovery requests may seek prompts, outputs, AI-generated summaries, transcripts and drafts when they relate to the dispute. Metadata, logs showing timestamps and model identifiers, and administrative records showing who had access and under what settings may also fall within scope.

The Defining Cases: Courts Are Setting the Rules in Real Time

Two decisions issued within months of each other in late 2025 and early 2026 illustrate both the risk and the uncertainty businesses now face.

In In re OpenAI, Inc., Copyright Infringement Litigation (S.D.N.Y. Dec. 2, 2025), Magistrate Judge Ona Wang compelled the production of millions of AI interaction logs, including user prompts and model responses. The court concluded these logs were relevant and proportional to the plaintiffs’ claims that OpenAI’s systems had reproduced copyrighted works. Privacy concerns, the court found, could receive adequate protection through anonymization and protective orders, but did not categorically bar production.

Then came United States v. Heppner (S.D.N.Y. Feb. 17, 2026), a case with immediate and sobering implications for anyone who has ever used a commercial AI platform to think through a legal or business problem. After receiving a grand jury subpoena and retaining defense counsel, the defendant independently used the consumer version of Claude, Anthropic’s AI platform, to generate 31 documents analyzing his legal exposure, potential defenses and strategic options. The government obtained those documents through a search warrant. Judge Jed Rakoff ruled they were not protected by either the attorney-client privilege or the work product doctrine.

The Heppner decision rests on three points that every business user of AI must understand. First, an AI platform is not an attorney and cannot form an attorney-client relationship. Second, the user had no reasonable expectation of confidentiality because Anthropic notifies users that it collects inputs and outputs for model training. Third, even if the documents carried the feel of legal analysis, they lacked the protection that actual attorney involvement would have provided. As Mayer Brown’s M&A practice has observed, the jurisprudence on AI and privilege lags behind the dramatic rise of AI use in deal-making, and courts are now beginning to engage with the consequences.

AI-Assisted Legal Analysis Is Dangerous Without a Lawyer in the Loop

The privilege question may be the most urgent issue businesses must address. Many business users intuitively assume that AI-assisted legal analysis is somehow shielded from discovery. It is not, absent actual attorney involvement.

Some courts have extended work product protection to prompts crafted by legal counsel that contain an attorney’s mental impressions. A Northern District of California court recognized this possibility in Concord Music Group, Inc. v. Anthropic PBC (N.D. Cal. May 23, 2025), even as it denied the specific requests for production of AI search prompts. But that protection belongs to the attorney, not to the business user working independently.

The line matters enormously. An employee who uses a company AI tool to draft a risk analysis of a pending deal, without any attorney involvement, has likely created a discoverable document. The same analysis prepared at the direction of counsel, reflecting counsel’s mental impressions and legal strategy, carries a defensible claim to work product protection. Businesses that encourage employees to use AI for sensitive legal and business analysis without attorney supervision are generating documents they will struggle to protect later.

M&A and Post-Closing Disputes: A New Frontier of Exposure

For companies engaged in mergers, acquisitions and venture financing, the risk profile is particularly acute. Deal teams routinely use AI tools to analyze target companies, model earnout scenarios, assess integration risks and draft representations and warranties. When those transactions later become the subject of earnout disputes, breach of representation claims or post-closing litigation, the AI interactions that shaped the deal may be fair game in discovery.

Consider a practical scenario. A deal team uses an AI tool to model three revenue scenarios for a target company. Those outputs inform the valuation and the earnout structure that gets negotiated. The deal closes. Two years later, the seller sues, claiming the earnout was structured to be unachievable. The AI interaction logs showing how the projections were modeled, what inputs went into the system and what outputs the team relied upon may now sit in the discovery production owed to the plaintiff.

This is not speculation. Mayer Brown’s corporate practice has expressly warned deal practitioners that AI-generated content from deal processes may become litigation ammunition in post-closing disputes and that the jurisprudence on privilege protection for this category of information remains unsettled.

Employment and Automated Decision-Making: The Vendor Shield Is Gone

Businesses that use AI tools for employment decisions face a separate and growing exposure. In Mobley v. Workday, a court in the Northern District of California ordered Workday to produce a list of every customer that had used its AI hiring features since September 2020, exposing hundreds of companies to potential discrimination claims based on a single lawsuit against the vendor. California’s Employment Regulations Regarding Automated Decision Systems, effective October 1, 2025, make clear that employers bear responsibility for discriminatory decisions made by third-party AI systems, even when they had no role in building those systems.

The lesson is direct. Buying an AI tool from a vendor does not outsource the legal liability that flows from using it. The business that deploys the tool carries the compliance burden, and if that tool generates outputs that inform employment decisions, those outputs and the prompts that generated them may become exhibit A in a discrimination claim.

What Businesses Must Do Now

Courts are not waiting for Congress to act or for AI-specific discovery rules to emerge. They are applying existing frameworks to this new category of information. Businesses that are not managing their AI footprint with litigation in mind are taking a risk that grows with every month of unmanaged AI adoption.

A defensible AI governance posture starts with knowing what tools employees are using, what data they are inputting and where the outputs are stored. From there, the analysis mirrors what good information governance has always required: retention policies that reflect legal hold obligations; training that tells employees what is and is not appropriate to input into commercial AI platforms; and clear protocols for when attorney involvement is required before AI-assisted analysis is created or relied upon.

Businesses engaged in litigation or that reasonably anticipate a dispute should treat potentially relevant AI interactions the same way they treat email and chat messages. Epstein Becker Green’s litigation practice has noted that once litigation is reasonably anticipated, a duty to preserve potentially relevant evidence attaches, and that duty now extends to AI prompts, outputs and decision logs.

The proportionality principle still applies. Not every AI interaction in a company will be discoverable in every lawsuit. Relevance and proportionality remain meaningful limits, and objections grounded in those principles are available when AI-related material is not directly connected to the issues in dispute. But businesses should treat the starting assumption as settled: relevant AI interactions are discoverable, and governance must rest on that premise.

The Bottom Line for Startups and Growing Businesses

Startups and scaling companies are often the least prepared for this shift. They adopt AI tools quickly, without formal governance structures, and their employees use commercial platforms in ways that generate significant volumes of potentially sensitive information. They routinely make important legal and business decisions with AI assistance but without attorney supervision, creating documents that lack privilege protection.

The corrective steps are not technically complex. They require clear policies, employee training and, in many cases, a straightforward conversation with outside counsel about where AI use is creating exposure. Companies that build these practices now, before litigation arises, will be in a significantly better position than those that address the issue only after they receive a discovery request.

For transactional clients, the discovery risk outlined in this article is not a litigation problem to be managed after the fact. It is a deal structuring and governance problem to address before documents are created, before transactions close and before disputes arise. Companies that want to integrate AI governance into their broader legal and compliance framework are welcome to reach out to discuss how outside general counsel support can address these issues as a matter of business practice rather than crisis response.

