SUMMARY
Attorney-client privilege faces significant challenges with AI use. When lawyers input client information into AI systems, they risk waiving privilege by sharing confidential data with third-party providers. The Rules of Professional Conduct require attorneys to maintain confidentiality, demonstrate technological competence and ensure adequate safeguards. Lawyers should anonymize information, use specialized legal AI tools with confidentiality protections, understand data handling policies and maintain human oversight to protect privilege while leveraging AI capabilities.
The Confidentiality Problem

When attorneys input client information into AI systems like ChatGPT, Claude or specialized legal AI platforms, they potentially expose confidential communications to third parties. The AI providers themselves become recipients of privileged information, which raises a fundamental question under traditional privilege doctrine: does sharing information with these outside entities destroy the privilege altogether?
It is well established in legal practice that attorney-client privilege requires confidentiality. Once privileged information is shared with third parties who are not necessary to the legal representation, courts have historically found that the privilege is waived. AI providers are clearly third parties in this relationship, existing entirely outside the attorney-client dynamic. This creates significant risk for practitioners who may not fully appreciate the implications of their AI usage.
The situation becomes even more concerning when considering how AI systems operate. Many platforms retain conversation data for extended periods. Some use input data to train and improve their models, meaning client information could persist indefinitely in ways attorneys cannot control or even track. This ongoing exposure amplifies the confidentiality risks far beyond the initial moment of input.
Terms of service for popular AI tools often explicitly disclaim any confidentiality protections. Some providers claim broad rights to use, analyze or retain input data. These contractual terms directly conflict with the ethical obligations attorneys have to protect client confidences and maintain privileged communications.
Potential Legal Frameworks
Despite these risks, some legal scholars and practitioners argue that AI tools could fit within existing privilege frameworks if properly managed. The agency doctrine offers one potential path forward. Under this theory, AI systems might be treated as agents of the attorney, similar to how paralegals, translators or expert consultants function in legal practice. These individuals and services can access privileged information without destroying the privilege because they are considered necessary to the attorney’s representation of the client.
For this framework to apply to AI, however, attorneys would need to demonstrate appropriate safeguards and controls. The tool would need to be used in a manner that maintains confidentiality, does not share information beyond what is necessary and operates under the attorney’s direction and supervision. This is a difficult standard to meet with many current AI platforms.
Protecting Privilege in Practice
Attorneys who wish to use AI tools while protecting privilege must take proactive steps. The most straightforward protection is anonymization. Before inputting any case-related information into an AI system, lawyers should remove all identifying details about clients, opposing parties, witnesses and specific case facts that could reveal confidential information. This approach allows attorneys to benefit from AI assistance for general legal questions and analysis while maintaining confidentiality.
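As a rough illustration only, the sketch below shows what a simple scrubbing step might look like in Python before any text is sent to an outside AI service. The function name, placeholder labels and patterns are hypothetical, and no automated filter substitutes for careful human review of what actually leaves the firm.

```python
import re

def anonymize(text: str, identifiers: list[str]) -> str:
    """Return a copy of `text` with known identifiers and obvious PII masked.

    A minimal, illustrative helper: it replaces a supplied list of names or
    case-specific terms with neutral placeholders, then masks e-mail addresses
    and US-style phone numbers as a basic safety net. It is not exhaustive.
    """
    redacted = text
    # Replace each known name or case-specific term with a generic placeholder.
    for i, term in enumerate(identifiers, start=1):
        redacted = re.sub(re.escape(term), f"[PARTY_{i}]", redacted, flags=re.IGNORECASE)
    # Mask e-mail addresses and simple phone-number patterns.
    redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", redacted)
    redacted = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", redacted)
    return redacted

# Example with entirely fictional details:
prompt = "Draft a demand letter for Jane Roe (jane.roe@example.com, 555-123-4567) against Acme Corp."
print(anonymize(prompt, ["Jane Roe", "Acme Corp"]))
# -> "Draft a demand letter for [PARTY_1] ([EMAIL], [PHONE]) against [PARTY_2]."
```

Even with a scrubbing step like this, the lawyer must still read the final prompt to confirm that no combination of remaining facts could identify the client or the matter.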
Choosing the right tools also matters significantly. Some legal technology companies have developed AI platforms specifically designed for attorney use, with contractual confidentiality protections, business associate agreements and data handling practices that align with legal ethics requirements. These specialized tools carry less risk than general-purpose AI systems, though attorneys must still carefully review their terms and understand their limitations.
Understanding provider policies is essential. Attorneys should thoroughly investigate how any AI platform handles data, including retention periods, whether information is used for training, who has access to inputs and whether the provider will assert any ownership or usage rights. If these policies cannot provide adequate confidentiality assurances, the tool should not be used for privileged matters.
Client communication represents another important consideration. When attorneys plan to use AI tools in their representation, they should consider informing clients about this practice and the potential risks involved. In some situations, obtaining explicit client consent may be appropriate or even required. This transparency helps protect both the client’s interests and the attorney’s ethical standing.
Finally, attorneys must maintain human oversight of all AI-assisted work. Artificial intelligence should never replace the lawyer’s independent judgment or become the source of legal advice. The attorney remains fully responsible for all work product and must review, verify and take ownership of anything generated with AI assistance.
Rules of Professional Conduct
The ethical implications of AI use extend beyond privilege concerns into attorneys’ fundamental obligations under the Rules of Professional Conduct. Model Rule 1.6 requires lawyers to maintain the confidentiality of information relating to representation of a client. This duty is broader than attorney-client privilege and applies to all information the lawyer learns during the representation, regardless of its source or whether disclosure would be embarrassing or detrimental to the client.
When attorneys input client information into AI systems, they must consider whether they are making reasonable efforts to prevent inadvertent or unauthorized disclosure of confidential information. The rule does not prohibit using third-party services, but it requires lawyers to make reasonable efforts to ensure those services provide adequate safeguards. This means attorneys cannot simply assume an AI tool is safe because it is widely used or technologically sophisticated.
Rule 1.1 imposes a duty of competence that has taken on new dimensions in the AI era. The comment to this rule was updated in many jurisdictions to clarify that competent representation includes understanding the benefits and risks associated with relevant technology. Attorneys who use AI without understanding how it handles data, whether it retains information or what confidentiality protections it offers may be falling short of their competence obligations.
The duty of competence also requires lawyers to stay current with changes in law and practice, including technological developments. As courts and bar associations continue to issue guidance on AI use, attorneys must keep informed and adjust their practices accordingly. Ignorance of how AI systems operate is not a defense to an ethics violation.
Rule 1.4 requires lawyers to keep clients reasonably informed about the status of their matters and to explain matters to the extent reasonably necessary to permit clients to make informed decisions. When AI tools are used in representation, particularly if there are risks to confidentiality or privilege, attorneys should consider whether their duty of communication requires disclosure to the client. Some ethics opinions have suggested that clients should be informed when AI is used for substantive legal work.
Rule 5.3 addresses lawyers’ responsibilities regarding nonlawyer assistants and requires attorneys to ensure that such persons’ conduct is compatible with the professional obligations of the lawyer. While AI systems are not “persons” in the traditional sense, the principle extends logically to technological tools. Lawyers must take reasonable measures to ensure that AI services they use will not result in violations of the Rules of Professional Conduct.
An Evolving Landscape
The intersection of AI and attorney-client privilege remains an area where legal standards are still developing. Courts have not yet established clear precedent on how privilege applies to AI-assisted legal work. Bar associations and ethics committees are beginning to issue guidance, but comprehensive rules are still taking shape. Several state bars have issued advisory opinions addressing AI use, with most emphasizing the need for caution and the ongoing applicability of existing ethical rules to new technology.
What is clear is that attorneys have multiple overlapping obligations under the Rules of Professional Conduct that bear on AI use. The duty to maintain confidentiality, the duty of competence, the duty to communicate with clients and the duty to supervise nonlawyer assistance all come into play when lawyers incorporate artificial intelligence into their practice. These obligations exist regardless of the specific technology involved and require attorneys to carefully evaluate any tool before using it with client information.

As AI becomes increasingly embedded in legal practice, lawyers must stay informed about both the technology and the evolving ethical standards that govern its use. The convenience and capabilities that AI offers cannot override the fundamental duties that make effective legal representation possible. Attorneys who fail to understand the ethical dimensions of their AI use risk not only waiving privilege but also violating their professional obligations in ways that could result in discipline, malpractice liability or loss of client trust.