What New Jersey Lawyers Must Know About AI and Professional Responsibility

Artificial intelligence is reshaping legal practice faster than most ethics rules were written to address. For New Jersey lawyers, that gap is not theoretical. In January 2024, the New Jersey Supreme Court issued Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers, making clear that the existing Rules of Professional Conduct apply fully to AI and that lawyers who fail to engage carefully with the technology risk ethical violations. Six rules in particular demand your attention: RPC 1.1, 1.6, 3.3, 5.1, 5.3 and 8.4.

RPC 1.1: Competence Now Includes AI Literacy

The duty of competence under RPC 1.1 requires the legal knowledge, skill, thoroughness and preparation reasonably necessary to represent a client. The New Jersey Supreme Court’s January 2024 Preliminary Guidelines stated without equivocation that “AI does not change the fundamental duties of legal professionals” but that lawyers must be aware of the new applications and potential challenges AI presents. Competence now encompasses understanding what the tools you use actually do.

That means understanding that large language models hallucinate. They fabricate case citations, misquote statutes and produce confident-sounding errors. The NJ Supreme Court’s Committee on Artificial Intelligence and the Courts flagged this specifically, noting that AI can “generate convincing, but false, information, including fake case law.” A lawyer who inputs a client’s legal question into an AI tool and submits the output without independent verification has not exercised the professional judgment RPC 1.1 demands.

The NJSBA Task Force on Artificial Intelligence and the Law, which issued its report in May 2024, reinforced this point, concluding that legal professionals must understand the risks, benefits and core principles of AI to operate it safely and ethically. The Task Force also specifically warned that tools designed for the general public should not be used for tasks that constitute the practice of law.

On the national level, ABA Formal Opinion 512 (2024) reached the same conclusion, requiring lawyers to understand generative AI sufficiently to evaluate its output and verify AI-generated content before relying on it in client matters. In New Jersey, you should treat that standard as a floor.

RPC 1.6: Confidentiality Does Not Pause When You Open a Chat Window

RPC 1.6 prohibits a lawyer from revealing information relating to the representation of a client without informed consent. The duty extends beyond attorney-client privileged communications to all information relating to the representation, regardless of source. And it reaches into how you interact with AI platforms.

The New Jersey Supreme Court’s Preliminary Guidelines addressed this directly. RPC 1.6(f) requires a lawyer to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information related to the representation of a client.” The Guidelines concluded that a lawyer is responsible for ensuring the security of an AI system before entering any non-public client information. That is a specific, affirmative obligation.

Most commercial AI platforms train on user inputs by default unless you opt out. Even enterprise-grade tools with contractual data protections route data through third-party servers. Before using any AI tool with client-identifiable information, you must review the platform’s terms of service and data retention policies. In many cases, client consent or input anonymization is required. The NJSBA Task Force report specifically recommended that lawyers identify and document how data, especially client data, is transmitted, used and stored by any AI tool before deploying it.

RPC 3.3: An NJ Attorney Was Just Sanctioned Again for AI Hallucinations

RPC 3.3 requires a lawyer to uphold candor to the tribunal, including by not knowingly making a false statement of material fact or law and not offering evidence the lawyer knows to be false. The New Jersey Supreme Court’s Preliminary Guidelines cited RPC 3.3 explicitly, stating that a lawyer who uses AI in preparing legal pleadings, arguments or evidence “remains responsible to ensure the validity of those submissions” and that AI use “does not provide an excuse for the submission of false, fake, or misleading content.”

This is not abstract. On April 22, 2026, U.S. District Judge Kai N. Scott sanctioned Cherry Hill attorney Raja Rajan $5,000 for submitting a brief containing five erroneous citations and one entirely fabricated case, all AI-generated. It was Rajan’s second sanction for the same conduct: Scott had previously fined him $2,500 and ordered him to attend CLE on AI hallucinations. The judge noted that verifying citations is a first-year law school requirement, not a new burden created by AI.

The Third Circuit addressed the issue for the first time this year, reprimanding an attorney for an AI-generated filing that contained seven citations riddled with inaccuracies and one that simply did not exist. And Sullivan & Cromwell, one of the most prestigious firms on Wall Street, apologized last week for submitting a court filing with AI hallucinations. The lesson is consistent regardless of firm size or experience level: every citation, every quoted passage, every legal proposition generated by AI must be independently verified before it appears in a court submission.

RPC 5.1: Supervisory Lawyers Are Responsible for How Their Firms Use AI

RPC 5.1 requires partners and supervisory lawyers to make reasonable efforts to ensure that the firm has measures in place giving reasonable assurance that all lawyers conform to the Rules of Professional Conduct. It also makes a supervising lawyer responsible for a subordinate’s violation when the supervisor orders the conduct or ratifies it with knowledge of its consequences.

The New Jersey Supreme Court’s Preliminary Guidelines addressed the oversight obligation specifically under this rule, stating that firms and lawyers are responsible for “overseeing other lawyers and nonlawyer staff, as well as law students and interns” and that this requirement “extends to ensuring the ethical use of AI by other lawyers and nonlawyer staff.”

The NJSBA Task Force report was specific about what reasonable efforts require: the Task Force concluded that all law firms should adopt a written organizational AI policy with a risk assessment framework and provided a sample template as an appendix. A firm without a written policy is a firm without a defensible position when a subordinate submits AI-generated work product without verification.

For solo and small firm practitioners, this obligation falls entirely on you. There is no supervising partner to absorb the responsibility. The NJSBA has been actively developing resources through its AI Committee and NJICLE programming, and the NJSBA’s AI Task Force report is publicly available. Ignorance of what staff or associates are doing with AI does not satisfy the RPC 5.1 standard.

RPC 5.3: The Supervisory Framework Extends to AI Tools

RPC 5.3 requires a lawyer with direct supervisory authority over a nonlawyer to make reasonable efforts to ensure that the person’s conduct is compatible with the lawyer’s professional obligations. A lawyer is responsible for a nonlawyer’s conduct that would violate the rules if engaged in by a lawyer when the lawyer orders the conduct or ratifies it with knowledge.

The New Jersey Supreme Court’s Preliminary Guidelines explicitly invoked RPC 5.3 alongside RPC 5.1 in its section on oversight, confirming that both rules apply to the use of AI by lawyers and nonlawyer staff alike. Several state bars and the ABA have similarly concluded that the supervisory framework extends to AI tools when a lawyer is delegating legal work to a system and relying on its output.

The NJSBA Task Force report reinforced the practical implication: when assessing AI tools and services, lawyers must identify and document how data is transmitted, used and stored, and that assessment must inform whether a particular tool is appropriate for its intended use. Deploying a tool without that analysis is not supervision. It is delegation without oversight, and it carries the same risks under RPC 5.3 that unsupervised paralegal work has always carried.

RPC 8.4: The Misconduct Rule Has No AI Exception

RPC 8.4 prohibits conduct involving dishonesty, fraud, deceit or misrepresentation, as well as conduct prejudicial to the administration of justice. The New Jersey Supreme Court’s Preliminary Guidelines addressed RPC 8.4(c) and 8.4(d) explicitly, noting that the duty to avoid falsification and the duty of candor to clients and the court apply fully in the AI context.

Using AI to generate false evidence, fabricating testimony, or submitting knowing misrepresentations while attributing authorship to a machine falls squarely within RPC 8.4. The NJ Supreme Court Guidelines also cited RPC 3.4(b), which prohibits a lawyer from falsifying evidence or assisting a client in doing so, and specifically stated that lawyers are prohibited from “using AI to manipulate or create evidence” and from “allowing a client to use AI to manipulate or create evidence.”

Less obvious applications include using AI to generate synthetic voices or images for use in proceedings, using AI to circumvent court formatting requirements, or using AI outputs to support representations the lawyer knows are false. RPC 8.4 is the backstop. The algorithm is never an excuse.

The New CLE Requirement: Technology Credits Are Now Mandatory

Effective January 1, 2027, New Jersey lawyers will be required to earn one CLE credit in a technology-related course every two years. The New Jersey Supreme Court approved this requirement in April 2025 and issued the implementing notice in December 2025. The qualifying subject matter includes developments in AI and other emerging technologies affecting the overall practice of law and specific legal practice areas.

This requirement did not emerge in a vacuum. It was a direct recommendation of the NJSBA Task Force on AI and the Law, which proposed that one of the five required ethics credits for CLE compliance be technology-related. The Task Force and the Court are aligned: AI literacy is now a formal component of what New Jersey defines as a competent, ethically compliant lawyer.

The Practical Bottom Line for New Jersey Lawyers

The New Jersey Supreme Court has been explicit since January 2024: the Rules of Professional Conduct are unchanged by AI, and they apply to it fully. RPC 1.1 requires you to understand what you are using. RPC 1.6 requires you to protect what you input. RPC 3.3 requires you to verify what you submit. RPCs 5.1 and 5.3 require you to supervise the work product AI generates. RPC 8.4 prohibits you from using AI dishonestly.

The safest approach is the most obvious one: treat AI output the way you would treat work from a capable but inexperienced law clerk. Useful as a starting point. Always requiring your professional judgment. Never submitted without your independent review and verification. The NJSBA AI Committee and NJICLE continue to develop resources for New Jersey practitioners. The Attorney Ethics Hotline at (609) 815-2924 remains available for specific questions about prospective AI-related conduct. There is no shortage of guidance available. There is no excuse for not using it.
