ABA Rule 1.6 and AI: Can Attorneys Use AI Tools Without Client Consent?
The most common question we receive from attorneys evaluating AI tools is also the most important: do I need client consent to use AI in my practice? The honest answer is that it depends on what the tool does with client information. Some uses require consent. Some do not. The line falls along a specific architectural distinction that vendors rarely emphasize in their marketing.
This article walks through ABA Model Rule 1.6 (Confidentiality of Information) as it applies to AI use, identifies the conditions under which consent is required, and explains why the architecture of the tool (cloud vs. on-premise) materially affects the analysis. The framing is national; the article cross-references Florida Bar Rule 4-1.6 and Florida Bar Opinion 24-1 (January 2024) where the state interpretation is on point.
What ABA Model Rule 1.6 Actually Requires
ABA Model Rule 1.6(a) prohibits a lawyer from revealing information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized to carry out the representation, or one of the rule's enumerated exceptions applies.
ABA Model Rule 1.6(c), added in 2012 as part of the Ethics 20/20 amendments, requires a lawyer to make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation.
The rule has been adopted in some form by every state bar in the United States. Florida adopted it as Florida Bar Rule 4-1.6, which is materially identical for purposes of this analysis.
The rule's structure is straightforward: confidentiality is the default, disclosure is permitted only with consent or under enumerated exceptions, and the lawyer has a separate duty to prevent unauthorized disclosure or access.
The Disclosure Threshold
The first question in any AI analysis under Rule 1.6 is whether the AI use involves a disclosure. The rule's prohibition is on revealing information; if no revelation occurs, the rule's consent requirement is not triggered.
Disclosure under Rule 1.6 is generally understood to mean the transmission of information to a person or entity outside the lawyer-client relationship. The relationship includes the lawyer, the client, and persons whose involvement is impliedly authorized to carry out the representation (for example, the lawyer's paralegal, the lawyer's expert witness, or a court reporter at a deposition).
A third-party vendor processing client information on its servers is generally not within the scope of "impliedly authorized" persons unless the client has been informed of the vendor's involvement or the involvement is so customary that consent can be inferred. The 2012 Ethics 20/20 amendments and the related ABA Formal Opinions address technology vendors specifically; outsourcing to third-party tech vendors generally requires client awareness or consent depending on the sensitivity of the information.
For AI tools, the disclosure analysis turns on whether client information is transmitted to a third party in the course of using the tool.
Cloud AI: Disclosure Occurs
A cloud-hosted AI tool processes client information on the vendor's servers. When the lawyer uploads a document, asks a question that includes client facts, or otherwise transmits information to the tool, that information leaves the firm's network and arrives on the vendor's infrastructure.
This is a disclosure under Rule 1.6. The client information has been revealed to a third party (the vendor). Whether the vendor retains the information, trains models on it, or uses it for any other purpose is a separate question that affects the magnitude of the disclosure but not its existence.
Under Rule 1.6(a), the disclosure requires informed client consent unless an exception applies. The lawyer's options are:
- Obtain informed client consent before using the tool with information relating to the representation
- Use the tool only with information that has been redacted to remove client-identifying details
- Determine that the disclosure is impliedly authorized to carry out the representation, on the basis of the vendor's role and the client's understanding
- Determine that an enumerated exception applies (rare in the AI context)
The first option is the most common. ABA Formal Opinion 512 (2024) and similar state-level guidance (including Florida Bar Opinion 24-1) treat informed consent as the operative compliance mechanism for cloud AI use that involves client information.
Informed consent under Rule 1.6 requires the client to understand the nature of the disclosure, the recipient, and the foreseeable consequences. For AI vendor disclosure, this typically means the client understands the vendor receives the information, what the vendor does with it (retention, use, training), and what risks are involved. Documenting the consent in the engagement file is good practice.
The Reasonable Efforts Duty Under Rule 1.6(c)
Even when consent has been obtained, Rule 1.6(c) imposes a separate duty: the lawyer must make reasonable efforts to prevent inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation.
For cloud AI tools, this duty translates into vendor due diligence. The lawyer must take reasonable steps to verify that the vendor has appropriate security controls, contractual protections, and operational practices to prevent unauthorized access to the information.
Reasonable due diligence typically includes:
- Reviewing the vendor's security certifications (SOC 2, ISO 27001, equivalent)
- Reading the vendor's data handling and retention terms
- Confirming contractual prohibitions on training models with the firm's inputs
- Reviewing the vendor's breach notification commitments
- Confirming the vendor's incident history (publicly disclosed breaches, regulatory actions)
The reasonable efforts duty is ongoing. A material change in the vendor's practices during the engagement triggers a fresh diligence obligation; the lawyer cannot rely on a one-time review at the start of the relationship.
On-Premise AI: No Third-Party Disclosure
An on-premise AI tool processes client information on the firm's own hardware. The information does not leave the firm's network. No third party receives the information in the course of routine processing.
For Rule 1.6 analysis, this means no disclosure to a third party occurs. The rule's consent requirement under Rule 1.6(a) is not triggered for that processing because the act of revealing information to others has not happened.
This is a clean architectural distinction, not a loophole. Rule 1.6 governs disclosure to others. An on-premise system processes information in the same way the firm's existing document management system processes information: locally, on firm-controlled infrastructure, without transmission to third parties.
The reasonable efforts duty under Rule 1.6(c) still applies. The firm must protect the on-premise system the same way it protects its other internal infrastructure: encryption at rest and in transit, access controls, audit logs, backup integrity, physical security of the hardware. These are the same controls the firm already maintains for its document management system, accounting system, and email infrastructure.
For most firms, the on-premise system slots into existing controls. The vendor due diligence burden that cloud AI requires is replaced by the firm's existing internal controls.
The Practical Result
The ABA Rule 1.6 analysis produces a clear operational distinction:
| Architecture | Rule 1.6(a) Consent Required? | Rule 1.6(c) Duty Operationalized As |
|---|---|---|
| Cloud AI with vendor processing | Yes, before client information is transmitted | Vendor due diligence, contract review, ongoing monitoring |
| On-premise AI on firm hardware | No, because no third-party disclosure occurs | Firm's existing internal infrastructure controls |
The distinction is not about the quality of either architecture. Cloud AI vendors with strong data handling practices can be used compliantly with appropriate consent and diligence. On-premise systems with weak internal controls can fail Rule 1.6(c) just as cloud systems can. The architectural distinction matters because it determines which compliance burden the firm absorbs and how that burden interacts with the firm's existing operations.
ABA Formal Opinion 512: The 2024 Update
ABA Formal Opinion 512 (issued July 2024) addressed generative AI specifically. The opinion tracks the analysis above. Key points include:
- The use of generative AI with client information may require informed client consent under Rule 1.6, depending on the tool's data handling
- The lawyer has a duty of competence under Rule 1.1 to understand the AI tool's limitations and to verify its output
- The lawyer has a duty of supervision under Rule 5.3 over AI as a form of nonlawyer assistance
- The lawyer has a duty of candor under Rule 3.3 that requires verification of AI-generated work product before submission to a tribunal
- Reasonable fees under Rule 1.5 must reflect efficiency gains from AI tools
ABA Formal Opinion 512 does not specifically distinguish cloud vs. on-premise architectures, but its consent analysis tracks the disclosure question. Where the AI use does not involve disclosure of client information to a third party, the consent requirement under Rule 1.6 is not triggered.
Florida Bar Opinion 24-1 (January 2024, predating Formal Opinion 512) applies the same framework to Florida practice. See our companion article on Opinion 24-1 for the state-level analysis.
What "Without Client Consent" Actually Means
To answer the article's title question directly: an attorney can use AI tools without client consent under ABA Model Rule 1.6 in the following situations:
- The tool processes information entirely on the firm's own hardware (on-premise architecture), so no third-party disclosure occurs
- The tool processes information that has been redacted to remove all client-identifying details
- The tool processes only general information (legal research, drafting from non-client templates) that does not involve client information at all
An attorney generally must obtain informed client consent under Rule 1.6 in the following situations:
- The tool is cloud-hosted and the attorney transmits client information to the vendor's servers
- The tool transmits the firm's inputs to a vendor that retains them or trains models on them
- The vendor's role in the processing is sufficiently outside the customary scope of legal practice that implied authorization cannot be inferred
There is no categorical answer: consent is sometimes required and sometimes not. The architectural and operational details determine which category applies.
What This Means for Vendor Selection
For a firm building an AI capability, the Rule 1.6 analysis suggests two paths.
The first path is to adopt cloud AI tools and build the consent and vendor due diligence machinery to support them. This works well for firms with sophisticated client intake processes, clients accustomed to consent disclosures, and the operational capacity to maintain ongoing vendor diligence.
The second path is to adopt on-premise tools and rely on the firm's existing internal infrastructure controls. This works well for firms whose practice areas have heightened confidentiality concerns (family law, immigration, criminal defense, certain commercial matters), firms whose clients are not comfortable with third-party data transmission, and firms that prefer to avoid the consent and diligence machinery altogether.
Neither path is universally better. The choice depends on the firm's practice composition, client base, and operational preferences.
What Mi Assist Legal Does
Mi Assist Legal is an on-premise AI document search system installed on a Mac Mini or compatible server inside the firm's office. The system indexes the firm's case files locally and returns answers to natural-language questions with source citations. No client information leaves the firm's network in the course of routine processing.
The architectural choice is deliberate. For firms whose Rule 1.6 analysis favors avoiding the third-party disclosure question, the on-premise architecture eliminates the consent and ongoing vendor diligence burden that cloud AI introduces. The firm's existing internal controls satisfy Rule 1.6(c). The security architecture page describes the deployment in detail.
Frequently Asked Questions
Q: Does Rule 1.6 distinguish between sensitive and non-sensitive client information?
The rule applies to all information relating to the representation, regardless of perceived sensitivity. Some types of information (medical records, financial details, communications about minor children) carry heightened compliance attention because the consequences of disclosure are more severe, but the rule's threshold is the relationship to the representation, not the sensitivity of the data.
Q: Can general consent in the engagement letter satisfy the consent requirement for any AI use?
Generally no. Informed consent under Rule 1.6 requires the client to understand the nature of the disclosure, the recipient, and the foreseeable consequences. General language in an engagement letter that does not identify specific AI vendors, the data flow, or the retention practices is unlikely to constitute informed consent for material AI uses. ABA Formal Opinion 512 supports this conclusion.
Q: What is "impliedly authorized" disclosure under Rule 1.6(a)?
Implied authorization covers disclosures that are necessary to carry out the representation in customary practice. Communicating with a court, sharing information with the firm's experts under appropriate confidentiality arrangements, and similar disclosures are typically impliedly authorized. Whether AI vendor disclosure is impliedly authorized depends on the customary practice in the relevant practice area; the ABA's position has been generally cautious on this point.
Q: Does Rule 1.6 apply to closed matters?
Yes. The duty of confidentiality survives the termination of the lawyer-client relationship. Information from closed matters that is processed by an AI tool is still subject to Rule 1.6.
Q: How does Rule 1.6 interact with HIPAA for matters involving medical records?
Both rules apply. Rule 1.6 governs the lawyer's ethical duty of confidentiality. HIPAA governs the handling of protected health information. An AI tool processing PHI must comply with both: HIPAA business associate requirements (where applicable) and Rule 1.6 disclosure analysis. On-premise systems generally simplify both analyses because no third-party transmission occurs.
---
This article is intended for educational purposes for attorneys evaluating AI tools. It does not constitute legal advice. Attorneys should consult the current text of ABA Model Rule 1.6, ABA Formal Opinion 512, applicable state rules, and any state-specific ethics opinions before making compliance decisions.
Mi Assist Legal
Private AI document search for Florida law firms.
Mi Assist Legal installs on a Mac Mini or server inside your firm. No cloud. No third-party access. Designed for Florida Bar Rule 4-1.6 and ABA Model Rule 1.6 compliance by architecture.
Book a Consultation