Client confidentiality
isn't a setting.
It's architecture.
A federal court has now confirmed what careful practitioners already suspected:
submitting client information to a public AI platform is disclosure to a third party.
Privilege dies. Confidentiality obligations attach.
eRacks builds on-premise AI servers where client data, case files, and work product
never leave your firm's infrastructure — not to OpenAI, not to Anthropic, not to anyone.
Trusted by law firms and in-house legal departments across the United States.
Consumer AI vs. enterprise AI vs. on-premise — the privilege analysis
Not all AI deployment models carry the same legal risk for your firm and your clients. Here is how they compare under the Heppner framework.
| Factor | Consumer AI (ChatGPT Free/Plus, Claude Pro) | Enterprise AI (with contractual BAA/NDA) | eRacks On-Premise AI |
|---|---|---|---|
| Client data leaves the firm | ✕ Yes — every prompt | ⚠ Yes — encrypted, contractual controls | ✓ Never — local only |
| Third-party disclosure risk (Heppner) | ✕ High — confirmed by court | ⚠ Reduced — depends on contract | ✓ Eliminated architecturally |
| Used for model training | ✕ Yes (default on most tiers) | ⚠ Typically no — verify BAA | ✓ Never — open-source, local model |
| Can be subpoenaed / seized | ✕ Yes — Heppner FBI seizure | ⚠ Possible with legal process | ✓ Only through firm's own infrastructure |
| Reasonable expectation of confidentiality | ✕ None per Judge Rakoff | ⚠ Partial — contractual only | ✓ Full — no external party involved |
| Model Rule 1.1 technology competence | ✕ At risk | ⚠ Satisfiable with documentation | ✓ Satisfies confidentiality obligation |
| Monthly cost | $20–$30/user | $30–$60+/user | $0/month after purchase |
What happens to client data in each scenario
What law firms run on their eRacks AI servers
- Summarize discovery documents, depositions, and case files. Identify key facts, dates, and inconsistencies across hundreds of pages in minutes.
- Flag non-standard clauses, missing provisions, and deviations from firm templates. Accelerate first-pass review without sending contracts outside the firm.
- Generate first-draft arguments, structure briefs, and synthesize research into motion sections — with your client facts staying on your server.
- Process data rooms and due diligence files into structured summaries, issue lists, and risk memos — without sending sensitive M&A information to cloud services.
- Draft status updates, advice letters, and client-facing summaries from case notes. Maintain consistent voice and tone while saving attorney time.
- Query your own precedent library, standard forms, and prior work product through a private RAG system — your institutional knowledge, instantly searchable.
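The private RAG (retrieval-augmented generation) workflow above can be sketched in miniature. The snippet below is illustrative only — the names (`search_precedents`, `PRECEDENTS`) are hypothetical, not an eRacks API, and it uses simple word overlap as a stand-in for the local embedding model and vector store a production deployment would run. The key property it demonstrates is architectural: every document and every query stays in local memory.

```python
from collections import Counter
import re

# Hypothetical in-memory precedent library; in practice this would be
# the firm's document store indexed by a local embedding model.
PRECEDENTS = {
    "nda-template.txt": "Mutual non-disclosure agreement template with two-year confidentiality term.",
    "motion-dismiss-2023.txt": "Motion to dismiss arguing lack of personal jurisdiction over out-of-state defendant.",
    "asset-purchase-checklist.txt": "Due diligence checklist for asset purchase agreements and disclosure schedules.",
}

def tokenize(text: str) -> Counter:
    """Lowercase word counts -- a toy stand-in for real embeddings."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def search_precedents(query: str, top_k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query; nothing leaves the process."""
    q = tokenize(query)
    scored = sorted(
        PRECEDENTS,
        key=lambda name: sum((q & tokenize(PRECEDENTS[name])).values()),
        reverse=True,
    )
    return scored[:top_k]

print(search_precedents("personal jurisdiction motion"))
```

The retrieved passages would then be fed to the locally hosted model as context — at no step does client text cross the firm's network boundary.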
"After the Heppner ruling, we couldn't continue using cloud AI for client matters. eRacks had us running a private server within two weeks. Our attorneys use it every day — client files stay inside our walls."
What attorneys ask before making a decision
What exactly did the Heppner court hold about AI and privilege?
In United States v. Heppner, No. 25 CR. 503 (JSR) (S.D.N.Y. Feb. 17, 2026), Judge Jed Rakoff held that documents generated using a consumer AI platform were not protected by attorney-client privilege or the work-product doctrine. The court identified two independent grounds: (1) an AI platform is not an attorney and cannot form an attorney-client relationship; and (2) there was no reasonable expectation of confidentiality because Anthropic's privacy policy expressly permits data use for model training and disclosure to government authorities. The court noted that enterprise tools with contractual confidentiality protections may present a different analysis — and on-premise deployment, where no third party ever receives the data, resolves the concern entirely at the architectural level.
Does this ruling apply only to criminal cases?
The holding arose in a criminal context, but the underlying privilege analysis — that disclosing information to a consumer AI platform destroys confidentiality — applies equally in civil litigation and regulatory matters. Multiple state bars have issued guidance cautioning lawyers about using consumer AI with client data, citing confidentiality obligations under Model Rule 1.6. The Heppner decision is the first federal court ruling on this issue but will be broadly cited as persuasive authority. Practitioners should assume the privilege risk exists in any context.
What about enterprise versions of ChatGPT or Claude?
Enterprise agreements change the contractual landscape — vendors agree not to train on your data and provide stronger confidentiality terms. The Heppner court explicitly noted these tools may present a different privilege analysis. However, even with enterprise contracts: (1) client data still travels to the vendor's servers; (2) you are dependent on the vendor's security posture and compliance; (3) the vendor can still be served with legal process. On-premise eliminates all three risks simultaneously. For the highest-sensitivity client matters, on-premise is the only architecture where confidentiality is guaranteed by physics, not contract.
How does this satisfy Model Rule 1.1's technology competence requirement?
Model Rule 1.1 Comment 8 requires attorneys to understand "the benefits and risks associated with relevant technology." Using consumer AI with client data — now that Heppner has confirmed the privilege risk — creates a competence issue if attorneys proceed without appropriate safeguards. An on-premise AI server directly satisfies the confidentiality preservation obligation: there is no third-party transmission, no training risk, and no exposure to government process through the AI vendor. Several state bar associations have issued guidance specifically recommending attorney-directed, non-consumer AI tools for sensitive client work.
Can the AI be directed by counsel, as the Heppner court suggested matters?
Yes — and this is an important nuance. The Heppner court noted that if counsel had directed the client to use the AI tool for the research, the work-product analysis might have differed. On an eRacks firm server, all AI use is under the firm's infrastructure and effectively under the direction of the supervising attorneys. This strengthens work-product arguments significantly compared to clients using consumer tools on their own initiative. For formal work-product protection, document that AI use is attorney-directed by building it into your matter intake and retainer documentation.
Your clients' information. Your server. Your control.
Tell us your firm size, practice areas, and primary workflows. We'll spec the right AI server configuration and provide a full quote — typically within one business day. eRacks has served legal customers for over 20 years.
eRacks Open Source Systems