How to maintain attorney-client privilege in the age of public AI

It starts innocently enough. An associate pastes a draft contract into a public AI tool to “tighten the language.” A paralegal asks a chatbot to summarize deposition notes. It feels efficient. It feels harmless.

But there’s one question many don’t stop to ask: where did that confidential information just go?

For law firms across New York and the Tri-State Area, the rise of public AI tools is creating a new kind of risk — one that doesn’t look like a cyberattack, yet can quietly undermine attorney-client privilege.

Why public AI tools create a real privilege risk

The attorney-client privilege depends on one core idea: confidential communications must remain confidential. Once that confidentiality is compromised, even unintentionally, the privilege can be weakened — or lost.

Public AI platforms complicate this. Tools such as ChatGPT and Gemini process user inputs on external servers, and in some cases that data may be retained, reviewed, or used to improve the system. That means sensitive client information could leave your firm’s controlled environment without a clear audit trail.

For legal practices, this isn’t just a theoretical concern. Bar associations and ethics committees are already weighing in, emphasizing that lawyers must understand how technology handles client data before using it. In a region like New York, where firms handle high-value transactions and complex litigation, the stakes are even higher.

What most firms don’t realize about AI usage

The biggest issue isn’t malicious behavior but everyday convenience.

Staff members are using AI tools the same way they use search engines: quickly, casually, and without thinking about data exposure. The problem is that AI tools don’t just “look up” information. They process what you give them. That means even a simple prompt could include:

  • Client names, case details, or internal notes
  • Draft agreements or legal strategies
  • Sensitive timelines or financial information

Once that data is entered into a public system, your firm may no longer have full control over how it is stored or used. And that’s where it gets risky: waiving attorney-client privilege doesn’t require intent. Privilege can be compromised simply by how information is handled.

How to use AI without putting privilege at risk

Avoiding AI altogether isn’t realistic. The focus should be on responsible, controlled use. Here are practical safeguards legal practices should adopt:

  • Set clear internal policies on what can and cannot be entered into AI tools.
  • Use approved, secure AI platforms designed for legal or enterprise use.
  • Train staff regularly so they understand the risks, not just the benefits.
  • Limit sensitive data exposure by anonymizing or redacting inputs.
  • Work with IT partners who can evaluate and monitor AI usage.

While these steps don’t eliminate risk, they significantly reduce the chance of accidental disclosure.
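To make the redaction step concrete, here is a minimal sketch of what scrubbing a prompt before it leaves the firm could look like. The patterns and placeholder labels below are illustrative assumptions only; real-world redaction requires purpose-built tooling, human review, and far broader coverage than a few regular expressions.

```python
import re

# Hypothetical patterns for illustration only. Production redaction needs
# dedicated tooling and review; these catch just a few obvious identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "CASE_NO": re.compile(r"\b\d{2}-[A-Za-z]{2}-\d{4,6}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens before a prompt leaves the firm."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Email jdoe@example.com about case 22-cv-10453, call 212-555-0147."
print(redact(prompt))
# → Email [EMAIL REDACTED] about case [CASE_NO REDACTED], call [PHONE REDACTED].
```

Even a basic filter like this enforces the habit the policy asks for: nothing identifying leaves the building in raw form, and anything the filter misses is caught by training and review rather than luck.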

The hidden compliance and client trust implications

Beyond privilege, client expectations add another layer of risk. Clients assume their information is handled with the utmost care, and trust is central to every relationship. A single misstep involving confidential data can damage relationships that took years to build.

There’s also growing regulatory scrutiny. Data privacy requirements continue to evolve, and firms are expected to demonstrate not just intent, but control. If your firm cannot clearly explain how client data is protected when using AI, that becomes a business liability.

This is where many firms fall behind. They adopt new tools quickly, but governance and oversight lag.

Why this is ultimately an IT strategy issue

At its core, this isn’t just about AI. It’s about how your firm manages technology as a whole.

Firms that maintain strong privilege protections tend to have clearly defined technology policies, secure systems for handling sensitive data, and ongoing oversight of how tools are used. Without a strong foundation, a firm with the best intentions can experience security gaps. And AI simply exposes those gaps faster.

A smarter way to move forward

AI can absolutely support legal work by improving efficiency, assisting with research, and streamlining internal processes. But it needs a controlled environment that protects client confidentiality.

If you’re not fully confident in how your firm is using AI today, that’s worth addressing sooner rather than later.

Healthy IT works with legal practices throughout the Tri-State Area to assess technology risks, strengthen data protection, and build smarter systems that support both productivity and compliance. Want help from our experts? Get in touch with us.