For Legal Professionals
AI Agents for Lawyers
In February 2026, a federal judge ruled that documents prepared with consumer AI tools aren't protected by attorney-client privilege. The reasoning was straightforward: the platform's privacy policy allowed disclosure to third parties, so there was no reasonable expectation of confidentiality.
Meanwhile, legal AI adoption doubled in a single year. Lawyers are using these tools because they work. The question isn't whether to use AI. It's how to use it without putting your client's confidentiality at risk.
What lawyers are actually doing with AI
This isn't hypothetical adoption. An industry survey of 400+ US legal professionals (8am, 2026) found that 69% of lawyers now use AI tools, up from 31% a year earlier. Not experimenting. Using.
58% use AI for drafting correspondence. 43% use it for document drafting. The most common outcome? 38% of lawyers save 1–5 hours per week; 14% save 6–10 hours.
69% of lawyers now use AI tools, doubled from 31% in one year (8am 2026 Legal Industry Report).
46% cite data security as their top barrier to AI adoption (Secretariat/ACEDS 2025).
26 seconds for AI to review 5 NDAs vs. 92 minutes by a human lawyer (LawGeex NDA review study, 2018 — models have advanced significantly since).
The privilege problem
On February 10, 2026, Judge Jed Rakoff of the Southern District of New York issued the first federal ruling on whether documents prepared with consumer AI tools are protected by attorney-client privilege.
The defendant in US v. Heppner used Anthropic's consumer Claude product to prepare 31 documents outlining his defense strategy. The court ordered them produced. Privilege denied, on three independent grounds.
No attorney-client relationship. An AI tool is not an attorney. Communications with it don't create the relationship that privilege requires.
No reasonable expectation of confidentiality. Anthropic's consumer privacy policy allows data collection and disclosure to third parties, including government entities. Using the tool under those terms waives confidentiality.
Not prepared at counsel's direction. The defendant used the tool without attorney supervision. Work product doctrine requires materials be prepared by or at the direction of counsel in anticipation of litigation.
Gibson Dunn's analysis identified three safeguards that could have changed the outcome: enterprise-tier access with no-training clauses, contractual confidentiality commitments, and attorney supervision of AI use. Each one maps to an infrastructure choice.
What the bar requires
In July 2024, the ABA issued Formal Opinion 512, its first ethics guidance on AI tools. Five Model Rules apply directly.
Rule 1.1: Competence
Lawyers must understand how AI tools work well enough to use them competently. You don't need to understand the architecture, but you must understand what the tool does with your data.
Rule 1.6: Confidentiality
Client information entered into AI tools must remain confidential. Investigate whether the tool trains on inputs or shares data before using it with client information.
Rule 1.4: Communication
If AI materially affects client representation, disclose it. Not boilerplate. Genuinely informed consent.
Rule 5.1/5.3: Supervision
AI-generated work product requires the same supervision as work delegated to a junior associate. Review, verify, sign off.
Rule 3.3: Candor
AI-generated legal citations must be verified. Over 300 judges now have standing orders requiring disclosure of AI use in filings. Submitting unverified AI output to a court is a candor violation.
State bars are moving independently. Florida Bar Opinion 24-1 requires informed consent before using AI with client data. Oregon's Opinion 2025-205 adds a twist: if using an open model, lawyers must either get client consent or anonymize all data first. Texas Opinion 705 makes verification non-negotiable.
The work it handles
Your agent runs on a dedicated machine with a browser, file handling, persistent memory, and scheduled tasks. Here's how those capabilities apply to legal work.
Contract review
File handling · Persistent memory
Upload contracts to your agent's workspace. It reads the document, flags deviations from your standard terms, and highlights risk clauses. After reviewing 20 contracts, it knows what “standard” means in your practice. Every correction sticks. In a 2025 benchmarking study, the top AI tool achieved 73.3% reliability on contract drafting vs. 56.7% for human lawyers. In a separate LawGeex study, AI reviewed 5 NDAs in 26 seconds vs. 92 minutes for human reviewers.
Legal research
Web browsing · Web search
Your agent browses open sources, PACER, and any MCP-connected databases. It compiles precedents, drafts a research memo with citations, and saves it to your workspace. Every citation must be verified. AI hallucinates case law confidently (see risks below).
Note: No native Westlaw/LexisNexis integration. Your agent uses open web sources and any databases you connect via MCP.
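As an illustrative sketch of what connecting a database via MCP might look like: MCP servers are conventionally declared in a JSON config block. The server name, package name, and environment variable below are hypothetical, and Vessel's actual connector format may differ.

```json
{
  "mcpServers": {
    "pacer-docket": {
      "command": "npx",
      "args": ["-y", "@example/pacer-mcp-server"],
      "env": { "PACER_TOKEN": "set-via-secret-manager" }
    }
  }
}
```

Once a server is connected, the agent can call its search and fetch tools the same way it uses its built-in browser.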
Document drafting
File handling · Persistent memory
Your agent drafts in your voice because it remembers corrections. It reads prior work in the workspace, matches formatting and tone, and improves with every edit. Not a template engine. A trained assistant that compounds.
Recurring tasks
Scheduled tasks · Messaging channels
Set a weekly schedule: monitor regulatory updates, check dockets, flag filing deadlines. Your agent runs these automatically and reports via Slack, WhatsApp, or email. It doesn't wait for you to ask. You define the cadence, it handles the rest.
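A cadence like that can be sketched in crontab notation. The task names here are hypothetical placeholders; in practice you configure the schedule through the agent, not a crontab.

```
0 7 * * *    docket-check      # daily 07:00 summary of new PACER filings
0 9 * * 1    reg-watch         # Monday-morning regulatory-update scan
0 16 * * 5   deadline-review   # Friday audit of upcoming filing deadlines
```

Each entry pairs a time expression (minute, hour, day, month, weekday) with a recurring task the agent runs unattended.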
A day with your vessel
Monday morning. Three contracts arrived overnight from a client closing a deal. You forward them to your agent via WhatsApp. By the time you've made coffee, it's flagged a non-standard indemnification clause in the second contract and a missing IP assignment in the third. The first one's clean.
You review the flags, correct one (the indemnification clause is actually fine for this client's risk profile), and the agent notes it. Next time it sees that pattern in this client's contracts, it won't flag it.
After lunch, you ask it to research recent case law on non-compete enforceability in New York post-FTC rule. It browses open sources, pulls relevant decisions, and drafts a three-page memo with citations. You verify each one. Two check out. One is a state court decision it mischaracterized. You correct it. It won't make that mistake the same way again.
End of day: you set it to check PACER overnight for new filings in two open matters and send a Slack summary by 7am. Tomorrow morning, it'll be waiting for you.
What to watch out for
AI tools are powerful. They're also unreliable in ways that matter enormously in legal practice. Honest accounting of the risks is a prerequisite for responsible use.
Damien Charlotin's tracker documents over 500 cases in US courts alone where lawyers cited AI-generated fake case law. Sanctions range from $3,000 to $10,000 per incident. A Stanford Law study found hallucination rates of 17–33% even in purpose-built legal AI tools. Every output must be verified.
An AllRize 2025 study found 38.8% of law firms have no AI integration at all. 54% provide no AI training to staff. Only 9% have a written AI policy. Lawyers are using consumer AI tools on their personal devices, outside any firm governance, with client data.
Stanford's Mark Lemley has raised the concern of lost “reps” for junior lawyers. If AI handles the research and first drafts, how do associates develop the judgment that eventually makes them partners? This isn't a reason to avoid AI. It's a reason to be deliberate about how you integrate it.
Clients are arriving at consultations with ChatGPT-generated legal opinions. Some are partially right. Some are confidently wrong. Managing client expectations when they come pre-loaded with AI-generated advice is a new skill lawyers need to develop.
Extend your agent
Your agent connects to the tools you already use. Some integrations work today. Others are on the roadmap via MCP connectors.
Document management
File handling · MCP (roadmap)
iManage, NetDocuments. Agent reads and writes documents in your workspace. MCP connectors on the roadmap.
Court filing systems
Web browsing
PACER, state e-filing. Agent checks dockets, monitors filings, flags deadlines.
Communication
Messaging channels
Slack, Teams, email, WhatsApp. 20+ channels, same agent, shared memory.
Calendar + deadlines
Scheduled tasks
Automated filing-deadline monitoring, regulatory change alerts, recurring docket checks.
Legal databases
MCP (roadmap)
Westlaw, LexisNexis. Native integrations planned. Today: agent browses open web sources and PACER.
Practice management
MCP (roadmap)
Clio, PracticePanther. Feed matter context to your agent. Today: manual upload.
These integrations layer on top of a dedicated machine running an open-source agent. Your agent's workspace is isolated. Its memory persists. Every tool you connect makes it more useful, without sharing your data with another platform.
Privilege across borders
If your practice crosses borders, the regulatory landscape varies by jurisdiction. Vessel runs on GCP, currently in us-central1. Your data stays in the region where your VM runs.
No uniform federal rule on AI use. ABA Opinion 512 is advisory. State bars diverge: New York has no mandatory disclosure requirement; Florida requires informed consent; Oregon requires consent or anonymization for open models. 300+ individual judges have issued their own standing orders.
GDPR adds a layer the US doesn't have: any AI processing of personal data requires a Data Processing Agreement, a Data Protection Impact Assessment, and compliance with cross-border transfer restrictions. The CCBE Guide outlines requirements for EU lawyers. The EU AI Act classifies legal AI as “high-risk” (effective August 2026), requiring conformity assessments. The SRA is watching AI use closely in the UK.
Quebec's Law 25 is GDPR-like: explicit consent for personal data processing, cross-border transfer restrictions. Law societies in Ontario, Alberta, and British Columbia have issued AI guidance. No federal AI legislation yet.
The Victorian Legal Services Board issued a joint statement on AI use. Queensland has made disciplinary referrals with costs up to $10,000. No AI Act equivalent. Lightest regulatory touch of the five jurisdictions, but that's changing. The Australian government is consulting on mandatory AI guardrails.
What Vessel gives you today
Everything above is true regardless of what platform you use. The privilege problem, the bar requirements, the hallucination risks. Those don't go away with different infrastructure.
What infrastructure does change is the confidentiality posture. The Heppner ruling turned on the platform's terms, not on AI itself. Here's what Vessel's architecture provides, what's coming, and what it can't change.
Gibson Dunn safeguards → Vessel architecture
Each safeguard maps to a Vessel capability that is available now, on the roadmap, or a property of the architecture itself.
Your responsibility
Your vessel, an OpenClaw agent, is a tool, not a substitute for professional judgment. You are responsible for verifying every output (citations, analysis, and recommendations) before using it in client work or court filings. The agent proposes. You decide.
Explore how it works, learn how isolation works, read about data ownership, or see how Vessel handles the agent runtime.

