Why Vessel
Why Your Expertise Deserves Its Own Machine
Your AI agent isn't a stateless function. It's the accumulation of every correction you've made, every preference you've shaped, every edge case you've taught it to handle. That accumulated judgment is your competitive advantage. And right now, most platforms store it on shared infrastructure alongside everyone else's.
What your agent actually holds
A litigation partner spends six months training an AI agent on her firm's brief-writing style. Every correction compounds into something genuinely valuable: “Too formal,” and it rewrites. “Cite the holding, not the dicta.” “This judge hates passive voice.” Not the model weights (those belong to the AI provider), but the behavioral layer on top: the prompts, the context, the accumulated corrections that make this agent hers.
Consultant
Structures client deliverables in her firm's framework. The agent knows the methodology, the formatting, the client preferences.
Financial Advisor
Trained on compliance requirements and client communication style. Knows what can and can't be said in writing.
Senior Marketer
Writes in her brand voice, knows the approval workflow, remembers which claims legal rejected last quarter.
This isn't “data.” It's accumulated judgment. The difference between a generic tool and one that actually works the way you do. And it sits on a server somewhere, sharing resources with strangers.
The cost of shared infrastructure
Most AI agent platforms run on shared containers. Your agent runs alongside dozens, sometimes hundreds, of other tenants on the same machine, sharing the same operating system kernel. The isolation between them is a software boundary called a namespace. It's fast, it's efficient, and it's been breached repeatedly.
The kernel is the core of the operating system. It controls memory access, file permissions, and process isolation. When multiple containers share a kernel, a vulnerability in that kernel can expose every tenant on the machine. It's the equivalent of a shared office where every firm's filing cabinet is in the same unlocked room. The lock is software. It's been picked before.
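The kernel-sharing point is visible from inside any Linux process using nothing but the standard library. A minimal sketch (Linux-only; the paths under /proc are a standard kernel interface, not anything Vessel-specific): each entry in /proc/self/ns names the kernel namespace object a process belongs to. Containers get different namespace objects, but every container on a host reports the same kernel release, because there is only one kernel.

```python
import os

def namespace_ids() -> dict:
    """Map each namespace type (pid, net, mnt, ...) to the kernel
    object this process belongs to, e.g. 'pid:[4026531836]'."""
    ns_dir = "/proc/self/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    for name, ident in namespace_ids().items():
        print(f"{name:8} -> {ident}")
    # The namespace IDs above differ per container. This value does not:
    # every container on the host shares the same running kernel.
    print("shared kernel:", os.uname().release)
```

Run the same script on the host and inside a container on that host: the namespace IDs change, the kernel release doesn't. That single shared object is the software boundary the rest of this section is about.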
91%
of runtime container scans failed in production
Source: Sysdig 2024 Cloud-Native Security and Usage Report
That's the environment your agent's context is living in.
Shared containers
- Shared kernel across tenants
- Namespace isolation (software boundary)
- Noisy neighbor performance impact
- Single kernel vulnerability exposes all tenants
- Shared network stack
Dedicated VM
- Own kernel, own operating system
- Hardware-level isolation (hypervisor boundary)
- Guaranteed compute resources
- Kernel vulnerabilities contained to your instance
- Isolated network with tunnel-only access
What actually goes wrong
Container escapes aren't theoretical. They happen, they get CVEs, and they affect real production systems.
“Leaky Vessels” (CVE-2024-21626, January 2024). A flaw in runc, the container runtime used by Docker and most Kubernetes deployments. An attacker inside a container could escape to the host operating system. It required patching every container runtime in production. Snyk Research called it “one of the most severe container vulnerabilities in years.”
containerd (CVE-2020-15257). The runtime underlying most cloud Kubernetes services allowed containers running with host network access to reach its control socket, escalate privileges, and access other containers on the same host.
“Inception” (CVE-2023-20569). A Spectre-class side-channel attack on AMD processors that could leak data across security boundaries, including between containers on the same physical CPU. No container runtime patch fixes it. Hypervisor isolation with mandatory IBPB flushing on context switches is the meaningful mitigation. Containers sharing a kernel do not get that boundary.
Now add AI agents to this picture. An agent processes untrusted input by design: it summarizes web pages, reads uploaded documents, executes code. A prompt injection that triggers a container escape isn't science fiction. It's a plausible attack chain with well-documented components at every step.
Wiz Research has repeatedly demonstrated cross-tenant attacks in cloud environments, from exploiting shared PostgreSQL instances to escaping managed container services. Their work has led to multiple critical patches across AWS, Azure, and GCP. The pattern is consistent: software boundaries get breached; hardware boundaries hold.
What “your own machine” means
Each vessel runs on a dedicated e2-standard-2 GCP Compute Engine VM (2 vCPUs, 8 GB of memory). Not a container on a shared machine. A full virtual machine with its own kernel, its own memory, and its own disk. Google's hypervisor (the hardware-level layer that separates VMs) is the same technology that isolates GCP customers from each other in production.
Your vessel has no public IP address. No SSH access. No inbound ports. The only way to reach it is through an encrypted Cloudflare Tunnel, a private connection from the VM to Cloudflare's network. No firewall rules to misconfigure. No ports to scan. The attack surface is minimal by design, not by policy.
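To make the tunnel-only model concrete, here is what an outbound-only Cloudflare Tunnel configuration looks like. This is an illustrative sketch, not our production config; the tunnel ID, hostname, and local port are placeholders:

```yaml
# /etc/cloudflared/config.yml -- illustrative sketch, placeholder values.
# cloudflared dials OUT to Cloudflare's edge; the VM listens on no public port.
tunnel: 00000000-0000-0000-0000-000000000000   # placeholder tunnel UUID
credentials-file: /etc/cloudflared/tunnel-creds.json

ingress:
  - hostname: vessel.example.com        # placeholder hostname
    service: http://localhost:8080      # agent reachable only on loopback
  - service: http_status:404            # refuse everything else
```

The direction of the connection is the point: the VM initiates it, so there is no listening socket for the internet to find.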
Your vessel / isolation layers
Google documents the security properties of their hypervisor in their infrastructure security design overview. VM isolation isn't a feature we added. It's a property of the infrastructure we chose specifically because shared containers aren't good enough for what AI agents do.
The trade-off
Dedicated VMs cost more than shared containers. That's real. No hedge.
A container is a slice of a machine. A VM is the whole machine. You're paying for compute that's exclusively yours: no noisy neighbors, no shared kernel, no inherited vulnerabilities from other tenants.
For experiments, internal tools, low-risk workloads? Shared containers are a perfectly valid choice. They're cheap, fast to spin up, and the isolation trade-offs don't matter much when the stakes are low.
But AI agents that hold client communications, legal strategy, financial analysis, proprietary frameworks? Agents that process untrusted input from the open web? Agents whose accumulated corrections represent months of expert refinement?
That's a different threat model. And it deserves different infrastructure.
If your expertise is your edge, it deserves its own machine.
Read the full security architecture, check your OpenClaw instance for free, or generate a hardened config.