
Can OpenClaw Read Your Files? Here's What's Actually True.
A business owner's fear that OpenClaw will expose their local files is understandable. When OpenClaw runs inside a Vessel on a dedicated VM, that fear is structurally solved, not promised away.
A colleague who runs a consulting practice told me he'd read something online suggesting that OpenClaw would read all his data and that client information could end up in the wrong hands. He wanted to know whether he needed to do some kind of data cleanup before running it, and whether his clients' information was safe.
The answer is that no cleanup is needed and the data is safe. But the reason matters, because "trust us, it's fine" is not an answer for a professional who carries liability for their clients' information. Here is the actual mechanism.
The Vessel is a separate computer
When you run OpenClaw through Vessel, it does not run on your machine. It runs inside a dedicated virtual machine in the cloud. That VM is the Vessel.
Your laptop and the Vessel are two distinct computers. They do not share a file system, they do not share memory, and there is no network path between them except the one you open when you visit the Vessel dashboard in your browser. Your Documents folder, your client files, your financial records, your desktop: none of these are on the Vessel. They are on your machine. The Vessel has never seen them and has no way to reach them.
This is not a permissions setting. It is not a policy. It is a physical separation. The agent lives in the cloud. Your files live on your hardware. Those are two different computers.
So the question "will OpenClaw read my files" has a simple answer when you run it on Vessel: it cannot, because your files are not there.
What OpenClaw can access
The Vessel contains OpenClaw and the connections you have explicitly authorized. That is the full inventory of what the agent can see.
If you connect your Gmail account, it can read and send email from that account. If you connect Slack, it can read and send messages in the channels you permit. If you connect your calendar, it can read your schedule. These connections go through standard authorization flows, the same ones you use when you allow any application to connect to Google or Slack. You approve each one. You define the scope.
That list is the entire perimeter. Nothing on your laptop is inside it. Nothing you have not explicitly connected is inside it.
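As an illustration of how narrow that perimeter is, here is a sketch of the kind of scoped authorization URL a standard OAuth flow uses when you connect a service like Gmail. The client ID and redirect URI below are placeholders, not Vessel's actual values; the endpoint and parameters follow Google's documented OAuth 2.0 flow:

```python
from urllib.parse import urlencode

# Hypothetical scoped authorization request, following Google's standard
# OAuth 2.0 web flow. The client_id and redirect_uri are placeholders.
def build_auth_url(scopes):
    params = {
        "client_id": "example-client-id.apps.googleusercontent.com",
        "redirect_uri": "https://example.invalid/oauth/callback",
        "response_type": "code",
        # The scope list IS the perimeter: only what you grant here
        # is reachable. Nothing on your laptop appears in it.
        "scope": " ".join(scopes),
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

url = build_auth_url(["https://www.googleapis.com/auth/gmail.readonly"])
print(url)
```

The scope parameter is the whole story: you see exactly what you are granting at the moment you grant it, and the grant covers nothing else.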
What happens if someone hacks the Vessel
This is the second question worth answering directly, because "separate computer" raises an obvious follow-up: what if someone gets into that computer?
Each Vessel is an isolated virtual machine. The isolation is enforced by the hardware virtualization features of the cloud infrastructure it runs on. One VM cannot read the memory of another VM, and one VM cannot access the disk of another VM. This is not a software promise; it is how hardware virtualization works. Google Cloud Platform's hypervisor enforces the boundary using those hardware features.
This matters for two reasons.
First, if someone compromised your Vessel, they would get a Linux box running OpenClaw, plus whatever services you had connected via OAuth. They would not get your local files, because those are on your machine, not the Vessel. The blast radius is bounded.
Second, if someone compromised any Vessel, they would not be able to cross into another customer's Vessel. Each one is walled off from every other one at the hardware level. This is structurally different from shared container hosting, where a container escape can put an attacker on the host machine that other containers share. On dedicated VMs, that path does not exist.
Compare: running OpenClaw on your laptop
Running OpenClaw locally is not the data disaster people imagine, but it does create a different risk worth understanding.
The agent still cannot read arbitrary files it was not given access to. That part is the same. What changes is that your agent is now running on the same machine as everything sensitive you own. Your client contracts, your financial records, your saved credentials: all on the same hardware as the agent process. If something goes wrong with the software, you are dealing with it on the machine that holds everything.
Running on a dedicated Vessel means those two things never share a machine. Something going wrong on the Vessel stays contained to the Vessel. Your laptop remains what it was before: a separate computer that the agent has no access to.
The LLM API question
One more concern worth addressing: when OpenClaw processes a request, it sends a prompt to the AI model you have configured (Anthropic, OpenAI, or Google Gemini) through their API. That conversation does travel to their servers. This is not hidden and it is not unique to OpenClaw. It is exactly the same data flow as pasting a document into Claude or ChatGPT yourself.
All three major providers are explicit that paid API traffic is not used for model training.
Anthropic: "We will not use your chats or coding sessions to train our models, unless you choose to participate in our Development Partner Program." (source)
OpenAI: "Data sent to the OpenAI API is not used to train or improve OpenAI models (unless you explicitly opt in to share data with us)." (source)
Google Gemini: "When you use Paid Services, Google doesn't use your prompts or responses to improve our products." (source)
One caveat: Google's free Gemini API tier operates under different terms and does use content for product improvement. Vessel uses the paid API tier, so the above applies.
The practical point in all three cases is the same: the data that travels to the model is only the content you put into the conversation. It is not a background sweep of your files. It is not ambient collection. It is a deliberate API call with the context you chose to include.
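To make that concrete, here is a minimal sketch of what such an API call actually contains, using the general shape of a chat-style LLM request body. The model name and prompt are illustrative placeholders, not what OpenClaw or Vessel actually sends:

```python
import json

# Sketch of a typical LLM API request body (chat-messages shape; the
# model name and content below are illustrative placeholders).
def build_request(user_content):
    return {
        "model": "example-model-name",  # placeholder
        "max_tokens": 1024,
        # Only what you put here travels to the provider. There is no
        # field that sweeps files off disk; the payload is exactly this.
        "messages": [{"role": "user", "content": user_content}],
    }

body = build_request("Summarize the meeting notes pasted below: ...")
print(json.dumps(body, indent=2))
```

The entire transmission is the explicit content of that one structure. If a document is not in the `messages` field, it does not travel.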
The memory and context that OpenClaw builds over time, the knowledge it accumulates about how you work, stays on the Vessel. On shared hosting, that server belongs to someone else. On a dedicated Vessel, it belongs to you.
No cleanup needed, and here is why
My colleague asked whether he needed to sanitize his systems before running an AI agent. The answer is no, and the reason is simple: the agent is not on his systems. It runs on a separate computer in the cloud that has never seen his local files and cannot reach them.
The question worth asking before you start is not "what is on my machine that the agent might find." It is "what services am I going to connect to this agent, and am I comfortable with it acting on my behalf in those places." That is a much narrower question, and it is entirely in your control.
Make a short list of the connections you plan to authorize. Gmail, Slack, calendar, whatever is relevant to the work you want it to do. That list is the perimeter. Everything outside it remains exactly where it is.
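That planning step can be as simple as writing the list down and treating it as a default-deny perimeter. A hypothetical sketch, where the connector names are examples of what you might authorize:

```python
# Hypothetical perimeter allow-list: the agent may only act through
# connections you explicitly authorized. Everything else is denied
# by default, including anything on your local machine.
AUTHORIZED_CONNECTIONS = {"gmail", "slack", "calendar"}

def is_inside_perimeter(connection):
    return connection in AUTHORIZED_CONNECTIONS

print(is_inside_perimeter("gmail"))        # True: explicitly authorized
print(is_inside_perimeter("local_files"))  # False: never granted
```

The point of the exercise is not the code; it is that the question "what can the agent touch" has a finite, written-down answer that you control.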
The structural answer
The privacy guarantee here is not a promise made in a terms of service. It is a consequence of architecture. The Vessel is a separate computer. Your files are on a different machine. Hardware-level VM isolation means no other customer's Vessel can see yours, and yours cannot see theirs.
For a professional who carries accountability to clients, that distinction matters. "We promise not to look" is a policy. "There is no path from that machine to your files" is a structure.
I'm building Vessel, dedicated private hosting for OpenClaw agents. Each agent runs on its own isolated server in the cloud. More on how the isolation works: vesselofone.com/platform/security and vesselofone.com/why/isolation.

