The Platform

From Click to Running Agent

No servers to configure. No Docker to install. No networking to debug. You name your agent, click launch, and start working in minutes.

Three steps. That's it.

01

Configure your agent

A guided wizard walks you through setup in five quick steps. No servers, no networking, no infrastructure decisions.

  • Name your agent. Pick a name or let us generate one.
  • Connect an LLM. Pick a provider (Anthropic, OpenAI, Google, OpenRouter), paste your API key, choose a model tier.
  • Add web search (optional). Brave Search or Perplexity API key lets the agent find current information.
  • Connect channels. WhatsApp, Telegram, Discord, Slack, and more. Pick the ones you use.
  • Review and launch. Confirm your choices and click launch.

No region selection, no instance sizing, no networking config. We handle all of that. Self-hosting instead? Generate a hardened config for free.
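For self-hosters, the generated config covers the same choices the wizard walks through. As a rough sketch of the shape such a file might take (every key and value below is illustrative — this is not OpenClaw's actual schema):

```yaml
# Hypothetical self-host config sketch — keys are illustrative, not the real format
agent:
  name: my-agent
llm:
  provider: anthropic          # anthropic | openai | google | openrouter
  api_key: ${ANTHROPIC_API_KEY}
  model_tier: sonnet
search:                        # optional web search
  provider: brave              # brave | perplexity
  api_key: ${BRAVE_API_KEY}
channels:
  - telegram
  - slack
```

Keeping secrets as environment-variable references rather than literal keys is the usual hardening move for a config like this.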

02

Your agent comes alive

Behind the scenes, a dedicated virtual machine spins up on Google Cloud. OpenClaw boots with your configuration already applied. A private encrypted tunnel connects your agent to the internet. Health monitoring kicks in. The dashboard shows you the progress in real time.

Typically 2–5 minutes. You'll see the status change from provisioning to active.

03

Start working

Click “Open vessel” and you're in. Your LLM is connected, your channels are live, and built-in tools (file handling, web browsing via Playwright, persistent memory, scheduled tasks) are ready to go. Start giving the agent work. Every correction makes it sharper.

Your access token is generated automatically. For the full list of capabilities, see What is OpenClaw?

What happens under the hood

You don't need to know any of this. But if you're the type who wants to understand what you're paying for, here's the full picture.

0s

You click "Launch"

Your vessel is registered and queued for provisioning.

~10s

VM created

A dedicated GCP e2-standard-2 instance boots with its own kernel, disk, and network stack.

~30s

OpenClaw starts

The agent runtime launches inside Docker on your VM. Configuration files are initialized.

~60s

Tunnel connects

A private Cloudflare Tunnel establishes an encrypted connection. No public IP is assigned.

~90s

Health check passes

The sidecar watchdog verifies the agent is responsive. Your access token is generated.

2–5 min

Active

Your dashboard shows "Active." Click to open your agent and start working.
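The timeline above is effectively a linear state machine: each stage must complete before the next begins, ending at "Active." A minimal Python sketch (state names mirror the dashboard labels above but are assumptions, not the platform's actual internals):

```python
# Sketch of the provisioning pipeline as an ordered state machine.
# State names mirror the timeline above; they are illustrative, not an API.
PROVISIONING_STEPS = [
    ("queued",        "Vessel registered and queued for provisioning"),
    ("vm_created",    "Dedicated GCP e2-standard-2 instance boots"),
    ("runtime_up",    "OpenClaw runtime launches inside Docker"),
    ("tunnel_up",     "Encrypted Cloudflare Tunnel connects; no public IP"),
    ("health_passed", "Watchdog verifies the agent; access token generated"),
    ("active",        "Dashboard shows Active"),
]

def next_state(current: str) -> str:
    """Return the state that follows `current`; 'active' is terminal."""
    names = [name for name, _ in PROVISIONING_STEPS]
    i = names.index(current)
    return names[min(i + 1, len(names) - 1)]
```

Because the pipeline is linear, the dashboard only ever needs to render "which step are we on" plus a failure flag.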

What you get

Every vessel is a complete, isolated environment for your AI agent. Not a shared container. Not a multi-tenant platform. Your own machine.

Dedicated compute

A GCP virtual machine with its own kernel, memory, and disk. No noisy neighbors. No shared resources. Performance doesn't degrade because someone else's agent is busy.

Private networking

No public IP address. No open ports. All access goes through an encrypted Cloudflare Tunnel. Your agent is reachable but not exposed.

Automated operations

Health checks every 5 minutes and automatic restarts on failure are in progress. Daily OpenClaw updates with safe rollback are planned, and monitoring is shipping soon. You never touch the infrastructure.
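As a sketch of what "health checks with automatic restart" means in practice — the check and restart hooks here are placeholders, not Vessel's actual watchdog:

```python
from typing import Callable

CHECK_INTERVAL_S = 5 * 60  # one health check every 5 minutes, per the description above

def watchdog_tick(is_healthy: Callable[[], bool],
                  restart: Callable[[], None]) -> bool:
    """Run one watchdog cycle: restart the agent if the health check fails.

    Returns True if the agent was healthy, False if a restart was triggered.
    A production sidecar would presumably add retries, backoff, and alerting.
    """
    if is_healthy():
        return True
    restart()
    return False
```

The point of the sidecar design is that the watchdog lives outside the agent process, so a hung agent can't take its own supervisor down with it.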

Your data, isolated

Conversations, memory, and configuration live on your VM's disk. Not in a shared database. Not accessible to other tenants. Not used for training.

It gets better the more you use it

This is the part most AI tools get wrong. They give you a generic model and call it done. OpenClaw is different: it's a persistent agent that accumulates context over time. Every correction, every piece of feedback, every “no, do it this way” changes how it works going forward.

A lawyer corrects a contract clause format once. The agent remembers. A consultant adjusts the tone of a client report. It sticks. A marketer refines the brand voice until the agent sounds exactly like the team. This isn't fine-tuning a model. It's building accumulated judgment that compounds with every interaction.

And because all of that context lives on your own machine, it's yours. Not shared with other users. Not fed back into a training pipeline. Not accessible to anyone but you.

Every correction sticks. Every interaction compounds. The agent handles execution. You handle judgment.

What you bring. What we handle.

Vessel handles the infrastructure. You bring the expertise.

You bring

  • Your LLM API key: Claude Opus, Sonnet, or Haiku from Anthropic. GPT-4.1 from OpenAI. Gemini 2.5 Pro from Google. Or use OpenRouter for access to all providers.
  • Your messaging connections: WhatsApp (QR scan), Telegram (BotFather token), Discord (bot from discord.com/developers), Slack (app install), and 20+ more.
  • Your expertise and corrections
  • Your judgment on what good output looks like

We handle

  • Server provisioning and VM management
  • OpenClaw installation and configuration
  • Networking, tunnels, and security hardening
  • Health monitoring, auto-restarts, and updates

Ready to start?

Vessel is currently in private beta with a small group of professionals testing the platform. Sign up for early access and we'll reach out when your spot is ready.

No credit card required to join the waitlist. No commitment. Just an email and a short description of what you'd build with your own AI agent.

Your expertise deserves an agent.

Name it. Launch it. Start working. No infrastructure required.