Who this is for: This guide is for anyone running Ubuntu 22.04+, Debian 12+, or compatible distros who wants to set up OpenClaw as a personal AI agent. We cover the full process: installing Node.js 22, running OpenClaw, setting up systemd for 24/7 operation, connecting Telegram, and configuring local AI models with Ollama. No shortcuts, no skipped steps.
Why Linux Is Excellent for OpenClaw
Linux users have always had a head start with self-hosted tools, and OpenClaw is no exception. The combination of systemd for reliable process management, native shell scripting for custom automations, and the ability to run headlessly on a server makes Linux arguably the best platform for running OpenClaw as a serious, always-on AI agent.
Ubuntu and Debian are the two most popular distros in the OpenClaw community for good reason: they have stable, long-term support releases; excellent Node.js package support; and the most tested systemd configurations for daemon management. If you're running a home server, VPS, or a dedicated desktop machine, this guide covers your setup completely.
One key advantage Linux has over macOS and Windows: you can run OpenClaw headlessly on a cheap VPS for as little as $5/month, giving you a cloud-hosted personal AI agent that runs 24/7 without any home hardware. For users who want their AI available even when they're away from home, this is a compelling option.
System Requirements
Step 1: Install Node.js 22 on Ubuntu/Debian
Ubuntu and Debian's default apt repositories include an older Node.js version. OpenClaw requires Node.js 22 or later. Install it using the official NodeSource repository:
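Following NodeSource's documented setup flow, the install looks like this:

```shell
# Add the NodeSource repository for Node.js 22, then install
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt-get install -y nodejs

# Verify the version — should print v22.x
node --version
```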
Step 2: Install OpenClaw
With Node.js 22 installed, run the official OpenClaw installer:
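Assuming the project is published on npm under the package name openclaw (check the project's README for the exact name), a global install is:

```shell
# Package name "openclaw" is an assumption — confirm it in the project docs
sudo npm install -g openclaw

# Confirm the CLI landed on your PATH
openclaw --version
```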
If you get a "permission denied" error with npm global install, fix npm's permissions first:
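The standard fix is to point npm's global prefix at a directory your user owns, so sudo is never needed:

```shell
# Point npm's global prefix at a directory you own
mkdir -p ~/.npm-global
npm config set prefix ~/.npm-global

# Put it on your PATH, then retry the install without sudo
echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
npm install -g openclaw
```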
Step 3: Run the Onboarding Wizard
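To start the wizard, run the onboarding subcommand — the exact name is an assumption here, so check openclaw --help if it differs:

```shell
# Subcommand name assumed; verify with: openclaw --help
openclaw onboard
```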
The interactive wizard guides you through:
Enter your Anthropic Claude API key (get it at console.anthropic.com) or OpenAI API key. For a fully offline setup, choose Ollama (configured in Step 5).
Open Telegram → search @BotFather → /newbot → set name and username → copy the HTTP API token → paste it into the wizard. Takes 2 minutes.
Name your agent and provide a background context — your job, projects, preferences. This initial context shapes every future interaction.
Step 4: Set Up systemd for 24/7 Operation
The cleanest way to run OpenClaw permanently on Linux is with a systemd user service. This keeps OpenClaw running across reboots and restarts it automatically if it crashes.
Option A: Use the Built-in Daemon Install (Easiest)
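A sketch of the built-in install, assuming the CLI exposes a daemon subcommand (verify the name with openclaw --help):

```shell
# Hypothetical subcommand — consult openclaw --help for the actual one
openclaw daemon install
```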
The built-in daemon installer creates and enables a systemd user service for you. Check its status with:
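Because this is a user service, systemctl needs the --user flag (the unit name openclaw is assumed):

```shell
# Check the user service and follow its live logs
systemctl --user status openclaw
journalctl --user -u openclaw -f
```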
Option B: Manual systemd Service (For Servers / System-Wide)
For a system-wide service (runs even without a logged-in user), create the service file manually:
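Create the unit file in the system-wide location with your editor of choice:

```shell
sudo nano /etc/systemd/system/openclaw.service
```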
Paste the following content:
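A minimal unit sketch — the binary path, service account, and foreground run command are assumptions, so check which openclaw and openclaw --help and adjust:

```ini
[Unit]
Description=OpenClaw AI agent
After=network-online.target
Wants=network-online.target

[Service]
# Run `which openclaw` and adjust ExecStart if the binary lives elsewhere
ExecStart=/usr/bin/openclaw
Restart=on-failure
RestartSec=5
# Assumes a dedicated "openclaw" user exists (see the privacy section)
User=openclaw
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
```

Then load and start it with sudo systemctl daemon-reload && sudo systemctl enable --now openclaw.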
Step 5: Add Local AI Models with Ollama (Optional)
For a completely private, no-API-cost setup, install Ollama and run AI models locally:
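Ollama ships a documented one-line installer; after that, pull a model sized for your RAM:

```shell
# Ollama's official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model sized for your hardware (an 8B model wants roughly 8GB RAM)
ollama pull llama3.1:8b

# Quick smoke test
ollama run llama3.1:8b "Say hello"
```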
In your OpenClaw configuration, set the AI provider to Ollama with endpoint http://localhost:11434. Your agent now processes all requests locally — zero internet needed for AI inference, zero API costs, complete privacy.
Privacy-First Configuration on Linux
Linux users tend to care deeply about privacy, and OpenClaw is architected to support it. Here are the key privacy settings to configure:
With Ollama running locally, no query text ever leaves your machine — all AI inference happens on your own hardware. Combined with OpenClaw's local memory storage, you have a fully private AI agent.
OpenClaw stores memory in ~/.openclaw/. For sensitive use, encrypt your home directory with LUKS or use ecryptfs to protect this data at rest.
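Even without full-disk encryption, it's worth restricting the memory directory to your user:

```shell
# Owner-only access to OpenClaw's memory store
chmod 700 ~/.openclaw
```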
By default, OpenClaw's gateway listens only on localhost (127.0.0.1:18789). Keep it this way unless you specifically need remote access. Use SSH tunneling or Tailscale for secure remote access instead of exposing the port directly.
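For remote access over SSH, a local port forward reaches the gateway without ever exposing it (replace user@your-server with your own host):

```shell
# Forward the remote gateway port to your local machine, then
# connect to http://localhost:18789 locally
ssh -N -L 18789:127.0.0.1:18789 user@your-server
```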
Create a dedicated user for OpenClaw with minimal permissions. This limits the blast radius if the process is ever compromised. Use Linux's principle of least privilege.
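One way to create such an account is a system user with no login shell:

```shell
# Locked-down service account: no password login, no interactive shell
sudo useradd --system --create-home --shell /usr/sbin/nologin openclaw
```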
Common Linux Issues and Fixes
❌ "openclaw: command not found" after npm install
Fix: Your npm global bin directory isn't in PATH. Add it: echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.bashrc && source ~/.bashrc
❌ systemd service fails to start
Fix: Check the path to the openclaw binary with which openclaw and update the ExecStart line in the service file accordingly. Also check logs with journalctl -u openclaw -n 50 (for a user service, journalctl --user -u openclaw -n 50).
❌ API calls time out or fail from Linux VPS
Fix: Your VPS may be in a region where Anthropic/OpenAI API endpoints are slow or blocked. Use a VPN or proxy to route API traffic through a US/EU server. See GreenVPN below.
❌ Ollama out of memory error
Fix: The model requires more RAM than available. Use a smaller model: if 14B fails on 8GB RAM, switch to 7B. Check available memory with free -h before pulling models.
API Access on Linux: Why a VPN Matters
If you're running OpenClaw in cloud API mode (Claude or GPT-4o), your Linux machine sends requests to API servers in the US. In many regions — particularly Asia and the Middle East — these connections are throttled or unreliable at peak hours.
For a Linux server or desktop running OpenClaw as a permanent service, the VPN needs to be always-on and stable. You don't want it dropping connections and breaking your agent's ability to respond. Configure your VPN client to reconnect automatically and set it up as a systemd service alongside OpenClaw.
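As one example, a WireGuard-based VPN already runs as a systemd service via wg-quick — assuming a config at /etc/wireguard/wg0.conf, substitute your own provider's client as needed:

```shell
# Bring the tunnel up now and on every boot
sudo systemctl enable --now wg-quick@wg0
```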
Even if you use local Ollama models for AI inference, you still need reliable internet for Telegram (which routes through Telegram's API servers). A fast, stable VPN ensures your Telegram bot messages arrive and are processed without delay.
GreenVPN: Built for Always-On Linux Servers
GreenVPN is the first choice for Linux server operators who run OpenClaw continuously. With 1000Mbps gigabit bandwidth, 70+ global server locations, and 10+ years of uninterrupted operation, GreenVPN gives your Linux AI agent the stable, fast connectivity it needs — day and night.
- ✅ 1000Mbps gigabit bandwidth — zero API call delays
- ✅ 70+ countries — low latency to US Claude and OpenAI servers
- ✅ Only $1.50/month — fits any home server budget
- ✅ 30-day money-back guarantee — zero risk
- ✅ Linux CLI support — run as a systemd service alongside OpenClaw
- ✅ 10+ years of proven stability — the most reliable VPN for serious users
Frequently Asked Questions
Q: Can I run OpenClaw on a Raspberry Pi?
A: Yes, on a Raspberry Pi 4 or Pi 5 with 4GB+ RAM running 64-bit Raspberry Pi OS. Use cloud API mode (Claude/GPT) for best performance. Running Ollama local models on a Pi works but is slow for models larger than 3B parameters. The Pi 5 with 8GB RAM handles 7B models reasonably well.
Q: Can I run OpenClaw on a VPS?
A: Absolutely. A $6/month VPS with 2GB RAM runs OpenClaw in cloud API mode perfectly. This gives you a truly always-on agent that works even when your home internet or hardware is down. Popular VPS providers like DigitalOcean, Hetzner, and Vultr all work well.
Q: How do I update OpenClaw on Linux?
A: Stop the service, update, then restart: sudo systemctl stop openclaw && npm install -g openclaw@latest && sudo systemctl start openclaw (use systemctl --user if you installed via the built-in daemon install). OpenClaw updates frequently — check the GitHub releases page or subscribe to release notifications.
Q: Does OpenClaw work on Arch Linux?
A: Yes. Arch and its derivatives (Manjaro, EndeavourOS) work fine. Install Node.js via sudo pacman -S nodejs npm — Arch's repos usually have recent Node.js versions. The rest of the setup is identical to the Ubuntu/Debian guide above.