Remote OpenClaw Blog
OpenClaw on Raspberry Pi 5: Complete Setup Guide [2026]
Why Run OpenClaw on a Raspberry Pi 5?
The Raspberry Pi 5 is the first Pi model with enough horsepower to run OpenClaw comfortably. With 8GB of RAM, a quad-core Cortex-A76 processor clocking up to 2.4GHz, and native PCIe support, it finally crossed the threshold from "barely possible" to "actually practical" for AI agent workloads.
There are several compelling reasons to run OpenClaw on a Pi 5 instead of renting a VPS. The most obvious is cost. A VPS capable of running OpenClaw costs $5-20 per month. A Raspberry Pi 5 costs about $80 one-time, draws roughly 5-8 watts under load, and adds maybe $1-2 per month to your electricity bill. The hardware pays for itself within the first year, and every year after that you save roughly $50-200 compared to cloud hosting.
The second reason is privacy. When you run OpenClaw on a Pi sitting on your desk, your conversation logs, API keys, and agent memory never leave your local network. There is no VPS provider who could theoretically access your data, no cloud account that could be compromised. Your data stays physically in your possession.
The third reason is always-on availability without recurring costs. Once set up, your Pi runs indefinitely. No monthly invoices, no payment method expirations, no surprise bills. It just sits there, running your agent, costing almost nothing.
The tradeoff is performance. A Pi 5 is significantly slower than a modern VPS for CPU-intensive tasks. If you are running Ollama with local models, inference will be slower than cloud APIs. And if you need to serve multiple users simultaneously, a Pi will struggle. But for personal automation — managing your calendar, processing emails, running scheduled tasks, answering messages on one or two channels — a Pi 5 handles the workload just fine.
Hardware Requirements
Here is exactly what you need to buy:
| Component | Recommendation | Approx. Price |
|---|---|---|
| Raspberry Pi 5 | 8GB model (4GB is not enough) | $80 |
| Power supply | Official Raspberry Pi 27W USB-C PSU | $12 |
| MicroSD card | Samsung EVO Plus 128GB (A2 rated) | $14 |
| Cooling | Official Pi 5 Active Cooler or Argon NEO 5 | $5-25 |
| Case | Official Pi 5 case or Argon NEO 5 case | $10-25 |
| Ethernet cable | Cat 6 (Wi-Fi works but ethernet is more reliable) | $5 |
Total: approximately $100-160 depending on case and cooling choices.
The 8GB model is non-negotiable. OpenClaw uses 800MB-1.2GB of RAM. Docker overhead adds another 100-200MB. If you want to run Ollama alongside it with even the smallest model, you need at least 6GB total. The 4GB Pi 5 will work for OpenClaw alone (without local models), but you will be constantly near the memory ceiling with no room for growth.
For storage, an A2-rated microSD card is important. The A2 specification guarantees faster random read/write speeds, which directly impacts Docker container startup times and OpenClaw's memory system performance. Avoid cheap no-name cards — they will degrade quickly under the constant writes from logging and memory operations.
If you want even better storage performance, the Pi 5's PCIe connector supports NVMe SSDs via an M.2 HAT. An NVMe SSD will give you roughly 5-10x faster storage I/O compared to microSD. This is overkill for most OpenClaw deployments, but if you plan to run Ollama with models that need fast disk access for loading, it makes a noticeable difference.
Operating System Setup
Start with Raspberry Pi OS Lite (64-bit). The Lite version has no desktop environment, which saves about 500MB of RAM that OpenClaw can use instead. You will manage everything via SSH, which is how you should manage a headless server anyway.
Flash the image using Raspberry Pi Imager. During the flashing process, click the gear icon to pre-configure your Wi-Fi credentials, hostname, SSH access, and locale settings. Set the hostname to something memorable like openclaw-pi so you can access it as openclaw-pi.local on your network.
After first boot, SSH in and run the initial updates:
sudo apt update && sudo apt upgrade -y
Then configure the Pi for headless server operation using raspi-config:
sudo raspi-config
Navigate to System Options and disable the screen blanking and auto-login to desktop. Under Performance, set the GPU memory split to 16MB (the minimum) since you are running headless and do not need GPU memory for display. This returns the difference to system RAM.
Set the Pi to boot to CLI (console) rather than desktop. Enable SSH if you did not do it during flashing. Reboot when prompted.
Installing Docker on Pi 5
Docker is the cleanest way to run OpenClaw on a Pi. It handles all dependencies, makes updates simple, and ensures you are running the same container image that works on every other platform.
Install Docker using the official convenience script:
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker
Verify the installation:
docker --version
docker compose version
Both commands should return version numbers. The Pi 5 runs ARM64, and OpenClaw publishes official ARM64 Docker images, so there are no compatibility issues. You will be running native ARM binaries, not emulated x86.
One important Docker configuration for the Pi: set the default log driver to limit log file sizes. Docker logs can fill up your microSD card if left unchecked. Create or edit /etc/docker/daemon.json:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Restart Docker after this change: sudo systemctl restart docker
Deploying OpenClaw with Docker Compose
Create a directory for your OpenClaw deployment:
mkdir -p ~/openclaw && cd ~/openclaw
Create a docker-compose.yml file:
version: "3.8"
services:
  openclaw:
    image: openclaw/openclaw:latest
    container_name: openclaw
    restart: always
    ports:
      - "3008:3008"
    volumes:
      - ./data:/app/data
    environment:
      - OPENCLAW_MODEL_PROVIDER=ollama
      - OPENCLAW_MODEL_NAME=phi3:mini
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
      - OPENCLAW_GATEWAY_TOKEN=your-secret-token-here
    extra_hosts:
      - "host.docker.internal:host-gateway"
Replace your-secret-token-here with a strong random string. This token protects your OpenClaw web UI and API. Generate one with: openssl rand -hex 32
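To keep the token out of the compose file entirely, you can generate it and store it in a `.env` file next to `docker-compose.yml`. A minimal sketch:

```shell
# Generate a 256-bit token and keep it out of docker-compose.yml by
# writing it to a .env file in the same directory.
TOKEN=$(openssl rand -hex 32)
echo "OPENCLAW_GATEWAY_TOKEN=${TOKEN}" > .env
chmod 600 .env                      # readable only by you
echo "token length: ${#TOKEN}"      # prints: token length: 64
```

Docker Compose reads a `.env` file in the project directory for variable interpolation, so you can then reference the token in the compose file as `- OPENCLAW_GATEWAY_TOKEN=${OPENCLAW_GATEWAY_TOKEN}` instead of pasting it inline.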
If you plan to use cloud AI models (Claude, GPT-4o) instead of local Ollama models, replace the model environment variables with your API key:
- OPENCLAW_MODEL_PROVIDER=anthropic
- OPENCLAW_MODEL_NAME=claude-sonnet-4-20250514
- ANTHROPIC_API_KEY=your-anthropic-key
Start the container:
docker compose up -d
Check it is running:
docker logs openclaw --tail 50
You should see OpenClaw's startup sequence. Access the web UI at http://openclaw-pi.local:3008 from any device on your network.
Running Ollama for Local Models
Running Ollama alongside OpenClaw gives you a completely self-contained AI agent with zero API costs. The Pi 5 can handle small models surprisingly well.
Install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Pull a model that works well on Pi 5 hardware:
ollama pull phi3:mini
Phi-3 Mini (3.8B parameters) is the sweet spot for Pi 5. It uses approximately 2.5GB of RAM in its default Q4_K_M quantization, leaving enough headroom for OpenClaw and the operating system. Here is how the popular small models compare on Pi 5:
| Model | Parameters | RAM Usage | Speed (tokens/sec) | Quality |
|---|---|---|---|---|
| TinyLlama 1.1B | 1.1B | ~900MB | ~18 t/s | Basic |
| Phi-3 Mini | 3.8B | ~2.5GB | ~8 t/s | Good |
| Gemma 2B | 2B | ~1.5GB | ~12 t/s | Decent |
| Llama 3 8B (Q4) | 8B | ~5GB | ~3 t/s | Very good |
Llama 3 8B technically fits in memory on an 8GB Pi 5, but it leaves almost nothing for OpenClaw and the OS, causing heavy swap usage and very slow inference. Stick with Phi-3 Mini or smaller unless you have added swap space and can tolerate 3-5 second response times per token batch.
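A quick sanity check on the table's RAM figures: Q4_K_M stores roughly 4.5 bits per weight, so the weights alone take about params (in billions) times 4.5/8 GB, with the runtime adding a few hundred MB of KV cache and buffers on top. A rough back-of-envelope sketch:

```shell
# Rough weight-memory estimate for Q4_K_M models: params (billions)
# x 4.5 bits / 8 bits-per-byte = GB of weights. KV cache and runtime
# buffers add a few hundred MB on top of this.
for entry in "phi3-mini 3.8" "llama3-8b 8"; do
  set -- $entry
  awk -v name="$1" -v p="$2" \
    'BEGIN { printf "%s: ~%.1f GB of weights\n", name, p * 4.5 / 8 }'
done
```

Both estimates (about 2.1 GB and 4.5 GB of weights) land close to the observed totals in the table once overhead is added.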
Verify Ollama is running and accessible:
curl http://localhost:11434/api/tags
This should return a JSON list of your installed models. OpenClaw connects to Ollama via the OLLAMA_BASE_URL environment variable set in your Docker Compose file.
Performance Expectations and Benchmarks
Let's set realistic expectations. A Raspberry Pi 5 is not a cloud server. Here is what you can expect for different workloads:
With cloud API models (Claude, GPT-4o): Performance is essentially the same as any other deployment. The Pi sends API requests and receives responses. The bottleneck is the API latency, not the Pi hardware. Response times are typically 1-5 seconds depending on the model and prompt length. This is the best experience on a Pi.
With Ollama + Phi-3 Mini: Expect approximately 8 tokens per second for generation. A typical 200-token response takes about 25 seconds. This is noticeably slower than cloud APIs but perfectly usable for asynchronous tasks like processing emails, generating reports, or responding to messages where a 30-second delay is acceptable.
With Ollama + TinyLlama: Faster at roughly 18 tokens per second, but the model quality is significantly lower. TinyLlama struggles with complex reasoning tasks. Best used for simple classification, routing, or template-based responses.
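Those throughput numbers translate directly into wall-clock latency: response time is roughly output tokens divided by generation rate, plus prompt-processing time on long contexts. A quick estimate using the rates quoted above:

```shell
# Latency estimate: output tokens / generation rate (t/s). This
# ignores prompt processing, which grows with context length.
TOKENS=200
for entry in "tinyllama 18" "phi3-mini 8" "llama3-8b 3"; do
  set -- $entry
  echo "$1: ~$((TOKENS / $2))s for a ${TOKENS}-token reply"
done
```

That works out to roughly 11, 25, and 66 seconds respectively, which is why Llama 3 8B is hard to recommend for interactive use on this hardware.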
OpenClaw web UI: The management dashboard loads in 2-3 seconds on first visit (versus near-instant on a VPS). Subsequent navigation is responsive. You will not notice meaningful slowdown in the UI during normal operation.
Concurrent users: A Pi 5 can handle 1-2 simultaneous conversations comfortably. Beyond that, you will start seeing response delays as the CPU queues requests. This is a single-user or small-household device, not a production server for multiple clients.
The hybrid approach works well: use Ollama for simple, frequent tasks (message routing, quick classifications) and route complex tasks to cloud APIs (Claude for reasoning, GPT-4o for code generation). OpenClaw's multi-model routing makes this configuration straightforward.
Always-On Configuration
For a Pi that runs OpenClaw 24/7, you need to handle several reliability concerns.
Automatic container restart: The restart: always directive in Docker Compose handles container crashes. Docker will automatically restart OpenClaw if it crashes or if the Pi reboots.
Docker auto-start on boot: Enable the Docker service:
sudo systemctl enable docker
Trim boot-time extras: Edit /boot/firmware/config.txt (or /boot/config.txt on older setups) and add the following to skip the boot splash and disable the activity LED:
disable_splash=1
dtparam=act_led_trigger=none
In raspi-config, ensure there is no screen blanking or sleep timeout configured.
Health check cron job: Add a simple health check that runs every 5 minutes:
crontab -e
Add this line:
*/5 * * * * docker inspect --format='{{.State.Running}}' openclaw | grep -q true || docker compose -f ~/openclaw/docker-compose.yml up -d
This checks if the OpenClaw container is running and restarts it if not. It is a belt-and-suspenders approach on top of Docker's own restart policy.
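If you prefer a standalone script over the one-liner (easier to extend with logging or notifications later), here is a minimal sketch, assuming the container name and compose path from the earlier examples; the error message on the last line is a placeholder for whatever alerting you use:

```shell
#!/usr/bin/env bash
# Health-check sketch: restart the container when Docker reports it
# as not running. Container name and compose path match the examples
# earlier in this guide; adjust them for your deployment.
needs_restart() {
  # Any state other than the literal string "true" means restart.
  [ "$1" != "true" ]
}

STATE=$(docker inspect --format '{{.State.Running}}' openclaw 2>/dev/null || echo "false")
if needs_restart "$STATE"; then
  docker compose -f "$HOME/openclaw/docker-compose.yml" up -d \
    || echo "openclaw restart failed" >&2
fi
```

Save it as, say, ~/openclaw/healthcheck.sh, make it executable, and point the cron entry at it instead of the inline command.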
Swap space: Even with 8GB of RAM, adding swap prevents out-of-memory kills during temporary memory spikes:
sudo dphys-swapfile swapoff
sudo nano /etc/dphys-swapfile
# Set CONF_SWAPSIZE=2048
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
This gives you 2GB of swap on the microSD card. It is slow, but it prevents crashes. If you have an NVMe SSD, put the swap there instead for much better swap performance.
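If you do have an NVMe drive, dphys-swapfile can relocate the swap file with one config change. A sketch of the relevant /etc/dphys-swapfile lines, assuming the drive is mounted at /mnt/nvme (adjust the path to your actual mount point):

```shell
# /etc/dphys-swapfile excerpt: place the swap file on the NVMe drive.
# Assumes the drive is mounted at /mnt/nvme; CONF_SWAPFILE defaults
# to /var/swap on the microSD card.
CONF_SWAPFILE=/mnt/nvme/swap
CONF_SWAPSIZE=2048
```

Re-run sudo dphys-swapfile setup and sudo dphys-swapfile swapon after editing, as in the steps above.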
Power stability: The official Raspberry Pi 27W USB-C power supply is essential. Underpowered supplies cause under-voltage throttling, which dramatically reduces performance and can corrupt the filesystem. For mission-critical deployments, consider a UPS HAT like the PiSugar 3 or Geekworm X728, which provide battery backup during short power outages and clean shutdown during longer ones.
Cooling and Thermal Management
The Pi 5 runs significantly hotter than previous models. Under sustained load (which is exactly what running OpenClaw and Ollama produces), the CPU will thermal throttle without adequate cooling. Thermal throttling reduces the CPU clock speed, which directly impacts Ollama inference performance and OpenClaw response times.
Minimum: Official Pi 5 Active Cooler ($5). This is a small heatsink-fan combo that clips directly onto the Pi 5 board. It keeps temperatures below 60C under sustained load. This is the minimum recommended cooling for running OpenClaw 24/7.
Better: Argon NEO 5 BRED case ($25). This is an aluminum case that acts as a giant heatsink with a built-in fan. It keeps temperatures below 50C even under full load. The enclosed design also protects the board from dust, which matters for a device that will sit running for months or years.
Monitor temperatures: Check your CPU temperature with:
vcgencmd measure_temp
Under sustained OpenClaw + Ollama load:
- Below 60C: Healthy, no throttling
- 60-70C: Warm but still within spec
- 70-80C: Starting to throttle, need better cooling
- Above 80C: Heavy throttling, fix immediately
You can add temperature monitoring to your cron jobs or have OpenClaw itself monitor and alert you if temperatures exceed thresholds. Some operators set up a simple skill that checks vcgencmd measure_temp every 10 minutes and sends a Telegram alert if the temperature exceeds 75C.
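That alert skill boils down to parsing vcgencmd's temp=NN.N'C output and comparing it against a threshold. A minimal sketch: the echo lines stand in for whatever notification channel you use, and the fallback reading exists only so the script also runs on non-Pi hosts:

```shell
# Temperature alert sketch: parse vcgencmd output (format: temp=61.2'C)
# and print an alert line when the reading exceeds the threshold.
# Replace the echo with your notification command (Telegram, email, ...).
THRESHOLD=75
READING=$(vcgencmd measure_temp 2>/dev/null || echo "temp=61.2'C")  # fallback sample off-Pi
TEMP=${READING#temp=}     # strip the "temp=" prefix
TEMP=${TEMP%\'C}          # strip the trailing 'C
if awk -v t="$TEMP" -v th="$THRESHOLD" 'BEGIN { exit !(t > th) }'; then
  echo "ALERT: CPU at ${TEMP}C (threshold ${THRESHOLD}C)"
else
  echo "OK: CPU at ${TEMP}C"
fi
```

Dropped into the same crontab with a */10 schedule, this gives you the 10-minute check described above.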
Frequently Asked Questions
Can Raspberry Pi 5 run OpenClaw?
Yes. The Raspberry Pi 5 with 8GB RAM has enough resources to run OpenClaw via Docker. It can also run Ollama with small models like Phi-3 Mini or TinyLlama for fully local AI agent operation. Performance is adequate for personal use and low-traffic deployments.
How much RAM does OpenClaw use on Raspberry Pi 5?
OpenClaw itself uses approximately 800MB to 1.2GB of RAM on a Pi 5. If you also run Ollama with a small model, expect total memory usage around 5-6GB, leaving some headroom on an 8GB Pi 5. The 4GB model is not recommended.
Can I run Ollama alongside OpenClaw on a Raspberry Pi 5?
Yes, but only with small models. Phi-3 Mini (3.8B parameters) and TinyLlama (1.1B) work well. Larger models like Llama 3 8B will cause memory pressure and swap thrashing. Use quantized GGUF versions (Q4_K_M) for the best balance of quality and performance.
How do I keep OpenClaw running 24/7 on a Raspberry Pi 5?
Use Docker with restart: always in your docker-compose.yml. Disable sleep and screen blanking in raspi-config. Set up a cron job to check the container health every 5 minutes. For power stability, use the official Raspberry Pi 27W USB-C power supply and consider a UPS HAT for protection against outages.
