The standard OpenClaw deployment story goes like this: spin up a VPS, install Node.js, configure your API keys, run as a systemd service. It works well. It costs $6-20/month for hosting.
MimiClaw takes a different direction entirely: it runs the core OpenClaw agent architecture on an ESP32-S3 microcontroller — a chip that costs about $5.
No Linux. No Node.js. No server. Pure C, running directly on the chip.
What MimiClaw Actually Is
MimiClaw (github.com/memovai/mimiclaw) is an open-source project that reimplements the OpenClaw agent architecture for embedded hardware. You plug an ESP32-S3 development board into USB power, connect it to WiFi, and it becomes a persistent AI assistant accessible through Telegram.
The pitch on the repository:
> "The world's first AI assistant (OpenClaw) on a $5 chip. No Linux. No Node.js. Just pure C."
At 2,600+ GitHub stars and growing, it's clearly struck a chord with the maker and IoT community.
How It Works
The architecture is surprisingly clean given the hardware constraints:
- You send a message to your Telegram bot
- The ESP32-S3 picks it up over WiFi
- It runs a local agent loop — the LLM call goes out to Anthropic or OpenAI via HTTPS
- Tools execute (web search, time, cron jobs)
- The reply comes back to Telegram
The chip itself isn't running the language model — that still happens in the cloud via API calls to Claude or GPT-4. What runs on the chip is the agent loop, memory management, tool execution, conversation routing, and the Telegram integration.
In other words: the expensive AI reasoning happens remotely, but the agent infrastructure runs locally on $5 of hardware.
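That division of labor can be sketched in plain C. This is an illustrative stand-in, not MimiClaw's actual source: the real firmware runs FreeRTOS tasks under ESP-IDF, and `poll_telegram` and `call_llm` below are hypothetical stubs for the WiFi and HTTPS steps.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch of one agent-loop iteration, NOT MimiClaw's actual
 * source. On the chip these would be FreeRTOS tasks making real network
 * calls; here they are stand-in stubs. */

static const char *poll_telegram(void) {
    /* Stub: a Telegram update fetched over WiFi via long polling. */
    return "what time is it?";
}

static void call_llm(const char *prompt, char *out, size_t n) {
    /* Stub: the real loop POSTs the prompt to Anthropic or OpenAI
     * over HTTPS; only this step leaves the device. */
    snprintf(out, n, "reply-to:%s", prompt);
}

static void agent_step(char *reply, size_t n) {
    /* One iteration: receive locally, reason remotely, respond. */
    call_llm(poll_telegram(), reply, n);
}
```

The point of the sketch is the shape, not the stubs: everything except `call_llm` stays on the microcontroller.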
Memory and Persistence
This is where MimiClaw gets interesting. All memory lives on the chip's flash storage as plain text files:
| File | Purpose |
|------|---------|
| SOUL.md | Personality — edit to change behavior |
| USER.md | Info about you — name, preferences, language |
| MEMORY.md | Long-term memory across reboots |
| HEARTBEAT.md | Task list the agent checks autonomously |
| cron.json | Scheduled jobs created by the AI |
| tg_12345.jsonl | Chat history per conversation |
Memory survives reboots. The agent remembers your preferences, past conversations, and ongoing tasks even after power cycling — because it's stored to flash, not RAM.
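Because memory is just plain text on flash, persistence reduces to ordinary file I/O. A minimal sketch, assuming a SPIFFS-style mount; `memory_append` and the path are illustrative, not real MimiClaw symbols:

```c
#include <stdio.h>

/* Hedged sketch: MimiClaw keeps memory as plain text files on flash.
 * On a desktop this is plain stdio; on the chip the same calls work
 * once the flash filesystem is mounted. Path and helper name are
 * illustrative. */

#define MEMORY_PATH "MEMORY.md"   /* on-device: something like /spiffs/MEMORY.md */

static int memory_append(const char *fact) {
    /* Appending to flash (not RAM) is what makes memory survive reboots. */
    FILE *f = fopen(MEMORY_PATH, "a");
    if (!f) return -1;
    fprintf(f, "- %s\n", fact);
    fclose(f);
    return 0;
}
```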
The Heartbeat feature is particularly clever: the agent periodically reads HEARTBEAT.md and acts on any uncompleted tasks it finds. Write tasks to the file, and the agent picks them up autonomously on the next heartbeat cycle (default: 30 minutes). No prompt needed.
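A heartbeat pass could boil down to scanning the file for unchecked tasks. The checkbox format below is an assumption for illustration, not confirmed from the repo:

```c
#include <string.h>

/* Sketch of the heartbeat idea (task format assumed, not taken from the
 * repo): scan HEARTBEAT.md text for unchecked markdown checkboxes and
 * treat each "- [ ]" line as pending work for this cycle. */

static int count_pending_tasks(const char *heartbeat_md) {
    int pending = 0;
    const char *p = heartbeat_md;
    while ((p = strstr(p, "- [ ]")) != NULL) {   /* unchecked checkbox */
        pending++;
        p += 5;   /* skip past the matched "- [ ]" */
    }
    return pending;
}
```

Completed tasks (`- [x]`) are simply never matched, so re-running the scan every cycle is idempotent.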
The Cron System
Like full OpenClaw, MimiClaw supports scheduled tasks — but with a twist: the AI creates its own cron jobs. Using the cron_add tool, the LLM can schedule recurring or one-shot tasks during conversation:
"Remind me to review my finances every first Monday of the month" — the agent creates the cron job, it persists to flash, and it fires even if you haven't sent a message in weeks.
Jobs survive reboots because cron.json lives on SPIFFS (the chip's flash filesystem).
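The due-check behind a recurring job can be modeled with a simple interval scheme; the actual cron.json schema is likely richer (calendar rules like "first Monday"), so treat this as a sketch only:

```c
#include <time.h>
#include <stdbool.h>

/* Minimal sketch of a recurring-job check. Struct layout and names are
 * assumptions for illustration, not MimiClaw's real cron.json schema. */

typedef struct {
    time_t last_run;     /* when the job last fired */
    time_t interval_s;   /* e.g. 86400 for daily */
} cron_job_t;

static bool cron_due(const cron_job_t *job, time_t now) {
    /* Fires whenever a full interval has elapsed since the last run. */
    return now - job->last_run >= job->interval_s;
}
```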
Hardware Requirements
The project recommends:
- ESP32-S3 dev board with 16MB flash and 8MB PSRAM (e.g., Xiaozhi AI board, ~$10)
- USB-C cable for power and flashing
- That's it
Total hardware cost: under $15 for a self-contained, always-on AI agent that draws about 0.5W.
The key constraint: you need to use the correct USB port on the board. Most ESP32-S3 boards have two USB-C ports — one labeled USB (native, required) and one labeled COM. Using the wrong one causes flash failures.
Getting Started
You'll need ESP-IDF v5.5+ installed (Espressif's official toolchain), then:
```bash
git clone https://github.com/memovai/mimiclaw.git
cd mimiclaw
idf.py set-target esp32s3

# Configure credentials
cp main/mimi_secrets.h.example main/mimi_secrets.h
# Edit mimi_secrets.h with your WiFi, Telegram token, API key

# Build and flash
idf.py fullclean && idf.py build
idf.py -p PORT flash monitor
```
Runtime configuration is available via serial CLI — you can change WiFi, API keys, and model provider without recompiling:
```
mimi> set_api_key sk-ant-api03-...
mimi> set_model_provider openai
mimi> config_show
```
Switching Between Claude and GPT-4
MimiClaw supports both Anthropic (Claude) and OpenAI (GPT) as providers, switchable at runtime. Claude is the default for complex reasoning; GPT-4o works well for faster responses. You flip between them with a serial command — no recompile needed.
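Runtime switching can be modeled as a small dispatch over a provider enum. The symbol names here are hypothetical, though the two hosts are the providers' real API endpoints:

```c
#include <string.h>
#include <stddef.h>

/* Sketch of runtime provider switching (names assumed, not MimiClaw's
 * actual symbols). Both hosts are the providers' real API endpoints. */

typedef enum { PROVIDER_ANTHROPIC, PROVIDER_OPENAI } provider_t;

static const char *provider_host(provider_t p) {
    switch (p) {
    case PROVIDER_ANTHROPIC: return "api.anthropic.com";
    case PROVIDER_OPENAI:    return "api.openai.com";
    }
    return NULL;
}

/* Parse the value given to a set_model_provider-style CLI command. */
static int provider_from_name(const char *name, provider_t *out) {
    if (strcmp(name, "anthropic") == 0) { *out = PROVIDER_ANTHROPIC; return 0; }
    if (strcmp(name, "openai") == 0)    { *out = PROVIDER_OPENAI;    return 0; }
    return -1;   /* unknown provider: keep the current setting */
}
```

Keeping the provider as runtime state (rather than a compile-time flag) is what lets a serial command flip it with no recompile.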
Who This Is For
MimiClaw is a project for people who want:
Always-on with zero ongoing hosting cost. After the $10-15 hardware spend, the only cost is API calls. No VPS, no monthly subscription, no server to maintain. Plug it in and forget about it.
A physical agent, not a virtual one. There's something meaningfully different about a dedicated hardware device for your AI assistant versus a process running on a shared cloud server. For some people this matters.
Maker/hacker sensibility. If you enjoy embedded development and want to understand an AI agent stack at the level of C code running on bare metal, this is a fascinating project to dig into.
Extreme privacy for the infrastructure layer. Your agent loop, memory, and conversation history live on a chip you physically own. The only external communication is the HTTPS API call to your LLM provider.
The Limitations
A few things to be honest about:
ESP-IDF setup is non-trivial. If you've never done embedded development, the toolchain setup has a learning curve. This isn't npm install territory.
No browser automation. The chip can call web search APIs, but it can't control a browser or interact with complex web UIs the way a VPS-based deployment can.
Limited compute for local models. You can call cloud LLMs fine, but running a local model on the chip itself isn't realistic with current hardware. Ollama stays on your Mac.
Still early. The project is brand new; the first release shipped just last week. Expect rough edges.
That said, 2,600 stars in under two weeks suggests it's solving a real problem for a real audience.
Links:
- GitHub: github.com/memovai/mimiclaw
- Website: mimiclaw.io
Prefer your OpenClaw on a proper VPS without the hardware hassle? Remote OpenClaw handles the full server deployment, Telegram connection, and hardening so your agent runs reliably in the cloud. See the packages.