If OpenClaw currently feels like a slightly more capable version of ChatGPT inside Telegram, you're not using it wrong — you're just not using it fully. The gap between "AI chatbot" and "autonomous AI agent" comes down to a handful of configuration choices that most people never make.
Here are five techniques that change what OpenClaw actually is.
1. Sub-Agents: Stop Making Your Main Agent Do Everything
The single biggest upgrade you can make to your OpenClaw setup is introducing sub-agents.
Most people run one agent and throw everything at it — research, coding, scheduling, writing, analysis. This overloads the main context window, mixes concerns that should be separated, and produces less reliable output because the agent is being asked to be a generalist for every task.
The better model: A main agent that thinks and delegates, and specialist sub-agents that execute.
Your main agent becomes your thinking partner — the one you brainstorm with, make decisions with, and hand tasks off from. It rarely does the actual work itself. Instead, it routes tasks to dedicated sub-agents:
- A developer sub-agent for any coding work
- A research sub-agent for investigation and information gathering
- A content sub-agent for writing and editing
- A general assistant for admin and scheduling
How to create a sub-agent:
Open your Telegram or Slack chat and paste a prompt like:
"Create a new persistent agent named [Name]. Make them my dedicated [role] assistant. Use [model] as their primary model. Route all [task type] tasks to [Name]. Leave my main agent unchanged and let me know when the new agent is ready."
Refresh your OpenClaw gateway dashboard and the new agent will appear with its own profile.
Why this matters beyond just organisation: Sub-agents have their own sessions and context windows. When your developer agent writes a full application, all that code and back-and-forth stays in the developer's session — not your main one. Your main context stays lean. Your main agent stays fast and cheap.
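As a sketch, the delegation rules that prompt sets up could be written down in your agents.md something like this. The file layout and the agent names are illustrative placeholders, not a format OpenClaw requires:

```
## Delegation rules

- I am the main agent. I plan, decide, and delegate; I don't execute specialist work myself.
- Route all coding tasks to the "Dev" sub-agent (its own session, its own context).
- Route research and information gathering to the "Scout" sub-agent.
- Route writing and editing to the "Writer" sub-agent.
- Route admin and scheduling to the "Ops" sub-agent.
- Report results back here as a short summary, not the full transcript.
```

The last rule is what keeps your main context lean: the sub-agent's working transcript stays in its session, and only the conclusion comes back.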
2. Proactive Scheduling: From Reactive to Autonomous
The difference between a chatbot and an agent is that a chatbot waits to be asked. An agent acts on its own when it's supposed to.
OpenClaw supports scheduled tasks through cron jobs. Most people know this, but very few set it up to do anything genuinely useful.
The morning brief is the classic starting point. Set it up once with a prompt like:
"Every morning at 8:00 a.m., send me a report that includes: the top 3 AI news stories, my tasks for today from my to-do list, 2–3 content ideas based on recent trends in [your niche], and any time-sensitive tasks you think I should handle today."
That's passive value delivered daily without any effort.
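Under the hood, "every morning at 8:00 a.m." maps onto a standard five-field cron schedule. The expressions below are ordinary cron syntax; how OpenClaw attaches the prompt to the job is an implementation detail, so treat this purely as a reference for the timing part:

```
# minute  hour  day-of-month  month  day-of-week
0 8 * * *      # every day at 08:00 — the morning brief
0 9 * * 1-5    # 09:00 on weekdays only, e.g. for workday-only reports
```

Knowing the underlying syntax helps when you want schedules that are awkward to phrase in natural language, like "every second Monday" or "weekdays only."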
More ambitious: Ask your agent to autonomously improve itself over time.
"Every day at 9:00 a.m., work on something you identified yesterday that could make my workflows better. Surprise me with what you built or improved."
This prompt turns your agent into something that iterates and learns on its own — building new automations, refining existing ones, and occasionally surprising you with something you didn't know you needed.
A note on heartbeat costs: Proactive activity costs tokens, and agents that wake frequently on premium models burn through budget fast. Set active hours, use cheaper models for background tasks, and reserve your primary model for the interactions that need it. (See our full cost guide for specifics.)

3. Fixing the Memory Problem — For Real
Memory in OpenClaw works until it doesn't. Here's why it breaks down and what to do about it.
Every session, important information gets added to your memory.md file. As that file grows, OpenClaw periodically compresses it to keep token costs manageable. The compression is lossy. Details that felt important disappear into summaries.
Short-term fix — the pre-compaction save:
Give your agent a standing instruction so the save happens automatically before compression runs:
"Before any context compaction runs, save the most important things from this session to a dedicated memory file: key decisions made, projects I mentioned, preferences I stated, anything I should know next time we talk."
This ensures the details worth keeping get explicitly saved before the compaction can erase them.
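What that saved file might look like after a session — the headings and entries here are just one reasonable shape, not a format OpenClaw mandates:

```
# Session notes — 2025-06-14

## Decisions
- Chose Obsidian over a plain notes folder for long-term memory.

## Projects mentioned
- newsletter-relaunch: drafting starts next week.

## Preferences stated
- Summaries as bullet points, five items max.

## For next time
- Follow up on the Codex sub-agent setup.
```

Because this file is written explicitly rather than summarised, nothing in it is subject to the lossy compaction pass.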
Long-term fix — structured memory with Obsidian:
The more robust solution is connecting your OpenClaw instance to an Obsidian vault — a folder of structured markdown files that your agent can read, write, and search. Nothing gets auto-compressed. Everything persists in a form that's readable by both the agent and by you.
Your agent can then search your notes like a knowledge base, add to them after every session, and arrive at new conversations already informed about your projects, preferences, and history.
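A vault layout that works well for agent access is just nested folders of plain markdown — no special structure is required, and every name below is a placeholder:

```
vault/
  projects/
    newsletter-relaunch.md
  people/
    collaborators.md
  preferences.md
  daily/
    2025-06-14.md
```

Flat, predictable file names make it easy for the agent to find the right note, and easy for you to audit what it has written.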
(Full Obsidian setup instructions in our memory guide.)
4. Sub-Agents for Coding — Use Codex to Avoid API Costs
This is specifically valuable for anyone doing development work with OpenClaw.
If you have a ChatGPT Plus subscription, you can install Codex on your agent's machine and authenticate using your existing subscription rather than API tokens. Your OpenClaw agent can then spawn Codex sessions for coding tasks — using your subscription's included capacity rather than burning additional API credits.
Setup: install Codex on your agent machine, run auth to connect it to your ChatGPT account, then add an instruction to your agents.md that routes all development tasks to a Codex sub-agent automatically.
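In shell terms, the first two steps look roughly like this. The install command reflects the Codex CLI's npm package at the time of writing; verify the auth subcommand against the current Codex documentation before relying on it:

```
# Install the Codex CLI on the agent's machine
npm install -g @openai/codex

# Authenticate with your ChatGPT account instead of an API key
codex login
```

The third step — the agents.md routing instruction — is what makes the delegation automatic rather than something you have to request each time.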
The result: your main agent orchestrates, Codex builds, and you're not paying per-token for development work.
5. Self-Improvement Loops: An Agent That Gets Better Over Time
This is the most advanced technique, and the one with the highest long-term value.
The basic version: install the self-improving agent skill from ClawHub. It creates a structure with files for hot memory, correction logs, and task-specific patterns. Every time you correct your agent or provide feedback, it logs that correction. It builds a record of what you liked, what you didn't, and how you want things done.
Over time, the agent stops making the same mistakes. Its output starts to converge on what you actually want.
The manual version (for those who prefer not to use community skills):
"Set up a self-improvement system. Create a folder called /self-improvement with three files: corrections.md for logging every time I correct you and what I wanted instead; preferences.md for things I've told you I prefer; and patterns.md for recurring requests and how I want them handled. Review these files at the start of every new session."
The key behaviour to reinforce: every time you give feedback — "I don't like that format," "next time do X instead," "that's not what I meant" — your agent logs it explicitly in the corrections file. That log becomes the foundation for improvement.
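A corrections.md entry following that scheme might look like this — the format is illustrative, and what matters is that each entry captures the mistake, the correction, and the rule to apply next time:

```
## 2025-06-14 — report formatting
- What happened: sent the weekly report as long paragraphs.
- Correction given: "Use a bullet summary, five items max."
- Rule going forward: weekly reports are bulleted, no more than five items.
```

The "rule going forward" line is the payoff: it converts a one-off complaint into an instruction the agent can re-read at the start of every session.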
Why this matters: Most AI interactions are stateless. You correct the same thing again and again because nothing carries over. A self-improving agent is cumulative — it gets meaningfully better the more you use it.
Putting It Together: A Setup That's Actually Powerful
The common thread across all five techniques is shifting from reactive to proactive, and from generic to specific.
An OpenClaw setup that combines these techniques looks like:
- A lean main agent that thinks and delegates rather than doing everything
- Specialist sub-agents that handle coding, research, and content in isolated sessions
- Scheduled tasks that deliver value every morning without any prompting
- A memory system that actually persists meaningful information across sessions
- An improvement loop that gradually shapes the agent's behaviour toward your preferences
That's qualitatively different from a chatbot. That's closer to an AI employee who knows your work, gets better over time, and keeps running when you're not at your desk.
The setup takes work. The first week is configuration-heavy. But the compounding value — an agent that gets more capable and more calibrated every week — makes that upfront investment worthwhile.
Where to Start
If you're going to implement one thing from this guide, make it sub-agents. The separation of concerns alone will make your existing usage noticeably better — cleaner context, cheaper main sessions, more reliable specialist output.
From there: add scheduled tasks once your agent structure is stable, then tackle memory once you're clear on what information is worth preserving.
Related guides: OpenClaw Skills Guide | Reducing Token Costs | Multi-Agent Team Setup | Obsidian Memory Integration