Remote OpenClaw Blog
OpenClaw Dreaming Guide: Memory Consolidation Explained
14 min read
Dreaming is OpenClaw's autonomous memory consolidation system, inspired by how biological sleep consolidates short-term memories into long-term storage. It takes the raw, unstructured conversation history from your agent's daily interactions and distills it into structured, persistent knowledge that improves your agent's performance over time.
Without Dreaming, your OpenClaw agent treats every session as a fresh start. It reads MEMORY.md for persistent context, but that file only contains what you or previous sessions have manually written to it. Valuable patterns — your preferences, recurring workflows, project-specific knowledge, corrections you have made — live only in conversation logs that grow stale and eventually get pruned.
With Dreaming enabled, your agent processes those conversation logs overnight, extracts the valuable signal, and writes it to MEMORY.md automatically. Over days and weeks, your agent builds an increasingly rich understanding of your work patterns, preferences, and domain knowledge — without you needing to manually curate its memory.
Dreaming graduated from beta to General Availability in OpenClaw 4.5 (April 6, 2026) after six months of testing across thousands of operators. It is now production-ready and enabled by default in new installations.
The practical impact of Dreaming is that your agent gets better at its job without you actively teaching it. It captures your tooling preferences, recurring workflows, project conventions, and the corrections you make, and folds them into persistent memory.
The compounding effect is significant. After a month of Dreaming, operators consistently report that their agents feel noticeably more capable and aligned with their working style. The agent "knows" things it was never explicitly told — because it learned them from the pattern of your interactions.
Dreaming runs in three sequential phases, each serving a distinct purpose. The naming deliberately echoes biological sleep phases because the functional analogy is surprisingly accurate.
Light Sleep is the first pass over your conversation history. The agent scans all conversations since the last Dreaming session and tags content by category: preferences, facts, workflows, corrections, and noise.
Light Sleep is fast — it processes a full day's conversations in under 30 seconds for most operators. The output is a tagged inventory of candidate memories, each marked with a category and a preliminary relevance score.
Deep Sleep takes the tagged inventory from Light Sleep and applies the weighted scoring algorithm (detailed in the next section) to rank each candidate memory by importance. This is where Dreaming decides what matters enough to promote to permanent memory.
During Deep Sleep, the agent also checks for duplicates and conflicts with existing MEMORY.md content. If a candidate memory contradicts something already in MEMORY.md, Deep Sleep flags it as a potential update rather than a new entry. If a candidate memory is a more detailed version of an existing memory, Deep Sleep marks it for merging.
Deep Sleep is the most computationally intensive phase. For a typical day's conversations, it takes 1-3 minutes. For operators with very high conversation volumes, it can take up to 10 minutes.
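The duplicate and conflict handling can be pictured with a rough sketch. The similarity measure and the thresholds below are illustrative assumptions (OpenClaw's actual implementation is not documented here); the point is the four-way decision between duplicate, update, merge, and new.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude text similarity in [0, 1]; a stand-in for whatever Deep Sleep uses."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def classify(candidate: str, existing: list[str]) -> str:
    """Decide how a candidate memory relates to current MEMORY.md entries."""
    for entry in existing:
        sim = similarity(candidate, entry)
        if sim > 0.9:
            return "duplicate"  # skip: already captured
        if sim > 0.6:
            # Overlapping topic: a longer, more detailed candidate is marked
            # for merging; otherwise it is a potential update to the entry.
            return "merge" if len(candidate) > len(entry) else "update"
    return "new"

existing = ["Use npm for all projects"]
result = classify("Use pnpm instead of npm for all projects", existing)
```

In this example the candidate overlaps an existing entry (similarity around 0.75) and is more detailed, so it is marked for merging rather than appended as a duplicate.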
REM is the final phase where Dreaming takes action. It takes the scored, deduplicated, conflict-resolved memories from Deep Sleep and writes them to MEMORY.md.
REM does not simply append raw text. It synthesizes memories into well-structured, concise entries that integrate with the existing content of MEMORY.md. If a new memory relates to an existing section, REM adds it to that section. If it represents a new topic, REM creates a new section with appropriate headers.
REM also handles memory decay — entries in MEMORY.md that have not been reinforced by recent conversations are flagged for potential removal. Dreaming does not delete them automatically (that would be too aggressive) but marks them with a low-confidence tag that the operator can review.
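The decay rule can be sketched in a few lines. The 30-day staleness window below is an assumed placeholder, not a documented OpenClaw default:

```python
from datetime import date, timedelta

def needs_low_confidence_tag(last_reinforced: date, today: date,
                             stale_after: timedelta = timedelta(days=30)) -> bool:
    """Flag (never delete) entries that recent conversations have not reinforced."""
    return today - last_reinforced > stale_after

# An entry last reinforced five weeks ago gets flagged for operator review
stale = needs_low_confidence_tag(date(2026, 3, 1), date(2026, 4, 6))  # True
```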
The name "REM" is apt: just as biological REM sleep is when the brain forms new neural connections and consolidates learning, OpenClaw's REM phase is when new knowledge is permanently integrated into the agent's persistent memory.
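Putting the three phases together, a session has roughly this shape. The function bodies below are placeholder stubs for illustration; only the phase order and the 0.6 promotion threshold come from the description above:

```python
def light_sleep(conversations: list[str]) -> list[dict]:
    """Tag content by category with a preliminary relevance score (stub)."""
    return [{"text": c, "category": "facts", "relevance": 0.5} for c in conversations]

def deep_sleep(candidates: list[dict], threshold: float = 0.6) -> list[dict]:
    """Score candidates and keep those above the promotion threshold (stub)."""
    return [c for c in candidates if c["relevance"] >= threshold]

def rem(promoted: list[dict], memory_lines: list[str]) -> list[str]:
    """Integrate promoted memories into the MEMORY.md content (stub)."""
    return memory_lines + [f"- {c['text']}" for c in promoted]

# A candidate must clear the 0.6 threshold to reach MEMORY.md
promoted = deep_sleep([{"text": "Use pnpm", "category": "preferences", "relevance": 0.8}])
updated = rem(promoted, ["# MEMORY.md"])  # ["# MEMORY.md", "- Use pnpm"]
```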
Dreaming uses a multi-factor scoring algorithm to determine which candidate memories deserve promotion to MEMORY.md. Understanding the scoring factors helps you influence what your agent remembers.
| Factor | Weight | What It Measures |
|---|---|---|
| Frequency | 30% | How often this information appears across conversations |
| Recency | 25% | How recently the information was referenced or reinforced |
| User Corrections | 25% | Whether this information corrects a previous agent mistake |
| Explicit Importance | 15% | Whether the user explicitly flagged this as important |
| Contextual Relevance | 5% | How relevant this information is to the agent's primary tasks |
The threshold for promotion is a score of 0.6 (on a 0-1 scale). Memories scoring above 0.6 are written to MEMORY.md. Memories scoring between 0.4 and 0.6 are held in a "pending" state and reconsidered in the next Dreaming session — if they are reinforced again, their score increases. Memories below 0.4 are discarded.
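The scoring described above can be expressed as a weighted sum. The weights and the 0.4/0.6 thresholds come from the table; the factor keys and the assumption that each factor is normalized to a 0-1 value are illustrative:

```python
# Weights from the scoring table above
WEIGHTS = {
    "frequency": 0.30,
    "recency": 0.25,
    "user_correction": 0.25,
    "explicit_importance": 0.15,
    "contextual_relevance": 0.05,
}

def score(factors: dict) -> float:
    """Weighted sum of per-factor values, each assumed normalized to [0, 1]."""
    return sum(WEIGHTS[k] * factors.get(k, 0.0) for k in WEIGHTS)

def disposition(s: float) -> str:
    if s >= 0.6:
        return "promote"  # written to MEMORY.md
    if s >= 0.4:
        return "pending"  # held and reconsidered next session
    return "discard"

factors = {
    "frequency": 0.9,            # mentioned across many conversations
    "recency": 0.8,              # reinforced today
    "user_correction": 1.0,      # corrects a prior agent mistake
    "explicit_importance": 0.0,
    "contextual_relevance": 0.5,
}
s = score(factors)  # 0.27 + 0.20 + 0.25 + 0.00 + 0.025 = 0.745 -> "promote"
```

Note how a user correction pulls heavily on the score: even with no explicit importance flag, a corrected mistake that recurs clears the 0.6 bar comfortably.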
You can influence scoring in several ways: explicitly flag information as important during a conversation (the Explicit Importance factor), adjust the threshold setting in your configuration, or use the promote command. Manually promoting a memory bypasses scoring entirely and writes it directly to MEMORY.md.

MEMORY.md is a Markdown file that lives in your OpenClaw configuration directory (typically ~/.openclaw/MEMORY.md or in your project's .openclaw/MEMORY.md). It is the persistent memory store that your agent reads at the start of every session.
The file follows a structured format with sections and entries:
# MEMORY.md
## Project: webapp-frontend
- Uses React 18 with TypeScript
- Styling: Tailwind CSS v4, no custom CSS unless necessary
- Testing: Vitest + React Testing Library
- Deploy target: Vercel, auto-deploy from main branch
## Coding Preferences
- Prefer early returns over nested conditionals
- Use named exports, not default exports
- Always destructure props in function signatures
- Error handling: use Result type pattern, not try/catch
## Communication Style
- Keep responses concise — max 3 paragraphs unless asked for detail
- Always show code examples, not just descriptions
- Use bullet points for lists of 3+ items
## Team Context
- Sarah handles backend API (FastAPI + PostgreSQL)
- Mike owns the CI/CD pipeline (GitHub Actions)
- Standups at 9:30 AM PST daily
Dreaming reads this file, respects its existing structure, and adds new entries in the appropriate sections. If Dreaming discovers information about coding preferences, it adds to the "Coding Preferences" section rather than creating a duplicate section.
You can manually edit MEMORY.md at any time. Your manual edits are respected — Dreaming will not overwrite or remove content you have explicitly written. The file is designed to be a collaborative document between you and your agent's Dreaming process.
For detailed configuration of MEMORY.md paths, scoping, and project-level memory, see the OpenClaw Memory Configuration Guide.
The Dream Diary is an audit log introduced in OpenClaw 4.5 that records every Dreaming session in detail. It answers the question operators always ask: "What did Dreaming do last night?"
The Dream Diary is stored at ~/.openclaw/dream-diary.log and contains structured entries for each session:
[2026-04-06T03:00:12Z] Dreaming session started
[2026-04-06T03:00:12Z] Light Sleep: Scanning 47 conversations (128,342 tokens)
[2026-04-06T03:00:38Z] Light Sleep complete: 23 candidate memories tagged
[2026-04-06T03:00:38Z] Deep Sleep: Scoring 23 candidates
[2026-04-06T03:01:45Z] Deep Sleep complete: 8 candidates above threshold (0.6)
[2026-04-06T03:01:45Z] REM: Writing 8 memories to MEMORY.md
[2026-04-06T03:02:03Z] PROMOTED: [Preferences] "Use pnpm instead of npm for all projects" (score: 0.82)
[2026-04-06T03:02:03Z] PROMOTED: [Facts] "Production database is on Supabase, project ID: xyz123" (score: 0.71)
[2026-04-06T03:02:03Z] UPDATED: [Coding] "Error handling preference" — merged with existing entry (score: 0.88)
[2026-04-06T03:02:03Z] DISCARDED: [Noise] "Discussion about lunch options" (score: 0.12)
[2026-04-06T03:02:03Z] PENDING: [Workflow] "Deployment checklist sequence" (score: 0.54) — will reconsider next session
[2026-04-06T03:02:18Z] Dreaming session complete: 6 promoted, 2 updated, 12 discarded, 3 pending
The Dream Diary is invaluable for understanding how your agent's memory evolves over time. You can review it daily to catch any incorrect promotions (and manually correct them in MEMORY.md) or weekly to get a sense of what your agent is learning.
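Because the diary uses a regular line format, it is also easy to summarize programmatically. A minimal sketch, assuming the entry format shown above (the regex is an inference from the sample, not a documented schema):

```python
import re
from collections import Counter

# One diary line per action; action keywords taken from the sample above
LINE = re.compile(r"\[(?P<ts>[^\]]+)\] (?P<action>PROMOTED|UPDATED|DISCARDED|PENDING):")

def summarize(diary_text: str) -> Counter:
    """Count diary actions to see what a Dreaming session did at a glance."""
    return Counter(m.group("action") for m in LINE.finditer(diary_text))

sample = """\
[2026-04-06T03:02:03Z] PROMOTED: [Preferences] "Use pnpm instead of npm" (score: 0.82)
[2026-04-06T03:02:03Z] DISCARDED: [Noise] "Discussion about lunch options" (score: 0.12)
"""
counts = summarize(sample)  # counts["PROMOTED"] == 1, counts["DISCARDED"] == 1
```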
Dreaming configuration in OpenClaw 4.5 is deliberately simple. The minimal configuration to enable Dreaming is a single flag:
{
"dreaming": {
"enabled": true
}
}
With this minimal config, Dreaming uses sensible defaults: runs at 3AM local time, processes the last 24 hours of conversations, uses your default LLM model, and writes to the standard MEMORY.md location.
The full configuration options are:
{
"dreaming": {
"enabled": true,
"schedule": "0 3 * * *",
"history_window": "24h",
"model": "claude-sonnet-4",
"threshold": 0.6,
"max_promotions_per_session": 20,
"memory_path": "~/.openclaw/MEMORY.md",
"diary_path": "~/.openclaw/dream-diary.log",
"mode": "auto"
}
}
| Parameter | Default | Description |
|---|---|---|
| enabled | true (new installs) | Enable or disable Dreaming entirely |
| schedule | "0 3 * * *" | Cron expression for when Dreaming runs |
| history_window | "24h" | How far back to scan conversations |
| model | (your default) | LLM model to use for Dreaming processing |
| threshold | 0.6 | Minimum score for memory promotion |
| max_promotions_per_session | 20 | Cap on memories promoted per session |
| memory_path | ~/.openclaw/MEMORY.md | Where MEMORY.md lives |
| diary_path | ~/.openclaw/dream-diary.log | Where the Dream Diary is written |
| mode | "auto" | "auto" (fully autonomous) or "review" (requires manual approval) |
The default 3AM schedule is chosen for a reason: most operators are asleep, the agent is idle, and API traffic is typically at its lowest (meaning faster inference). But you can change this to any schedule that suits your workflow.
Dreaming uses OpenClaw's built-in cron system — the same one used for scheduled tasks. No external cron daemon or systemd timer is needed.
# Default: Run at 3AM every day
"schedule": "0 3 * * *"
# Run at 2AM on weekdays only
"schedule": "0 2 * * 1-5"
# Run every 12 hours
"schedule": "0 */12 * * *"
# Run at midnight Sunday for weekly consolidation
"schedule": "0 0 * * 0"
For Dreaming to run on schedule, your OpenClaw instance must be running. If you are on a laptop that sleeps overnight, Dreaming will not execute — this is one of the reasons we recommend running OpenClaw on an always-on server or VPS for operators who want full Dreaming benefits.
If your machine was asleep during the scheduled time, Dreaming will run at the next available opportunity when OpenClaw starts up — it detects missed sessions and catches up automatically.
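The catch-up check amounts to comparing the last successful run against the schedule interval. A minimal sketch, assuming a persisted last-run timestamp (how OpenClaw actually stores this is not documented here):

```python
from datetime import datetime, timedelta, timezone

def should_catch_up(last_run: datetime, scheduled_interval: timedelta,
                    now: datetime) -> bool:
    """True if a scheduled Dreaming session was missed while the host was offline."""
    return now - last_run >= scheduled_interval

last_run = datetime(2026, 4, 5, 3, 0, tzinfo=timezone.utc)
now = datetime(2026, 4, 6, 9, 15, tzinfo=timezone.utc)  # laptop woke up after 3AM
missed = should_catch_up(last_run, timedelta(hours=24), now)  # True: >24h elapsed
```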
OpenClaw provides three commands for interacting with Dreaming:
/dreaming

Triggers a manual Dreaming session immediately, bypassing the cron schedule. Useful when you have had a particularly productive session and want to consolidate memories right away rather than waiting until 3AM.
# In an OpenClaw session
/dreaming
# Output:
# Starting Dreaming session...
# Light Sleep: Scanning 12 conversations (34,218 tokens)
# Deep Sleep: Scoring 8 candidates
# REM: Promoting 3 memories to MEMORY.md
# Session complete. Check dream-diary.log for details.
promote

Manually promotes a specific piece of information to MEMORY.md, bypassing the scoring system entirely. Use this when you know something is important and do not want to wait for Dreaming to pick it up organically.
# In an OpenClaw session
promote "Always use pnpm, never npm, for this project"
# Output:
# Promoted to MEMORY.md under [Preferences]:
# "Always use pnpm, never npm, for this project"
The promote command writes immediately — it does not queue the memory for the next Dreaming session. This is the fastest way to add persistent memory.
rem-harness

A diagnostic command that runs only the REM phase on a specific set of candidate memories. This is useful for testing and debugging — you can feed it a list of candidate memories and see exactly what REM would write to MEMORY.md without actually writing anything.
# Dry-run REM phase with test candidates
rem-harness --dry-run --input candidates.json
# Output:
# [DRY RUN] Would promote 4 of 7 candidates:
# 1. [Facts] "API rate limit is 100 req/min on free tier" (score: 0.78)
# 2. [Preferences] "Use kebab-case for CSS class names" (score: 0.72)
# 3. [Workflow] "Deploy sequence: test → build → deploy → notify" (score: 0.69)
# 4. [Corrections] "The staging URL is staging.example.com, not stage.example.com" (score: 0.91)
# [DRY RUN] No changes written to MEMORY.md
The rem-harness command is primarily for advanced operators who want to fine-tune their Dreaming configuration. Most operators will never need it.
Claude Dispatch, launched March 17, 2026, has no equivalent to Dreaming. This is one of the most significant differences between the two platforms.
| Capability | OpenClaw Dreaming | Claude Dispatch |
|---|---|---|
| Persistent memory | MEMORY.md (structured, versioned) | Conversation history only |
| Cross-session learning | Yes (automatic via Dreaming) | No |
| Memory consolidation | Autonomous (3 phases) | None |
| Preference learning | Automatic over time | Manual re-stating each session |
| Memory audit trail | Dream Diary | None |
| Manual memory control | promote, rem-harness, direct edit | None |
| Scheduled processing | Cron-based (default 3AM) | Not available |
Dispatch relies on conversation history within a session thread. If you start a new thread, you start with a blank slate (aside from whatever Claude's general memory feature captures, which is separate from Dispatch). There is no mechanism for Dispatch to learn from your interaction patterns, accumulate project-specific knowledge, or improve its behavior over time based on your corrections.
This is a fundamental architectural difference. OpenClaw with Dreaming is designed to be a long-lived agent that gets better the more you use it. Dispatch is designed to be a capable assistant that executes tasks in the moment but does not build lasting context.
For operators who use their agent daily and want it to accumulate expertise, Dreaming is a major differentiator. For operators who use an agent occasionally for one-off tasks, the difference matters less.
Dreaming does not need your most capable model. The processing is pattern recognition and text synthesis — tasks that mid-tier models handle well. You can save significant API costs by assigning a cheaper model specifically to Dreaming:
{
"dreaming": {
"enabled": true,
"model": "gpt-4.1-mini"
}
}
GPT-4.1-mini, Claude Haiku, or Qwen 2.5-72B all work well for Dreaming at a fraction of the cost of frontier models. The quality difference in memory consolidation is minimal because Dreaming is extracting structured information, not generating creative content.
If you want full control over what gets written to MEMORY.md, enable review mode:
{
"dreaming": {
"enabled": true,
"mode": "review"
}
}
In review mode, Dreaming runs all three phases but stops before writing to MEMORY.md. Instead, it writes proposed changes to ~/.openclaw/dreaming-pending.md. The next time you start an OpenClaw session, you are prompted to review and approve or reject each proposed memory change.
Review mode is useful during the first week of using Dreaming, when you want to verify that the scoring is capturing the right information before trusting it to run fully autonomously.
If you work on multiple projects and want separate memory stores for each, configure project-scoped MEMORY.md paths:
# In your project's .openclaw/config.json
{
"dreaming": {
"enabled": true,
"memory_path": "./.openclaw/MEMORY.md"
}
}
This writes project-specific memories to a MEMORY.md within the project directory rather than the global one. The global MEMORY.md still captures cross-project preferences (communication style, general coding conventions), while project-scoped files capture project-specific knowledge.
Does Dreaming consume API tokens? Yes. Dreaming uses your configured LLM to process conversation history and generate memory insights. Each Dreaming session consumes tokens proportional to the amount of conversation history being processed. A typical nightly session processing a day's worth of conversations costs between $0.02 and $0.15 in API tokens, depending on your model and conversation volume. You can control costs by limiting the history window or using a cheaper model specifically for Dreaming.
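As a back-of-envelope check on those numbers, token cost scales linearly with history volume. The per-million-token prices below are illustrative placeholders, not actual provider rates:

```python
def session_cost(input_tokens: int, output_tokens: int,
                 usd_per_m_in: float, usd_per_m_out: float) -> float:
    """Rough cost of one Dreaming session given token counts and unit prices."""
    return (input_tokens / 1e6) * usd_per_m_in + (output_tokens / 1e6) * usd_per_m_out

# e.g. a day's history of ~128k input tokens and ~4k tokens of synthesized memory
cost = session_cost(128_000, 4_000, usd_per_m_in=0.40, usd_per_m_out=1.60)
# 0.128 * 0.40 + 0.004 * 1.60 = 0.0512 + 0.0064 = ~$0.058
```

With these placeholder rates the estimate lands comfortably inside the $0.02 to $0.15 range quoted above.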
Can you review what Dreaming changed? Not in real time during the Dreaming process, but you can review everything after the fact using the Dream Diary, which logs every memory that was promoted, updated, or discarded during each session. If you find a memory that was incorrectly promoted, you can manually edit MEMORY.md to remove it. For operators who want full control, setting Dreaming to "review" mode requires manual approval of each proposed memory change before anything is written.
Does Dreaming interfere with active sessions? No. Dreaming is designed to run during idle periods and will not start if OpenClaw detects active usage. The default 3AM cron schedule is chosen specifically to avoid conflicts with active sessions. If you trigger Dreaming manually via the /dreaming command during an active session, it will process only completed conversations — not the one currently in progress. There is no performance impact on your active session.
Does Claude Dispatch have anything comparable? No. As of April 2026, Claude Dispatch has no equivalent to Dreaming or any autonomous memory consolidation system. Dispatch retains conversation history within individual sessions but does not process that history to build persistent, cross-session memory. Each Dispatch session starts with whatever context is in the current conversation thread. OpenClaw's Dreaming system is a unique differentiator — it gives your agent the ability to learn and accumulate knowledge over time without manual memory management.