Remote OpenClaw Blog

How to Build Custom OpenClaw Personas From Skills

7 min read

An OpenClaw skill teaches your agent one thing. A persona teaches it how to be a specific kind of developer. When you combine multiple skills with memory configuration, priority ordering, and behavioral rules, you get an agent that works like a specialized team member — not a generic assistant.

This guide shows you how to build custom personas from skills, configure memory so the agent retains project context, set skill priorities so the agent knows which rules win when skills conflict, and test the persona before deploying it to your team.

What Is a Persona

A persona is a bundle of skills, memory templates, and configuration that shapes how your agent approaches work. Think of the difference between asking a general-purpose agent to review your React code versus asking a "Senior React Engineer" persona that knows your component library, follows your testing conventions, and remembers your architecture decisions.

Personas live in your project's .openclaw/personas/ directory. Each persona is a YAML file that references skills and adds persona-level configuration on top.

Creating Your First Persona

Start with the persona scaffold command:

openclaw persona create frontend-engineer

This creates a file at .openclaw/personas/frontend-engineer.yaml:

# .openclaw/personas/frontend-engineer.yaml
name: Frontend Engineer
description: A senior frontend engineer specializing in React and TypeScript
version: 1.0.0

skills: []

memory:
  enabled: false

priorities: []

rules: []

Now fill it in with skills. Browse the OpenClaw Bazaar skills directory to find skills that match your frontend stack:

skills:
  - name: react-19-patterns
    version: ">=2.0.0"
  - name: typescript-strict
    version: ">=1.5.0"
  - name: nextjs-app-router
    version: ">=3.0.0"
  - name: tailwind-css-v4
    version: ">=1.0.0"
  - name: testing-library-react
    version: ">=2.0.0"
  - name: accessibility-wcag
    version: ">=1.2.0"

Each skill reference includes a version constraint so your persona stays stable as skills evolve. A minimum-version floor like ">=2.0.0" guarantees the bug fixes and improvements you rely on are present. Note that a floor alone does not protect against breaking changes — if a skill's major releases tend to break compatibility, cap or pin the version instead.
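Under the hood, a floor constraint like ">=2.0.0" reduces to a component-wise comparison of version numbers. Here is a minimal Python sketch of that check — the real OpenClaw resolver and its full constraint grammar are assumptions, and pre-release tags are ignored:

```python
def parse(version: str) -> tuple:
    # "2.1.0" -> (2, 1, 0); tuples compare component by component
    return tuple(int(part) for part in version.split("."))

def satisfies(installed: str, constraint: str) -> bool:
    """Check an installed skill version against a '>=' floor constraint.
    Illustrative only -- this sketch handles floors, nothing richer."""
    assert constraint.startswith(">="), "sketch handles only '>=' constraints"
    return parse(installed) >= parse(constraint[2:])
```

Because tuples compare lexicographically, `(2, 1, 0) >= (2, 0, 0)` holds while `(1, 9, 9)` does not, which matches the intuitive reading of the constraint.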

Configuring Memory

Memory lets your persona retain information across conversations. Without memory, every new chat starts from zero — the agent does not know your project structure, your naming conventions, or decisions made in previous sessions.

Enable and configure memory in your persona:

memory:
  enabled: true
  storage: local          # local | cloud | git
  retention: session      # session | project | permanent
  categories:
    architecture:
      description: "Project architecture decisions and patterns"
      examples:
        - "We use the repository pattern for data access"
        - "All API routes go through a middleware chain"
    conventions:
      description: "Team coding conventions and preferences"
      examples:
        - "Use named exports, not default exports"
        - "Components use the .tsx extension, utilities use .ts"
    context:
      description: "Current project state and recent changes"
      auto_capture: true
      max_entries: 50

The categories section structures what the agent remembers. Architecture decisions persist across sessions, so the agent does not suggest patterns that contradict established choices. Conventions enforce team standards. Context auto-captures recent changes so the agent knows what code was modified recently.

You can seed memory with initial entries:

memory:
  seed:
    architecture:
      - "The app uses Next.js App Router with server components by default"
      - "State management uses Zustand, not Redux"
      - "API calls go through a typed fetch wrapper in lib/api-client.ts"
    conventions:
      - "All components accept a className prop for style overrides"
      - "Tests live next to their source files with a .test.tsx suffix"
      - "Error boundaries wrap each route segment"

This seed data gives the persona immediate context without requiring several conversations to build up knowledge.
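Conceptually, the memory section describes a per-category store: seeded entries, auto-captured entries, and a cap that drops the oldest context first. A small Python sketch of that data structure — the class and method names are illustrative, not the real OpenClaw implementation:

```python
from collections import deque

class PersonaMemory:
    """Sketch of a category-based memory store with a max_entries cap.
    Names and behavior are assumptions based on the config above."""

    def __init__(self, max_entries: int = 50):
        self.categories: dict[str, deque] = {}
        self.max_entries = max_entries

    def seed(self, category: str, entries: list[str]) -> None:
        # Initial entries from the persona's memory.seed block
        self._bucket(category).extend(entries)

    def capture(self, category: str, entry: str) -> None:
        # Auto-capture: past the cap, the oldest entry is dropped
        self._bucket(category).append(entry)

    def recall(self, category: str) -> list[str]:
        return list(self._bucket(category))

    def _bucket(self, category: str) -> deque:
        return self.categories.setdefault(
            category, deque(maxlen=self.max_entries))
```

A `deque` with `maxlen` gives the max_entries behavior for free: appending past the limit silently evicts from the other end, so the context category always holds the most recent changes.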

Setting Skill Priorities

When multiple skills provide overlapping guidance, the agent needs to know which one takes precedence. For example, your typescript-strict skill might say to always use explicit return types, while react-19-patterns might show examples without them. Priorities resolve these conflicts:

priorities:
  - skill: typescript-strict
    weight: 10
    scope: all
  - skill: react-19-patterns
    weight: 9
    scope: "**/*.tsx"
  - skill: nextjs-app-router
    weight: 9
    scope: "app/**/*"
  - skill: testing-library-react
    weight: 8
    scope: "**/*.test.{ts,tsx}"
  - skill: tailwind-css-v4
    weight: 7
    scope: "**/*.tsx"
  - skill: accessibility-wcag
    weight: 8
    scope: "**/*.tsx"

Higher weight means higher priority. The scope field restricts where a skill's priority applies using glob patterns. TypeScript strictness applies everywhere. React patterns apply to .tsx files. Next.js patterns apply to the app/ directory. This prevents the Next.js skill from influencing code outside the app directory, like utility libraries or scripts.
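To make the resolution concrete, here is a Python sketch of how a file path might be matched against scoped priorities — the glob semantics (`**/` spans directories, `*` stops at `/`, `{a,b}` alternates) and the data shape are assumptions, not OpenClaw's actual matcher:

```python
import re

# Mirrors the priorities block above
PRIORITIES = [
    {"skill": "typescript-strict", "weight": 10, "scope": "all"},
    {"skill": "react-19-patterns", "weight": 9, "scope": "**/*.tsx"},
    {"skill": "nextjs-app-router", "weight": 9, "scope": "app/**/*"},
    {"skill": "testing-library-react", "weight": 8, "scope": "**/*.test.{ts,tsx}"},
    {"skill": "tailwind-css-v4", "weight": 7, "scope": "**/*.tsx"},
    {"skill": "accessibility-wcag", "weight": 8, "scope": "**/*.tsx"},
]

def glob_to_regex(pattern: str) -> str:
    """Translate a scope glob to a regex. 'all' matches everything,
    '**/' matches any directory depth, '*' stops at '/'."""
    if pattern == "all":
        return r".*"
    out, i = [], 0
    while i < len(pattern):
        if pattern.startswith("**/", i):
            out.append(r"(?:.*/)?")
            i += 3
        elif pattern[i] == "*":
            out.append(r"[^/]*")
            i += 1
        elif pattern[i] == "{":
            j = pattern.index("}", i)
            alts = pattern[i + 1:j].split(",")
            out.append("(?:" + "|".join(map(re.escape, alts)) + ")")
            i = j + 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return "".join(out)

def skills_for(path: str, priorities=PRIORITIES) -> list:
    """Skills whose scope matches the path, highest weight first;
    ties keep declaration order because the sort is stable."""
    matched = [p for p in priorities
               if re.fullmatch(glob_to_regex(p["scope"]), path)]
    return [p["skill"] for p in sorted(matched, key=lambda p: -p["weight"])]
```

For `app/dashboard/page.tsx`, every `.tsx` and `app/`-scoped skill matches and `typescript-strict` comes out on top; for a plain utility like `lib/format.ts`, only the `all`-scoped skill applies.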

Adding Persona-Level Rules

Beyond skills, personas can define rules that apply across all skills. These are team-specific conventions that might not fit into any single skill:

rules:
  - id: no-default-exports
    description: "Use named exports in all files except Next.js pages"
    severity: warning
    pattern: "export default"
    exclude:
      - "app/**/page.tsx"
      - "app/**/layout.tsx"
      - "app/**/loading.tsx"

  - id: require-error-boundary
    description: "Every route segment must have an error boundary"
    severity: error
    check: "Each directory in app/ with a page.tsx must have an error.tsx"

  - id: max-component-length
    description: "Components should not exceed 200 lines"
    severity: suggestion
    check: "Component files should be under 200 lines; suggest extraction if over"

  - id: require-test-coverage
    description: "New components must have accompanying tests"
    severity: warning
    check: "Every new .tsx file should have a corresponding .test.tsx file"

Rules give you fine-grained control over agent behavior without creating a full skill for each convention. They are quick to write and easy to update as your team's standards evolve.
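A pattern-based rule like no-default-exports boils down to a regex search gated by exclude globs. The sketch below shows one plausible evaluation — `fnmatch` is a rough stand-in for real glob matching (its `*` crosses `/`, unlike stricter globs), and the finding shape is an assumption:

```python
import fnmatch
import re

# Mirrors the no-default-exports rule above
RULE = {
    "id": "no-default-exports",
    "severity": "warning",
    "pattern": r"export default",
    "exclude": ["app/**/page.tsx", "app/**/layout.tsx", "app/**/loading.tsx"],
}

def check_rule(rule: dict, path: str, source: str):
    """Return a finding if the rule's pattern appears in a non-excluded
    file, else None. Sketch only -- not the real OpenClaw rule engine."""
    if any(fnmatch.fnmatch(path, glob) for glob in rule["exclude"]):
        return None  # excluded paths are allowed to violate the pattern
    if re.search(rule["pattern"], source):
        return {"id": rule["id"], "severity": rule["severity"], "path": path}
    return None
```

A default export in an ordinary component file produces a warning-level finding, while the same code in an excluded Next.js page file passes silently.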


Building a Backend Persona

Personas are not limited to frontend work. Here is a backend persona for a Node.js API project:

# .openclaw/personas/backend-engineer.yaml
name: Backend API Engineer
description: A backend engineer specializing in Node.js APIs with PostgreSQL

skills:
  - name: nodejs-api-patterns
    version: ">=2.0.0"
  - name: postgresql-expert
    version: ">=1.5.0"
  - name: express-middleware
    version: ">=1.0.0"
  - name: security-review
    version: ">=3.0.0"
  - name: api-testing-patterns
    version: ">=1.2.0"

memory:
  enabled: true
  retention: project
  seed:
    architecture:
      - "RESTful API with Express.js and TypeScript"
      - "PostgreSQL with Knex.js query builder, no ORM"
      - "JWT authentication with refresh token rotation"
    conventions:
      - "All routes are versioned under /api/v1/"
      - "Error responses follow RFC 7807 Problem Details format"
      - "Database queries use the repository pattern"

priorities:
  - skill: security-review
    weight: 10
    scope: all
  - skill: nodejs-api-patterns
    weight: 9
    scope: "src/**/*"
  - skill: postgresql-expert
    weight: 9
    scope: "src/repositories/**/*"
  - skill: api-testing-patterns
    weight: 8
    scope: "**/*.test.ts"

rules:
  - id: no-raw-sql-in-routes
    description: "Route handlers must not contain direct SQL queries"
    severity: error
    check: "SQL queries should only appear in repository files"

  - id: require-input-validation
    description: "All route handlers must validate input with zod schemas"
    severity: error
    check: "Every POST/PUT/PATCH handler must use a validation middleware"

Notice that security-review has the highest priority for the backend persona. Every piece of code the agent generates or reviews gets checked against security best practices first.

Testing Your Persona

Before rolling out a persona to your team, test it against real scenarios. OpenClaw provides a persona testing command:

openclaw persona test frontend-engineer --scenario "Create a new user profile component"

This runs the persona against a predefined scenario and shows you the agent's output. Check for:

  1. Skill activation — Did the agent use the expected skills?
  2. Priority resolution — When skills conflicted, did the higher-priority skill win?
  3. Memory usage — Did the agent reference seeded memory entries?
  4. Rule compliance — Does the generated code follow persona rules?

You can also create test suites for automated persona validation:

# .openclaw/personas/tests/frontend-engineer.test.yaml
tests:
  - name: "Component generation follows conventions"
    prompt: "Create a UserAvatar component"
    expect:
      - contains: "export function UserAvatar"    # Named export
      - contains: "className"                      # Accepts className prop
      - not_contains: "export default"             # No default export
      - file_created: "**/*.test.tsx"              # Test file created

  - name: "Uses correct state management"
    prompt: "Add global state for user preferences"
    expect:
      - contains: "zustand"                        # Uses Zustand, not Redux
      - not_contains: "redux"
      - not_contains: "createStore"

  - name: "Respects Next.js patterns in app directory"
    prompt: "Create a new dashboard page"
    expect:
      - contains: "export default"                 # Pages use default export
      - contains: "error.tsx"                      # Error boundary included

Run the full test suite with:

openclaw persona test frontend-engineer --suite

This validates that your persona behaves correctly across a range of scenarios. Run it after updating skills or changing persona configuration to catch regressions.
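The expectation types in the test suite above (`contains`, `not_contains`, `file_created`) are straightforward to evaluate against a captured run. Here is a Python sketch of that semantics — an illustration of the implied behavior, not OpenClaw's actual test runner:

```python
import fnmatch

def evaluate(expectations: list, output: str, created_files: list):
    """Evaluate expect entries against an agent run's output and the
    files it created. Returns (all_passed, per-expectation results)."""
    results = []
    for exp in expectations:
        if "contains" in exp:
            ok = exp["contains"] in output
        elif "not_contains" in exp:
            ok = exp["not_contains"] not in output
        elif "file_created" in exp:
            # fnmatch is a rough stand-in for the runner's glob matching
            ok = any(fnmatch.fnmatch(f, exp["file_created"])
                     for f in created_files)
        else:
            ok = False  # unknown expectation type fails the test
        results.append((exp, ok))
    return all(ok for _, ok in results), results
```

Running the "Component generation follows conventions" expectations against a named-export component with a co-located test file passes all four checks.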

Sharing Personas Across a Team

Once your persona is tested, share it with your team by committing the persona file to your repository. Every developer who clones the repo gets the same persona configuration:

git add .openclaw/personas/frontend-engineer.yaml
git commit -m "Add frontend engineer persona"

Team members activate the persona with:

openclaw persona use frontend-engineer

This ensures every developer on your team gets consistent agent behavior. No more "it works differently on my machine" for AI-assisted development.

Personas are a powerful way to encode your team's expertise into your AI agent. Instead of each developer configuring skills individually, a well-built persona captures your collective knowledge and applies it consistently across every interaction.

