Remote OpenClaw Blog
How to Use OpenClaw MCP Servers With Docker
MCP (Model Context Protocol) servers extend your AI coding agent with external tools and data sources. Running them in Docker containers gives you isolation, reproducibility, and security that bare-metal installations cannot match. This guide covers containerized MCP server setups, docker-compose configurations, networking, security best practices, and common server configurations.
Why Docker for MCP Servers?
MCP servers are long-running processes that your agent connects to for additional capabilities like database access, API integrations, or file system tools. Running them directly on your machine creates several problems:
- Dependency conflicts between different servers
- Security exposure from servers with file system or network access
- Inconsistent environments across team members
- Difficult cleanup when you stop using a server
Docker solves all of these. Each server runs in its own container with defined dependencies, limited permissions, and easy teardown.
Prerequisites
You need Docker and Docker Compose installed on your machine. Verify with:
docker --version
docker compose version
You also need OpenClaw configured with at least one skill installed from the Bazaar.
Basic MCP Server in Docker
Start with a single MCP server running in a container. Here is a Dockerfile for a filesystem MCP server:
# Dockerfile.mcp-filesystem
FROM node:22-slim
WORKDIR /app
RUN npm install -g @modelcontextprotocol/server-filesystem
EXPOSE 3100
CMD ["mcp-server-filesystem", "--port", "3100", "--root", "/workspace"]
Build and run it:
docker build -t mcp-filesystem -f Dockerfile.mcp-filesystem .
docker run -d \
  --name mcp-filesystem \
  -p 3100:3100 \
  -v "$(pwd)":/workspace:ro \
  mcp-filesystem
The :ro suffix on the volume mount makes your project directory read-only inside the container, so the MCP server can read your files but cannot modify them.
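To convince yourself the mount really is read-only, attempt a write from inside the container. This sketch assumes the mcp-filesystem container from the docker run above is still running, and skips quietly when it is not:

```shell
# Probe the :ro mount from inside the running container; skips quietly
# when Docker or the container is unavailable.
if command -v docker >/dev/null 2>&1 && docker inspect mcp-filesystem >/dev/null 2>&1; then
  if docker exec mcp-filesystem sh -c 'touch /workspace/.ro-probe' 2>/dev/null; then
    result="writable (check your mount flags)"
  else
    result="read-only, as intended"
  fi
else
  result="skipped (mcp-filesystem is not running here)"
fi
echo "workspace mount: $result"
```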
Docker Compose for Multiple Servers
Most real setups involve multiple MCP servers. Docker Compose makes this manageable:
# docker-compose.mcp.yml
version: "3.8"

services:
  mcp-filesystem:
    build:
      context: .
      dockerfile: Dockerfile.mcp-filesystem
    ports:
      - "3100:3100"
    volumes:
      - ./:/workspace:ro
    restart: unless-stopped
    networks:
      - mcp-network

  mcp-postgres:
    image: mcp/postgres-server:latest
    ports:
      - "3101:3101"
    environment:
      DATABASE_URL: postgresql://user:password@postgres:5432/mydb
    depends_on:
      - postgres
    restart: unless-stopped
    networks:
      - mcp-network

  mcp-github:
    image: mcp/github-server:latest
    ports:
      - "3102:3102"
    environment:
      GITHUB_TOKEN: ${GITHUB_TOKEN}
    restart: unless-stopped
    networks:
      - mcp-network

  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - mcp-network

networks:
  mcp-network:
    driver: bridge

volumes:
  pgdata:
Start all servers:
docker compose -f docker-compose.mcp.yml up -d
Configuring OpenClaw to Use Dockerized MCP Servers
Tell OpenClaw where to find your MCP servers by updating the configuration:
# openclaw.toml
[mcp]
transport = "http"
[[mcp.servers]]
name = "filesystem"
url = "http://localhost:3100"
description = "Read-only access to the project workspace"
[[mcp.servers]]
name = "postgres"
url = "http://localhost:3101"
description = "PostgreSQL database queries and schema inspection"
[[mcp.servers]]
name = "github"
url = "http://localhost:3102"
description = "GitHub API access for issues, PRs, and repos"
Verify the connections:
openclaw mcp status
This shows which servers are connected and healthy.
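If a server shows up as disconnected, you can probe the published ports directly. This sketch assumes each server exposes an HTTP health endpoint at /health, which may not match your images; adjust the path to whatever your servers actually serve:

```shell
# Probe each published MCP port; a 2-second timeout keeps this fast
# when a server is down.
checked=0
for port in 3100 3101 3102; do
  if curl -fsS --max-time 2 "http://localhost:${port}/health" >/dev/null 2>&1; then
    echo "port ${port}: responding"
  else
    echo "port ${port}: not responding"
  fi
  checked=$((checked + 1))
done
```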
Networking Best Practices
Use a Dedicated Network
Always create a dedicated Docker network for MCP servers. This isolates them from other containers and gives you control over inter-service communication:
networks:
  mcp-network:
    driver: bridge
    internal: false  # Set to true if servers should not access the internet
Limit Port Exposure
Only expose ports that OpenClaw needs to connect to. Internal communication between MCP servers (like the postgres server talking to the postgres database) should happen over the Docker network, not through published ports:
mcp-postgres:
  ports:
    - "127.0.0.1:3101:3101"  # Bind to localhost only
  networks:
    - mcp-network

postgres:
  # No ports exposed to host — only accessible within mcp-network
  networks:
    - mcp-network
DNS Resolution
Services within the same Docker network can reach each other by service name. The mcp-postgres service connects to postgres:5432, not localhost:5432. This is why the DATABASE_URL in the compose file uses postgres as the hostname.
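You can watch this resolution happen from inside a container. The sketch below assumes the compose stack is running and that getent is available in the mcp-postgres image, which may not hold for minimal images; it skips quietly otherwise:

```shell
# Resolve the postgres service name from inside the mcp-postgres container;
# skips quietly when the stack is not running here.
if command -v docker >/dev/null 2>&1 && \
   docker compose -f docker-compose.mcp.yml ps -q mcp-postgres 2>/dev/null | grep -q .; then
  resolved=$(docker compose -f docker-compose.mcp.yml exec -T mcp-postgres \
    getent hosts postgres 2>/dev/null || echo "lookup failed")
else
  resolved="skipped (stack not running here)"
fi
echo "postgres resolves to: $resolved"
```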
Security Hardening
MCP servers can have significant access to your data. Lock them down with these practices.
Run as Non-Root
Add a non-root user to your Dockerfiles:
FROM node:22-slim
RUN groupadd -r mcp && useradd -r -g mcp mcp
WORKDIR /app
RUN npm install -g @modelcontextprotocol/server-filesystem
USER mcp
EXPOSE 3100
CMD ["mcp-server-filesystem", "--port", "3100", "--root", "/workspace"]
Read-Only File Systems
Mount the container's root filesystem as read-only and provide writable tmpfs mounts only where needed:
mcp-filesystem:
  read_only: true
  tmpfs:
    - /tmp
  volumes:
    - ./:/workspace:ro
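A read-only root filesystem pairs well with dropping Linux capabilities and blocking privilege escalation. Both are standard Compose options; the fragment below is a sketch for the filesystem server, not something the original stack requires:

```yaml
mcp-filesystem:
  read_only: true
  cap_drop:
    - ALL                       # drop every Linux capability the server does not need
  security_opt:
    - no-new-privileges:true    # block setuid-style privilege escalation
```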
Resource Limits
Prevent runaway MCP servers from consuming all your resources:
mcp-filesystem:
  deploy:
    resources:
      limits:
        cpus: "0.5"
        memory: 512M
      reservations:
        cpus: "0.1"
        memory: 128M
Secrets Management
Never hard-code secrets in docker-compose files. Use environment variables or Docker secrets:
mcp-github:
  environment:
    GITHUB_TOKEN: ${GITHUB_TOKEN}
  # Or use Docker secrets for production:
  secrets:
    - github_token

secrets:
  github_token:
    file: ./secrets/github_token.txt
Create a .env file for local development (and add it to .gitignore):
# .env (gitignored)
GITHUB_TOKEN=ghp_your_token_here
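Note that docker compose substitutes an empty string (with only a warning) when a referenced variable is unset, so a small guard script can fail fast before the stack starts. This is a sketch assuming the token lives in GITHUB_TOKEN:

```shell
#!/bin/sh
# Guard: docker compose substitutes an empty string for unset variables,
# so verify the token exists before bringing the stack up.
if [ -n "${GITHUB_TOKEN:-}" ]; then
  token_state="present"
  # safe to run: docker compose -f docker-compose.mcp.yml up -d
else
  token_state="missing"
  echo "GITHUB_TOKEN is not set; refusing to start" >&2
fi
echo "token check: $token_state"
```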
Common MCP Server Configurations
Here are ready-to-use configurations for popular MCP servers.
Fetch Server (Web Requests)
mcp-fetch:
  image: mcp/fetch-server:latest
  ports:
    - "127.0.0.1:3103:3103"
  environment:
    MAX_RESPONSE_SIZE: "5mb"
    ALLOWED_DOMAINS: "api.github.com,api.openai.com"
  networks:
    - mcp-network
Memory Server (Persistent Knowledge Graph)
mcp-memory:
  image: mcp/memory-server:latest
  ports:
    - "127.0.0.1:3104:3104"
  volumes:
    - memory-data:/data
  networks:
    - mcp-network

volumes:
  memory-data:
SQLite Server
mcp-sqlite:
  image: mcp/sqlite-server:latest
  ports:
    - "127.0.0.1:3105:3105"
  volumes:
    - ./data:/data:ro
  environment:
    DATABASE_PATH: /data/app.db
  networks:
    - mcp-network
Lifecycle Management
Starting and Stopping
# Start all MCP servers
docker compose -f docker-compose.mcp.yml up -d
# Stop all servers
docker compose -f docker-compose.mcp.yml down
# View logs
docker compose -f docker-compose.mcp.yml logs -f mcp-filesystem
# Restart a specific server
docker compose -f docker-compose.mcp.yml restart mcp-postgres
Health Checks
Add health checks to your compose file so Docker tracks each server's state. Two caveats: plain Docker marks a failing container as unhealthy but does not restart it on its own (use an orchestrator or `docker compose up --wait` to act on the status), and the node:22-slim base image does not ship curl, so install it in the Dockerfile before referencing it in a check:
mcp-filesystem:
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:3100/health"]
    interval: 30s
    timeout: 10s
    retries: 3
    start_period: 10s
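Once a health check is defined, you can query Docker's recorded status directly. The container name below assumes the standalone docker run from earlier; under compose, the container is typically named `<project>-mcp-filesystem-1`:

```shell
# Read the health status Docker records for the container; falls back to a
# notice when Docker or the container is unavailable.
if command -v docker >/dev/null 2>&1; then
  status=$(docker inspect --format '{{.State.Health.Status}}' mcp-filesystem 2>/dev/null \
    || echo "container not found")
else
  status="docker not available"
fi
echo "mcp-filesystem health: $status"
```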
Auto-Start With Your Project
Create a script that starts MCP servers alongside your development environment:
#!/bin/bash
# scripts/dev.sh
echo "Starting MCP servers and waiting for health checks..."
# --wait blocks until services are running (and healthy, for services that
# define health checks); `docker compose wait` would instead wait for
# containers to *stop*, which is not what we want here.
docker compose -f docker-compose.mcp.yml up -d --wait
echo "Starting development server..."
bun dev
Summary
Running MCP servers in Docker gives you consistent, secure, and reproducible environments for extending your AI coding agent. Start with a single containerized server, expand to a multi-service docker-compose setup, and lock everything down with security best practices. Combine this setup with skills from the OpenClaw Bazaar to build a powerful, fully configured development environment.