Remote OpenClaw Blog
How to Build Your Own MCP Server for OpenClaw
The Model Context Protocol is an open standard that lets AI agents communicate with external tools and services. While there are hundreds of pre-built MCP servers available in the OpenClaw Bazaar, sometimes you need one tailored to your own internal tools, proprietary APIs, or unique workflows. This guide walks you through building a custom MCP server from scratch — from understanding the protocol to publishing your finished server for the community.
Architecture Overview
An MCP server is a lightweight process that communicates with AI agents over a standardized protocol. The agent sends requests; the server processes them and returns structured responses. All communication happens over standard input and output (stdio) or HTTP with Server-Sent Events (SSE).
At a high level, the architecture looks like this:
AI Agent <---> MCP Client <---> MCP Server <---> Your API / Service
The agent never talks to your service directly. Instead, the MCP client inside the agent sends protocol-compliant messages to your server, and your server translates those into whatever API calls your service needs.
Every MCP server exposes one or more of three primitive types:
- Tools: Functions the agent can call with specific parameters and receive structured results. Think of these as API endpoints.
- Resources: Data the agent can read, like files, database records, or documentation. These provide context without requiring function calls.
- Prompts: Pre-defined prompt templates that guide the agent through specific workflows.
Most custom servers focus on tools, since they provide the most interactive functionality.
Understanding the Protocol Specification
MCP uses JSON-RPC 2.0 as its message format: every message is a JSON object with a well-defined structure. There are three types of messages:
Requests are sent from the client to the server. They include a method name and optional parameters:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}
Responses are sent back from the server with the result:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_status",
        "description": "Get the current deployment status",
        "inputSchema": {
          "type": "object",
          "properties": {
            "environment": { "type": "string" }
          },
          "required": ["environment"]
        }
      }
    ]
  }
}
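When a request fails, the server returns an error object instead of a result, following standard JSON-RPC 2.0 conventions:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32601,
    "message": "Method not found"
  }
}
```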
Notifications are one-way messages that do not expect a response. They are used for progress updates and logging.
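A notification is identified by the absence of an id field. For example, a progress update (using the progress-notification shape from the MCP spec) looks like:

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/progress",
  "params": {
    "progressToken": "deploy-42",
    "progress": 50,
    "total": 100
  }
}
```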
The lifecycle of a connection follows a predictable pattern. The client sends an initialize request, the server responds with its capabilities, and then the client can call tools/list to discover available tools and tools/call to invoke them.
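Concretely, the handshake opens with an initialize request like the following (the protocolVersion value depends on which spec revision your client and SDK target):

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```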
Building a Custom Server Step by Step
Let's build a custom MCP server that exposes a deployment status API. This example uses TypeScript and the official MCP SDK, which handles the protocol details so you can focus on your business logic.
Step 1: Set Up the Project
Create a new project and install the MCP SDK:
mkdir mcp-deploy-status
cd mcp-deploy-status
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
Create a tsconfig.json for your TypeScript configuration:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*"]
}
Step 2: Define Your Tools
Create src/index.ts and set up the server with its tools:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
  name: "deploy-status",
  version: "1.0.0",
});
server.tool(
  "get_deploy_status",
  "Get the current deployment status for an environment",
  {
    environment: z.enum(["production", "staging", "development"]),
  },
  async ({ environment }) => {
    const status = await fetchDeployStatus(environment);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(status, null, 2),
        },
      ],
    };
  }
);
server.tool(
  "list_recent_deploys",
  "List recent deployments with their status and timestamp",
  {
    environment: z.enum(["production", "staging", "development"]),
    limit: z.number().min(1).max(50).default(10),
  },
  async ({ environment, limit }) => {
    const deploys = await fetchRecentDeploys(environment, limit);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(deploys, null, 2),
        },
      ],
    };
  }
);
server.tool(
  "trigger_rollback",
  "Roll back to a previous deployment by ID",
  {
    environment: z.enum(["production", "staging", "development"]),
    deployId: z.string(),
  },
  async ({ environment, deployId }) => {
    const result = await triggerRollback(environment, deployId);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(result, null, 2),
        },
      ],
    };
  }
);
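Once registered, these definitions are what the agent discovers through tools/list from the protocol section earlier. The SDK converts each zod shape into JSON Schema, so the rollback tool's entry will look roughly like this (output trimmed for clarity):

```json
{
  "name": "trigger_rollback",
  "description": "Roll back to a previous deployment by ID",
  "inputSchema": {
    "type": "object",
    "properties": {
      "environment": { "type": "string", "enum": ["production", "staging", "development"] },
      "deployId": { "type": "string" }
    },
    "required": ["environment", "deployId"]
  }
}
```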
Step 3: Implement Your Business Logic
Add the functions that talk to your actual deployment system. This is where you integrate with your internal APIs:
async function fetchDeployStatus(environment: string) {
  const response = await fetch(
    `https://deploy.internal.example.com/api/status/${environment}`,
    {
      headers: {
        Authorization: `Bearer ${process.env.DEPLOY_API_TOKEN}`,
      },
    }
  );
  return response.json();
}
async function fetchRecentDeploys(environment: string, limit: number) {
  const response = await fetch(
    `https://deploy.internal.example.com/api/deploys/${environment}?limit=${limit}`,
    {
      headers: {
        Authorization: `Bearer ${process.env.DEPLOY_API_TOKEN}`,
      },
    }
  );
  return response.json();
}
async function triggerRollback(environment: string, deployId: string) {
  const response = await fetch(
    "https://deploy.internal.example.com/api/rollback",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.DEPLOY_API_TOKEN}`,
      },
      body: JSON.stringify({ environment, deployId }),
    }
  );
  return response.json();
}
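One caveat with the fetch helpers above: response.json() will happily parse an error body on a non-2xx status and hand the agent confusing data. A small guard makes failures explicit (the helper name readJsonOrThrow is ours, not part of any SDK):

```typescript
// Hypothetical helper: reject non-2xx responses instead of silently
// parsing an error body as if it were a successful result.
async function readJsonOrThrow(response: Response): Promise<unknown> {
  if (!response.ok) {
    const body = await response.text();
    throw new Error(`Deploy API returned ${response.status}: ${body}`);
  }
  return response.json();
}
```

Each helper can then end with `return readJsonOrThrow(response);` instead of `return response.json();`.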
Step 4: Start the Server
Add the transport connection at the bottom of your src/index.ts:
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("Deploy Status MCP Server running on stdio");
}
main().catch(console.error);
Note that we log to stderr instead of stdout because stdout is reserved for MCP protocol messages.
Step 5: Add the Entry Point
Update your package.json to declare the package as an ES module (required by the Node16 module settings and the .js import specifiers above) and to include a binary entry point:
{
  "name": "mcp-deploy-status",
  "version": "1.0.0",
  "type": "module",
  "bin": {
    "mcp-deploy-status": "./dist/index.js"
  },
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
Add a shebang to the top of your src/index.ts:
#!/usr/bin/env node
Build the project:
npm run build
Testing Your MCP Server
Testing is critical before publishing. The MCP ecosystem includes tools that make testing straightforward.
Manual Testing with the MCP Inspector
The MCP Inspector is an interactive tool for testing your server:
npx @modelcontextprotocol/inspector node dist/index.js
This opens a browser-based UI where you can call each tool, inspect the JSON-RPC messages, and verify that responses match your expectations.
Automated Testing
Write unit tests for your business logic functions and integration tests for the MCP protocol layer. Here is a simple test using the SDK client:
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/index.js"],
});
const client = new Client({ name: "test-client", version: "1.0.0" });
await client.connect(transport);
const tools = await client.listTools();
console.log("Available tools:", tools);
const result = await client.callTool({
  name: "get_deploy_status",
  arguments: { environment: "staging" },
});
console.log("Result:", result);
await client.close();
Testing Checklist
Before publishing, verify these items:
- All tools return valid MCP response objects with a content array.
- Error cases return meaningful error messages instead of crashing the server.
- The server handles missing environment variables gracefully.
- Input validation rejects invalid parameters with clear error descriptions.
- The server starts cleanly and responds to the initialize handshake.
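The second item is easiest to satisfy with a small wrapper around each handler. MCP signals tool failure by setting isError: true on the result rather than crashing the process; here is one way to sketch that (the withErrorHandling name and ToolResult type are illustrative, not SDK exports):

```typescript
// Sketch: wrap a tool handler so unexpected errors become an MCP error
// result (isError: true) instead of crashing the server process.
type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

function withErrorHandling<A>(
  handler: (args: A) => Promise<ToolResult>
): (args: A) => Promise<ToolResult> {
  return async (args) => {
    try {
      return await handler(args);
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      return {
        content: [{ type: "text", text: `Error: ${message}` }],
        isError: true,
      };
    }
  };
}
```

The agent then receives a readable error message it can act on, and the connection stays alive for subsequent calls.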
Publishing Your MCP Server
Once your server is tested and ready, you can publish it for others to use.
Publish to npm
Package your server as an npm package so users can install it globally or use it with npx:
npm login
npm publish
Users can then run your server with:
npx mcp-deploy-status
Register on OpenClaw Bazaar
Submit your server to the OpenClaw Bazaar so it appears in the skills directory. Create an OpenClaw skill definition that references your npm package:
[skill]
name = "mcp-deploy-status"
description = "MCP server for checking deployment status and triggering rollbacks"
version = "1.0.0"
author = "your-username"
[skill.mcp]
command = "npx"
args = ["mcp-deploy-status"]
[skill.mcp.env]
DEPLOY_API_TOKEN = { description = "API token for your deployment system", required = true }
Push this to a public GitHub repository and submit it to the OpenClaw Bazaar directory. The community review process typically takes a few days, after which your server will be discoverable by all OpenClaw users.
Documentation Best Practices
Include a clear README with your server that covers the available tools and their parameters, required environment variables, example configurations, and common use cases. Good documentation dramatically increases adoption.
Next Steps
Building an MCP server is the first step. As your server matures, consider adding resources for static data the agent can reference, prompts for guided workflows, and rate limiting or caching for expensive API calls. The MCP specification continues to evolve, so keep an eye on the official documentation for new capabilities you can leverage.
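As one concrete example of that last point, caching can live entirely in your business-logic layer without touching the protocol. A minimal sketch, assuming a single-process stdio server (the cached helper is ours, not an SDK feature):

```typescript
// Sketch: an in-memory TTL cache for expensive API calls, keyed by a
// string argument. State is per-process, which suits stdio servers.
function cached<T>(
  ttlMs: number,
  fn: (key: string) => Promise<T>
): (key: string) => Promise<T> {
  const store = new Map<string, { value: T; expires: number }>();
  return async (key) => {
    const hit = store.get(key);
    if (hit && hit.expires > Date.now()) return hit.value;
    const value = await fn(key);
    store.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}
```

Wrapping fetchDeployStatus as cached(30_000, fetchDeployStatus) would then serve repeated status checks from memory for 30 seconds.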