vigil
AI agent safety guardrails for tool calls.
Setup & Installation
Install command:
clawhub install robinoppenstam/vigil
If the CLI is not installed:
npx clawhub@latest install robinoppenstam/vigil
Or install with OpenClaw CLI:
openclaw skills install robinoppenstam/vigil
Or paste the repo link into your assistant's chat:
https://github.com/openclaw/skills/tree/main/skills/robinoppenstam/vigil
What This Skill Does
Vigil validates AI agent tool calls before they execute, blocking operations like destructive shell commands, SSRF, SQL injection, path traversal, and credential leaks. It runs as a drop-in npm package with 22 rules, zero runtime dependencies, and under 2ms latency per check.
Zero runtime dependencies and sub-2ms latency mean it can run inline on every tool call without adding measurable overhead to the agent pipeline.
When to Use It
- Blocking rm -rf commands issued by autonomous agents
- Preventing SSRF in agent-driven API and HTTP calls
- Catching SQL injection before a database tool executes
- Auditing all tool calls made by a shell-executing agent
- Adding a safety layer to an existing MCP server
# Vigil — Agent Safety Guardrails
Validates what AI agents DO, not what they SAY. Drop-in safety layer for any tool-calling agent.
## Prerequisites
This skill requires the `vigil-agent-safety` npm package (12.3KB, Apache 2.0 license):
```bash
npm install vigil-agent-safety
```
- **Source code:** https://github.com/hexitlabs/vigil
- **npm:** https://www.npmjs.com/package/vigil-agent-safety
- **The npm package has zero runtime dependencies.** This skill is a wrapper that calls that package.
## Quick Start
```typescript
import { checkAction } from 'vigil-agent-safety';

const result = checkAction({
  agent: 'my-agent',
  tool: 'exec',
  params: { command: 'rm -rf /' },
});

// result.decision === "BLOCK"
// result.reason === "Destructive command pattern"
// result.latencyMs === 0.3
```
## What It Catches
- Destructive commands (rm -rf, mkfs, reverse shells) → BLOCK
- SSRF (metadata endpoints, localhost, internal IPs) → BLOCK
- Data exfiltration (curl to external, .ssh/id_rsa access) → BLOCK
- SQL injection (DROP TABLE, UNION SELECT) → BLOCK
- Path traversal (../../../etc/shadow) → BLOCK
- Prompt injection (ignore instructions, [INST] tags) → BLOCK
- Encoding attacks (base64 decode, eval(atob())) → BLOCK
- Credential leaks (API keys, AWS keys, tokens) → ESCALATE
22 rules. Zero dependencies. Under 2ms per check.
## Modes
```typescript
import { configure } from 'vigil-agent-safety';
// warn = log violations but don't block (recommended to start)
configure({ mode: 'warn' });
// enforce = block dangerous calls
configure({ mode: 'enforce' });
// log = silent logging only
configure({ mode: 'log' });
```
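To make the shape of the Quick Start, What It Catches and Modes sections concrete, here is an added, self-contained example. It is not code from the `vigil-agent-safety` package and does not import it: it is a miniature pre-execution checker with a constructed rule set covering a few of the categories listed above (destructive `rm`, internal-address SSRF, path traversal, SQL injection, AWS key leak). The rule ids, the `WARN` decision, the audit-log helpers, and the way each mode maps a rule hit to a decision are assumptions for illustration; the real package's rules and semantics may differ.

```typescript
// Added example: a miniature pre-execution tool-call guardrail in the shape of
// the Quick Start and Modes sections. It is NOT the vigil-agent-safety package
// and does not import it; rule ids, the WARN decision, the audit helpers and
// the mode-to-decision mapping below are assumptions for illustration.

type Decision = 'ALLOW' | 'WARN' | 'BLOCK' | 'ESCALATE';
type Mode = 'warn' | 'enforce' | 'log';

interface ToolCall { agent: string; tool: string; params: Record<string, unknown>; }
interface Verdict { decision: Decision; reason: string; rule: string | null; latencyMs: number; }
interface Rule { id: string; tools: string[]; pattern: RegExp; decision: 'BLOCK' | 'ESCALATE'; reason: string; }
interface AuditEntry { agent: string; tool: string; decision: Decision; rule: string | null; }

// Constructed rule set covering a few of the categories under "What It Catches".
// `tools` limits which tool's parameters a rule inspects; '*' means every tool.
const RULES: Rule[] = [
  { id: 'destructive-rm', tools: ['exec'], decision: 'BLOCK', reason: 'Destructive command pattern',
    pattern: /\brm\s+-\w*(rf|fr)\w*\b/i },
  { id: 'ssrf-internal', tools: ['http', 'fetch'], decision: 'BLOCK', reason: 'SSRF: request to internal address',
    // localhost, loopback, two private ranges, and the link-local range used by cloud metadata endpoints
    pattern: /^https?:\/\/(localhost|127\.\d{1,3}\.\d{1,3}\.\d{1,3}|10\.\d{1,3}\.\d{1,3}\.\d{1,3}|192\.168\.\d{1,3}\.\d{1,3}|169\.254\.\d{1,3}\.\d{1,3})(:\d+)?(\/|$)/i },
  { id: 'path-traversal', tools: ['read', 'write'], decision: 'BLOCK', reason: 'Path traversal',
    pattern: /(^|[\\/])\.\.([\\/]|$)/ },
  { id: 'sql-injection', tools: ['sql', 'query'], decision: 'BLOCK', reason: 'SQL injection pattern',
    pattern: /\b(DROP\s+TABLE|UNION\s+SELECT)\b/i },
  { id: 'aws-key', tools: ['*'], decision: 'ESCALATE', reason: 'Possible AWS access key in parameters',
    pattern: /\bAKIA[0-9A-Z]{16}\b/ },
];

let currentMode: Mode = 'enforce';
const auditLog: AuditEntry[] = [];

function configure(opts: { mode: Mode }): void { currentMode = opts.mode; }
function getAuditLog(): AuditEntry[] { return auditLog.slice(); }
function clearAuditLog(): void { auditLog.length = 0; }

// Collect every string inside params, descending into nested objects and arrays,
// so a rule sees the value wherever the tool schema placed it.
function stringValues(value: unknown, out: string[] = []): string[] {
  if (typeof value === 'string') {
    out.push(value);
  } else if (Array.isArray(value)) {
    value.forEach(v => stringValues(v, out));
  } else if (value !== null && typeof value === 'object') {
    const obj = value as Record<string, unknown>;
    Object.keys(obj).forEach(k => stringValues(obj[k], out));
  }
  return out;
}

// First rule that applies to this tool and matches any string parameter wins.
function findRule(call: ToolCall): Rule | null {
  const values = stringValues(call.params);
  for (let i = 0; i < RULES.length; i++) {
    const rule = RULES[i];
    if (rule.tools.indexOf('*') === -1 && rule.tools.indexOf(call.tool) === -1) continue;
    if (values.some(v => rule.pattern.test(v))) return rule;
  }
  return null;
}

function checkAction(call: ToolCall): Verdict {
  const start = Date.now();
  const hit = findRule(call);
  let decision: Decision = 'ALLOW';
  if (hit !== null) {
    if (currentMode === 'enforce') decision = hit.decision;   // block or escalate the call
    else if (currentMode === 'warn') decision = 'WARN';        // report it, let it proceed
    // 'log' mode: decision stays ALLOW; the hit is still recorded in the audit log below
  }
  auditLog.push({ agent: call.agent, tool: call.tool, decision, rule: hit ? hit.id : null });
  return { decision, reason: hit ? hit.reason : 'No rule matched', rule: hit ? hit.id : null, latencyMs: Date.now() - start };
}

// Usage: the Quick Start call under enforce mode, then the same call under warn mode.
configure({ mode: 'enforce' });
console.log(checkAction({ agent: 'my-agent', tool: 'exec', params: { command: 'rm -rf /' } }));
configure({ mode: 'warn' });
console.log(checkAction({ agent: 'my-agent', tool: 'exec', params: { command: 'rm -rf /' } }));
```

The design point worth noticing is that every call is written to the audit log whichever mode is active, so switching from `warn` to `enforce` changes only whether the agent's call proceeds, not what you can review afterwards. A regex rule set like this one is deliberately narrow (it will not catch `rm -r -f`, for instance), which is why the package's own ruleset and policy templates are the thing to rely on in production.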
## Use with Clawdbot
Add Vigil as a safety layer for your agent tool calls. The `scripts/vigil-check.js` wrapper lets you validate from the command line:
```bash
# Check a tool call
node scripts/vigil-check.js exec '{"command":"rm -rf /"}'
# → BLOCK: Destructive command pattern
# Check a safe call
node scripts/vigil-check.js read '{"path":"./README.md"}'
# → ALLOW
```
## Policies
Load built-in policy templates:
```typescript
import { loadPolicy } from 'vigil-agent-safety';
loadPolicy('restrictive'); // Tightest rules
loadPolicy('moderate'); // Balanced (default)
loadPolicy('permissive'); // Minimal blocking
```
## CLI
```bash
npx vigil-agent-safety check --tool exec --params '{"command":"ls -la"}'
npx vigil-agent-safety policies
```
## Links
- GitHub: https://github.com/hexitlabs/vigil
- npm: https://www.npmjs.com/package/vigil-agent-safety
- Docs: https://hexitlabs.com/vigil