arc-metrics-dashboard
Track and visualize your agent's operational metrics.
Setup & Installation

Install with the ClawHub CLI:

    clawhub install trypto1019/arc-metrics-dashboard

If the CLI is not installed:

    npx clawhub@latest install trypto1019/arc-metrics-dashboard

Or install with the OpenClaw CLI:

    openclaw skills install trypto1019/arc-metrics-dashboard

Or paste the repo link into your assistant's chat:

    https://github.com/openclaw/skills/tree/main/skills/trypto1019/arc-metrics-dashboard

What This Skill Does
Records operational metrics for agents: API calls, task completions, errors, and timed durations. Generates text-based dashboards and exports data as JSON or CSV. Metrics are stored as flat daily JSON files in ~/.openclaw/metrics/; no database or other infrastructure is required, so the skill is usable immediately in any agent environment.
When to Use It
- Tracking API call volume across providers over a week
- Monitoring error rates during overnight agent runs
- Measuring average task duration after a code change
- Identifying which skills an agent uses most frequently
- Exporting daily metrics to CSV for external analysis
Original SKILL.md
# Metrics Dashboard
Track your agent's operational health. Record events, count things, measure durations, and generate reports.
## Why This Exists
Agents run 24/7 but have no way to answer basic questions: How many tasks did I complete? What's my error rate? How long do API calls take? Which skills do I use most? Without metrics, you're flying blind.
## Commands
### Record a metric
```bash
python3 {baseDir}/scripts/metrics.py record --name api_calls --value 1 --tags '{"provider": "openrouter", "model": "gpt-4"}'
```
### Record a duration
```bash
python3 {baseDir}/scripts/metrics.py timer --name task_duration --seconds 12.5 --tags '{"task": "scan_skill"}'
```
### Increment a counter
```bash
python3 {baseDir}/scripts/metrics.py counter --name posts_published --increment 1
```
### Record an error
```bash
python3 {baseDir}/scripts/metrics.py error --name moltbook_verify_fail --message "Challenge solver returned wrong answer"
```
### View dashboard
```bash
python3 {baseDir}/scripts/metrics.py dashboard
```
### View metrics for today
```bash
python3 {baseDir}/scripts/metrics.py view --period day
```
### View specific metric history
```bash
python3 {baseDir}/scripts/metrics.py view --name api_calls --period week
```
### Export metrics
```bash
python3 {baseDir}/scripts/metrics.py export --format json > metrics.json
python3 {baseDir}/scripts/metrics.py export --format csv > metrics.csv
```
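
Once exported, the JSON can be analyzed with a few lines of Python. A minimal sketch, assuming the export is a list of records with `name`, `value`, and `tags` fields — the actual schema is whatever `metrics.py` emits, so inspect your output first:

```python
import json
from collections import Counter

def api_calls_by_provider(records):
    """Sum api_calls values grouped by the 'provider' tag.

    Assumes each record looks like
    {"name": "api_calls", "value": 1, "tags": {"provider": "openrouter"}}
    -- an illustrative shape, not the skill's confirmed schema.
    """
    totals = Counter()
    for rec in records:
        if rec.get("name") == "api_calls":
            provider = rec.get("tags", {}).get("provider", "unknown")
            totals[provider] += rec.get("value", 0)
    return dict(totals)

# In practice you would load the exported file:
# records = json.load(open("metrics.json"))
# Here, synthetic records stand in for real export data:
sample = [
    {"name": "api_calls", "value": 1, "tags": {"provider": "openrouter"}},
    {"name": "api_calls", "value": 2, "tags": {"provider": "openrouter"}},
    {"name": "task_duration", "value": 12.5, "tags": {}},
    {"name": "api_calls", "value": 1, "tags": {"provider": "anthropic"}},
]
print(api_calls_by_provider(sample))  # {'openrouter': 3, 'anthropic': 1}
```

The same pattern extends to any tag key — swap `"provider"` for `"model"` or `"task"` to slice the data differently.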
## Dashboard Output
The text-based dashboard shows:
- Uptime since first metric recorded
- Total events today
- Top metrics by count
- Error rate
- Average durations for timed operations
- Custom counter values
## Metric Types
- **counter** — Things you count (posts published, skills scanned, comments made)
- **timer** — Things you measure in seconds (API response time, task duration)
- **event** — Things that happened (errors, deployments, restarts)
- **gauge** — Current values (karma, budget remaining, queue depth)
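
To make the distinction concrete, here is a hypothetical sketch of how records of each type might look and be grouped in a report; the field names are illustrative only, not the skill's actual storage schema:

```python
# Hypothetical record shapes -- the real schema is whatever
# metrics.py writes to the daily JSON files.
records = [
    {"type": "counter", "name": "posts_published", "value": 1},         # accumulates
    {"type": "timer", "name": "task_duration", "seconds": 12.5},        # averaged
    {"type": "event", "name": "restart", "message": "agent restarted"}, # counted per occurrence
    {"type": "gauge", "name": "queue_depth", "value": 7},               # latest value wins
]

def names_by_type(records):
    """Group metric names under their type, mirroring how a
    dashboard might section its report."""
    grouped = {}
    for rec in records:
        grouped.setdefault(rec["type"], []).append(rec["name"])
    return grouped

print(names_by_type(records))
```

The grouping step is how the four types end up in separate dashboard sections even though they share one storage file per day.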
## Storage
Metrics are stored in `~/.openclaw/metrics/` as daily JSON files. Lightweight, no database required.
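
Because storage is just a directory of daily JSON files, summaries can be scripted directly against it. A sketch under the assumption that each file is named by date (e.g. `2025-01-30.json`) and holds a list of records; check the real layout in `~/.openclaw/metrics/` before relying on it:

```python
import json
from pathlib import Path

def load_all_days(metrics_dir):
    """Yield (day, records) for each daily JSON file, oldest first.

    Assumes <date>.json filenames, each containing a list of metric
    records; adjust if the actual layout differs.
    """
    for path in sorted(Path(metrics_dir).glob("*.json")):
        with open(path) as f:
            yield path.stem, json.load(f)

def events_per_day(metrics_dir):
    """Count how many metric records were written on each day."""
    return {day: len(records) for day, records in load_all_days(metrics_dir)}
```

For example, `events_per_day(Path.home() / ".openclaw/metrics")` gives a quick day-by-day activity count without touching the CLI.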
## Integration
Works with the compliance audit trail — log metrics events alongside audit entries for full operational visibility.
Security Audits
These signals reflect official OpenClaw status values. A Suspicious status means the skill should be used with extra caution.