Remote OpenClaw Blog

API Rate Limits Killing Your Skills? How to Fix OpenClaw 429 Errors


You installed a handful of skills from the Bazaar, set up a few automated workflows, and now your OpenClaw agent is throwing 429 errors every hour. The skills are not broken. Your API provider is telling you to slow down.

Rate limit errors are one of the most common issues for operators who run multiple skills simultaneously. Each skill that calls the AI model consumes part of your provider quota, and when several skills fire in quick succession, the quota runs dry fast. The good news is that every rate limit scenario has a straightforward solution.

Where Rate Limit Errors Actually Come From

The "API rate limit reached" message does not originate from OpenClaw. It comes from your AI model provider -- Anthropic, OpenAI, Google, or whichever service your skills are calling. The provider is enforcing usage caps to protect its infrastructure.

Providers measure limits in three ways:

  • Requests per minute (RPM): How many API calls you can make in sixty seconds, regardless of size.
  • Tokens per minute (TPM): Total input and output tokens processed per minute.
  • Tokens per day (TPD): A daily ceiling on total token usage, common on free tiers.

Skills from the Bazaar trigger rate limits in predictable ways. A research skill that runs multi-step analysis might send five or six API calls in rapid succession. Scheduled skills that fire on cron timers can overlap when you set them all to the same minute. Skills with large prompt templates consume heavy token counts per request, eating through your TPM quota quickly.
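The arithmetic here is worth doing once for your own setup. A quick back-of-envelope (the numbers below are illustrative; plug in your provider's actual limits and your skills' typical prompt sizes):

```shell
# Back-of-envelope: how quickly a token-heavy skill exhausts a TPM quota.
# Both numbers are assumptions for illustration, not OpenClaw defaults.
TPM_LIMIT=25000        # e.g. a free-tier tokens-per-minute cap
TOKENS_PER_CALL=5000   # assumed: large prompt template plus response
echo "$(( TPM_LIMIT / TOKENS_PER_CALL )) calls per minute before 429s"
```

Five calls a minute is a single multi-step research skill. Two skills overlapping on the same schedule, and the quota is gone before either finishes.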

Provider Limits You Should Know

Here are the approximate limits for major providers as of early 2026:

Provider              Free Tier         Paid Tier 1         Paid Tier 2+
Anthropic (Claude)    5 RPM, 25K TPM    50 RPM, 50K TPM     1,000 RPM, 200K+ TPM
OpenAI (GPT)          3 RPM, 40K TPM    60 RPM, 60K TPM     5,000 RPM, 600K+ TPM
Google (Gemini)       15 RPM, 1M TPM    360 RPM, 4M TPM     1,000 RPM, 10M TPM
DeepSeek              N/A               60 RPM, 300K TPM    Varies

The difference between free and paid tiers is enormous. If you are running multiple skills from the Bazaar on a free-tier API key, you will hit rate limits within minutes of any sustained activity.

Diagnosing Which Limit You Are Hitting

Before applying fixes, determine the specific limit you are exceeding and the skill responsible.

Check OpenClaw logs for 429 responses. The error response from the API provider usually includes headers showing your remaining quota and reset timing. Look for lines containing "429" or "rate_limit":

# Docker deployment
docker compose logs openclaw | grep -i "rate"

# systemd deployment
journalctl -u openclaw | grep -i "rate"
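Once you have the matching lines, tallying 429s per skill points straight at the heaviest offender. The log format in this sketch is made up for illustration; adapt the grep and sed patterns to whatever your actual OpenClaw logs contain:

```shell
# Simulated log lines (hypothetical format -- adjust patterns to your real logs)
cat > /tmp/openclaw_sample.log <<'EOF'
2026-01-10T08:00:01 skill=morning-briefing status=429 retry_after=12
2026-01-10T08:00:02 skill=check-emails status=200
2026-01-10T08:00:03 skill=check-emails status=429 retry_after=20
2026-01-10T08:00:04 skill=morning-briefing status=429 retry_after=30
EOF

# Tally 429 responses per skill; the top entry is your heaviest consumer
grep 'status=429' /tmp/openclaw_sample.log \
  | sed 's/.*skill=\([^ ]*\).*/\1/' \
  | sort | uniq -c | sort -rn
```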

Check your provider dashboard. Log into your provider console and look at usage graphs. Spikes that correlate with your skill execution times reveal which skills are the heaviest consumers.

Identify the triggering skill. Was it a scheduled skill that runs on a timer? A research skill performing multi-round analysis? A skill that processes batches of items? The OpenClaw web dashboard shows conversation history with timestamps to help you correlate.

Fix 1: Upgrade Your Provider Tier

The simplest fix is upgrading your account with your API provider. Most providers raise your tier automatically based on cumulative payment history.

For Anthropic, adding a payment method and depositing $5 moves you from free to Tier 1. Higher tiers unlock at $40, $200, and $1,000+ cumulative spend. For OpenAI, the first payment triggers the Tier 1 upgrade. For Google, the free tier is already generous (15 RPM, 1M TPM), so upgrading is less urgent unless you run token-heavy skills.

Most operators running three or more active skills need at least Tier 1 on their primary provider.

Fix 2: Stagger Skill Schedules

If your rate limit errors come from burst usage -- multiple scheduled skills firing simultaneously -- spreading them out eliminates the problem.

# Problematic: all skills at :00
0 8 * * * openclaw run morning-briefing
0 8 * * * openclaw run check-emails
0 8 * * * openclaw run review-calendar

# Fixed: staggered by 5 minutes
0 8 * * * openclaw run morning-briefing
5 8 * * * openclaw run check-emails
10 8 * * * openclaw run review-calendar

You can also enable a global request throttle to prevent bursts:

OPENCLAW_API_DELAY_MS=2000

This inserts a two-second pause between consecutive API calls, limiting you to 30 requests per minute and staying safely within most tier limits.
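Conceptually, the throttle is just a fixed pause between consecutive requests. A minimal sketch of that behavior, assuming `call_api` stands in for a real provider request (the actual implementation lives inside OpenClaw):

```shell
# Fixed-delay throttle sketch: serialize calls with a pause between them.
# call_api is a placeholder, not a real OpenClaw command.
DELAY_MS=${OPENCLAW_API_DELAY_MS:-2000}
call_api() {
  echo "request $1 sent"
}
for i in 1 2 3; do
  call_api "$i"
  sleep "$(awk -v ms="$DELAY_MS" 'BEGIN { print ms / 1000 }')"
done
```

At 2,000 ms the loop can never exceed 30 requests per minute, no matter how many skills queue work behind it.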

Fix 3: Route Skills to Different Models

Not every skill needs the most powerful model. A skill that classifies emails or generates simple summaries works perfectly well on a cheaper, less rate-limited model.

Configure per-skill model overrides in your OpenClaw configuration:

{
  "skills": {
    "email-processor": {
      "model": "gpt-4o-mini"
    },
    "research-assistant": {
      "model": "claude-sonnet-4-20250514"
    }
  }
}

Reserve your premium model quota for skills that genuinely need strong reasoning. Route classification, summarization, and formatting skills to smaller models with higher rate limits and lower costs.

Fix 4: Multi-Provider Routing for Skill-Heavy Setups

The most robust solution for operators running many skills is multi-model routing. Configure OpenClaw to distribute requests across multiple providers automatically. When one provider returns a rate limit error, OpenClaw routes to the fallback.

{
  "routing": {
    "primary": "anthropic/claude-sonnet-4-20250514",
    "fallback": [
      "openai/gpt-4o",
      "google/gemini-2.0-flash"
    ],
    "fallbackOn": ["rate_limit", "server_error"],
    "retryDelay": 5000
  }
}

This requires API keys for multiple providers but effectively multiplies your total rate limit capacity. The user experience remains seamless -- there is no error, just a different model handling the overflow.
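The routing logic amounts to: try the primary, and on a rate-limit error walk the fallback list in order. A toy sketch of that walk, where `try_model` is a stub that simulates the primary being rate-limited (OpenClaw's router handles this internally):

```shell
# Toy fallback-on-429 walk (illustrative only)
MODELS="anthropic/claude-sonnet-4-20250514 openai/gpt-4o google/gemini-2.0-flash"
try_model() {
  case "$1" in
    anthropic/*) return 1 ;;          # simulate a rate_limit response
    *) echo "handled by $1" ;;
  esac
}
for m in $MODELS; do
  if try_model "$m"; then
    break                             # success: stop walking the list
  fi
  echo "rate limited on $m, trying next" >&2
  sleep 0.1                           # stands in for the configured retryDelay
done
```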

Preventing Future Rate Limits in Skill-Heavy Workflows

Once you have resolved the immediate error, take these steps to stay ahead of rate limits as your skill library grows:

  • Monitor usage proactively. Check your provider dashboards weekly. Set billing alerts at 70 percent of your tier limits.
  • Right-size every skill. When browsing the Bazaar, check whether a skill specifies a recommended model. Many skill authors note that their skill works well with smaller models.
  • Cache repeated queries. If multiple skills ask similar questions, enable conversation caching to avoid redundant API calls.
  • Limit conversation context. Long conversations resend all previous messages with each request. Configure maximum context length to truncate old messages and reduce token consumption per call.
  • Upgrade tiers proactively. Do not wait for rate limits to force an upgrade. When you consistently use more than 70 percent of your current limits, move to the next tier before failures start.
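The caching idea from the list above can be as simple as keying stored responses by a hash of the prompt. A naive sketch (the `cached_call` helper and its fake response are invented for illustration; any caching OpenClaw offers is a configuration option, not something you hand-roll):

```shell
# Naive prompt cache: identical prompts cost one API call instead of many
CACHE_DIR=$(mktemp -d)
cached_call() {
  key=$(printf '%s' "$1" | sha256sum | cut -d' ' -f1)
  if [ -f "$CACHE_DIR/$key" ]; then
    cat "$CACHE_DIR/$key"             # cache hit: no request, no quota spent
  else
    answer="response-to: $1"          # stand-in for a real model call
    printf '%s\n' "$answer" | tee "$CACHE_DIR/$key"
  fi
}
cached_call "summarize today's calendar"
cached_call "summarize today's calendar"   # served from cache, zero API usage
```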

Rate limits are a normal part of operating an AI agent with an active skill set. With staggered scheduling, appropriate model selection per skill, and multi-provider routing, most operators running ten or more skills never see rate limit errors after initial configuration.


Browse the Skills Directory

Find the right skill for your workflow. The OpenClaw Bazaar skills directory has over 2,300 community-rated skills -- searchable, sortable, and free to install.

Browse Skills →

Want a Pre-Built Setup?

If you would rather skip the browsing, OpenClaw personas come with curated skill sets already configured. Pick a persona that matches your role and start working immediately. Compare personas →