Remote OpenClaw Blog

OpenClaw Browser Relay: What It Was and What Replaced It


Author: Zac Frulloni

The OpenClaw Browser Relay Chrome extension was removed in 3.22. Learn what it did, why it was deprecated, and how to use the new built-in browser automation via Chromium.


What Was the Browser Relay?

The OpenClaw Browser Relay was a Chrome extension that shipped with OpenClaw versions prior to 3.22. It created a WebSocket bridge between the OpenClaw agent running on your server and a Chrome browser running on your local machine (or any machine with Chrome installed).

When the relay was active, OpenClaw could control your browser remotely. The agent could navigate to URLs, read page content, fill out forms, click buttons, take screenshots, and extract data from web pages. This gave the AI agent the ability to interact with websites that required authentication, dynamic JavaScript rendering, or multi-step workflows.

The relay worked by injecting a content script into every page you visited. This script communicated with a background service worker in the extension, which maintained a persistent WebSocket connection to OpenClaw's gateway. When the agent wanted to perform a browser action, it sent a command through the WebSocket, the extension executed it in the browser, and the result was sent back.
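The wire format was internal to the extension, but the command/result flow can be sketched roughly as follows. Every name and field here is illustrative, not the actual protocol:

```javascript
// Hypothetical sketch of the relay's command/result exchange. The agent
// serialized a command and sent it over the WebSocket; the extension's
// service worker dispatched it to a handler and replied with a matching id.
function makeCommand(id, action, params) {
  return JSON.stringify({ id, action, params });
}

// Service-worker side: parse an incoming command and route it to a handler
// (handlers would be backed by the content script running in the page).
function dispatchCommand(raw, handlers) {
  const cmd = JSON.parse(raw);
  const handler = handlers[cmd.action];
  if (!handler) {
    return { id: cmd.id, ok: false, error: `unknown action: ${cmd.action}` };
  }
  return { id: cmd.id, ok: true, result: handler(cmd.params) };
}
```

The fragility came from everything around this exchange: the WebSocket had to stay alive across browser restarts, tab suspensions, and network blips for any command to be delivered at all.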

For operators who needed their agent to interact with web applications — logging into dashboards, scraping data from JavaScript-heavy sites, automating form submissions — the Browser Relay was the only option. It worked, but it came with significant trade-offs.


Why Was It Removed?

The Browser Relay was removed in OpenClaw 3.22 for four primary reasons:

1. Security concerns. The relay extension had access to every page you visited in Chrome. It could read cookies, session tokens, form data, and page content across all tabs and all domains. Even though OpenClaw only used this access when the agent specifically requested it, the permission scope was far too broad. A compromise of the WebSocket connection could expose sensitive browsing data.

2. Reliability problems. WebSocket connections between the browser extension and the OpenClaw server were fragile. Network interruptions, browser updates, Chrome suspending background tabs, and sleep/wake cycles all caused disconnections. Operators reported frequent "relay disconnected" errors that required manual intervention to fix.

3. Chrome extension policy changes. Google's Manifest V3 migration imposed restrictions on background service workers that broke the relay's persistent connection model. Adapting the extension to MV3 while maintaining functionality would have required significant engineering effort with no guarantee of long-term compatibility.

4. A better alternative existed. Headless browser automation via Puppeteer was already mature, well-documented, and more secure than a browser extension relay. It made more sense to bundle a headless Chromium instance with OpenClaw than to maintain a complex extension that introduced security and reliability risks.

The decision to remove the relay was announced in the 3.22 release notes. Operators were given two release cycles of deprecation warnings before the extension was fully removed.


What Replaced the Browser Relay?

OpenClaw 3.22 introduced built-in browser automation using a headless Chromium instance controlled via Puppeteer. This runs entirely inside the OpenClaw container — no extension, no external browser, no WebSocket bridge.

The headless Chromium approach provides the same capabilities as the relay:

  • Navigate to URLs and render JavaScript-heavy pages
  • Fill forms and click buttons
  • Extract text, HTML, and structured data from pages
  • Take screenshots
  • Handle authentication flows (login forms, OAuth redirects)
  • Execute JavaScript on pages

The critical difference is that this Chromium instance is isolated. It runs inside the container with its own profile, cookies, and session storage. It does not have access to your personal browser sessions, bookmarks, or saved passwords. When the container restarts, the browser state is wiped clean.

Puppeteer provides a programmatic API that is more reliable than the WebSocket relay. Commands execute sequentially, with proper error handling, timeouts, and retry logic, and there is no network hop between the agent and the browser: everything runs on the same machine.
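The timeout and retry behavior described above can be sketched with a small wrapper. The helper names are illustrative, not OpenClaw internals; the point is that in-process control makes this pattern trivial, whereas the relay had no reliable way to know whether a lost command ever reached the browser:

```javascript
// Race a browser action against a deadline, cleaning up the timer either way.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Retry a failed action up to `retries` extra times before giving up.
async function withRetry(action, { retries = 2, timeoutMs = 30000 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await withTimeout(action(), timeoutMs);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

Because the action is an in-process function call, a failure is an ordinary rejected promise that can be caught and retried, not a silently dropped WebSocket frame.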



How Do You Use the New Browser Automation?

If you are running OpenClaw via Docker (the recommended method), browser automation works out of the box. The official Docker image includes Chromium and Puppeteer pre-installed. No additional configuration is needed.

To enable browser automation, ensure the following environment variable is set (it is enabled by default in 3.22+):

BROWSER_ENABLED=true

The agent can now use browser commands in its tool calls. For example, it can navigate to a page, wait for content to load, and extract data. Behind the scenes, OpenClaw launches a headless Chromium instance on first use and reuses it for subsequent requests.
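Under the hood this maps onto ordinary Puppeteer calls. A minimal sketch of the navigate-and-extract step, written against the standard Puppeteer Page API (the flow is illustrative, not OpenClaw's actual internal code):

```javascript
// Navigate to a URL, wait for content, and extract the main heading.
// Uses only three Page methods (goto / waitForSelector / $eval), so it can
// be exercised against any object implementing them.
async function extractTitle(page, url) {
  await page.goto(url, { waitUntil: "networkidle2", timeout: 30000 });
  await page.waitForSelector("h1");
  // In real Puppeteer the callback runs inside the browser page context.
  return page.$eval("h1", (el) => el.textContent.trim());
}

// Real usage (requires the `puppeteer` npm package; not invoked here):
async function demo() {
  const puppeteer = require("puppeteer");
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  const title = await extractTitle(page, "https://example.com");
  await browser.close();
  return title;
}
```

OpenClaw's launch-once-and-reuse behavior means the `puppeteer.launch` cost is paid only on the first browser command, not on every request.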

If you are running OpenClaw outside Docker (bare metal or custom container), you need to install Chromium separately:

# Ubuntu/Debian
sudo apt-get install chromium-browser

# Then set the path in your .env
BROWSER_EXECUTABLE_PATH=/usr/bin/chromium-browser
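Before starting OpenClaw on bare metal, it is worth confirming that a Chromium binary actually exists at that path, since distributions differ (on some, the binary is `chromium` rather than `chromium-browser`). A small sanity check:

```shell
# Confirm the binary BROWSER_EXECUTABLE_PATH points at exists and is executable.
check_browser() {
  path="${1:-/usr/bin/chromium-browser}"
  if [ -x "$path" ]; then
    echo "ok: $path"
  else
    echo "missing: $path (install chromium or fix the path)" >&2
    return 1
  fi
}

check_browser "${BROWSER_EXECUTABLE_PATH:-/usr/bin/chromium-browser}" || true
```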

You can configure browser behavior with additional environment variables:

BROWSER_HEADLESS=true          # Default: true. Set to false for debugging.
BROWSER_TIMEOUT=30000          # Navigation timeout in milliseconds
BROWSER_VIEWPORT_WIDTH=1280    # Browser viewport width
BROWSER_VIEWPORT_HEIGHT=720    # Browser viewport height
BROWSER_USER_AGENT="custom"    # Custom user agent string

For debugging, set BROWSER_HEADLESS=false and forward the display (in Docker, this requires X11 forwarding or a VNC setup). This lets you see exactly what the browser is doing in real time.
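On a Linux host with X11, a headful debugging run might look like the following. The X11 flags are standard Docker display forwarding, not OpenClaw-specific, and the image name matches the one used elsewhere in this guide:

```shell
# Allow local containers to connect to your X server, then run headful.
xhost +local:
docker run \
  -e BROWSER_HEADLESS=false \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  openclaw/openclaw:latest
```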


How Do You Migrate from the Relay?

If you were using the Browser Relay before 3.22, migration is straightforward:

  1. Remove the Chrome extension. Go to chrome://extensions and remove the OpenClaw Browser Relay extension. It no longer serves any purpose.
  2. Update OpenClaw. Pull the latest Docker image with docker pull openclaw/openclaw:latest. The new image includes Chromium.
  3. Remove relay configuration. Delete any BROWSER_RELAY_* environment variables from your .env or docker-compose.yml. These are no longer recognized.
  4. Test browser automation. Ask your agent to navigate to a website and extract content. Verify that it works without the extension.

If your agent previously relied on accessing your personal browser's authenticated sessions (logged-in sites, cookies), you will need to handle authentication differently. The headless Chromium starts with a clean profile. Options include:

  • Having the agent log in via the headless browser using stored credentials
  • Mounting a Chrome profile directory into the container
  • Using API-based access instead of browser automation for authenticated services
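The first option can be sketched with ordinary Puppeteer calls. Everything here is hypothetical and must be adapted to the target site: the URL, the selectors, and how credentials are supplied (they should come from a secrets store or environment variables, never be hard-coded):

```javascript
// Log in through the headless browser with stored credentials.
// Selectors and URL are placeholders for whatever the target site uses.
async function login(page, { url, user, pass }) {
  await page.goto(url, { waitUntil: "networkidle2" });
  await page.type("#username", user);
  await page.type("#password", pass);
  // Start waiting for the post-login navigation before clicking submit,
  // so the navigation event is not missed.
  await Promise.all([
    page.waitForNavigation({ waitUntil: "networkidle2" }),
    page.click("button[type=submit]"),
  ]);
}
```

Because the container's browser state is wiped on restart, a login step like this has to run again after each restart (or the resulting session cookies have to be persisted via a mounted profile directory).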

What Are the Limitations of the New Approach?

The built-in browser automation is better than the relay in most ways, but it does have limitations:

  • No access to your personal browser. The headless Chromium is isolated. It cannot see your bookmarks, saved passwords, or active sessions. This is a security feature, but it means the agent cannot interact with sites you are already logged into.
  • Increased resource usage. Chromium uses 200-400MB of additional RAM. On resource-constrained deployments, this matters. You can disable browser automation entirely with BROWSER_ENABLED=false if you do not need it.
  • Some sites block headless browsers. Anti-bot systems like Cloudflare, reCAPTCHA, and DataDome can detect and block headless Chromium. The agent may fail to access pages that the relay could reach because the relay used a real, human-operated browser.
  • No multi-tab browsing. The current implementation uses a single browser context. The agent cannot maintain multiple tabs with different sessions simultaneously (this is planned for a future release).

For most operators, these limitations are minor compared to the security and reliability gains. If you have a specific use case that requires access to your personal browser sessions, the recommended approach is to use API integrations instead of browser automation where possible.