mulerouter-skills

Coding Agents & IDEs
v0.1.0
Benign

Generates images and videos using MuleRouter or MuleRun multimodal APIs.

695 downloads · 695 installs · by @misaka43fd

Setup & Installation

Install command

clawhub install misaka43fd/mulerouter-skills

If the CLI is not installed:

Install command

npx clawhub@latest install misaka43fd/mulerouter-skills

Or install with OpenClaw CLI:

Install command

openclaw skills install misaka43fd/mulerouter-skills

Or paste the repo link into your assistant's chat:

https://github.com/openclaw/skills/tree/main/skills/misaka43fd/mulerouter-skills

What This Skill Does

Generates images and videos through the MuleRouter and MuleRun multimodal APIs. Supports text-to-image, text-to-video, image-to-video, and video editing operations including keyframe interpolation. Works with models such as Wan2.6, Veo3, Sora2, and Midjourney.

Provides a single CLI interface to multiple frontier image and video generation models without managing separate API clients for each provider.

When to Use It

  • Generate a video clip from a text prompt using Wan2.6
  • Animate a product photo into a short video
  • Create concept art images from descriptive text
  • Edit existing video with keyframe interpolation
  • Transform a still image into a cinematic zoom shot
# MuleRouter API

Generate images and videos using MuleRouter or MuleRun multimodal APIs.

## Configuration Check

Before running any commands, verify the environment is configured:

### Step 1: Check for existing configuration

```bash
# Check environment variables
echo "MULEROUTER_BASE_URL: $MULEROUTER_BASE_URL"
echo "MULEROUTER_SITE: $MULEROUTER_SITE"
echo "MULEROUTER_API_KEY: ${MULEROUTER_API_KEY:+[SET]}"

# Check for .env file
ls -la .env 2>/dev/null || echo "No .env file found"
```

### Step 2: Configure if needed

**Option A: Environment variables with custom base URL (highest priority)**
```bash
export MULEROUTER_BASE_URL="https://api.mulerouter.ai"  # or your custom API endpoint
export MULEROUTER_API_KEY="your-api-key"
```

**Option B: Environment variables with site (used if base URL not set)**
```bash
export MULEROUTER_SITE="mulerun"    # or "mulerouter"
export MULEROUTER_API_KEY="your-api-key"
```

**Option C: Create .env file**

Create `.env` in the current working directory:

```env
# Option 1: Use custom base URL (takes priority over SITE)
MULEROUTER_BASE_URL=https://api.mulerouter.ai
MULEROUTER_API_KEY=your-api-key

# Option 2: Use site (if BASE_URL not set)
# MULEROUTER_SITE=mulerun
# MULEROUTER_API_KEY=your-api-key
```

**Note:** `MULEROUTER_BASE_URL` takes priority over `MULEROUTER_SITE`. If both are set, `MULEROUTER_BASE_URL` is used.

**Note:** The tool only reads `.env` from the current directory. Run scripts from the skill root (`skills/mulerouter-skills/`).
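The two notes above can be sketched as a small resolution function. This is illustrative only: the `SITE_URLS` mapping (in particular the `mulerun` endpoint) and the default site are assumptions, not the tool's actual table.

```python
# Hypothetical sketch of the configuration priority described above:
# MULEROUTER_BASE_URL wins; otherwise MULEROUTER_SITE selects an endpoint.
# The URLs and default below are assumptions for illustration.
SITE_URLS = {
    "mulerouter": "https://api.mulerouter.ai",
    "mulerun": "https://api.mulerun.ai",
}

def resolve_base_url(env: dict) -> str:
    """Return the API base URL, preferring MULEROUTER_BASE_URL."""
    base = env.get("MULEROUTER_BASE_URL")
    if base:
        return base
    site = env.get("MULEROUTER_SITE", "mulerouter")
    return SITE_URLS[site]

# Both variables set: BASE_URL takes priority.
print(resolve_base_url({
    "MULEROUTER_BASE_URL": "https://my-proxy.example.com",
    "MULEROUTER_SITE": "mulerun",
}))  # https://my-proxy.example.com
```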

### Step 3: Using `uv` to run scripts

The skill uses `uv` for dependency management and execution. Make sure `uv` is installed and available on your PATH, then run `uv sync` to install dependencies.

## Quick Start

### 1. List available models

```bash
uv run python scripts/list_models.py
```

### 2. Check model parameters

```bash
uv run python models/alibaba/wan2.6-t2v/generation.py --list-params
```

### 3. Generate content

**Text-to-Video:**
```bash
uv run python models/alibaba/wan2.6-t2v/generation.py --prompt "A cat walking through a garden"
```

**Text-to-Image:**
```bash
uv run python models/alibaba/wan2.6-t2i/generation.py --prompt "A serene mountain lake"
```

**Image-to-Video:**
```bash
# Remote image URL
uv run python models/alibaba/wan2.6-i2v/generation.py --prompt "Gentle zoom in" --image "https://example.com/photo.jpg"

# Local image path
uv run python models/alibaba/wan2.6-i2v/generation.py --prompt "Gentle zoom in" --image "/path/to/local/image.png"
```

## Image Input

For image parameters (`--image`, `--images`, etc.), **prefer local file paths** over base64.

```bash
# Preferred: local file path (auto-converted to base64)
--image /tmp/photo.png

# Quote the JSON array so the shell passes it through literally
--images '["/tmp/photo.png"]'
```

The skill automatically converts local file paths to base64 before sending to the API. This avoids command-line length limits that occur with raw base64 strings.
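The conversion described above can be sketched as follows. The helper is hypothetical (the skill's actual implementation may differ): URLs pass through unchanged, while local paths are read and base64-encoded.

```python
import base64
from pathlib import Path

def image_to_base64(value: str) -> str:
    """Illustrative sketch of the path-to-base64 conversion described
    above: remote URLs are forwarded as-is; local file paths are read
    and encoded before being sent to the API."""
    if value.startswith(("http://", "https://")):
        return value
    data = Path(value).read_bytes()
    return base64.b64encode(data).decode("ascii")
```

Encoding happens inside the process, so the raw base64 string never appears on the command line, which is what avoids the length limit.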

## Workflow

1. Check configuration: verify `MULEROUTER_BASE_URL` or `MULEROUTER_SITE`, and `MULEROUTER_API_KEY` are set
2. Install dependencies: run `uv sync`
3. Run `uv run python scripts/list_models.py` to discover available models
4. Run `uv run python models/<path>/<action>.py --list-params` to see parameters
5. Execute with appropriate parameters
6. Parse output URLs from results
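Step 6 above can be sketched like this. The result shape in `sample` is hypothetical; adjust the pattern to the actual output of the model you run.

```python
import json
import re

def extract_urls(result_json: str) -> list[str]:
    """Pull http(s) URLs out of a generation result string.
    The result structure here is an assumption for illustration."""
    # Round-trip through json to validate, then scan the flat text.
    text = json.dumps(json.loads(result_json))
    return re.findall(r'https?://[^\s"\\]+', text)

sample = '{"status": "done", "outputs": [{"url": "https://cdn.example.com/video.mp4"}]}'
print(extract_urls(sample))  # ['https://cdn.example.com/video.mp4']
```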

## Tips
1. For an image generation model, a suggested timeout is 5 minutes.
2. For a video generation model, a suggested timeout is 15 minutes.
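The tips above can be encoded as a simple helper. Detecting the model kind from the path suffix (`t2v`/`i2v` for video, anything else treated as image) is an assumption for illustration, not the skill's own logic.

```python
# Suggested timeouts from the tips above: 5 min for image models,
# 15 min for video models. Suffix-based detection is an assumption.
IMAGE_TIMEOUT_S = 5 * 60
VIDEO_TIMEOUT_S = 15 * 60

def suggested_timeout(model_path: str) -> int:
    """Pick a timeout in seconds based on a model directory path."""
    if model_path.rstrip("/").endswith(("t2v", "i2v")):
        return VIDEO_TIMEOUT_S
    return IMAGE_TIMEOUT_S

print(suggested_timeout("models/alibaba/wan2.6-t2v"))  # 900
```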

## References

- [REFERENCE.md](references/REFERENCE.md) - API configuration and CLI options
- [MODELS.md](references/MODELS.md) - Complete model specifications

Example Workflow

Here's how your AI assistant might use this skill in practice.

INPUT

User asks: Generate a video clip from a text prompt using Wan2.6

AGENT
  1. Generate a video clip from a text prompt using Wan2.6
  2. Animate a product photo into a short video
  3. Create concept art images from descriptive text
  4. Edit existing video with keyframe interpolation
  5. Transform a still image into a cinematic zoom shot
OUTPUT
Generates images and videos using MuleRouter or MuleRun multimodal APIs.


Security Audits

VirusTotal: Benign
OpenClaw: Benign

These signals reflect official OpenClaw status values. A Suspicious status means the skill should be used with extra caution.

Details

Language: Markdown
Last updated: Mar 1, 2026