liblib-ai-gen
Generate images with Seedream4.5 and videos with Kling via LiblibAI API.
Setup & Installation
Install command:

```bash
clawhub install xtaq/liblib-ai-gen
```

If the CLI is not installed:

```bash
npx clawhub@latest install xtaq/liblib-ai-gen
```

Or install with the OpenClaw CLI:

```bash
openclaw skills install xtaq/liblib-ai-gen
```

Or paste the repo link into your assistant's chat:

https://github.com/openclaw/skills/tree/main/skills/xtaq/liblib-ai-gen

What This Skill Does
Generates images using Seedream4.5 and videos using Kling models through LiblibAI's API. Supports text-to-image, text-to-video, and image-to-video workflows via a CLI script. Tasks run asynchronously with automatic polling until results are ready.
Combines image and video generation in a single CLI with automatic async polling, eliminating the need to manually check task status or use separate tools for each modality.
When to Use It
- Generating concept art from a text description
- Animating a static photo into a short video clip
- Creating illustrations for a blog post or slide deck
- Producing a promotional video clip from a written prompt
- Batch generating multiple image variations for a design review
View original SKILL.md file
# LiblibAI Image & Video Generation
Generate images (Seedream4.5) and videos (Kling) via LiblibAI's API.
## Prerequisites
Environment variables must be set:
- `LIB_ACCESS_KEY` — API access key
- `LIB_SECRET_KEY` — API secret key
## Usage
Run the CLI script at `scripts/liblib_client.py`:
```bash
# Generate image
python3 scripts/liblib_client.py image "a cute cat wearing a hat" --width 2048 --height 2048
# Generate video from text
python3 scripts/liblib_client.py text2video "a rocket launching into space" --model kling-v2-6 --duration 5
# Generate video from image
python3 scripts/liblib_client.py img2video "the cat turns its head" --start-frame https://example.com/cat.jpg
# Check task status
python3 scripts/liblib_client.py status <generateUuid>
```
## Image Generation (Seedream4.5)
- Endpoint: `POST /api/generate/seedreamV4`
- Model: `doubao-seedream-4-5-251128`
- Default size: 2048×2048. For 4.5, min total pixels = 3,686,400 (e.g. 2560×1440)
- Supports reference images (1-14), prompt expansion, and sequential image generation
- Options: `--width`, `--height`, `--count` (1-15), `--ref-images`, `--prompt-magic`
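The size and count constraints above can be validated client-side before submitting a task. A minimal sketch, assuming JSON key names that mirror this skill's CLI flags (the official Seedream schema may use different field names):

```python
MIN_PIXELS = 3_686_400  # Seedream4.5 minimum total pixels (e.g. 2560x1440)

def build_seedream_payload(prompt, width=2048, height=2048, count=1, ref_images=None):
    """Build a request body for POST /api/generate/seedreamV4.

    Key names are illustrative, mirroring the CLI flags above.
    """
    if not 1 <= count <= 15:
        raise ValueError("count must be between 1 and 15")
    if width * height < MIN_PIXELS:
        raise ValueError(f"width*height must be >= {MIN_PIXELS} for Seedream4.5")
    payload = {
        "model": "doubao-seedream-4-5-251128",
        "prompt": prompt,
        "width": width,
        "height": height,
        "count": count,
    }
    if ref_images:
        if not 1 <= len(ref_images) <= 14:
            raise ValueError("Seedream4.5 supports 1-14 reference images")
        payload["refImages"] = list(ref_images)
    return payload
```

Note that the 2048×2048 default (4,194,304 pixels) already clears the minimum-pixel bar.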
## Video Generation (Kling)
### Text-to-Video
- Endpoint: `POST /api/generate/video/kling/text2video`
- Models: `kling-v2-6` (latest, supports sound), `kling-v2-1-master`, `kling-v2-5-turbo`, etc.
- Options: `--model`, `--aspect` (16:9/9:16/1:1), `--duration` (5/10s), `--mode` (std/pro), `--sound` (on/off)
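The per-model restrictions listed in the Notes section (pro-only mode, sound support) can also be checked before submission. A sketch with assumed key names, again mirroring the CLI flags rather than the official schema:

```python
def build_kling_t2v_payload(prompt, model="kling-v2-6", aspect="16:9",
                            duration=5, mode="pro", sound=False):
    """Assemble options for POST /api/generate/video/kling/text2video.

    Enforces the constraints documented in this skill's Notes section.
    """
    if aspect not in ("16:9", "9:16", "1:1"):
        raise ValueError("aspect must be 16:9, 9:16, or 1:1")
    if duration not in (5, 10):
        raise ValueError("duration must be 5 or 10 seconds")
    if model in ("kling-v2-5-turbo", "kling-v2-6") and mode != "pro":
        raise ValueError(f"{model} requires mode='pro'")
    if sound and model != "kling-v2-6":
        raise ValueError("sound generation requires kling-v2-6+")
    return {"prompt": prompt, "model": model, "aspect": aspect,
            "duration": duration, "mode": mode, "sound": sound}
```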
### Image-to-Video
- Endpoint: `POST /api/generate/video/kling/img2video`
- Provide `--start-frame` image URL; optionally `--end-frame` (v1-6 only)
- For kling-v2-6: uses `images` array instead of startFrame/endFrame
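Because kling-v2-6 takes frames differently from older models, a small helper can map frame URLs onto the right request fields. A sketch with assumed field names (`images`, `startFrame`, `endFrame` follow the description above):

```python
def frame_fields(model, start_frame, end_frame=None):
    """Map frame URLs to the request fields for img2video.

    kling-v2-6 takes an `images` array; older models take
    startFrame/endFrame (end frames are supported on v1-6 only).
    """
    if model == "kling-v2-6":
        images = [start_frame] + ([end_frame] if end_frame else [])
        return {"images": images}
    fields = {"startFrame": start_frame}
    if end_frame:
        fields["endFrame"] = end_frame
    return fields
```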
## Async Pattern
All generation tasks are async:
1. Submit task → get `generateUuid`
2. Poll `POST /api/generate/status` with `{ "generateUuid": "..." }`
3. Result contains `images[].imageUrl` or `videos[].videoUrl`
The script auto-polls by default. Use `--no-poll` to submit only.
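The submit-then-poll pattern above can be sketched as a generic loop. Here `client_status` is a placeholder for any function that POSTs `{"generateUuid": ...}` to `/api/generate/status` and returns the parsed JSON; the result keys follow step 3:

```python
import time

def poll_until_done(client_status, generate_uuid, interval=2.0, timeout=300.0):
    """Poll the status endpoint until images or videos appear.

    Raises TimeoutError if the task does not finish within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = client_status(generate_uuid)
        if result.get("images") or result.get("videos"):
            return result
        time.sleep(interval)
    raise TimeoutError(f"task {generate_uuid} did not finish within {timeout}s")
```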
## Notes
- QPS limit: 1 request/second for task submission
- Max concurrent tasks: 5
- Image URLs in results expire after 7 days
- For kling-v2-5-turbo and kling-v2-6, mode must be "pro" (default)
- Sound generation only supported on kling-v2-6+
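The 1 request/second submission limit can be respected with a simple spacing loop when batch-generating variations. A sketch, where `submit_fn` is a placeholder for whatever submits one task and returns its `generateUuid`:

```python
import time

def submit_batch(submit_fn, prompts, qps=1.0):
    """Submit prompts no faster than `qps` requests per second.

    Callers should also keep in-flight tasks at or below the
    5-concurrent-task limit before submitting more.
    """
    uuids = []
    interval = 1.0 / qps
    for i, prompt in enumerate(prompts):
        if i:
            time.sleep(interval)  # space out submissions
        uuids.append(submit_fn(prompt))
    return uuids
```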