
by agmmnn

fal-api – OpenClaw Skill

fal-api is an OpenClaw Skills integration that generates images, videos, and audio via the fal.ai API (FLUX, SDXL, Whisper, etc.).

8.9k stars · 9.4k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topic: data analytics

Skill Snapshot

name: fal-api
description: Generate images, videos, and audio via the fal.ai API (FLUX, SDXL, Whisper, etc.); an OpenClaw Skills integration.
owner: agmmnn
repository: agmmnn/fal-ai
language: Markdown
license: MIT
topics: data analytics
security: L1
install: openclaw add @agmmnn/fal-ai
last updated: Feb 7, 2026

Maintainer

agmmnn


Maintains fal-api in the OpenClaw Skills directory.

File Explorer
4 files:

  • _meta.json (264 B)
  • fal_api.py (9.1 KB)
  • README.md (571 B)
  • SKILL.md (2.8 KB)
SKILL.md

name: fal-api
description: Generate images, videos, and audio via fal.ai API (FLUX, SDXL, Whisper, etc.)
version: 0.1.0
metadata: { "openclaw": { "requires": { "env": ["FAL_KEY"] }, "primaryEnv": "FAL_KEY" } }

fal.ai API Skill

Generate images, videos, and transcripts using fal.ai's API with support for FLUX, Stable Diffusion, Whisper, and more.

Features

  • Queue-based async generation (submit → poll → result)
  • Support for 600+ AI models
  • Image generation (FLUX, SDXL, Recraft)
  • Video generation (MiniMax, WAN)
  • Speech-to-text (Whisper)
  • Stdlib-only dependencies (no fal_client required)
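The submit → poll → result flow above can be sketched with only the standard library. This is not the shipped `fal_api.py`; it is a minimal illustration assuming fal.ai's public queue host (`https://queue.fal.run`), `Authorization: Key <FAL_KEY>` headers, and `status_url`/`response_url` fields in the submit response — verify these against the skill's actual source before relying on them:

```python
import json
import os
import time
import urllib.request

QUEUE_BASE = "https://queue.fal.run"  # assumed fal.ai queue host

def queue_url(endpoint: str) -> str:
    """Build the queue submission URL for an endpoint like 'fal-ai/flux/dev'."""
    return f"{QUEUE_BASE}/{endpoint}"

def _request(url, payload=None):
    """Authenticated JSON call: POSTs when a payload is given, otherwise GETs."""
    headers = {"Authorization": f"Key {os.environ['FAL_KEY']}"}
    data = None
    if payload is not None:
        data = json.dumps(payload).encode()
        headers["Content-Type"] = "application/json"
    req = urllib.request.Request(url, data=data, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def generate(endpoint, prompt, poll_every=2.0):
    """Submit a job, poll its status until COMPLETED, then fetch the result."""
    job = _request(queue_url(endpoint), {"prompt": prompt})      # submit
    while _request(job["status_url"]).get("status") != "COMPLETED":
        time.sleep(poll_every)                                   # poll
    return _request(job["response_url"])                         # result
```

Queue-based submission keeps long generations from tying up a single HTTP request, which is why polling is preferred over a blocking call.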

Setup

  1. Get your API key from https://fal.ai/dashboard/keys
  2. Configure with:
export FAL_KEY="your-api-key"

Or via clawdbot config:

clawdbot config set skill.fal_api.key YOUR_API_KEY

Usage

Interactive Mode

You: Generate a cyberpunk cityscape with FLUX
Klawf: Creates the image and returns the URL

Python Script

from fal_api import FalAPI

api = FalAPI()

# Generate and wait
urls = api.generate_and_wait(
    prompt="A serene Japanese garden",
    model="flux-dev"
)
print(urls)

Available Models

| Model | Endpoint | Type |
| --- | --- | --- |
| flux-schnell | fal-ai/flux/schnell | Image (fast) |
| flux-dev | fal-ai/flux/dev | Image |
| flux-pro | fal-ai/flux-pro/v1.1-ultra | Image (2K) |
| fast-sdxl | fal-ai/fast-sdxl | Image |
| recraft-v3 | fal-ai/recraft-v3 | Image |
| sd35-large | fal-ai/stable-diffusion-v35-large | Image |
| minimax-video | fal-ai/minimax-video/image-to-video | Video |
| wan-video | fal-ai/wan/v2.1/1.3b/text-to-video | Video |
| whisper | fal-ai/whisper | Audio |

For the full list, run:

python3 fal_api.py --list-models
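Internally, short aliases like `flux-dev` have to resolve to full endpoint paths. A hypothetical sketch of such a lookup table, mirroring the table above (the skill's actual mapping in `fal_api.py` may differ):

```python
# Alias -> endpoint table, copied from the documented model list.
MODEL_ENDPOINTS = {
    "flux-schnell": "fal-ai/flux/schnell",
    "flux-dev": "fal-ai/flux/dev",
    "flux-pro": "fal-ai/flux-pro/v1.1-ultra",
    "fast-sdxl": "fal-ai/fast-sdxl",
    "recraft-v3": "fal-ai/recraft-v3",
    "sd35-large": "fal-ai/stable-diffusion-v35-large",
    "minimax-video": "fal-ai/minimax-video/image-to-video",
    "wan-video": "fal-ai/wan/v2.1/1.3b/text-to-video",
    "whisper": "fal-ai/whisper",
}

def resolve_endpoint(model: str) -> str:
    """Map a short alias to its full fal.ai endpoint path, failing loudly."""
    try:
        return MODEL_ENDPOINTS[model]
    except KeyError:
        raise ValueError(f"Unknown model alias: {model!r}")
```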

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| prompt | str | required | Image/video description |
| model | str | "flux-dev" | Model name from table above |
| image_size | str | "landscape_16_9" | Preset: square, portrait_4_3, landscape_16_9, etc. |
| num_images | int | 1 | Number of images to generate |
| seed | int | None | Random seed for reproducibility |
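To make the defaults concrete, here is a hypothetical helper (not part of the skill) showing how these parameters might assemble into a request body, with `seed` omitted unless set:

```python
def build_payload(prompt, image_size="landscape_16_9", num_images=1, seed=None):
    """Assemble a request body from the documented parameters."""
    payload = {
        "prompt": prompt,          # required: image/video description
        "image_size": image_size,  # preset name, e.g. "square"
        "num_images": num_images,  # how many images to generate
    }
    if seed is not None:
        payload["seed"] = seed     # fixed seed makes generations reproducible
    return payload
```

For example, `build_payload("A cute robot cat", seed=42)` yields a body with a pinned seed, while omitting `seed` leaves the service free to randomize.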

Credits

Built following the krea-api skill pattern. Uses fal.ai's queue-based API for reliable async generation.

README.md

fal.ai API Skill

See SKILL.md for full documentation.

Quick Start

# Set your API key
export FAL_KEY="your-api-key"

# Generate an image
python3 fal_api.py --prompt "A cute robot cat" --model flux-schnell

# List available models
python3 fal_api.py --list-models

Configure Credentials

# Via environment
export FAL_KEY="your-api-key"

# Or via clawdbot config
clawdbot config set skill.fal_api.key YOUR_API_KEY

Requirements

  • Python 3.7+
  • No external dependencies (uses stdlib)

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

  • OpenClaw CLI installed and configured.
  • Language: Markdown
  • License: MIT

FAQ

How do I install fal-api?

Run openclaw add @agmmnn/fal-ai in your terminal. This installs fal-api into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/agmmnn/fal-ai. Review commits and README documentation before installing.