
by okaris

ai-video-generation – OpenClaw Skill

ai-video-generation is an OpenClaw Skills integration for productivity workflows.

9.1k stars · 7.6k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topic: productivity

Skill Snapshot

| Field | Value |
| --- | --- |
| Name | ai-video-generation |
| Description | OpenClaw Skills integration. |
| Owner | okaris |
| Repository | okaris/inference-sh (path: ai-video-generation) |
| Language | Markdown |
| License | MIT |
| Topics | productivity |
| Security | L1 |
| Install | `openclaw add @okaris/inference-sh:ai-video-generation` |
| Last updated | Feb 7, 2026 |

Maintainer

okaris

Maintains ai-video-generation in the OpenClaw Skills directory.

Files (1)

ai-video-generation/SKILL.md (4.5 KB)

SKILL.md

```yaml
name: ai-video-generation
description: |
  Generate AI videos with Google Veo, Seedance, Wan, Grok and 40+ models via
  inference.sh CLI. Models: Veo 3.1, Veo 3, Seedance 1.5 Pro, Wan 2.5, Grok
  Imagine Video, OmniHuman, Fabric, HunyuanVideo. Capabilities: text-to-video,
  image-to-video, lipsync, avatar animation, video upscaling, foley sound.
  Use for: social media videos, marketing content, explainer videos, product
  demos, AI avatars. Triggers: video generation, ai video, text to video,
  image to video, veo, animate image, video from image, ai animation, video
  generator, generate video, t2v, i2v, ai video maker, create video with ai,
  runway alternative, pika alternative, sora alternative, kling alternative
allowed-tools: Bash(infsh *)
```

AI Video Generation

Generate videos with 40+ AI models via inference.sh CLI.

Quick Start

```shell
# Install CLI
curl -fsSL https://cli.inference.sh | sh && infsh login

# Generate a video with Veo
infsh app run google/veo-3-1-fast --input '{"prompt": "drone shot flying over a forest"}'
```
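Input payloads are plain JSON passed as one shell argument, which makes quoting the main pitfall. A minimal sketch of the two common cases (the variable names are illustrative, not part of the CLI):

```shell
# Single quotes keep the JSON payload's double quotes intact:
#   infsh app run google/veo-3-1-fast --input '{"prompt": "a forest at dawn"}'

# When the prompt comes from a shell variable, escape the embedded quotes
# instead (assumes the prompt itself contains no double quotes):
prompt="drone shot flying over a forest"
payload="{\"prompt\": \"$prompt\"}"
# infsh app run google/veo-3-1-fast --input "$payload"
```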

Available Models

Text-to-Video

| Model | App ID | Best For |
| --- | --- | --- |
| Veo 3.1 Fast | google/veo-3-1-fast | Fast, with optional audio |
| Veo 3.1 | google/veo-3-1 | Best quality, frame interpolation |
| Veo 3 | google/veo-3 | High quality with audio |
| Veo 3 Fast | google/veo-3-fast | Fast with audio |
| Veo 2 | google/veo-2 | Realistic videos |
| Grok Video | xai/grok-imagine-video | xAI, configurable duration |
| Seedance 1.5 Pro | bytedance/seedance-1-5-pro | With first-frame control |
| Seedance 1.0 Pro | bytedance/seedance-1-0-pro | Up to 1080p |

Image-to-Video

| Model | App ID | Best For |
| --- | --- | --- |
| Wan 2.5 | falai/wan-2-5 | Animate any image |
| Wan 2.5 I2V | falai/wan-2-5-i2v | High quality i2v |
| Seedance Lite | bytedance/seedance-1-0-lite | Lightweight 720p |

Avatar / Lipsync

| Model | App ID | Best For |
| --- | --- | --- |
| OmniHuman 1.5 | bytedance/omnihuman-1-5 | Multi-character |
| OmniHuman 1.0 | bytedance/omnihuman-1-0 | Single character |
| Fabric 1.0 | falai/fabric-1-0 | Image talks with lipsync |
| PixVerse Lipsync | falai/pixverse-lipsync | Realistic lipsync |

Utilities

| Tool | App ID | Description |
| --- | --- | --- |
| HunyuanVideo Foley | infsh/hunyuanvideo-foley | Add sound effects to video |
| Topaz Upscaler | falai/topaz-video-upscaler | Upscale video quality |
| Media Merger | infsh/media-merger | Merge videos with transitions |

Browse All Video Apps

```shell
infsh app list --category video
```
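The list can be long, and standard shell filtering narrows it down. A sketch, assuming `infsh app list` prints one app per line (an assumption about the CLI's output format):

```shell
# filter_apps: case-insensitive regex filter over an app list read from stdin.
filter_apps() {
  grep -iE "$1"
}

# Usage (requires the inference.sh CLI):
#   infsh app list --category video | filter_apps 'veo|seedance'
```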

Examples

Text-to-Video with Veo

```shell
infsh app run google/veo-3-1-fast --input '{
  "prompt": "A timelapse of a flower blooming in a garden"
}'
```

Grok Video

```shell
infsh app run xai/grok-imagine-video --input '{
  "prompt": "Waves crashing on a beach at sunset",
  "duration": 5
}'
```
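Batch generation is just a loop over prompts. A sketch, assuming the prompts live in a plain text file, one per line (`prompts.txt` and both helper names are illustrative):

```shell
# grok_input: build the --input JSON for a Grok video request.
grok_input() {
  printf '{"prompt": "%s", "duration": %d}' "$1" "$2"
}

# generate_batch: run one job per line of a prompt file.
generate_batch() {
  while IFS= read -r prompt; do
    infsh app run xai/grok-imagine-video --input "$(grok_input "$prompt" 5)"
  done < "$1"
}

# Usage (requires the inference.sh CLI):
#   generate_batch prompts.txt
```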

Image-to-Video with Wan 2.5

```shell
infsh app run falai/wan-2-5 --input '{
  "image_url": "https://your-image.jpg"
}'
```

AI Avatar / Talking Head

```shell
infsh app run bytedance/omnihuman-1-5 --input '{
  "image_url": "https://portrait.jpg",
  "audio_url": "https://speech.mp3"
}'
```

Fabric Lipsync

```shell
infsh app run falai/fabric-1-0 --input '{
  "image_url": "https://face.jpg",
  "audio_url": "https://audio.mp3"
}'
```

PixVerse Lipsync

```shell
infsh app run falai/pixverse-lipsync --input '{
  "image_url": "https://portrait.jpg",
  "audio_url": "https://speech.mp3"
}'
```

Video Upscaling

```shell
infsh app run falai/topaz-video-upscaler --input '{"video_url": "https://..."}'
```

Add Sound Effects (Foley)

```shell
infsh app run infsh/hunyuanvideo-foley --input '{
  "video_url": "https://silent-video.mp4",
  "prompt": "footsteps on gravel, birds chirping"
}'
```

Merge Videos

```shell
infsh app run infsh/media-merger --input '{
  "videos": ["https://clip1.mp4", "https://clip2.mp4"],
  "transition": "fade"
}'
```
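The utilities compose into a simple generate, foley, then merge pipeline. A sketch, assuming each `infsh app run` call prints a result URL on stdout; the CLI's actual output format is not documented here, so treat the capture step as an assumption:

```shell
# json_kv: build a one-key JSON --input payload (illustrative helper).
json_kv() {
  printf '{"%s": "%s"}' "$1" "$2"
}

# run_pipeline: generate two clips, add foley to the first, merge with a fade.
# Output capture via $(...) assumes the CLI prints a result URL on stdout.
run_pipeline() {
  clip1=$(infsh app run google/veo-3-1-fast --input "$(json_kv prompt 'city at dawn')")
  clip2=$(infsh app run google/veo-3-1-fast --input "$(json_kv prompt 'city at night')")
  clip1=$(infsh app run infsh/hunyuanvideo-foley --input \
    "{\"video_url\": \"$clip1\", \"prompt\": \"street ambience\"}")
  infsh app run infsh/media-merger --input \
    "{\"videos\": [\"$clip1\", \"$clip2\"], \"transition\": \"fade\"}"
}

# Usage (requires the inference.sh CLI and login):
#   run_pipeline
```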
Related Skills

```shell
# Full platform skill (all 100+ apps)
npx skills add inference-sh/skills@inference-sh

# Google Veo specific
npx skills add inference-sh/skills@google-veo

# AI avatars & lipsync
npx skills add inference-sh/skills@ai-avatar-video

# Text-to-speech (for video narration)
npx skills add inference-sh/skills@text-to-speech

# Image generation (for image-to-video)
npx skills add inference-sh/skills@ai-image-generation

# Twitter (post videos)
npx skills add inference-sh/skills@twitter-automation
```

Browse all apps: `infsh app list`

README.md

No README available.

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

  • OpenClaw CLI installed and configured.
  • Language: Markdown
  • License: MIT
  • Topics: productivity

FAQ

How do I install ai-video-generation?

Run `openclaw add @okaris/inference-sh:ai-video-generation` in your terminal. This installs ai-video-generation into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/okaris/inference-sh. Review commits and README documentation before installing.