ai-video-gen – OpenClaw Skill (8.8k★)
ai-video-gen is an OpenClaw Skills integration for AI/ML workflows: end-to-end AI video generation that creates videos from text prompts using image generation, video synthesis, voice-over, and editing. It supports OpenAI DALL-E, Replicate models, LumaAI, Runway, and FFmpeg editing.
Skill Snapshot
| Field | Value |
| --- | --- |
| name | ai-video-gen |
| description | End-to-end AI video generation - create videos from text prompts using image generation, video synthesis, voice-over, and editing. Supports OpenAI DALL-E, Replicate models, LumaAI, Runway, and FFmpeg editing. OpenClaw Skills integration. |
| owner | rhanbourinajd |
| repository | rhanbourinajd/ai-video-gen |
| language | Markdown |
| license | MIT |
| topics | |
| security | L1 |
| install | openclaw add @rhanbourinajd/ai-video-gen |
| last updated | Feb 7, 2026 |
Maintainer

rhanbourinajd
---
name: ai-video-gen
description: End-to-end AI video generation - create videos from text prompts using image generation, video synthesis, voice-over, and editing. Supports OpenAI DALL-E, Replicate models, LumaAI, Runway, and FFmpeg editing.
---
AI Video Generation Skill
Generate complete videos from text descriptions using AI.
Capabilities
- Image Generation - DALL-E 3, Stable Diffusion, Flux
- Video Generation - LumaAI, Runway, Replicate models
- Voice-over - OpenAI TTS, ElevenLabs
- Video Editing - FFmpeg assembly, transitions, overlays
Quick Start
```
# Generate a complete video
python skills/ai-video-gen/generate_video.py --prompt "A sunset over mountains" --output sunset.mp4

# Just images to video
python skills/ai-video-gen/images_to_video.py --images img1.png img2.png --output result.mp4

# Add voiceover
python skills/ai-video-gen/add_voiceover.py --video input.mp4 --text "Your narration" --output final.mp4
```
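The flags in these commands map to a straightforward CLI. A hypothetical sketch of how generate_video.py might parse them with argparse (the flag names come from the examples above; the defaults and help text are assumptions, not the script's actual code):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the flags shown in the Quick Start examples;
    # the real generate_video.py may differ.
    p = argparse.ArgumentParser(description="Generate a video from a text prompt")
    p.add_argument("--prompt", required=True, help="Text description of the video")
    p.add_argument("--output", required=True, help="Path for the final MP4")
    p.add_argument("--duration", type=int, default=5, help="Clip length in seconds")
    p.add_argument("--voiceover", default=None, help="Optional narration text")
    return p

args = build_parser().parse_args(
    ["--prompt", "A sunset over mountains", "--output", "sunset.mp4"]
)
print(args.output)  # sunset.mp4
```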
Setup
Required API Keys
Add to your environment or .env file:
```
# Image Generation (pick one)
OPENAI_API_KEY=sk-...       # DALL-E 3
REPLICATE_API_TOKEN=r8_...  # Stable Diffusion, Flux

# Video Generation (pick one)
LUMAAI_API_KEY=luma_...     # LumaAI Dream Machine
RUNWAY_API_KEY=...          # Runway ML
REPLICATE_API_TOKEN=r8_...  # Multiple models

# Voice (optional)
OPENAI_API_KEY=sk-...       # OpenAI TTS
ELEVENLABS_API_KEY=...      # ElevenLabs

# Or use FREE local options (no API needed)
```
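In practice these keys are loaded with python-dotenv, but a minimal stdlib sketch shows what that amounts to. `load_env` is a hypothetical helper and the parsing rules here are a simplification of what python-dotenv does:

```python
import os

def load_env(path: str = ".env") -> dict:
    # Simplified .env parser: KEY=VALUE lines, skipping blanks and
    # comments, stripping optional surrounding quotes from values.
    keys = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            keys[key.strip()] = value.strip().strip('"').strip("'")
    os.environ.update(keys)  # make the keys visible to API clients
    return keys
```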
Install Dependencies
```
pip install openai requests pillow replicate python-dotenv
```
FFmpeg
FFmpeg must be installed and available on your PATH (on Windows, e.g. via winget; on macOS via Homebrew; on Linux via your package manager).
Usage Examples
1. Text to Video (Full Pipeline)
```
python skills/ai-video-gen/generate_video.py \
  --prompt "A futuristic city at night with flying cars" \
  --duration 5 \
  --voiceover "Welcome to the future" \
  --output future_city.mp4
```
2. Multiple Scenes
```
python skills/ai-video-gen/multi_scene.py \
  --scenes "Morning sunrise" "Busy city street" "Peaceful night" \
  --duration 3 \
  --output day_in_life.mp4
```
3. Image Sequence to Video
```
python skills/ai-video-gen/images_to_video.py \
  --images frame1.png frame2.png frame3.png \
  --fps 24 \
  --output animation.mp4
```
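Under the hood, an image-sequence conversion like this typically shells out to FFmpeg. A hedged sketch of the command images_to_video.py might build (the exact flags are assumptions, not the script's actual code; the returned list would be passed to `subprocess.run`):

```python
def ffmpeg_slideshow_cmd(pattern: str, fps: int, output: str) -> list:
    # Assemble an FFmpeg invocation for an image sequence like
    # "frame%d.png". Flag choices are assumptions for illustration.
    return [
        "ffmpeg", "-y",              # overwrite output without asking
        "-framerate", str(fps),      # input frame rate for the images
        "-i", pattern,               # image-sequence input pattern
        "-c:v", "libx264",           # widely compatible H.264 encoder
        "-pix_fmt", "yuv420p",       # pixel format most players require
        output,
    ]

print(" ".join(ffmpeg_slideshow_cmd("frame%d.png", 24, "animation.mp4")))
```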
Workflow Options
Budget Mode (FREE)
- Image: Stable Diffusion (local or free API)
- Video: Open source models
- Voice: OpenAI TTS (cheap) or free TTS
- Edit: FFmpeg
Quality Mode (Paid)
- Image: DALL-E 3 or Midjourney
- Video: Runway Gen-3 or LumaAI
- Voice: ElevenLabs
- Edit: FFmpeg + effects
Scripts Reference
- generate_video.py - Main end-to-end generator
- images_to_video.py - Convert image sequence to video
- add_voiceover.py - Add narration to existing video
- multi_scene.py - Create multi-scene videos
- edit_video.py - Apply effects, transitions, overlays
API Cost Estimates
- DALL-E 3: ~$0.04-0.08 per image
- Replicate: ~$0.01-0.10 per generation
- LumaAI: $0-0.50 per 5sec (free tier available)
- Runway: ~$0.05 per second
- OpenAI TTS: ~$0.015 per 1K characters
- ElevenLabs: ~$0.30 per 1K characters (better quality)
Examples
See examples/ folder for sample outputs and prompts.
AI Video Generator
Complete end-to-end AI video creation system.
✅ Installation Status
- FFmpeg installed
- Python 3.11.9 available
- Python dependencies (run setup.bat)
- API keys configured
Quick Start
1. Install Dependencies
```
cd skills/ai-video-gen
pip install -r requirements.txt
```
Or run setup.bat
2. Configure API Keys
Copy .env.example to .env and add your keys:
```
copy .env.example .env
notepad .env
```
Minimum required:
- OPENAI_API_KEY - For both image (DALL-E) and voice (TTS)
Optional but recommended:
- LUMAAI_API_KEY - For video generation (has free tier!)
- REPLICATE_API_TOKEN - Alternative for images/video
3. Generate Your First Video
```
python generate_video.py --prompt "A serene mountain landscape at sunset" --output test.mp4
```
With voiceover:
```
python generate_video.py \
  --prompt "A futuristic city with flying cars" \
  --voiceover "Welcome to the future" \
  --output future.mp4
```
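The voiceover step ends with an FFmpeg mux of the generated narration into the video. A sketch of how add_voiceover.py might build that command (flag choices are assumptions, not the script's actual code):

```python
def mux_voiceover_cmd(video: str, audio: str, output: str) -> list:
    # Combine the original video stream with a TTS audio file:
    # copy the video stream untouched, encode narration as AAC,
    # and stop at whichever stream ends first.
    return [
        "ffmpeg", "-y",
        "-i", video,
        "-i", audio,
        "-map", "0:v", "-map", "1:a",  # video from input 0, audio from input 1
        "-c:v", "copy",                # no video re-encode
        "-c:a", "aac",
        "-shortest",
        output,
    ]
```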
What You Need to Sign Up For
Free/Cheap Options (Start Here)
1. OpenAI - https://platform.openai.com
   - Get API key for DALL-E + TTS
   - Cost: ~$0.05-0.10 per video (image + voice)
2. LumaAI - https://lumalabs.ai
   - Free tier: 30 generations/month
   - Then $1-2 per video

Total cost to start: $0-0.15 per video
Premium Options (Better Quality)
1. Runway - https://runwayml.com
   - Higher quality video generation
   - ~$0.50-1.00 per 5-second video
2. ElevenLabs - https://elevenlabs.io
   - Best voice quality
   - ~$0.30 per 1K characters
3. Replicate - https://replicate.com
   - Multiple AI models
   - Pay-per-use, very cheap
Examples
Simple Video
```
python generate_video.py --prompt "Ocean waves crashing" --output waves.mp4
```
Multi-Image to Video
```
python images_to_video.py --images img1.png img2.png img3.png --output slideshow.mp4
```
Add Narration to Existing Video
```
python add_voiceover.py --video input.mp4 --text "Your narration here" --output final.mp4
```
Workflow
Text Prompt → DALL-E Image → LumaAI Video → + Voiceover → Final MP4
All automated in one command!
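The pipeline above can be sketched as four composable stages. Here each stage is a stub standing in for the real API call (DALL-E, LumaAI, TTS + FFmpeg); the function names and return values are illustrative, not the script's actual interface:

```python
def generate_image(prompt):
    # Stub: the real stage would call DALL-E and download the image.
    return f"{prompt[:10]}.png"

def animate(image, duration):
    # Stub: the real stage would submit the image to LumaAI.
    return image.replace(".png", ".mp4")

def add_voice(video, text):
    # Stub: the real stage would run TTS and mux with FFmpeg.
    return video if not text else "narrated_" + video

def pipeline(prompt, duration=5, voiceover=None):
    image = generate_image(prompt)
    video = animate(image, duration)
    return add_voice(video, voiceover)

print(pipeline("sunset", voiceover="Welcome"))  # narrated_sunset.mp4
```

Keeping the stages as separate functions is what makes the scripts usable individually (images only, assembly only, and so on).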
Cost Calculator
Budget Video (5 seconds):
- Image (DALL-E): $0.04
- Video (LumaAI free): $0
- Voice (OpenAI TTS): $0.01
- Total: $0.05
Quality Video (5 seconds):
- Image (DALL-E): $0.08
- Video (Runway): $0.50
- Voice (ElevenLabs): $0.30
- Total: $0.88
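The totals above follow directly from the per-unit rates; a small helper makes the arithmetic explicit (the rates are this README's estimates, not live pricing, and the narration lengths are illustrative):

```python
def video_cost(image_price, video_price, narration_chars, voice_per_1k):
    # Per-video cost: one image + one clip + voice billed per 1K characters.
    return image_price + video_price + (narration_chars / 1000) * voice_per_1k

budget = video_cost(0.04, 0.00, 700, 0.015)   # DALL-E + LumaAI free tier + OpenAI TTS
quality = video_cost(0.08, 0.50, 1000, 0.30)  # DALL-E HD + Runway + ElevenLabs
print(f"${budget:.2f}, ${quality:.2f}")  # $0.05, $0.88
```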
Troubleshooting
FFmpeg not found
Restart your terminal after installation, or add to PATH manually.
API Key errors
Make sure .env file exists and has valid keys (no quotes needed).
Python module errors
Run pip install -r requirements.txt
What's Next
The scripts are modular - you can:
- Use just image generation
- Use just video assembly from images
- Add effects and transitions
- Batch process multiple videos
- Create longer videos with scene transitions
Need help? Check the examples or ask!
Permissions & Security
Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.
Requirements
- OpenClaw CLI installed and configured.
- Language: Markdown
- License: MIT
- Topics:
FAQ
How do I install ai-video-gen?
Run openclaw add @rhanbourinajd/ai-video-gen in your terminal. This installs ai-video-gen into your OpenClaw Skills catalog.
Does this skill run locally or in the cloud?
OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.
Where can I verify the source code?
The source repository is available at https://github.com/openclaw/skills/tree/main/skills/rhanbourinajd/ai-video-gen. Review commits and README documentation before installing.
