
by okaris

google-veo – OpenClaw Skill

google-veo is an OpenClaw Skill for generating videos with Google Veo models via the inference.sh CLI.

3.6k stars · 9.8k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026

Skill Snapshot

name: google-veo
description: OpenClaw Skills integration for Google Veo video generation.
owner: okaris
repository: okaris/inference-sh (path: google-veo)
language: Markdown
license: MIT
security: L1
install: openclaw add @okaris/inference-sh:google-veo
last updated: Feb 7, 2026

Maintainer

okaris maintains google-veo in the OpenClaw Skills directory.
File Explorer

1 file: google-veo/SKILL.md (2.8 KB)
SKILL.md

---
name: google-veo
description: |
  Generate videos with Google Veo models via inference.sh CLI.
  Models: Veo 3.1, Veo 3.1 Fast, Veo 3, Veo 3 Fast, Veo 2.
  Capabilities: text-to-video, cinematic output, high quality video generation.
  Triggers: veo, google veo, veo 3, veo 2, veo 3.1, vertex ai video, google video generation, google video ai, veo model, veo video
allowed-tools: Bash(infsh *)
---

Google Veo Video Generation

Generate videos with Google Veo models via inference.sh CLI.

Quick Start

curl -fsSL https://cli.inference.sh | sh && infsh login

infsh app run google/veo-3-1-fast --input '{"prompt": "drone shot over a mountain lake"}'

Veo Models

| Model | App ID | Speed | Quality |
|---|---|---|---|
| Veo 3.1 | google/veo-3-1 | Slower | Best |
| Veo 3.1 Fast | google/veo-3-1-fast | Fast | Excellent |
| Veo 3 | google/veo-3 | Medium | Excellent |
| Veo 3 Fast | google/veo-3-fast | Fast | Very Good |
| Veo 2 | google/veo-2 | Medium | Good |
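Choosing among these models is a speed/quality trade-off. The table can be encoded in a small shell helper; pick_veo is a hypothetical convenience function, not part of the infsh CLI — only the google/veo-* app IDs come from the table above.

```shell
# Map a desired tier to a Veo app ID from the table above.
# pick_veo is an illustrative helper, not an infsh command.
pick_veo() {
  case "$1" in
    best)   echo "google/veo-3-1" ;;       # slower, best quality
    fast)   echo "google/veo-3-1-fast" ;;  # fast, excellent quality
    budget) echo "google/veo-2" ;;         # medium speed, good quality
    *)      echo "google/veo-3" ;;         # balanced default
  esac
}

pick_veo fast    # prints google/veo-3-1-fast
```

The result can then feed a run command via command substitution, e.g. infsh app run "$(pick_veo fast)" --input '{"prompt": "drone shot over a mountain lake"}'.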

Search Veo Apps

infsh app list --search "veo"

Examples

Cinematic Shot

infsh app run google/veo-3-1-fast --input '{
  "prompt": "Cinematic drone shot flying through a misty forest at sunrise, volumetric lighting"
}'

Product Demo

infsh app run google/veo-3 --input '{
  "prompt": "Sleek smartphone rotating on a dark reflective surface, studio lighting"
}'

Nature Scene

infsh app run google/veo-3-1-fast --input '{
  "prompt": "Timelapse of clouds moving over a mountain range, golden hour"
}'

Action Shot

infsh app run google/veo-3 --input '{
  "prompt": "Slow motion water droplet splashing into a pool, macro shot"
}'

Urban Scene

infsh app run google/veo-3-1-fast --input '{
  "prompt": "Busy city street at night with neon signs and rain reflections, Tokyo style"
}'

Prompt Tips

Camera movements: drone shot, tracking shot, pan, zoom, dolly, steadicam

Lighting: golden hour, blue hour, studio lighting, volumetric, neon, natural

Style: cinematic, documentary, commercial, artistic, realistic

Timing: slow motion, timelapse, real-time
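These categories compose naturally into a single prompt. A minimal sketch, assuming a camera + subject + lighting + timing ordering — the variable names and the surfer subject are illustrative; only the {"prompt": "..."} JSON shape matches the run commands above:

```shell
# Compose a prompt from the tip categories above (camera, lighting, timing).
# Variable names and the subject are illustrative, not CLI options.
camera="tracking shot"
subject="a surfer riding a wave"
lighting="golden hour"
timing="slow motion"

printf '{"prompt": "%s of %s, %s, %s"}\n' \
  "$camera" "$subject" "$lighting" "$timing" > input.json

cat input.json
# prints {"prompt": "tracking shot of a surfer riding a wave, golden hour, slow motion"}
```

The file can then be passed as infsh app run google/veo-3-1-fast --input input.json.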

Workflow

# 1. Generate sample input to see all options
infsh app sample google/veo-3-1-fast --save input.json

# 2. Edit the prompt
# 3. Run
infsh app run google/veo-3-1-fast --input input.json
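Step 2 can be scripted rather than done by hand. This is a sketch that assumes the sampled file has a top-level "prompt" key — verify against the real output of infsh app sample first. The placeholder file below stands in for the sampled one so the snippet is self-contained:

```shell
# Placeholder standing in for `infsh app sample ... --save input.json`;
# the real sampled file may contain additional fields.
echo '{"prompt": "placeholder"}' > input.json

# Rewrite the "prompt" field in place (assumes a top-level "prompt" key).
python3 - <<'EOF'
import json

with open("input.json") as f:
    data = json.load(f)
data["prompt"] = "drone shot over a mountain lake at dawn"
with open("input.json", "w") as f:
    json.dump(data, f, indent=2)
EOF

cat input.json
```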
Related Skills

# Full platform skill (all 100+ apps)
npx skills add inference-sh/skills@inference-sh

# All video generation models
npx skills add inference-sh/skills@ai-video-generation

# AI avatars & lipsync
npx skills add inference-sh/skills@ai-avatar-video

# Image generation (for image-to-video)
npx skills add inference-sh/skills@ai-image-generation

Browse all video apps: infsh app list --category video

README.md

No README available.

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

  • OpenClaw CLI installed and configured.
  • Language: Markdown
  • License: MIT

FAQ

How do I install google-veo?

Run openclaw add @okaris/inference-sh:google-veo in your terminal. This installs google-veo into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/okaris/inference-sh. Review commits and README documentation before installing.