
by g9pedro

loom-workflow – OpenClaw Skill

loom-workflow is an OpenClaw Skills integration for AI/ML workflows.

3.2k stars · 1.0k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topics: ai, ml

Skill Snapshot

- name: loom-workflow
- description: OpenClaw Skills integration.
- owner: g9pedro
- repository: g9pedro/loom-workflow
- language: Markdown
- license: MIT
- topics: ai, ml
- security: L1
- install: `openclaw add @g9pedro/loom-workflow`
- last updated: Feb 7, 2026

Maintainer

g9pedro

Maintains loom-workflow in the OpenClaw Skills directory.
File Explorer
9 files

- scripts/
  - analyze-workflow.py (8.5 KB)
  - generate-lobster.py (7.0 KB)
  - smart-extract.py (7.8 KB)
- test-output/
  - video.info.json (22.6 KB)
- _meta.json (279 B)
- DESIGN.md (4.7 KB)
- SKILL.md (2.8 KB)
SKILL.md

```yaml
name: loom-workflow
description: |
  AI-native workflow analyzer for Loom recordings. Breaks down recorded
  business processes into structured, automatable workflows. Use when:
  - Analyzing Loom videos to understand workflows
  - Extracting steps, tools, and decision points from screen recordings
  - Generating Lobster workflow files from video walkthroughs
  - Identifying ambiguities and human intervention points in processes
```

Loom Workflow Analyzer

Transforms Loom recordings into structured, automatable workflows.

Quick Start

```bash
# Full pipeline: download, extract, transcribe, analyze
{baseDir}/scripts/loom-workflow analyze https://loom.com/share/abc123

# Individual steps
{baseDir}/scripts/loom-workflow download https://loom.com/share/abc123
{baseDir}/scripts/loom-workflow extract ./video.mp4
{baseDir}/scripts/loom-workflow generate ./analysis.json
```

Pipeline

  1. Download - Fetches Loom video via yt-dlp
  2. Smart Extract - Captures frames at scene changes + transcript timing
  3. Transcribe - Whisper transcription with word-level timestamps
  4. Analyze - Multimodal AI analysis (requires vision model)
  5. Generate - Creates Lobster workflow with approval gates

Smart Frame Extraction

Frames are captured when:

  • Scene changes - Significant visual change (ffmpeg scene detection)
  • Speech starts - New narration segment begins
  • Combined - Speech + visual change = high-value moment
  • Gap fill - Max 10s without a frame
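
The selection rules above can be sketched as a merge of candidate timestamps plus gap-filling. This is a minimal illustration, not the actual `smart-extract.py` implementation; the function and argument names are invented, and only the 10-second cap comes from the list above:

```python
import math

def select_frame_times(scene_changes, speech_starts, duration, max_gap=10.0):
    """Merge scene-change and speech-start timestamps, then fill any
    span longer than max_gap seconds with evenly spaced extra frames."""
    # Candidate moments: visual changes, new narration segments, and t=0.
    times = sorted(set(scene_changes) | set(speech_starts) | {0.0})
    frames, prev = [], None
    for t in times + [duration]:
        if prev is not None and t > prev:
            # Gap fill: never go more than max_gap seconds without a frame.
            n_extra = math.ceil((t - prev) / max_gap) - 1
            for i in range(1, n_extra + 1):
                frames.append(round(prev + i * (t - prev) / (n_extra + 1), 2))
        if t < duration:
            frames.append(t)
        prev = t
    return frames
```

A timestamp that is both a scene change and a speech start (the "combined" case) is naturally deduplicated by the set union, so it yields a single high-value frame.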

Analysis Output

The analyzer produces:

  • workflow-analysis.json - Structured workflow definition
  • workflow-summary.md - Human-readable summary
  • *.lobster - Executable Lobster workflow file
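
The schema of workflow-analysis.json is not documented here; a plausible shape is sketched below, purely for illustration (all field names and values are assumptions, not the skill's actual schema):

```python
import json

# Hypothetical workflow-analysis.json contents; every field name here
# is an illustrative assumption, not the skill's documented schema.
analysis = {
    "source": "https://loom.com/share/abc123",
    "steps": [
        {
            "index": 1,
            "description": "Open the CRM and filter to new leads",
            "tools": ["browser", "CRM"],
            "start_time": 4.2,
            "ambiguities": ["which saved filter is 'the usual' one"],
        }
    ],
    "decision_points": [
        {"after_step": 1, "condition": "lead already has an owner"}
    ],
    "human_intervention": ["credentials for the CRM login"],
}

print(json.dumps(analysis, indent=2))
```

Whatever the real schema is, the generate step consumes this JSON to emit the `.lobster` file, so it must at least carry ordered steps, decision points, and flagged ambiguities.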

Ambiguity Detection

The analyzer flags:

  • Unclear mouse movements
  • Implicit knowledge ("the usual process")
  • Decision points ("depending on...")
  • Missing credentials/context
  • Tool dependencies
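
A keyword-based version of this flagging could look like the following. This is a toy heuristic for illustration only; the real analyzer uses multimodal AI, not string matching, and the phrase lists are invented:

```python
# Illustrative trigger phrases per ambiguity category (assumed, not
# taken from the skill's implementation).
AMBIGUITY_PATTERNS = {
    "implicit_knowledge": ["the usual", "as always", "like normal"],
    "decision_point": ["depending on", "if it's", "unless"],
    "missing_context": ["my password", "the credentials", "that account"],
}

def flag_ambiguities(transcript_segment):
    """Return the ambiguity categories whose trigger phrases appear
    in a transcript segment (case-insensitive substring match)."""
    text = transcript_segment.lower()
    return sorted(
        category
        for category, phrases in AMBIGUITY_PATTERNS.items()
        if any(phrase in text for phrase in phrases)
    )
```

For example, a narrated line like "depending on the client, I follow the usual process" would be flagged as both a decision point and implicit knowledge.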

Vision Analysis Step

After extraction, use the generated prompt with a vision model:

# The prompt is at: output/workflow-analysis-prompt.md
```bash
# Attach frames from: output/frames/

# Example with Claude:
cat output/workflow-analysis-prompt.md | claude --images output/frames/*.jpg
```

Save the JSON response to workflow-analysis.json, then:

```bash
{baseDir}/scripts/loom-workflow generate ./output/workflow-analysis.json
```

Lobster Integration

Generated workflows use:

  • approve gates for destructive/external actions
  • llm-task for classification/decision steps
  • Resume tokens for interrupted workflows
  • JSON piping between steps
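
Lobster's own syntax is not shown in this SKILL.md, but the approve-gate pattern with resume tokens can be sketched generically. The Python below is illustrative only; `run_step`, the token format, and the `PENDING` store are invented for this example and are not Lobster's actual implementation:

```python
import secrets

PENDING = {}  # resume token -> (step, payload); a real runner would persist this

def run_step(name, payload, destructive=False, approval_token=None):
    """Run one workflow step, gating destructive/external actions
    behind an explicit human approval with a resume token."""
    if destructive and approval_token not in PENDING:
        token = secrets.token_hex(8)          # opaque resume token
        PENDING[token] = (name, payload)      # park the step until approved
        return ("needs_approval", token)
    if approval_token is not None:
        PENDING.pop(approval_token, None)     # consume the token on resume
    return ("done", {"step": name, "output": payload})
```

The point of the pattern: an interrupted workflow halts at the gate, hands back a token, and can be resumed later by replaying the same step with that token, so nothing destructive runs without sign-off.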

Requirements

  • yt-dlp - Video download
  • ffmpeg - Frame extraction + scene detection
  • whisper - Audio transcription
  • Vision-capable LLM for analysis step

Multilingual Support

Works with any language: Whisper auto-detects the spoken language and transcribes it. For best results, prompt the analysis step in the video's language.

README.md

No README available.

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.


FAQ

How do I install loom-workflow?

Run `openclaw add @g9pedro/loom-workflow` in your terminal. This installs loom-workflow into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/g9pedro/loom-workflow. Review commits and README documentation before installing.