
by jamesalmeida

spaces-listener – OpenClaw Skill

spaces-listener is an OpenClaw Skills integration for AI/ML workflows. Record, transcribe, and summarize X/Twitter Spaces — live or replays. Auto-downloads audio via yt-dlp, transcribes with Whisper, and generates AI summaries.

8.9k stars · 3.2k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topics: ai, ml

Skill Snapshot

name: spaces-listener
description: Record, transcribe, and summarize X/Twitter Spaces — live or replays. Auto-downloads audio via yt-dlp, transcribes with Whisper, and generates AI summaries. OpenClaw Skills integration.
owner: jamesalmeida
repository: jamesalmeida/spaces-listener
language: Markdown
license: MIT
topics: ai, ml
security: L1
install: openclaw add @jamesalmeida/spaces-listener
last updated: Feb 7, 2026

Maintainer

jamesalmeida

Maintains spaces-listener in the OpenClaw Skills directory.

SKILL.md

name: spaces-listener
description: Record, transcribe, and summarize X/Twitter Spaces — live or replays. Auto-downloads audio via yt-dlp, transcribes with Whisper, and generates AI summaries.
version: 1.6.0
author: jamesalmeida
tags: [twitter, x, spaces, transcription, summarization, audio, recording]
when: "User asks to record, transcribe, or listen to an X/Twitter Space"
examples:
  - "Record this Space"
  - "Transcribe this X Space"
  - "Listen to this Twitter Space and transcribe it"
  - "Download this Space audio"
metadata:
  openclaw:
    requires:
      bins: ["yt-dlp", "ffmpeg"]
    emoji: "🎧"

spaces-listener

Record, transcribe, and summarize X/Twitter Spaces — live or replays. Supports multiple concurrent recordings.

Commands

# Start recording (runs in background)
spaces listen <url>

# Record multiple Spaces at once
spaces listen "https://x.com/i/spaces/1ABC..."
spaces listen "https://x.com/i/spaces/2DEF..."

# List all active recordings
spaces list

# Check specific recording status
spaces status 1

# Stop a recording
spaces stop 1
spaces stop all

# Clean stale pid/meta files
spaces clean

# Transcribe when done
spaces transcribe ~/Desktop/space.m4a --model medium

# Summarize an existing transcript
spaces summarize ~/Desktop/space_transcript.txt

# Skip summarization
spaces transcribe ~/Desktop/space.m4a --no-summarize

Requirements

brew install yt-dlp ffmpeg openai-whisper

For summaries, set OPENAI_API_KEY (transcription still works without it).
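A quick pre-flight check for the required binaries can be sketched in shell (the `have` helper is illustrative, not part of the skill):

```shell
# Report any of the skill's required binaries that are missing from PATH.
have() { command -v "$1" >/dev/null 2>&1; }

for bin in yt-dlp ffmpeg whisper; do
  have "$bin" || echo "missing: $bin"
done
```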

How It Works

  1. Each spaces listen starts a new background recording with a unique ID
  2. Recordings persist even if you close the terminal
  3. Run spaces list to see all active recordings
  4. When done, spaces stop <id> or spaces stop all
  5. Transcribe with spaces transcribe <file>
  6. Summaries are generated automatically after transcription (skip with --no-summarize)
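The wait-and-check portion of the workflow can be sketched as a small shell helper. In real use the listing command is `spaces list`; it is parameterized here (and exercised with a stand-in) so the loop works without the CLI installed. The function and mock names are illustrative:

```shell
# Poll a listing command until it reports no active recordings.
wait_for_recordings() {
  local list_cmd="$1" interval="${2:-300}"   # default: check every 5 min
  while ! "$list_cmd" | grep -q "No active recordings"; do
    sleep "$interval"
  done
}

# Stand-in for `spaces list` once all recordings have finished:
mock_list() { echo "No active recordings"; }
wait_for_recordings mock_list 0
```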

Output

Each space gets its own folder under ~/Dropbox/ClawdBox/XSpaces/:

~/Dropbox/ClawdBox/XSpaces/
  space_username_2026-02-03_1430/
    recording.m4a     — audio
    recording.log     — progress log
    transcript.txt    — transcript
    summary.txt       — summary

Critical: Agent Usage Rules

NEVER set a timeout on Space downloads. Spaces can be hours long. yt-dlp stops automatically when the Space ends — don't kill it early.

The correct workflow:

  1. Run spaces listen <url> — it starts a background process and returns immediately
  2. Set a cron job (every 5–10 min) to check spaces list
  3. When spaces list shows "No active recordings", the download is finished
  4. Transcribe the audio file, summarize, notify the user
  5. Delete the cron job
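The periodic check in step 2 could be wired up with a crontab entry along these lines. The helper script name check-space.sh is hypothetical: it would run spaces list and, once the Space has ended, transcribe, notify, and remove its own cron entry.

```shell
# Hypothetical crontab line: every 10 minutes, run a helper that checks
# `spaces list` and kicks off transcription once the Space has ended.
*/10 * * * * $HOME/clawd/skills/spaces-listener/scripts/check-space.sh
```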

Do NOT:

  • Use exec with a timeout for downloads
  • Run competing download processes for the same Space
  • Kill the download process manually (unless the user asks)

Audio is staged in /tmp/spaces-listener-staging/ during recording, then automatically copied to the final Dropbox output dir when complete. This avoids Dropbox file-locking issues during long downloads.
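The stage-then-copy pattern described above can be illustrated with plain shell; demo paths under /tmp stand in for the real staging and Dropbox directories:

```shell
# Write into a staging dir while work is in progress, then copy the
# finished file into the destination in one step, so the destination
# never sees a partially written file.
STAGING="/tmp/spaces-listener-staging-demo"
DEST="/tmp/spaces-demo-out"   # the real skill uses ~/Dropbox/ClawdBox/XSpaces/
mkdir -p "$STAGING" "$DEST"

printf 'audio-bytes' > "$STAGING/recording.m4a"   # stands in for yt-dlp output
cp "$STAGING/recording.m4a" "$DEST/recording.m4a"
```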

Whisper Models

| Model  | Speed | Accuracy |
|--------|-------|----------|
| tiny   | ⚡⚡⚡⚡ | ⭐ |
| base   | ⚡⚡⚡ | ⭐⭐ |
| small  | ⚡⚡ | ⭐⭐⭐ |
| medium | ⚡ | ⭐⭐⭐⭐ |
| large  | 🐢 | ⭐⭐⭐⭐⭐ |
README.md

🎧 spaces-listener

Version: 1.4.1

Record and transcribe X/Twitter Spaces — live or replays.

Zero API costs by default. Optional summaries use the OpenAI API.

Features

  • 📥 Audio recording — Direct download via yt-dlp
  • 📝 Auto-transcription — Local Whisper (no API key)
  • 🧠 Auto-summarization — OpenAI summaries (optional)
  • ⏺️ Live Spaces — Record in real-time as they happen
  • 🔄 Replays — Download at full speed
  • 💰 Free — No API costs, no rate limits

Installation

Prerequisites

brew install yt-dlp ffmpeg openai-whisper

Install the skill

# Clone to your skills directory
git clone https://github.com/jamesalmeida/spaces-listener.git ~/clawd/skills/spaces-listener

# Add to PATH (add to your .zshrc or .bashrc)
export PATH="$HOME/clawd/skills/spaces-listener/scripts:$PATH"

# Or create a symlink
ln -s ~/clawd/skills/spaces-listener/scripts/spaces /usr/local/bin/spaces

Usage

Basic

spaces listen "https://x.com/i/spaces/1ABC..."

Options

| Flag | Description |
|------|-------------|
| `--output`, `-o` | Output directory (default: `~/Desktop`) |
| `--model` | Whisper model: tiny/base/small/medium/large |
| `--no-transcribe` | Skip transcription |
| `--no-summarize` | Skip summarization |

Examples

# Record a live Space
spaces listen "https://x.com/i/spaces/1ABC..."

# High-quality transcription
spaces listen "https://x.com/i/spaces/1ABC..." --model large

# Save to specific folder
spaces listen "https://x.com/i/spaces/1ABC..." -o ~/Spaces

# Summarize a transcript
spaces summarize ~/Desktop/space_transcript.txt

# Clean stale pid/meta files
spaces clean

Summaries require OPENAI_API_KEY

Transcription runs locally. To enable summaries, export your OpenAI key:

export OPENAI_API_KEY="sk-..."

Optional: set SPACES_SUMMARY_MODEL to override the summary model (default: gpt-4o-mini).

Output

Files saved to the output directory:

  • space_<username>_<date>.m4a — Audio
  • space_<username>_<date>.txt — Transcript
  • space_<username>_<date>_summary.txt — Summary (requires OPENAI_API_KEY)
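The naming scheme can be expressed as a tiny helper (illustrative only, not part of the skill):

```shell
# Build the output basename used above: space_<username>_<date>
space_basename() {
  printf 'space_%s_%s' "$1" "$2"
}

space_basename jamesalmeida 2026-02-03   # -> space_jamesalmeida_2026-02-03
```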

Video Recording

Want video of the Space UI? Use QuickTime Player:

  1. Install BlackHole for system audio capture:

    brew install blackhole-2ch
    
  2. Set up Multi-Output Device in Audio MIDI Setup:

    • Open Audio MIDI Setup (in /Applications/Utilities)
    • Click + → Create Multi-Output Device
    • Check both your speakers AND BlackHole 2ch
    • Set this as your system output in Sound settings
  3. Record with QuickTime:

    • File → New Screen Recording
    • Click dropdown arrow, select "BlackHole 2ch" for audio
    • Record your screen while the Space plays

Why isn't video automated? macOS requires the Screen Recording permission to be granted to a proper .app bundle. CLI tools running as background services (like Clawdbot) can't easily obtain this permission. Audio-only mode, by contrast, runs fully automated.

How It Works

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   X Space   │────▶│   yt-dlp    │────▶│    .m4a     │
│    (URL)    │     │  (download) │     │   (audio)   │
└─────────────┘     └─────────────┘     └──────┬──────┘
                                               │
                                               ▼
                                        ┌─────────────┐
                                        │   Whisper   │
                                        │ (transcribe)│
                                        └──────┬──────┘
                                               │
                                               ▼
                                        ┌─────────────┐
                                        │    .txt     │
                                        │ (transcript)│
                                        └──────┬──────┘
                                               │
                                               ▼
                                        ┌─────────────┐
                                        │   OpenAI    │
                                        │ (summarize) │
                                        └──────┬──────┘
                                               │
                                               ▼
                                        ┌─────────────┐
                                        │ _summary.txt│
                                        └─────────────┘

Summary Examples

Speakers
- Host: @username
- Guest: @guest

Main Topics
- Product roadmap and timelines
- Community feedback and feature requests

Key Insights
- v2 release targeted for Q3
- Focus on stability over new features

Notable Moments
- "We are prioritizing reliability this year."

Whisper Models

| Model  | Speed | Accuracy | Download |
|--------|-------|----------|----------|
| tiny   | ⚡⚡⚡⚡ | ⭐ | 39 MB |
| base   | ⚡⚡⚡ | ⭐⭐ | 142 MB |
| small  | ⚡⚡ | ⭐⭐⭐ | 466 MB |
| medium | ⚡ | ⭐⭐⭐⭐ | 1.5 GB |
| large  | 🐢 | ⭐⭐⭐⭐⭐ | 2.9 GB |

First run downloads the model. Subsequent runs use the cached model.
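As a disk-space pre-check, the download sizes from the table can be encoded in a small lookup (the function name is illustrative):

```shell
# Map a Whisper model name to its approximate download size, per the
# table above; returns non-zero for unknown names.
whisper_model_size() {
  case "$1" in
    tiny)   echo "39 MB" ;;
    base)   echo "142 MB" ;;
    small)  echo "466 MB" ;;
    medium) echo "1.5 GB" ;;
    large)  echo "2.9 GB" ;;
    *)      echo "unknown" >&2; return 1 ;;
  esac
}

whisper_model_size medium   # -> 1.5 GB
```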

License

MIT

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

```bash
brew install yt-dlp ffmpeg openai-whisper
```

For summaries, set `OPENAI_API_KEY` (transcription still works without it).

FAQ

How do I install spaces-listener?

Run `openclaw add @jamesalmeida/spaces-listener` in your terminal. This installs spaces-listener into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/jamesalmeida/spaces-listener. Review commits and README documentation before installing.