
by tanchunsiong

zoom-meeting-assistance-rtms-unofficial-community – OpenClaw Skill

zoom-meeting-assistance-rtms-unofficial-community is an OpenClaw Skills integration for writing workflows. Zoom RTMS Meeting Assistant — start on-demand to capture meeting audio, video, transcript, screenshare, and chat via Zoom Real-Time Media Streams. Handles meeting.rtms_started and meeting.rtms_stopped webhook events. Provides AI-powered dialog suggestions, sentiment analysis, and live summaries with WhatsApp notifications. Use when a Zoom RTMS webhook fires or the user asks to record/analyze a meeting.

1.2k stars · 1.2k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topic: writing

Skill Snapshot

name: zoom-meeting-assistance-rtms-unofficial-community
description: Zoom RTMS Meeting Assistant — start on-demand to capture meeting audio, video, transcript, screenshare, and chat via Zoom Real-Time Media Streams. Handles meeting.rtms_started and meeting.rtms_stopped webhook events. Provides AI-powered dialog suggestions, sentiment analysis, and live summaries with WhatsApp notifications. Use when a Zoom RTMS webhook fires or the user asks to record/analyze a meeting. OpenClaw Skills integration.
owner: tanchunsiong
repository: tanchunsiong/zoom-meeting-assistance-with-rtms-unofficial-community-skill
language: Markdown
license: MIT
topics: —
security: L1
install: openclaw add @tanchunsiong/zoom-meeting-assistance-with-rtms-unofficial-community-skill
last updated: Feb 7, 2026

Maintainer

tanchunsiong


Maintains zoom-meeting-assistance-rtms-unofficial-community in the OpenClaw Skills directory.

File Explorer
20 files
_meta.json — 897 B
chatWithClawdbot.js — 9.1 KB
convertMeetingMedia.js — 1.6 KB
index.js — 29.3 KB
muxMixedAudioAndActiveSpeakerVideo.js — 1.4 KB
package-lock.json — 50.4 KB
package.json — 243 B
query_prompt_current_meeting.md — 1.3 KB
query_prompt_dialog_suggestions.md — 1.4 KB
query_prompt_sentiment_analysis.md — 1.9 KB
query_prompt.md — 1.9 KB
README.md — 6.4 KB
saveRawAudioAdvance.js — 1.2 KB
saveRawVideoAdvance.js — 1.9 KB
saveSharescreen.js — 7.7 KB
skill.json — 954 B
SKILL.md — 7.3 KB
summary_prompt.md — 5.3 KB
tool.js — 1.0 KB
writeTranscriptToVtt.js — 4.1 KB
SKILL.md

name: zoom-meeting-assistance-rtms-unofficial-community
description: Zoom RTMS Meeting Assistant — start on-demand to capture meeting audio, video, transcript, screenshare, and chat via Zoom Real-Time Media Streams. Handles meeting.rtms_started and meeting.rtms_stopped webhook events. Provides AI-powered dialog suggestions, sentiment analysis, and live summaries with WhatsApp notifications. Use when a Zoom RTMS webhook fires or the user asks to record/analyze a meeting.

Zoom RTMS Meeting Assistant

Headless capture service for Zoom meetings using Real-Time Media Streams (RTMS). Receives webhook events, connects to RTMS WebSockets, records all media, and runs AI analysis via OpenClaw.

Webhook Events Handled

This skill processes two Zoom webhook events:

  • meeting.rtms_started — Zoom sends this when RTMS is activated for a meeting. Contains server_urls, rtms_stream_id, and meeting_uuid needed to connect to the RTMS WebSocket.
  • meeting.rtms_stopped — Zoom sends this when RTMS ends (meeting ended or RTMS disabled). Triggers cleanup: closes WebSocket connections, generates screenshare PDF, sends summary notification.
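As a rough sketch, routing these two events looks something like the following. This is a minimal illustration, not the skill's actual index.js; the handler names (`onStarted`, `onStopped`) are assumptions, while the field names come from the event descriptions above.

```javascript
// Minimal illustrative dispatcher for the two Zoom RTMS webhook events.
// Field names (event, payload, server_urls, rtms_stream_id, meeting_uuid)
// follow the descriptions above; the handler interface is hypothetical.
function handleZoomWebhook(body, handlers) {
  const { event, payload } = body;
  if (event === "meeting.rtms_started") {
    // These three fields are what the service needs to open the RTMS WebSocket.
    const { server_urls, rtms_stream_id, meeting_uuid } = payload;
    return handlers.onStarted({ server_urls, rtms_stream_id, meeting_uuid });
  }
  if (event === "meeting.rtms_stopped") {
    // Trigger cleanup: close sockets, build the screenshare PDF, notify.
    return handlers.onStopped(payload);
  }
  return null; // ignore unrelated webhook events
}
```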

Webhook Dependency

This skill needs a public webhook endpoint to receive these events from Zoom.

Preferred: Use the ngrok-unofficial-webhook-skill (skills/ngrok-unofficial-webhook-skill). It auto-discovers this skill via webhookEvents in skill.json, notifies the user, and offers to route events here.

Other webhook solutions (e.g. custom servers, cloud functions) will work but require additional integration to forward payloads to this service.

Prerequisites

cd skills/zoom-meeting-assistance-rtms-unofficial-community
npm install

Requires ffmpeg for post-meeting media conversion.

Environment Variables

Set these in the skill's .env file:

Required:

  • ZOOM_SECRET_TOKEN — Zoom webhook secret token
  • ZOOM_CLIENT_ID — Zoom app Client ID
  • ZOOM_CLIENT_SECRET — Zoom app Client Secret

Optional:

  • PORT — Server port (default: 3000)
  • AI_PROCESSING_INTERVAL_MS — AI analysis frequency in ms (default: 30000)
  • AI_FUNCTION_STAGGER_MS — Delay between AI calls in ms (default: 5000)
  • AUDIO_DATA_OPT — 1 = mixed stream, 2 = multi-stream (default: 2)
  • OPENCLAW_NOTIFY_CHANNEL — Notification channel (default: whatsapp)
  • OPENCLAW_NOTIFY_TARGET — Phone number / target for notifications
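The optional defaults above could be applied with a small loader like this sketch. The variable names and defaults are taken from this document; the parsing helper itself is an assumption, not the skill's actual code.

```javascript
// Hypothetical config loader applying the documented defaults.
// Variable names/defaults are from the list above; the shape is an assumption.
function loadConfig(env = process.env) {
  return {
    port: Number(env.PORT ?? 3000),
    aiProcessingIntervalMs: Number(env.AI_PROCESSING_INTERVAL_MS ?? 30000),
    aiFunctionStaggerMs: Number(env.AI_FUNCTION_STAGGER_MS ?? 5000),
    audioDataOpt: Number(env.AUDIO_DATA_OPT ?? 2), // 1 = mixed, 2 = multi-stream
    notifyChannel: env.OPENCLAW_NOTIFY_CHANNEL ?? "whatsapp",
    notifyTarget: env.OPENCLAW_NOTIFY_TARGET ?? null, // no sensible default
  };
}
```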

Starting the Service

cd skills/zoom-meeting-assistance-rtms-unofficial-community
node index.js

This starts an Express server listening for Zoom webhook events on PORT.

⚠️ Important: Before forwarding webhooks to this service, always check if it's running:

# Check if service is listening on port 3000
lsof -i :3000

If nothing is returned, start the service first before forwarding any webhook events.

Typical flow:

  1. Start the server as a background process
  2. Zoom sends meeting.rtms_started webhook → service connects to RTMS WebSocket
  3. Media streams in real-time: audio, video, transcript, screenshare, chat
  4. AI processing runs periodically (dialog suggestions, sentiment, summary)
  5. meeting.rtms_stopped → service closes connections, generates screenshare PDF

Recorded Data

All recordings are stored organized by date:

skills/zoom-meeting-assistance-rtms-unofficial-community/recordings/YYYY/MM/DD/{streamId}/

Each stream folder contains:

| File | Content | Searchable |
| --- | --- | --- |
| metadata.json | Meeting metadata (UUID, stream ID, operator, start time) | |
| transcript.txt | Plain text transcript with timestamps and speaker names | ✅ Best for searching — grep-friendly, one line per utterance |
| transcript.vtt | VTT format transcript with timing cues | |
| transcript.srt | SRT format transcript | |
| events.log | Participant join/leave, active speaker changes (JSON lines) | |
| chat.txt | Chat messages with timestamps | |
| ai_summary.md | AI-generated meeting summary (markdown) | ✅ Key document — read this first for meeting overview |
| ai_dialog.json | AI dialog suggestions | |
| ai_sentiment.json | Sentiment analysis per participant | |
| mixedaudio.raw | Mixed audio stream (raw PCM) | ❌ Binary |
| activespeakervideo.h264 | Active speaker video (raw H.264) | ❌ Binary |
| processed/screenshare.pdf | Deduplicated screenshare frames as PDF | ❌ Binary |
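The VTT transcript's timing cues follow the standard WebVTT format. As an illustration of what a cue writer like writeTranscriptToVtt.js might contain (the function names here are assumptions; only the timestamp format is standard):

```javascript
// Hypothetical WebVTT cue formatting sketch. The HH:MM:SS.mmm timestamp and
// "start --> end" cue line are standard WebVTT; everything else is illustrative.
function vttTimestamp(ms) {
  const h = Math.floor(ms / 3600000);
  const m = Math.floor((ms % 3600000) / 60000);
  const s = Math.floor((ms % 60000) / 1000);
  const frac = ms % 1000;
  const pad = (n, w) => String(n).padStart(w, "0");
  return `${pad(h, 2)}:${pad(m, 2)}:${pad(s, 2)}.${pad(frac, 3)}`;
}

function vttCue(startMs, endMs, speaker, text) {
  // <v Speaker> is the WebVTT voice tag for labeling who is talking.
  return `${vttTimestamp(startMs)} --> ${vttTimestamp(endMs)}\n<v ${speaker}>${text}\n`;
}
```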

All summaries are also copied to a central folder for easy access:

skills/zoom-meeting-assistance-rtms-unofficial-community/summaries/summary_YYYY-MM-DDTHH-MM-SS_{streamId}.md
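The date-based layout above can be expressed as a small path helper. This is a sketch of the documented directory scheme, not the skill's actual implementation; the helper name and UTC choice are assumptions.

```javascript
// Hypothetical helper producing the documented recordings/YYYY/MM/DD/{streamId}
// layout. UTC is assumed here; the skill may use local time instead.
function recordingDir(streamId, date = new Date()) {
  const yyyy = String(date.getUTCFullYear());
  const mm = String(date.getUTCMonth() + 1).padStart(2, "0"); // 1-based month
  const dd = String(date.getUTCDate()).padStart(2, "0");
  return `recordings/${yyyy}/${mm}/${dd}/${streamId}`;
}
```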

Searching & Querying Past Meetings

To find and review past meeting data:

# List all recorded meetings by date
ls -R recordings/

# List meetings for a specific date
ls recordings/2026/01/28/

# Search across all transcripts for a keyword
grep -rl "keyword" recordings/*/*/*/*/transcript.txt

# Search for what a specific person said
grep "Chun Siong Tan" recordings/*/*/*/*/transcript.txt

# Read a meeting summary
cat recordings/YYYY/MM/DD/<streamId>/ai_summary.md

# Search summaries for a topic
grep -rl "topic" recordings/*/*/*/*/ai_summary.md

# Check who attended a meeting
cat recordings/YYYY/MM/DD/<streamId>/events.log

# Get sentiment for a meeting
cat recordings/YYYY/MM/DD/<streamId>/ai_sentiment.json

The .txt, .md, .json, and .log files are all text-based and searchable. Start with ai_summary.md for a quick overview, then drill into transcript.txt for specific quotes or details.

API Endpoints

# Toggle WhatsApp notifications on/off
curl -X POST http://localhost:3000/api/notify-toggle -H "Content-Type: application/json" -d '{"enabled": false}'

# Check notification status
curl http://localhost:3000/api/notify-toggle

Post-Meeting Processing

When meeting.rtms_stopped fires, the service automatically:

  1. Generates PDF from screenshare images
  2. Converts mixedaudio.raw → mixedaudio.wav
  3. Converts activespeakervideo.h264 → activespeakervideo.mp4
  4. Muxes mixed audio + active speaker video into final_output.mp4

Manual conversion scripts are available but note that auto-conversion runs on meeting end, so manual re-runs are rarely needed.
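For reference, the two conversions could be driven by ffmpeg invocations along these lines. The codec and sample-rate flags below are assumptions (the skill's actual flags live in convertMeetingMedia.js); this sketch only builds the argument lists.

```javascript
// Hypothetical ffmpeg command builder for the post-meeting conversions above.
// The raw PCM parameters (s16le, 16 kHz, mono) are assumptions about the
// stream format, not confirmed by this skill's docs.
function buildConversionCommands(dir) {
  return [
    // raw PCM -> WAV (input format must be declared; raw PCM has no header)
    ["ffmpeg", "-f", "s16le", "-ar", "16000", "-ac", "1",
     "-i", `${dir}/mixedaudio.raw`, `${dir}/mixedaudio.wav`],
    // raw H.264 elementary stream -> MP4 container, no re-encode
    ["ffmpeg", "-i", `${dir}/activespeakervideo.h264`,
     "-c", "copy", `${dir}/activespeakervideo.mp4`],
  ];
}
```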

Reading Meeting Data

After or during a meeting, read files from recordings/YYYY/MM/DD/{streamId}/:

# List recorded meetings by date
ls -R recordings/

# Read transcript
cat recordings/YYYY/MM/DD/<streamId>/transcript.txt

# Read AI summary
cat recordings/YYYY/MM/DD/<streamId>/ai_summary.md

# Read sentiment analysis
cat recordings/YYYY/MM/DD/<streamId>/ai_sentiment.json

Prompt Customization

Want different summary styles or analysis? Customize the AI prompts to fit your needs!

Edit these files to change AI behavior:

| File | Purpose | Example Customizations |
| --- | --- | --- |
| summary_prompt.md | Meeting summary generation | Bullet points vs prose, focus areas, length |
| query_prompt.md | Query response formatting | Response style, detail level |
| query_prompt_current_meeting.md | Real-time meeting analysis | What to highlight during meetings |
| query_prompt_dialog_suggestions.md | Dialog suggestion style | Formal vs casual, suggestion count |
| query_prompt_sentiment_analysis.md | Sentiment scoring logic | Custom sentiment categories, thresholds |

Tip: Back up the originals before editing, so you can revert if needed.

README.md

Zoom RTMS Meeting Assistant

Headless capture service for Zoom meetings using Real-Time Media Streams (RTMS). Receives webhook events, connects to RTMS WebSockets, records all media, and runs AI analysis via OpenClaw with WhatsApp notifications.

Unofficial — This skill is not affiliated with or endorsed by Zoom Video Communications.

Requires OpenClaw — This skill uses the OpenClaw CLI for AI processing and notifications.

Features

  • Real-time capture — Audio, video, transcript, screenshare, and chat via RTMS WebSockets
  • AI analysis — Dialog suggestions, sentiment analysis, and live summaries via OpenClaw
  • WhatsApp notifications — Real-time AI results sent via WhatsApp
  • Multi-format transcripts — VTT, SRT, and plain text with timestamps and speaker names
  • Screenshare PDF — Deduplicated screenshare frames compiled into a PDF
  • Per-participant audio — Raw PCM audio per participant with gap filling
  • Notification toggle — Mute/unmute notifications mid-meeting via API

How It Works

  1. Receive webhook — Zoom sends meeting.rtms_started via the ngrok webhook skill
  2. Connect to RTMS — Service connects to Zoom's RTMS WebSocket using the provided stream URLs
  3. Capture media — All streams saved in real-time to recordings/{streamId}/
  4. AI processing — OpenClaw periodically analyzes transcripts for dialog suggestions, sentiment, and summaries
  5. Meeting ends — meeting.rtms_stopped triggers cleanup, PDF generation, and summary notification

Quick Start

1. Install dependencies

cd skills/zoom-meeting-assistance-rtms-unofficial-community
npm install

Requires ffmpeg for post-meeting media conversion.

2. Configure

Copy .env.example to .env and fill in:

ZOOM_SECRET_TOKEN=your_webhook_secret_token
ZOOM_CLIENT_ID=your_zoom_client_id
ZOOM_CLIENT_SECRET=your_zoom_client_secret
OPENCLAW_NOTIFY_TARGET=+1234567890

3. Start

node index.js

The service listens on the configured PORT (default 3000) for webhook events forwarded by the ngrok skill.

Environment Variables

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| ZOOM_SECRET_TOKEN | ✅ | — | Zoom webhook secret token |
| ZOOM_CLIENT_ID | ✅ | — | Zoom app Client ID |
| ZOOM_CLIENT_SECRET | ✅ | — | Zoom app Client Secret |
| PORT | | 3000 | Express server port |
| WEBHOOK_PATH | | /webhook | Webhook endpoint path |
| AI_PROCESSING_INTERVAL_MS | | 30000 | AI analysis frequency (ms) |
| AI_FUNCTION_STAGGER_MS | | 5000 | Delay between AI calls (ms) |
| OPENCLAW_BIN | | openclaw | Path to OpenClaw binary |
| OPENCLAW_NOTIFY_CHANNEL | | whatsapp | Notification channel |
| OPENCLAW_NOTIFY_TARGET | | — | Phone number / target |
| OPENCLAW_TIMEOUT | | 120 | OpenClaw timeout (seconds) |
| AUDIO_DATA_OPT | | 2 | 1 = mixed audio, 2 = multi-stream |

Recorded Data

All recordings stored at recordings/{streamId}/:

| File | Content |
| --- | --- |
| transcript.txt | Plain text transcript — best for searching |
| transcript.vtt | VTT format transcript with timing cues |
| transcript.srt | SRT format transcript |
| events.log | Participant join/leave, active speaker (JSON lines) |
| chat.txt | Chat messages with timestamps |
| ai_summary.md | AI-generated meeting summary |
| ai_dialog.json | AI dialog suggestions |
| ai_sentiment.json | Sentiment analysis per participant |
| {userId}.raw | Per-participant raw PCM audio |
| combined.h264 | Raw H.264 video |
| processed/screenshare.pdf | Deduplicated screenshare frames as PDF |

Searching Past Meetings

# List all recorded meetings
ls recordings/

# Search across all transcripts
grep -rl "keyword" recordings/*/transcript.txt

# Search what a specific person said
grep "Name" recordings/*/transcript.txt

# Read a meeting summary
cat recordings/<streamId>/ai_summary.md

# Check who attended
cat recordings/<streamId>/events.log

API Endpoints

# Toggle WhatsApp notifications on/off
curl -X POST http://localhost:3000/api/notify-toggle \
  -H "Content-Type: application/json" -d '{"enabled": false}'

# Check notification status
curl http://localhost:3000/api/notify-toggle

Post-Meeting Helpers

Run manually after a meeting ends:

# Convert raw audio/video to WAV/MP4
node convertMeetingMedia.js <streamId>

# Mux mixed audio + active speaker video into final MP4
node muxMixedAudioAndActiveSpeakerVideo.js <streamId>

Prompt Customization

Edit these files to change AI behavior:

  • summary_prompt.md — Meeting summary generation
  • query_prompt.md — Query response formatting
  • query_prompt_current_meeting.md — Real-time meeting analysis
  • query_prompt_dialog_suggestions.md — Dialog suggestion style
  • query_prompt_sentiment_analysis.md — Sentiment scoring logic

File Structure

├── .env                        # API keys & config
├── index.js                    # Main RTMS server & recording logic
├── chatWithClawdbot.js         # OpenClaw AI integration
├── convertMeetingMedia.js      # FFmpeg conversion helper
├── muxMixedAudioAndActiveSpeakerVideo.js  # Audio/video muxing helper
├── saveRawAudioAdvance.js      # Real-time audio stream saving
├── saveRawVideoAdvance.js      # Real-time video stream saving
├── writeTranscriptToVtt.js     # Transcript writing (VTT/SRT/TXT)
├── saveSharescreen.js          # Screenshare capture & PDF generation
├── summary_prompt.md           # Summary generation prompt
├── query_prompt*.md            # AI query prompts
└── recordings/                 # Meeting data storage
    └── {streamId}/             # Per-meeting directory

Bug Reports & Contributing

Found a bug? Please raise an issue at: 👉 https://github.com/tanchunsiong/zoom-meeting-assistance-with-rtms-unofficial-community-skill/issues

Pull requests are also welcome!

License

MIT

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

```bash
cd skills/zoom-meeting-assistance-rtms-unofficial-community
npm install
```

Requires `ffmpeg` for post-meeting media conversion.

FAQ

How do I install zoom-meeting-assistance-rtms-unofficial-community?

Run openclaw add @tanchunsiong/zoom-meeting-assistance-with-rtms-unofficial-community-skill in your terminal. This installs zoom-meeting-assistance-rtms-unofficial-community into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/tanchunsiong/zoom-meeting-assistance-with-rtms-unofficial-community-skill. Review commits and README documentation before installing.