zoom-meeting-assistance-rtms-unofficial-community – OpenClaw Skill
zoom-meeting-assistance-rtms-unofficial-community is an OpenClaw Skills integration for writing workflows. Zoom RTMS Meeting Assistant — start on-demand to capture meeting audio, video, transcript, screenshare, and chat via Zoom Real-Time Media Streams. Handles meeting.rtms_started and meeting.rtms_stopped webhook events. Provides AI-powered dialog suggestions, sentiment analysis, and live summaries with WhatsApp notifications. Use when a Zoom RTMS webhook fires or the user asks to record/analyze a meeting.
Skill Snapshot
| name | zoom-meeting-assistance-rtms-unofficial-community |
| description | Zoom RTMS Meeting Assistant — start on-demand to capture meeting audio, video, transcript, screenshare, and chat via Zoom Real-Time Media Streams. Handles meeting.rtms_started and meeting.rtms_stopped webhook events. Provides AI-powered dialog suggestions, sentiment analysis, and live summaries with WhatsApp notifications. Use when a Zoom RTMS webhook fires or the user asks to record/analyze a meeting. OpenClaw Skills integration. |
| owner | tanchunsiong |
| repository | tanchunsiong/zoom-meeting-assistance-with-rtms-unofficial-community-skill |
| language | Markdown |
| license | MIT |
| topics | |
| security | L1 |
| install | openclaw add @tanchunsiong/zoom-meeting-assistance-with-rtms-unofficial-community-skill |
| last updated | Feb 7, 2026 |
Maintainer

tanchunsiong
Maintains zoom-meeting-assistance-rtms-unofficial-community in the OpenClaw Skills directory.
name: zoom-meeting-assistance-rtms-unofficial-community
description: Zoom RTMS Meeting Assistant — start on-demand to capture meeting audio, video, transcript, screenshare, and chat via Zoom Real-Time Media Streams. Handles meeting.rtms_started and meeting.rtms_stopped webhook events. Provides AI-powered dialog suggestions, sentiment analysis, and live summaries with WhatsApp notifications. Use when a Zoom RTMS webhook fires or the user asks to record/analyze a meeting.
Zoom RTMS Meeting Assistant
Headless capture service for Zoom meetings using Real-Time Media Streams (RTMS). Receives webhook events, connects to RTMS WebSockets, records all media, and runs AI analysis via OpenClaw.
Webhook Events Handled
This skill processes two Zoom webhook events:
- `meeting.rtms_started` — Zoom sends this when RTMS is activated for a meeting. Contains `server_urls`, `rtms_stream_id`, and `meeting_uuid` needed to connect to the RTMS WebSocket.
- `meeting.rtms_stopped` — Zoom sends this when RTMS ends (meeting ended or RTMS disabled). Triggers cleanup: closes WebSocket connections, generates the screenshare PDF, sends a summary notification.
Webhook Dependency
This skill needs a public webhook endpoint to receive these events from Zoom.
Preferred: Use the ngrok-unofficial-webhook-skill (skills/ngrok-unofficial-webhook-skill). It auto-discovers this skill via webhookEvents in skill.json, notifies the user, and offers to route events here.
Other webhook solutions (e.g. custom servers, cloud functions) will work but require additional integration to forward payloads to this service.
Prerequisites
cd skills/zoom-meeting-assistance-rtms-unofficial-community
npm install
Requires ffmpeg for post-meeting media conversion.
Environment Variables
Set these in the skill's .env file:
Required:
- `ZOOM_SECRET_TOKEN` — Zoom webhook secret token
- `ZOOM_CLIENT_ID` — Zoom app Client ID
- `ZOOM_CLIENT_SECRET` — Zoom app Client Secret
Optional:
- `PORT` — Server port (default: `3000`)
- `AI_PROCESSING_INTERVAL_MS` — AI analysis frequency in ms (default: `30000`)
- `AI_FUNCTION_STAGGER_MS` — Delay between AI calls in ms (default: `5000`)
- `AUDIO_DATA_OPT` — `1` = mixed stream, `2` = multi-stream (default: `2`)
- `OPENCLAW_NOTIFY_CHANNEL` — Notification channel (default: `whatsapp`)
- `OPENCLAW_NOTIFY_TARGET` — Phone number / target for notifications
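The optional variables fall back to defaults when unset. A minimal sketch of how a config loader might apply them; the function name and return shape are illustrative, the variable names and defaults are from the list above.

```javascript
// Sketch: read optional settings with their documented defaults.
function loadConfig(env = process.env) {
  return {
    port: Number(env.PORT ?? 3000),
    aiIntervalMs: Number(env.AI_PROCESSING_INTERVAL_MS ?? 30000),
    aiStaggerMs: Number(env.AI_FUNCTION_STAGGER_MS ?? 5000),
    audioDataOpt: Number(env.AUDIO_DATA_OPT ?? 2), // 1 = mixed, 2 = multi-stream
    notifyChannel: env.OPENCLAW_NOTIFY_CHANNEL ?? "whatsapp",
    notifyTarget: env.OPENCLAW_NOTIFY_TARGET, // no default; notifications need a target
  };
}
```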
Starting the Service
cd skills/zoom-meeting-assistance-rtms-unofficial-community
node index.js
This starts an Express server listening for Zoom webhook events on `PORT`.
⚠️ Important: Before forwarding webhooks to this service, always check if it's running:
# Check if service is listening on port 3000
lsof -i :3000
If nothing is returned, start the service first before forwarding any webhook events.
Typical flow:
- Start the server as a background process
- Zoom sends a `meeting.rtms_started` webhook → the service connects to the RTMS WebSocket
- Media streams in real time: audio, video, transcript, screenshare, chat
- AI processing runs periodically (dialog suggestions, sentiment, summary)
- `meeting.rtms_stopped` fires → the service closes connections and generates the screenshare PDF
Recorded Data
All recordings are stored organized by date:
skills/zoom-meeting-assistance-rtms-unofficial-community/recordings/YYYY/MM/DD/{streamId}/
Each stream folder contains:
| File | Content | Searchable |
|---|---|---|
| `metadata.json` | Meeting metadata (UUID, stream ID, operator, start time) | ✅ |
| `transcript.txt` | Plain text transcript with timestamps and speaker names | ✅ Best for searching — grep-friendly, one line per utterance |
| `transcript.vtt` | VTT format transcript with timing cues | ✅ |
| `transcript.srt` | SRT format transcript | ✅ |
| `events.log` | Participant join/leave, active speaker changes (JSON lines) | ✅ |
| `chat.txt` | Chat messages with timestamps | ✅ |
| `ai_summary.md` | AI-generated meeting summary (markdown) | ✅ Key document — read this first for meeting overview |
| `ai_dialog.json` | AI dialog suggestions | ✅ |
| `ai_sentiment.json` | Sentiment analysis per participant | ✅ |
| `mixedaudio.raw` | Mixed audio stream (raw PCM) | ❌ Binary |
| `activespeakervideo.h264` | Active speaker video (raw H.264) | ❌ Binary |
| `processed/screenshare.pdf` | Deduplicated screenshare frames as PDF | ❌ Binary |
All summaries are also copied to a central folder for easy access:
skills/zoom-meeting-assistance-rtms-unofficial-community/summaries/summary_YYYY-MM-DDTHH-MM-SS_{streamId}.md
Searching & Querying Past Meetings
To find and review past meeting data:
# List all recorded meetings by date
ls -R recordings/
# List meetings for a specific date
ls recordings/2026/01/28/
# Search across all transcripts for a keyword
grep -rl "keyword" recordings/*/*/*/*/transcript.txt
# Search for what a specific person said
grep "Chun Siong Tan" recordings/*/*/*/*/transcript.txt
# Read a meeting summary
cat recordings/YYYY/MM/DD/<streamId>/ai_summary.md
# Search summaries for a topic
grep -rl "topic" recordings/*/*/*/*/ai_summary.md
# Check who attended a meeting
cat recordings/YYYY/MM/DD/<streamId>/events.log
# Get sentiment for a meeting
cat recordings/YYYY/MM/DD/<streamId>/ai_sentiment.json
The .txt, .md, .json, and .log files are all text-based and searchable. Start with ai_summary.md for a quick overview, then drill into transcript.txt for specific quotes or details.
API Endpoints
# Toggle WhatsApp notifications on/off
curl -X POST http://localhost:3000/api/notify-toggle -H "Content-Type: application/json" -d '{"enabled": false}'
# Check notification status
curl http://localhost:3000/api/notify-toggle
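The toggle endpoint can also be called from Node (global `fetch`, Node 18+). The endpoint path and payload match the curl examples above; the JSON response shape is an assumption, so inspect it rather than relying on specific fields.

```javascript
// Sketch: toggle WhatsApp notifications via the local service.
// Assumes the service is running; adjust the base URL if PORT was changed.
async function setNotifications(enabled, base = "http://localhost:3000") {
  const res = await fetch(`${base}/api/notify-toggle`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ enabled }),
  });
  if (!res.ok) throw new Error(`toggle failed: HTTP ${res.status}`);
  return res.json();
}
```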
Post-Meeting Processing
When `meeting.rtms_stopped` fires, the service automatically:
- Generates a PDF from screenshare images
- Converts `mixedaudio.raw` → `mixedaudio.wav`
- Converts `activespeakervideo.h264` → `activespeakervideo.mp4`
- Muxes mixed audio + active speaker video into `final_output.mp4`
Manual conversion scripts are available but note that auto-conversion runs on meeting end, so manual re-runs are rarely needed.
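These conversions use ffmpeg; a sketch of the argument lists such a pipeline might pass. The sample rate, channel count, and frame rate here are assumptions; check the skill's `convertMeetingMedia.js` for the values it actually uses.

```javascript
// Sketch: ffmpeg argument lists mirroring the automatic conversions.
// Assumed format: 16 kHz mono signed 16-bit PCM, 25 fps raw H.264.
function pcmToWavArgs(input, output) {
  return ["-f", "s16le", "-ar", "16000", "-ac", "1", "-i", input, output];
}
function h264ToMp4Args(input, output) {
  // Raw H.264 carries no container timing, so copy the stream into MP4.
  return ["-framerate", "25", "-i", input, "-c", "copy", output];
}
function muxArgs(video, audio, output) {
  // Copy video, encode audio to AAC, stop at the shorter stream.
  return ["-i", video, "-i", audio, "-c:v", "copy", "-c:a", "aac", "-shortest", output];
}
// Run with e.g.: spawnSync("ffmpeg", pcmToWavArgs("mixedaudio.raw", "mixedaudio.wav"))
```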
Reading Meeting Data
After or during a meeting, read files from recordings/YYYY/MM/DD/{streamId}/:
# List recorded meetings by date
ls -R recordings/
# Read transcript
cat recordings/YYYY/MM/DD/<streamId>/transcript.txt
# Read AI summary
cat recordings/YYYY/MM/DD/<streamId>/ai_summary.md
# Read sentiment analysis
cat recordings/YYYY/MM/DD/<streamId>/ai_sentiment.json
Prompt Customization
Want different summary styles or analysis? Customize the AI prompts to fit your needs!
Edit these files to change AI behavior:
| File | Purpose | Example Customizations |
|---|---|---|
| `summary_prompt.md` | Meeting summary generation | Bullet points vs prose, focus areas, length |
| `query_prompt.md` | Query response formatting | Response style, detail level |
| `query_prompt_current_meeting.md` | Real-time meeting analysis | What to highlight during meetings |
| `query_prompt_dialog_suggestions.md` | Dialog suggestion style | Formal vs casual, suggestion count |
| `query_prompt_sentiment_analysis.md` | Sentiment scoring logic | Custom sentiment categories, thresholds |
Tip: Back up the originals before editing, so you can revert if needed.
Zoom RTMS Meeting Assistant
Headless capture service for Zoom meetings using Real-Time Media Streams (RTMS). Receives webhook events, connects to RTMS WebSockets, records all media, and runs AI analysis via OpenClaw with WhatsApp notifications.
Unofficial — This skill is not affiliated with or endorsed by Zoom Video Communications.
Requires OpenClaw — This skill uses the OpenClaw CLI for AI processing and notifications.
Features
- Real-time capture — Audio, video, transcript, screenshare, and chat via RTMS WebSockets
- AI analysis — Dialog suggestions, sentiment analysis, and live summaries via OpenClaw
- WhatsApp notifications — Real-time AI results sent via WhatsApp
- Multi-format transcripts — VTT, SRT, and plain text with timestamps and speaker names
- Screenshare PDF — Deduplicated screenshare frames compiled into a PDF
- Per-participant audio — Raw PCM audio per participant with gap filling
- Notification toggle — Mute/unmute notifications mid-meeting via API
How It Works
- Receive webhook — Zoom sends `meeting.rtms_started` via the ngrok webhook skill
- Connect to RTMS — Service connects to Zoom's RTMS WebSocket using the provided stream URLs
- Capture media — All streams saved in real time to `recordings/{streamId}/`
- AI processing — OpenClaw periodically analyzes transcripts for dialog suggestions, sentiment, and summaries
- Meeting ends — `meeting.rtms_stopped` triggers cleanup, PDF generation, and summary notification
Quick Start
1. Install dependencies
cd skills/zoom-meeting-assistance-rtms-unofficial-community
npm install
Requires ffmpeg for post-meeting media conversion.
2. Configure
Copy .env.example to .env and fill in:
ZOOM_SECRET_TOKEN=your_webhook_secret_token
ZOOM_CLIENT_ID=your_zoom_client_id
ZOOM_CLIENT_SECRET=your_zoom_client_secret
OPENCLAW_NOTIFY_TARGET=+1234567890
3. Start
node index.js
The service listens on `PORT` (default `3000`) for webhook events forwarded by the ngrok skill.
Environment Variables
| Variable | Required | Default | Description |
|---|---|---|---|
| `ZOOM_SECRET_TOKEN` | ✅ | — | Zoom webhook secret token |
| `ZOOM_CLIENT_ID` | ✅ | — | Zoom app Client ID |
| `ZOOM_CLIENT_SECRET` | ✅ | — | Zoom app Client Secret |
| `PORT` | — | `3000` | Express server port |
| `WEBHOOK_PATH` | — | `/webhook` | Webhook endpoint path |
| `AI_PROCESSING_INTERVAL_MS` | — | `30000` | AI analysis frequency (ms) |
| `AI_FUNCTION_STAGGER_MS` | — | `5000` | Delay between AI calls (ms) |
| `OPENCLAW_BIN` | — | `openclaw` | Path to OpenClaw binary |
| `OPENCLAW_NOTIFY_CHANNEL` | — | `whatsapp` | Notification channel |
| `OPENCLAW_NOTIFY_TARGET` | — | — | Phone number / target |
| `OPENCLAW_TIMEOUT` | — | `120` | OpenClaw timeout (seconds) |
| `AUDIO_DATA_OPT` | — | `2` | `1` = mixed audio, `2` = multi-stream |
Recorded Data
All recordings stored at recordings/{streamId}/:
| File | Content |
|---|---|
| `transcript.txt` | Plain text transcript — best for searching |
| `transcript.vtt` | VTT format transcript with timing cues |
| `transcript.srt` | SRT format transcript |
| `events.log` | Participant join/leave, active speaker (JSON lines) |
| `chat.txt` | Chat messages with timestamps |
| `ai_summary.md` | AI-generated meeting summary |
| `ai_dialog.json` | AI dialog suggestions |
| `ai_sentiment.json` | Sentiment analysis per participant |
| `{userId}.raw` | Per-participant raw PCM audio |
| `combined.h264` | Raw H.264 video |
| `processed/screenshare.pdf` | Deduplicated screenshare frames as PDF |
Searching Past Meetings
# List all recorded meetings
ls recordings/
# Search across all transcripts
grep -rl "keyword" recordings/*/transcript.txt
# Search what a specific person said
grep "Name" recordings/*/transcript.txt
# Read a meeting summary
cat recordings/<streamId>/ai_summary.md
# Check who attended
cat recordings/<streamId>/events.log
API Endpoints
# Toggle WhatsApp notifications on/off
curl -X POST http://localhost:3000/api/notify-toggle \
-H "Content-Type: application/json" -d '{"enabled": false}'
# Check notification status
curl http://localhost:3000/api/notify-toggle
Post-Meeting Helpers
Run manually after a meeting ends:
# Convert raw audio/video to WAV/MP4
node convertMeetingMedia.js <streamId>
# Mux first audio + video into final MP4
node muxFirstAudioVideo.js <streamId>
Prompt Customization
Edit these files to change AI behavior:
- `summary_prompt.md` — Meeting summary generation
- `query_prompt.md` — Query response formatting
- `query_prompt_current_meeting.md` — Real-time meeting analysis
- `query_prompt_dialog_suggestions.md` — Dialog suggestion style
- `query_prompt_sentiment_analysis.md` — Sentiment scoring logic
File Structure
├── .env # API keys & config
├── index.js # Main RTMS server & recording logic
├── chatWithClawdbot.js # OpenClaw AI integration
├── convertMeetingMedia.js # FFmpeg conversion helper
├── muxFirstAudioVideo.js # Audio/video muxing helper
├── saveRawAudioAdvance.js # Real-time audio stream saving
├── saveRawVideoAdvance.js # Real-time video stream saving
├── writeTranscriptToVtt.js # Transcript writing (VTT/SRT/TXT)
├── saveSharescreen.js # Screenshare capture & PDF generation
├── summary_prompt.md # Summary generation prompt
├── query_prompt*.md # AI query prompts
└── recordings/ # Meeting data storage
└── {streamId}/ # Per-meeting directory
Related Skills
- ngrok-unofficial-webhook-skill — Public webhook endpoint (required to receive Zoom events)
- zoom-unofficial-community-skill — Zoom REST API CLI (can start/stop RTMS via `meetings rtms-start/stop`)
Bug Reports & Contributing
Found a bug? Please raise an issue at: 👉 https://github.com/tanchunsiong/zoom-meeting-assistance-with-rtms-unofficial-community-skill/issues
Pull requests are also welcome!
License
MIT
Permissions & Security
Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.
Requirements
```bash
cd skills/zoom-meeting-assistance-rtms-unofficial-community
npm install
```

Requires `ffmpeg` for post-meeting media conversion.
FAQ
How do I install zoom-meeting-assistance-rtms-unofficial-community?
Run `openclaw add @tanchunsiong/zoom-meeting-assistance-with-rtms-unofficial-community-skill` in your terminal. This installs zoom-meeting-assistance-rtms-unofficial-community into your OpenClaw Skills catalog.
Does this skill run locally or in the cloud?
OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.
Where can I verify the source code?
The source repository is available at https://github.com/openclaw/skills/tree/main/skills/tanchunsiong/zoom-meeting-assistance-with-rtms-unofficial-community-skill. Review commits and README documentation before installing.
