voice-wake-say – OpenClaw Skill
voice-wake-say is an OpenClaw Skills integration for productivity workflows. Speak responses aloud on macOS using the built-in `say` command when user input indicates Voice Wake/voice recognition (for example, messages starting with "User talked via voice recognition on <device>").
Skill Snapshot
| Field | Value |
| --- | --- |
| name | voice-wake-say |
| description | Speak responses aloud on macOS using the built-in `say` command when user input indicates Voice Wake/voice recognition (for example, messages starting with "User talked via voice recognition on <device>"). OpenClaw Skills integration. |
| owner | xadenryan |
| repository | xadenryan/voice-wake-say |
| language | Markdown |
| license | MIT |
| topics | |
| security | L1 |
| install | openclaw add @xadenryan/voice-wake-say |
| last updated | Feb 7, 2026 |
Maintainer: xadenryan

name: voice-wake-say
description: Speak responses aloud on macOS using the built-in say command when user input indicates Voice Wake/voice recognition (for example, messages starting with "User talked via voice recognition on <device>").
Voice Wake Say
Overview
Use macOS `say` to read the assistant's response out loud whenever the conversation came from Voice Wake/voice recognition. Do not use the `tts` tool (it calls cloud providers).
When to Use `say` (CHECK EVERY MESSAGE INDIVIDUALLY)
IF the user message STARTS WITH: `User talked via voice recognition`
- Step 1: Acknowledge with `say` first (so the user knows you heard them)
- Step 2: Then perform the task
- Step 3: Optionally speak again when done if it makes sense
IF the user message does NOT start with that exact phrase
- THEN: Do NOT use `say`. Text-only response only.
Critical:
- Check EACH message individually — context does NOT carry over
- The trigger phrase must be at the VERY START of the message
- For tasks that take time, acknowledge FIRST so the user knows you're working
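The per-message trigger check above can be sketched as a small shell helper. This is a sketch, not part of the skill itself: `is_voice_wake` is a hypothetical name, and the only assumption carried over from the skill is the exact trigger text.

```shell
#!/bin/sh
# Hypothetical helper illustrating the prefix check. The trigger phrase
# must appear at the VERY START of the message, matched literally.
TRIGGER="User talked via voice recognition"

# Succeeds only when the message begins with the trigger phrase.
is_voice_wake() {
  case "$1" in
    "$TRIGGER"*) return 0 ;;
    *)           return 1 ;;
  esac
}
```

Usage: run `is_voice_wake "$LATEST_MESSAGE"` against each incoming message on its own, since context does not carry over between messages.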
Workflow
- Detect Voice Wake context
- Trigger ONLY when the latest user/system message STARTS WITH `User talked via voice recognition`
- If the message instructs "repeat prompt first", keep that behavior in the response.
- Prepare spoken text
- Use the final response text as the basis.
- Strip markdown/code blocks; if the response is long or code-heavy, speak a short summary and mention that details are on screen.
- Speak with `say` (local macOS TTS)

```shell
printf '%s' "$SPOKEN_TEXT" | say
```

Optional controls (use only if set):

```shell
printf '%s' "$SPOKEN_TEXT" | say -v "$SAY_VOICE"
printf '%s' "$SPOKEN_TEXT" | say -r "$SAY_RATE"
```
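The "prepare spoken text" and "optional controls" steps can be combined into one pipeline. A minimal sketch, assuming well-formed markdown fences; `strip_for_speech` and `build_say_args` are hypothetical helper names, not part of the skill:

```shell
#!/bin/sh
# Drop fenced code blocks, then common markdown markers, before speaking.
strip_for_speech() {
  sed '/^```/,/^```/d' | sed 's/[*_`#]//g'
}

# Emit one say(1) argument per line, only for variables that are set.
build_say_args() {
  [ -n "$SAY_VOICE" ] && printf -- '-v\n%s\n' "$SAY_VOICE"
  [ -n "$SAY_RATE" ]  && printf -- '-r\n%s\n' "$SAY_RATE"
  return 0
}
```

Possible usage: `printf '%s' "$SPOKEN_TEXT" | strip_for_speech | say $(build_say_args)`. Note the unquoted `$(...)` relies on word splitting, so this sketch assumes voice names without spaces.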
Failure handling
- If `say` is unavailable or errors, still send the text response and note that TTS failed.
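The failure path above can be sketched as follows. `speak_or_note` is a hypothetical name, and `SAY_BIN` is an illustrative override so the fallback can be exercised on systems without `say`:

```shell
#!/bin/sh
# Always emit the text response; speak it only if say is available,
# otherwise (or on error) append a note that TTS did not run.
SAY_BIN="${SAY_BIN:-say}"

speak_or_note() {
  text="$1"
  printf '%s\n' "$text"                      # text response always goes out
  if command -v "$SAY_BIN" >/dev/null 2>&1; then
    printf '%s' "$text" | "$SAY_BIN" || printf '(note: TTS failed)\n'
  else
    printf '(note: TTS unavailable)\n'
  fi
}
```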
Permissions & Security
Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.
Requirements
- OpenClaw CLI installed and configured.
- Language: Markdown
- License: MIT
FAQ
How do I install voice-wake-say?
Run `openclaw add @xadenryan/voice-wake-say` in your terminal. This installs voice-wake-say into your OpenClaw Skills catalog.
Does this skill run locally or in the cloud?
OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.
Where can I verify the source code?
The source repository is available at https://github.com/openclaw/skills/tree/main/skills/xadenryan/voice-wake-say. Review commits and README documentation before installing.
