
by ashutosh887

moodcast – OpenClaw Skill

moodcast is an OpenClaw Skills integration for coding workflows. It transforms any text into emotionally expressive audio with ambient soundscapes using ElevenLabs v3 audio tags and the Sound Effects API.

2.4k stars · 7.3k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topic: coding

Skill Snapshot

name: moodcast
description: Transform any text into emotionally expressive audio with ambient soundscapes using ElevenLabs v3 audio tags and Sound Effects API. OpenClaw Skills integration.
owner: ashutosh887
repository: ashutosh887/moodcast
language: Markdown
license: MIT
topics:
security: L1
install: openclaw add @ashutosh887/moodcast
last updated: Feb 7, 2026

Maintainer

ashutosh887

Maintains moodcast in the OpenClaw Skills directory.

File Explorer
12 files
.
├── examples/
│   ├── calm.txt (196 B)
│   ├── dramatic.txt (378 B)
│   ├── news.txt (391 B)
│   ├── scary.txt (384 B)
│   └── story.txt (489 B)
├── scripts/
│   └── moodcast.py (12.5 KB)
├── _meta.json (451 B)
├── README.md (9.1 KB)
├── requirements.txt (18 B)
└── SKILL.md (5.3 KB)
SKILL.md

---
name: moodcast
description: Transform any text into emotionally expressive audio with ambient soundscapes using ElevenLabs v3 audio tags and Sound Effects API
metadata: {"moltbot":{"requires":{"env":["ELEVENLABS_API_KEY"]},"primaryEnv":"ELEVENLABS_API_KEY","homepage":"https://github.com/ashutosh887/moodcast"}}
---

MoodCast

Transform any text into emotionally expressive audio with ambient soundscapes. MoodCast analyzes your content, adds expressive delivery using ElevenLabs v3 audio tags, and layers matching ambient soundscapes.

When to Use This Skill

Use MoodCast when the user wants to:

  • Hear text read with natural emotional expression
  • Create audio versions of articles, stories, or scripts
  • Generate expressive voiceovers with ambient atmosphere
  • Listen to morning briefings that actually sound engaging
  • Transform boring text into captivating audio content

Trigger phrases: "read this dramatically", "make this sound good", "create audio for", "moodcast this", "read with emotion", "narrate this"

Slash command: /moodcast

Core Capabilities

1. Emotion-Aware Text Enhancement

Automatically analyzes text and inserts appropriate v3 audio tags:

  • Emotions: [excited], [nervous], [angry], [sorrowful], [calm], [happy]
  • Delivery: [whispers], [shouts], [rushed], [slows down]
  • Reactions: [laughs], [sighs], [gasps], [giggles], [crying]
  • Pacing: [pause], [breathes], [stammers], [hesitates]

2. Ambient Soundscape Generation

Creates matching background audio using Sound Effects API:

  • News → subtle office ambiance
  • Story → atmospheric soundscape matching mood
  • Motivational → uplifting background
  • Scary → tense, eerie atmosphere

3. Multi-Voice Dialogue

For conversations/scripts, assigns different voices to speakers with appropriate emotional delivery.
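
As a rough illustration of how speaker assignment might work, here is a hypothetical sketch (the `Speaker: line` format, the voice-pool names, and the `assign_voices` helper are all assumptions; the actual logic lives in `scripts/moodcast.py`):

```python
import re

# Illustrative voice pool only; real voice IDs come from the Voices API.
VOICE_POOL = ["Roger", "Rachel", "Lily"]

def assign_voices(script: str):
    """Parse lines like 'Alice: Hello!' and map each speaker to a voice."""
    speakers: dict[str, str] = {}
    parts = []
    for line in script.splitlines():
        match = re.match(r"^(\w+):\s*(.+)$", line.strip())
        if not match:
            continue  # skip narration or blank lines in this sketch
        speaker, text = match.groups()
        if speaker not in speakers:
            # Assign voices round-robin in order of first appearance.
            speakers[speaker] = VOICE_POOL[len(speakers) % len(VOICE_POOL)]
        parts.append((speaker, speakers[speaker], text))
    return parts
```

Each `(speaker, voice, text)` tuple could then be synthesized separately and concatenated.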

Instructions

Quick Read (Single Command)

python3 {baseDir}/scripts/moodcast.py --text "Your text here"

With Ambient Sound

python3 {baseDir}/scripts/moodcast.py --text "Your text here" --ambient "coffee shop background noise"

Save to File

python3 {baseDir}/scripts/moodcast.py --text "Your text here" --output story.mp3

Different Moods

python3 {baseDir}/scripts/moodcast.py --text "Your text" --mood dramatic
python3 {baseDir}/scripts/moodcast.py --text "Your text" --mood calm
python3 {baseDir}/scripts/moodcast.py --text "Your text" --mood excited
python3 {baseDir}/scripts/moodcast.py --text "Your text" --mood scary

List Available Voices

python3 {baseDir}/scripts/moodcast.py --list-voices

Custom Configuration

python3 {baseDir}/scripts/moodcast.py --text "Your text" --voice VOICE_ID --model eleven_v3 --output-format mp3_44100_128

Emotion Detection Rules

The skill automatically detects and enhances:

| Text Pattern | Audio Tag Added |
|---|---|
| "amazing", "incredible", "wow" | [excited] |
| "scared", "afraid", "terrified" | [nervous] |
| "angry", "furious", "hate" | [angry] |
| "sad", "sorry", "unfortunately" | [sorrowful] |
| "secret", "quiet", "between us" | [whispers] |
| "!" exclamations | [excited] |
| "..." trailing off | [pause] |
| "haha", "lol" | [laughs] |
| Questions | Natural rising intonation |
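
A minimal Python sketch of this rule table (illustrative only; the shipped `scripts/moodcast.py` may implement the matching differently):

```python
# Keyword-to-tag rules mirroring the table above.
RULES = [
    (("amazing", "incredible", "wow"), "[excited]"),
    (("scared", "afraid", "terrified"), "[nervous]"),
    (("angry", "furious", "hate"), "[angry]"),
    (("sad", "sorry", "unfortunately"), "[sorrowful]"),
    (("secret", "quiet", "between us"), "[whispers]"),
]

def enhance(sentence: str) -> str:
    """Prefix a sentence with the first matching audio tag (lowercase only)."""
    lowered = sentence.lower()
    for keywords, tag in RULES:
        if any(word in lowered for word in keywords):
            return f"{tag} {sentence}"
    if sentence.endswith("!"):
        return f"[excited] {sentence}"
    if sentence.endswith("..."):
        return f"{sentence} [pause]"
    return sentence
```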

Example Transformations

Input:

Breaking news! Scientists have discovered something incredible. 
This could change everything we know about the universe...
I can't believe it.

Enhanced Output:

[excited] Breaking news! Scientists have discovered something incredible.
[pause] This could change everything we know about the universe...
[gasps] [whispers] I can't believe it.

Input:

It was a dark night. The old house creaked. 
Something moved in the shadows...
"Who's there?" she whispered.

Enhanced Output:

[slows down] It was a dark night. [pause] The old house creaked.
[nervous] Something moved in the shadows...
[whispers] "Who's there?" she whispered.

Environment Variables

  • ELEVENLABS_API_KEY (required) - Your ElevenLabs API key
  • MOODCAST_DEFAULT_VOICE (optional) - Default voice ID (defaults to CwhRBWXzGAHq8TQ4Fs17)
  • MOODCAST_MODEL (optional) - Default model ID (defaults to eleven_v3)
  • MOODCAST_OUTPUT_FORMAT (optional) - Default output format (defaults to mp3_44100_128)
  • MOODCAST_AUTO_AMBIENT (optional) - Set to "true" for automatic ambient sounds when using --mood

Configuration Priority: CLI arguments override environment variables, which override hardcoded defaults.

Technical Notes

  • Uses ElevenLabs Eleven v3 model for audio tag support
  • Sound Effects API for ambient generation (up to 30 seconds)
  • Free tier: 10,000 credits/month (~10 min audio)
  • Max 2,400 characters per chunk (v3 supports 5,000, but we split conservatively for reliability)
  • Audio tags must be lowercase: [whispers] not [WHISPERS]
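
The conservative chunking mentioned above might look like this (a sketch, not the shipped implementation; splitting on sentence boundaries is an assumption):

```python
import re

MAX_CHUNK = 2_400  # conservative limit; Eleven v3 accepts up to 5,000 chars

def split_text(text: str, limit: int = MAX_CHUNK) -> list[str]:
    """Split text into chunks under `limit`, breaking at sentence ends."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk when appending would exceed the limit.
        if current and len(current) + len(sentence) + 1 > limit:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```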

Tips for Best Results

  1. Dramatic content works best: stories, news, scripts
  2. Shorter segments (under 500 characters) sound more natural
  3. Combine with an ambient soundscape for a more immersive experience
  4. The Roger and Rachel voices are the most expressive with v3

Credits

Built by ashutosh887
Using ElevenLabs Text-to-Speech v3 + Sound Effects API
Created for #ClawdEleven Hackathon

README.md

MoodCast

Transform any text into emotionally expressive audio with ambient soundscapes.

MoodCast is a Moltbot skill that uses ElevenLabs' most advanced features to create compelling audio content. It analyzes your text, adds emotional expression using Eleven v3 audio tags, and can layer ambient soundscapes for immersive experiences.



Features

| Feature | Description |
|---|---|
| Emotion Detection | Automatically analyzes text and inserts v3 audio tags ([excited], [whispers], [laughs], etc.) |
| Ambient Soundscapes | Generates matching background sounds using the Sound Effects API |
| Multiple Moods | Pre-configured moods: dramatic, calm, excited, scary, news, story |
| Smart Text Processing | Auto-splits long text, handles multiple speakers |

Demo

Input:

Breaking news! Scientists have discovered something incredible. 
This could change everything we know about the universe...
I can't believe it.

MoodCast Output:

[excited] Breaking news! Scientists have discovered something incredible.
[pause] This could change everything we know about the universe...
[gasps] [whispers] I can't believe it.

The AI voice delivers this with genuine excitement, dramatic pauses, and a whispered ending.


Quick Start

1. Install the Skill

# Option 1: Clone to your Moltbot skills directory
git clone https://github.com/ashutosh887/moodcast ~/.clawdbot/skills/moodcast

# Option 2: Install via MoltHub (recommended)
npx molthub@latest install moodcast

# Option 3: Install to workspace (for per-agent skills)
# After installing, move to workspace or use git clone method

2. Set Your API Key

export ELEVENLABS_API_KEY="your-api-key-here"

Or add to ~/.clawdbot/moltbot.json:

{
  "skills": {
    "entries": {
      "moodcast": {
        "enabled": true,
        "apiKey": "your-api-key-here",
        "env": {
          "ELEVENLABS_API_KEY": "your-api-key-here"
        }
      }
    }
  }
}

Note: apiKey automatically maps to ELEVENLABS_API_KEY when the skill declares primaryEnv.

3. Use It!

Via Moltbot (WhatsApp/Telegram/Discord/iMessage):

Hey Molty, moodcast this: "It was a dark and stormy night..."

Or use the slash command:

/moodcast "It was a dark and stormy night..."

Via Command Line:

python3 ~/.clawdbot/skills/moodcast/scripts/moodcast.py --text "Hello world!"

Usage Examples

Basic Usage

python3 moodcast.py --text "This is amazing news!"

With Mood Preset

python3 moodcast.py --text "The door creaked open slowly..." --mood scary

With Ambient Sound

python3 moodcast.py --text "Welcome to my café" --ambient "coffee shop busy morning"

Save to File

python3 moodcast.py --text "Your story here" --output narration.mp3

Show Enhanced Text

python3 moodcast.py --text "Wow this is great!" --show-enhanced
# Output: [excited] Wow this is great!

Custom Configuration

# Custom voice, model, and output format
python3 moodcast.py --text "Hello" --voice VOICE_ID --model eleven_v3 --output-format mp3_44100_128

# Override mood's default voice
python3 moodcast.py --text "Dramatic scene" --mood dramatic --voice CUSTOM_VOICE_ID

# Skip emotion enhancement
python3 moodcast.py --text "Plain text" --no-enhance

Supported Audio Tags (Eleven v3)

MoodCast automatically detects emotions and inserts these tags:

Emotions

| Tag | Triggers |
|---|---|
| [excited] | amazing, incredible, wow, !!! |
| [happy] | happy, delighted, thrilled |
| [nervous] | scared, afraid, terrified |
| [angry] | angry, furious, hate |
| [sorrowful] | sad, sorry, tragic |
| [calm] | peaceful, gentle, quiet |

Delivery

| Tag | Effect |
|---|---|
| [whispers] | Soft, secretive tone |
| [shouts] | Loud, emphatic delivery |
| [slows down] | Deliberate pacing |
| [rushed] | Fast, urgent speech |

Reactions

| Tag | Effect |
|---|---|
| [laughs] | Natural laughter |
| [sighs] | Weary exhale |
| [gasps] | Surprise intake |
| [giggles] | Light laughter |
| [pause] | Dramatic beat |

Mood Presets

| Mood | Voice | Style | Best For |
|---|---|---|---|
| dramatic | Roger | Theatrical, expressive | Stories, scripts |
| calm | Lily | Soothing, peaceful | Meditation, ASMR |
| excited | Liam | Energetic, upbeat | News, announcements |
| scary | Roger (deep) | Tense, ominous | Horror, thrillers |
| news | Lily | Professional, clear | Briefings, reports |
| story | Rachel | Warm, engaging | Audiobooks, tales |

Configuration

Command Line Arguments

| Argument | Short | Description |
|---|---|---|
| --text | -t | Text to convert to speech (required) |
| --mood | -m | Mood preset: dramatic, calm, excited, scary, news, story |
| --voice | -v | Voice ID (overrides mood's default voice) |
| --model | | Model ID (default: eleven_v3) |
| --output-format | | Output format (default: mp3_44100_128) |
| --ambient | -a | Generate ambient sound effect (prompt) |
| --ambient-duration | | Ambient duration in seconds (max 30, default: 10) |
| --output | -o | Save audio to file instead of playing |
| --no-enhance | | Skip automatic emotion enhancement |
| --show-enhanced | | Print enhanced text before generating |
| --list-voices | | List available voices |

Environment Variables

| Variable | Required | Description | Default |
|---|---|---|---|
| ELEVENLABS_API_KEY | Yes | Your ElevenLabs API key | - |
| MOODCAST_DEFAULT_VOICE | No | Default voice ID (overridden by --voice or --mood) | CwhRBWXzGAHq8TQ4Fs17 |
| MOODCAST_MODEL | No | Default model ID (overridden by --model) | eleven_v3 |
| MOODCAST_OUTPUT_FORMAT | No | Default output format (overridden by --output-format) | mp3_44100_128 |
| MOODCAST_AUTO_AMBIENT | No | Auto-generate ambient sounds when using --mood | - |

Priority order: CLI arguments > Environment variables > Hardcoded defaults
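
This precedence can be sketched as a small resolver (illustrative; the `resolve` helper is hypothetical, but the variable names and defaults match the table above):

```python
import os

# Hardcoded defaults, the lowest-priority source.
DEFAULTS = {
    "voice": "CwhRBWXzGAHq8TQ4Fs17",
    "model": "eleven_v3",
    "output_format": "mp3_44100_128",
}
ENV_NAMES = {
    "voice": "MOODCAST_DEFAULT_VOICE",
    "model": "MOODCAST_MODEL",
    "output_format": "MOODCAST_OUTPUT_FORMAT",
}

def resolve(key, cli_value=None):
    """Effective setting: CLI argument > environment variable > default."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(ENV_NAMES[key], DEFAULTS[key])
```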

Moltbot Config (~/.clawdbot/moltbot.json)

{
  "skills": {
    "entries": {
      "moodcast": {
        "enabled": true,
        "apiKey": "xi-xxxxxxxxxxxx",
        "env": {
          "ELEVENLABS_API_KEY": "xi-xxxxxxxxxxxx",
          "MOODCAST_AUTO_AMBIENT": "true"
        }
      }
    }
  }
}

Note: apiKey is a convenience field that maps to ELEVENLABS_API_KEY when primaryEnv is set in the skill metadata.


ElevenLabs APIs Used

This skill demonstrates deep integration with multiple ElevenLabs APIs:

1. Text-to-Speech (Eleven v3)

  • Model: eleven_v3 for audio tag support
  • Format: mp3_44100_128
  • Features: Full audio tag expression system
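
A hedged sketch of assembling such a TTS call. The endpoint path, `xi-api-key` header, and `model_id` field follow the public ElevenLabs API as of this writing, but verify against current documentation; the `build_tts_request` helper itself is hypothetical and does not send anything:

```python
import os

API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(text, voice_id, model_id="eleven_v3",
                      output_format="mp3_44100_128"):
    """Assemble the pieces of an ElevenLabs TTS request (not sent here)."""
    return {
        "url": f"{API_BASE}/text-to-speech/{voice_id}",
        "params": {"output_format": output_format},
        "headers": {"xi-api-key": os.environ.get("ELEVENLABS_API_KEY", "")},
        "json": {"text": text, "model_id": model_id},
    }
```

To generate audio you would POST this with an HTTP library such as `requests` and write the binary response body to an `.mp3` file.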

2. Sound Effects API

  • Generates ambient soundscapes from text prompts
  • Up to 30 seconds per generation
  • Seamless looping support
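
An analogous sketch for an ambient-sound request, with the 30-second cap noted above enforced in code. The endpoint and field names should be checked against current ElevenLabs documentation; the `build_sfx_request` helper is hypothetical:

```python
def build_sfx_request(prompt, duration=10.0):
    """Assemble a Sound Effects API request, capping duration at 30 s."""
    return {
        "url": "https://api.elevenlabs.io/v1/sound-generation",
        "headers": {"xi-api-key": "<ELEVENLABS_API_KEY>"},
        "json": {"text": prompt, "duration_seconds": min(duration, 30.0)},
    }
```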

3. Voices API

  • Lists available voices
  • Supports custom voice selection
  • Mood-based voice matching

Project Structure

moodcast/
├── SKILL.md           # Moltbot skill definition (AgentSkills format)
├── README.md          # Project documentation
├── requirements.txt   # Python dependencies
├── .gitignore         # Git ignore rules
├── scripts/
│   └── moodcast.py    # Main Python script
└── examples/
    ├── news.txt       # News article example
    ├── scary.txt      # Horror story example
    ├── dramatic.txt   # Dramatic narrative example
    ├── calm.txt       # Peaceful scene example
    └── story.txt      # Adventure story example

Skill Installation Locations

Moltbot loads skills from three locations (in precedence order):

  1. Workspace skills: <workspace>/skills/moodcast (per-agent, highest precedence)
  2. Managed skills: ~/.clawdbot/skills/moodcast (shared across agents)
  3. Bundled skills: Shipped with Moltbot install (lowest precedence)

Use npx molthub@latest install moodcast to install to the managed directory, or clone directly to your workspace for per-agent installation.


Technical Details

API Integration

| Criteria | Implementation |
|---|---|
| ElevenLabs API usage | Uses Eleven v3 audio tags (deepest TTS feature), Sound Effects API, Voices API |
| Practical use cases | Content creators, writers, podcasters, anyone who wants expressive audio |
| Demo approach | Single clear hook: "Text that feels emotion" with live demonstration |

License

MIT License - feel free to use, modify, and share!


Acknowledgments

Built for the #ClawdEleven Hackathon (ElevenLabs × Moltbot)

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

  • OpenClaw CLI installed and configured.
  • Language: Markdown
  • License: MIT
  • Topics:

Configuration

```bash
python3 {baseDir}/scripts/moodcast.py --text "Your text" --voice VOICE_ID --model eleven_v3 --output-format mp3_44100_128
```

FAQ

How do I install moodcast?

Run openclaw add @ashutosh887/moodcast in your terminal. This installs moodcast into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/ashutosh887/moodcast. Review commits and README documentation before installing.