humanizer – OpenClaw Skill

by brandonwise

humanizer is an OpenClaw Skills integration for writing workflows.

5.3k stars · 4.1k forks · Security level L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topic: writing

Skill Snapshot

name: humanizer
description: OpenClaw Skills integration.
owner: brandonwise
repository: brandonwise/ai-humanizer
language: Markdown
license: MIT
topics: writing
security: L1
install: openclaw add @brandonwise/ai-humanizer
last updated: Feb 7, 2026

Maintainer

brandonwise

Maintains humanizer in the OpenClaw Skills directory.
File Explorer (37 files)

.
├── assets/
│   └── banner.md (331 B)
├── docs/
│   ├── CONTRIBUTING.md (2.0 KB)
│   ├── EXAMPLES.md (5.3 KB)
│   └── PATTERNS.md (2.4 KB)
├── references/
│   ├── ai-vocabulary.md (5.5 KB)
│   ├── patterns.md (10.0 KB)
│   └── style-guide.md (3.9 KB)
├── scripts/
│   ├── analyze.sh (301 B)
│   └── humanize.sh (315 B)
├── src/
│   ├── analyzer.js (14.8 KB)
│   ├── cli.js (19.6 KB)
│   ├── humanizer.js (14.2 KB)
│   ├── patterns.js (28.1 KB)
│   ├── stats.js (9.5 KB)
│   └── vocabulary.js (15.1 KB)
├── tests/
│   ├── fixtures/
│   │   ├── ai-sample-1.txt (2.2 KB)
│   │   ├── ai-sample-2.txt (1.1 KB)
│   │   └── human-sample-1.txt (1.1 KB)
│   ├── analyzer.test.js (13.2 KB)
│   ├── calibration.test.js (7.2 KB)
│   ├── edge-cases.test.js (7.1 KB)
│   ├── humanizer.test.js (6.4 KB)
│   ├── performance.test.js (3.2 KB)
│   └── statistics.test.js (7.9 KB)
├── _meta.json (278 B)
├── eslint.config.js (559 B)
├── package.json (1.3 KB)
├── README.md (15.8 KB)
├── SKILL.md (7.8 KB)
└── vitest.config.js (152 B)
SKILL.md

name: humanizer
description: >
  Humanize AI-generated text by detecting and removing patterns typical of LLM
  output. Rewrites text to sound natural, specific, and human. Uses 24 pattern
  detectors, 500+ AI vocabulary terms across 3 tiers, and statistical analysis
  (burstiness, type-token ratio, readability) for comprehensive detection. Use
  when asked to humanize text, de-AI writing, make content sound more
  natural/human, review writing for AI patterns, score text for AI detection,
  or improve AI-generated drafts. Covers content, language, style,
  communication, and filler categories.

Humanizer: remove AI writing patterns

You are a writing editor that identifies and removes signs of AI-generated text. Your goal: make writing sound like a specific human wrote it, not like it was extruded from a language model.

Based on Wikipedia:Signs of AI writing, Copyleaks stylometric research, and real-world pattern analysis.

Your task

When given text to humanize:

  1. Scan for the 24 patterns below
  2. Check statistical indicators (burstiness, vocabulary diversity, sentence uniformity)
  3. Rewrite problematic sections with natural alternatives
  4. Preserve the core meaning
  5. Match the intended tone (formal, casual, technical)
  6. Add actual personality — sterile text is just as obvious as slop

Quick reference: the 24 patterns

| # | Pattern | Category | What to watch for |
|---|---------|----------|-------------------|
| 1 | Significance inflation | Content | "marking a pivotal moment in the evolution of..." |
| 2 | Notability name-dropping | Content | Listing media outlets without specific claims |
| 3 | Superficial -ing analyses | Content | "...showcasing... reflecting... highlighting..." |
| 4 | Promotional language | Content | "nestled", "breathtaking", "stunning", "renowned" |
| 5 | Vague attributions | Content | "Experts believe", "Studies show", "Industry reports" |
| 6 | Formulaic challenges | Content | "Despite challenges... continues to thrive" |
| 7 | AI vocabulary (500+ words) | Language | "delve", "tapestry", "landscape", "showcase", "seamless" |
| 8 | Copula avoidance | Language | "serves as", "boasts", "features" instead of "is", "has" |
| 9 | Negative parallelisms | Language | "It's not just X, it's Y" |
| 10 | Rule of three | Language | "innovation, inspiration, and insights" |
| 11 | Synonym cycling | Language | "protagonist... main character... central figure..." |
| 12 | False ranges | Language | "from the Big Bang to dark matter" |
| 13 | Em dash overuse | Style | Too many — dashes — everywhere |
| 14 | Boldface overuse | Style | Mechanical emphasis everywhere |
| 15 | Inline-header lists | Style | "- Topic: Topic is discussed here" |
| 16 | Title Case headings | Style | Every Main Word Capitalized In Headings |
| 17 | Emoji overuse | Style | 🚀💡✅ decorating professional text |
| 18 | Curly quotes | Style | “smart quotes” instead of "straight quotes" |
| 19 | Chatbot artifacts | Communication | "I hope this helps!", "Let me know if..." |
| 20 | Cutoff disclaimers | Communication | "As of my last training...", "While details are limited..." |
| 21 | Sycophantic tone | Communication | "Great question!", "You're absolutely right!" |
| 22 | Filler phrases | Filler | "In order to", "Due to the fact that", "At this point in time" |
| 23 | Excessive hedging | Filler | "could potentially possibly", "might arguably perhaps" |
| 24 | Generic conclusions | Filler | "The future looks bright", "Exciting times lie ahead" |

Statistical signals

Beyond pattern matching, check for these AI statistical tells:

| Signal | Human | AI | Why |
|--------|-------|----|-----|
| Burstiness | High (0.5-1.0) | Low (0.1-0.3) | Humans write in bursts; AI is metronomic |
| Type-token ratio | 0.5-0.7 | 0.3-0.5 | AI reuses the same vocabulary |
| Sentence length variation | High CoV | Low CoV | AI sentences are all roughly the same length |
| Trigram repetition | Low (<0.05) | High (>0.10) | AI reuses 3-word phrases |

Vocabulary tiers

  • Tier 1 (Dead giveaways): delve, tapestry, vibrant, crucial, comprehensive, meticulous, embark, robust, seamless, groundbreaking, leverage, synergy, transformative, paramount, multifaceted, myriad, cornerstone, reimagine, empower, catalyst, invaluable, bustling, nestled, realm
  • Tier 2 (Suspicious in density): furthermore, moreover, paradigm, holistic, utilize, facilitate, nuanced, illuminate, encompasses, catalyze, proactive, ubiquitous, quintessential
  • Phrases: "In today's digital age", "It is worth noting", "plays a crucial role", "serves as a testament", "in the realm of", "delve into", "harness the power of", "embark on a journey", "without further ado"
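The tier logic above can be sketched as a simple scan. The word lists here are truncated to a handful of entries for illustration; the real lists in src/vocabulary.js hold 500+ terms:

```javascript
// Illustrative tier scan: Tier 1 is always flagged, Tier 2 only in density.
const TIER1 = ["delve", "tapestry", "seamless", "robust", "groundbreaking"];
const TIER2 = ["furthermore", "paradigm", "holistic", "utilize", "facilitate"];

function tierHits(text) {
  const tokens = text.toLowerCase().match(/[a-z']+/g) || [];
  const hits = { tier1: [], tier2: [] };
  for (const t of tokens) {
    if (TIER1.includes(t)) hits.tier1.push(t);
    if (TIER2.includes(t)) hits.tier2.push(t);
  }
  // One Tier 1 word is enough; Tier 2 needs two or more to flag.
  return { flagged: hits.tier1.length > 0 || hits.tier2.length >= 2, ...hits };
}
```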

Core principles

Write like a human, not a press release

  • Use "is" and "has" freely — "serves as" is pretentious
  • One qualifier per claim — don't stack hedges
  • Name your sources or drop the claim
  • End with something specific, not "the future looks bright"

Add personality

  • Have opinions. React to facts, don't just report them
  • Vary sentence rhythm. Short. Then longer ones that meander.
  • Acknowledge complexity and mixed feelings
  • Let some mess in — perfect structure feels algorithmic

Cut the fat

  • "In order to" → "to"
  • "Due to the fact that" → "because"
  • "It is important to note that" → (just say it)
  • Remove chatbot filler: "I hope this helps!", "Great question!"
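These swaps are mechanical, which is what makes them safe to automate. A sketch of how such an auto-fixer could work; the replacement table is illustrative (the skill's full table is larger) and it deliberately ignores sentence-initial capitalization:

```javascript
// A few of the mechanical "safe fixes". Capitalization is not handled here.
const FILLER_FIXES = [
  [/\bin order to\b/gi, "to"],
  [/\bdue to the fact that\b/gi, "because"],
  [/\bit is important to note that\s*/gi, ""],
  [/\bat this point in time\b/gi, "now"],
];

function cutTheFat(text) {
  return FILLER_FIXES.reduce((out, [pattern, sub]) => out.replace(pattern, sub), text);
}
```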

Before/after example

Before (AI-sounding):

Great question! Here is an overview of sustainable energy. Sustainable energy serves as an enduring testament to humanity's commitment to environmental stewardship, marking a pivotal moment in the evolution of global energy policy. In today's rapidly evolving landscape, these groundbreaking technologies are reshaping how nations approach energy production, underscoring their vital role in combating climate change. The future looks bright. I hope this helps!

After (human):

Solar panel costs dropped 90% between 2010 and 2023, according to IRENA data. That single fact explains why adoption took off — it stopped being an ideological choice and became an economic one. Germany gets 46% of its electricity from renewables now. The transition is happening, but it's messy and uneven, and the storage problem is still mostly unsolved.

Using the analyzer

# Score text (0-100, higher = more AI-like)
echo "Your text here" | node src/cli.js score

# Full analysis report
node src/cli.js analyze -f draft.md

# Markdown report
node src/cli.js report article.txt > report.md

# Suggestions grouped by priority
node src/cli.js suggest essay.txt

# Statistical analysis only
node src/cli.js stats essay.txt

# Humanization suggestions with auto-fixes
node src/cli.js humanize --autofix -f article.txt

# JSON output for programmatic use
node src/cli.js analyze --json < input.txt

Always-on mode

For agents that should ALWAYS write like a human (not just when asked to humanize), add the core rules to your personality/system prompt. See the README's "Always-On Mode" section for copy-paste templates for OpenClaw (SOUL.md), Claude, and ChatGPT.

The key rules to internalize:

  • Ban Tier 1 vocabulary (delve, tapestry, vibrant, crucial, robust, seamless, etc.)
  • Kill filler phrases ("In order to" → "to", "Due to the fact that" → "because")
  • No sycophancy, chatbot artifacts, or generic conclusions
  • Vary sentence length, have opinions, use concrete specifics
  • If you wouldn't say it in conversation, don't write it

Process

  1. Read the input text
  2. Run pattern detection (24 patterns, 500+ vocabulary terms)
  3. Compute text statistics (burstiness, TTR, readability)
  4. Identify all issues and generate suggestions
  5. Rewrite problematic sections
  6. Verify the result sounds natural when read aloud
  7. Present the humanized version with a brief change summary
README.md

humanizer

License: MIT Tests Node >= 18

Detect and remove signs of AI-generated writing. Makes text sound natural and human.

An OpenClaw skill and standalone CLI tool that scans text for 24 AI writing patterns using 500+ vocabulary terms and statistical text analysis (burstiness, type-token ratio, readability metrics) — then provides actionable suggestions to fix them.

Based on Wikipedia:Signs of AI writing, Copyleaks stylistic fingerprint research, and blader/humanizer.

Install

As an OpenClaw skill

git clone https://github.com/brandonwise/humanizer.git
cp humanizer/SKILL.md ~/.config/openclaw/skills/humanizer.md

As a standalone CLI tool

git clone https://github.com/brandonwise/humanizer.git
cd humanizer
npm install

# Score some text
echo "This serves as a testament to innovation." | node src/cli.js score

# Full analysis
node src/cli.js analyze -f your-draft.md

# Humanize with auto-fixes
node src/cli.js humanize --autofix -f article.txt

Global install

npm install -g .
humanizer score < draft.txt
humanizer analyze -f essay.md
humanizer humanize --autofix < article.txt

Architecture

The scoring engine combines three signal types:

┌─────────────────────────────────────────────────┐
│             Composite Score (0-100)             │
├────────────────────┬────────────────────────────┤
│   Pattern Score    │     Uniformity Score       │
│   (70% weight)     │     (30% weight)           │
├────────────────────┼────────────────────────────┤
│ • 24 pattern       │ • Burstiness (sentence     │
│   detectors        │   length variation)        │
│ • 500+ vocabulary  │ • Type-token ratio         │
│   terms (3 tiers)  │ • Trigram repetition       │
│ • Density scoring  │ • Sentence length CoV      │
│ • Category breadth │ • Paragraph uniformity     │
└────────────────────┴────────────────────────────┘

Pattern score uses density-based detection: weighted hits per 100 words on a logarithmic curve, plus bonuses for breadth (unique patterns) and category diversity.
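One way that logarithmic curve might look. The shape matches the description (diminishing returns, capped at 100), but the constants are assumptions for illustration, not the values in src/analyzer.js:

```javascript
// Illustrative density-to-score curve: weighted hits per 100 words, squashed
// logarithmically so dense text saturates instead of scoring past 100.
// The 20x multiplier is an assumed constant, not the analyzer's real value.
function patternScore(weightedHits, wordCount) {
  const density = (weightedHits / wordCount) * 100; // hits per 100 words
  const raw = 20 * Math.log2(1 + density);          // diminishing returns
  return Math.min(100, Math.round(raw));
}
```

Doubling the hit density raises the score, but by less each time, which is the point of the curve.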

Uniformity score uses statistical analysis: human text has high burstiness (varied sentence lengths), diverse vocabulary, and low n-gram repetition. AI text is mechanically uniform.

Statistical analysis

The stats engine computes metrics that differentiate AI from human writing:

| Metric | Human Writing | AI Writing | Why It Matters |
|--------|---------------|------------|----------------|
| Burstiness | 0.5–1.0 | 0.1–0.3 | Humans write in bursts — short sentences, then long ones. AI is metronomic. |
| Type-token ratio | 0.5–0.7 | 0.3–0.5 | Humans use more varied vocabulary. AI cycles through the same words. |
| Sentence CoV | 0.4–0.8 | 0.15–0.35 | Coefficient of variation in sentence length. Low = robotic uniformity. |
| Trigram repetition | < 0.05 | > 0.10 | AI reuses the same 3-word phrases more often. |
| Readability (FK) | Varies | 8–12 | AI tends to write at a consistent grade level. Humans vary. |
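Trigram repetition, for instance, is a direct count. A sketch of the idea; the metric in src/stats.js may tokenize and normalize differently:

```javascript
// Repeated 3-word phrases as a fraction of all trigrams in the text.
function trigramRepetition(text) {
  const tokens = text.toLowerCase().match(/[a-z']+/g) || [];
  const counts = new Map();
  for (let i = 0; i + 2 < tokens.length; i++) {
    const tri = tokens.slice(i, i + 3).join(" ");
    counts.set(tri, (counts.get(tri) || 0) + 1);
  }
  let repeated = 0;
  for (const n of counts.values()) if (n > 1) repeated += n - 1; // extra copies
  return repeated / Math.max(1, tokens.length - 2);
}
```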

CLI reference

Commands

# Quick score (0-100, higher = more AI-like)
echo "text" | humanizer score

# Full analysis with pattern matches
humanizer analyze essay.txt

# Full markdown report (pipe to file)
humanizer report article.txt > report.md

# Suggestions grouped by priority
humanizer suggest draft.md

# Statistical analysis only
humanizer stats essay.txt

# Humanization suggestions with guidance
humanizer humanize -f article.txt

# Apply safe auto-fixes
humanizer humanize --autofix -f article.txt

Options

-f, --file <path>       Read text from file
--json                  Output as JSON
--verbose, -v           Show all matches
--autofix               Apply safe fixes (humanize only)
--patterns <ids>        Check specific pattern IDs (comma-separated)
--threshold <n>         Only show patterns with weight above n
--config <file>         Custom config file (JSON)
--help, -h              Show help

Score badges

🟢 0-25    Mostly human-sounding
🟡 26-50   Lightly AI-touched
🟠 51-75   Moderately AI-influenced
🔴 76-100  Heavily AI-generated
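Mapping a score to these buckets is a plain threshold check, sketched here against the ranges above:

```javascript
// The four badge buckets from the table above.
function scoreBadge(score) {
  if (score <= 25) return "🟢 Mostly human-sounding";
  if (score <= 50) return "🟡 Lightly AI-touched";
  if (score <= 75) return "🟠 Moderately AI-influenced";
  return "🔴 Heavily AI-generated";
}
```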

API (programmatic use)

const { analyze, score } = require('humanizer');

// Quick score
const s = score('Your text here...');
console.log(s); // 0-100

// Full analysis
const result = analyze(text, {
  verbose: true,          // Show all matches
  patternsToCheck: [7, 19, 22], // Only specific patterns
  includeStats: true,     // Include statistical analysis
});

console.log(result.score);           // 0-100 composite
console.log(result.patternScore);    // Pattern-only score
console.log(result.uniformityScore); // Stats-based uniformity score
console.log(result.stats);           // { burstiness, typeTokenRatio, ... }
console.log(result.findings);        // Detailed pattern matches
console.log(result.categories);      // Per-category breakdown

// Humanize
const { humanize, autoFix } = require('humanizer/src/humanizer');

const suggestions = humanize(text, { autofix: true });
console.log(suggestions.critical);   // Dead giveaway issues
console.log(suggestions.important);  // Noticeable patterns
console.log(suggestions.guidance);   // Writing tips
console.log(suggestions.styleTips);  // Statistical style advice
console.log(suggestions.autofix.text); // Auto-fixed text

// Stats only
const { computeStats } = require('humanizer/src/stats');
const stats = computeStats(text);
console.log(stats.burstiness);       // Sentence variation
console.log(stats.typeTokenRatio);   // Vocabulary diversity

The 24 patterns

| # | Pattern | Category | Weight | Example |
|---|---------|----------|--------|---------|
| 1 | Significance inflation | Content | 4 | "marking a pivotal moment in the evolution of..." |
| 2 | Notability name-dropping | Content | 3 | "featured in NYT, BBC, CNN, and Forbes" |
| 3 | Superficial -ing analyses | Content | 4 | "...showcasing... reflecting... highlighting..." |
| 4 | Promotional language | Content | 3 | "nestled", "breathtaking", "stunning" |
| 5 | Vague attributions | Content | 4 | "Experts believe", "Studies show" |
| 6 | Formulaic challenges | Content | 3 | "Despite challenges... continues to thrive" |
| 7 | AI vocabulary | Language | 5 | "Additionally", "delve", "tapestry" (500+ words) |
| 8 | Copula avoidance | Language | 3 | "serves as" instead of "is" |
| 9 | Negative parallelisms | Language | 3 | "It's not just X, it's Y" |
| 10 | Rule of three | Language | 2 | "innovation, inspiration, and insights" |
| 11 | Synonym cycling | Language | 2 | "protagonist... main character... central figure" |
| 12 | False ranges | Language | 2 | "from the Big Bang to dark matter" |
| 13 | Em dash overuse | Style | 2 | Too many — em dashes — in one — piece |
| 14 | Boldface overuse | Style | 2 | Every other word bolded |
| 15 | Inline-header lists | Style | 3 | "- Topic: Topic is..." |
| 16 | Title Case headings | Style | 1 | "## Every Word Capitalized Here" |
| 17 | Emoji overuse | Style | 2 | 🚀💡✅ in professional text |
| 18 | Curly quotes | Style | 1 | “smart quotes” instead of "straight" |
| 19 | Chatbot artifacts | Comms | 5 | "I hope this helps!", "Let me know if..." |
| 20 | Cutoff disclaimers | Comms | 4 | "As of my last training update..." |
| 21 | Sycophantic tone | Comms | 4 | "Great question!", "You're absolutely right!" |
| 22 | Filler phrases | Filler | 3 | "In order to", "Due to the fact that" |
| 23 | Excessive hedging | Filler | 3 | "could potentially possibly" |
| 24 | Generic conclusions | Filler | 3 | "The future looks bright" |

Vocabulary tiers

  • Tier 1 (Dead giveaways): 50+ words that appear 5-20x more in AI text. Always flagged. Examples: delve, tapestry, vibrant, crucial, meticulous, seamless, groundbreaking
  • Tier 2 (Suspicious in density): 80+ words flagged when 2+ appear. Examples: furthermore, paradigm, holistic, utilize, facilitate, nuanced
  • Tier 3 (Context-dependent): 60+ words flagged only at >3% density. Examples: significant, effective, unique, compelling, exceptional
  • Phrases: 80+ multi-word patterns. Examples: "In today's digital age", "plays a crucial role", "serves as a testament"

How scoring works

  1. Pattern detection — Each of 24 detectors scans for regex matches. Matches are weighted 1-5.
  2. Density calculation — Weighted matches per 100 words, on a logarithmic curve (prevents runaway scores).
  3. Breadth bonus — More unique pattern types = higher score (up to +20).
  4. Category diversity — Hits across content/language/style/communication/filler = higher score (up to +15).
  5. Statistical uniformity — Low burstiness, low vocabulary diversity, high repetition add up to 100 uniformity points.
  6. Composite blend — Pattern score (70%) + uniformity score (30%) = final score.
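Step 6, as a sketch, is just a weighted average using the 70/30 weights stated above:

```javascript
// The 70/30 composite blend from step 6. Both inputs are 0-100.
function compositeScore(patternScore, uniformityScore) {
  return Math.round(0.7 * patternScore + 0.3 * uniformityScore);
}
```

A text full of pattern hits but with human-like statistics still scores high, just not as high as one that fails both checks.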

This transparent methodology means you can see exactly why text scored the way it did.

What makes this different

| Feature | humanizer | GPTZero | Copyleaks | ZeroGPT |
|---------|-----------|---------|-----------|---------|
| Open source | ✅ | ❌ | ❌ | ❌ |
| Transparent scoring | ✅ Fully explainable | ❌ Black box | ❌ Black box | ❌ Black box |
| Actionable suggestions | ✅ Per-pattern guidance | ❌ Score only | ❌ Score only | ❌ Score only |
| Auto-fix | ✅ Safe mechanical fixes | ❌ | ❌ | ❌ |
| Statistical analysis | ✅ Burstiness, TTR, FK | ✅ Perplexity | ✅ Stylometric | ❌ |
| No API key needed | ✅ | ❌ | ❌ | ❌ |
| Works offline | ✅ | ❌ | ❌ | ❌ |
| Zero dependencies | ✅ | N/A | N/A | N/A |

Before/after

Before (AI score: 78):

Great question! Here is an overview of AI-assisted coding. AI-assisted coding serves as an enduring testament to the transformative potential of large language models, marking a pivotal moment in the evolution of software development. In today's rapidly evolving technological landscape, these groundbreaking tools are reshaping how engineers ideate, iterate, and deliver, underscoring their vital role in modern workflows. The future looks bright. I hope this helps!

After (AI score: 4):

AI coding tools speed up boilerplate. In a 2024 Google study, developers using Codex finished simple functions 55% faster, but showed no improvement on debugging or architecture. I've used Copilot for a year. It's good at config files and test scaffolding. It's bad at knowing when it's wrong.

Always-On Mode: Bake Into Your Agent's Personality

The skill works great on-demand, but the real power is making your AI always write like a human. Here's how to bake the humanizer principles into your agent's system prompt or personality file.

For OpenClaw (SOUL.md)

Add this to your SOUL.md (or equivalent personality file):

## Writing Like a Human (Anti-AI Patterns)

Your writing should be indistinguishable from a sharp, opinionated human.
These rules are always on — not just when asked to "humanize."

### Never Use These Words
Dead giveaways of AI output — ban them from your vocabulary:

**Words:** delve, tapestry, vibrant, crucial, comprehensive, meticulous, embark,
robust, seamless, groundbreaking, leverage, synergy, transformative, paramount,
multifaceted, myriad, cornerstone, reimagine, empower, catalyst, bolster,
spearhead, invaluable, bustling, nestled, realm, showcase, foster, garner,
interplay, enduring, pivotal, intricate, harness, unleash, revolutionize,
elucidate, encompass, holistic, utilize, facilitate, nuanced, paradigm, poised

**Phrases:** "In today's [anything]", "It is worth noting", "plays a crucial role",
"serves as a testament", "in the realm of", "delve into", "harness the power of",
"embark on a journey", "the landscape of", "without further ado"

**Filler to cut:** "In order to" → "to". "Due to the fact that" → "because".
"It is important to note that" → just say it.

### Kill These Patterns
1. No significance inflation — state facts, skip "pivotal moment" claims
2. No vague attributions — name the source or drop the claim
3. No -ing tails — "highlighting the importance of..." gets its own sentence
4. No copula avoidance — use "is" and "has", not "serves as" or "boasts"
5. No sycophancy — skip "Great question!" and just answer
6. No chatbot artifacts — no "I hope this helps" or "Let me know if..."
7. No generic conclusions — end with specifics, not "the future looks bright"

### Write With Human Texture
- Vary sentence length (short, then long, then short)
- Have opinions and take stances
- Use concrete specifics (numbers, names, dates)
- Allow imperfection (contractions, fragments, "And" or "But" starters)
- Use simple verbs ("is", "has", "does" are fine)
- Read it aloud — if you wouldn't say it, don't write it

For Claude (system prompt)

Add to your system prompt or CLAUDE.md:

You write like a human, not an AI. Never use words like "delve", "tapestry",
"crucial", "comprehensive", "robust", "seamless", or "groundbreaking". Never
write "In today's...", "It is worth noting...", or "Great question!". Use "is"
not "serves as". Vary your sentence lengths. Have opinions. Use specific numbers
and names instead of vague claims. End with specifics, not "the future looks
bright". If you'd never say it in conversation, don't write it.

For ChatGPT (Custom Instructions)

Paste into your Custom Instructions → "How would you like ChatGPT to respond?":

Write like a specific human, not a generic AI. Rules:
- Never use: delve, tapestry, vibrant, crucial, robust, seamless, groundbreaking,
  transformative, leverage, synergy, paramount, multifaceted, myriad
- Never start with "In today's..." or end with "the future looks bright"
- Never write "Great question!" or "I hope this helps!"
- Use "is" not "serves as". Use "to" not "in order to"
- Vary sentence length. Short. Then longer. Have opinions.
- Use real numbers and names, not "experts say" or "studies show"

Verification

After baking in, test your agent by asking it to write about any topic. Then scan it:

echo "Your agent's response here" | node src/cli.js score

Target: consistently under 25 on the humanizer score.

Project structure

humanizer/
├── SKILL.md           # OpenClaw skill definition
├── src/
│   ├── patterns.js    # 24 pattern detectors + pattern registry
│   ├── vocabulary.js  # 500+ AI words/phrases (3 tiers)
│   ├── stats.js       # Statistical analysis engine
│   ├── analyzer.js    # Composite scoring engine
│   ├── humanizer.js   # Suggestion engine + auto-fix
│   └── cli.js         # CLI with colored output
├── tests/             # Vitest test suite (128 tests)
│   ├── analyzer.test.js
│   ├── humanizer.test.js
│   ├── statistics.test.js
│   ├── calibration.test.js
│   ├── performance.test.js
│   └── edge-cases.test.js
├── references/        # Pattern catalogs, vocabulary lists
└── docs/              # Detailed documentation

Contributing

  1. Fork and create a branch
  2. Add/improve pattern detection (see src/patterns.js)
  3. Write tests for your changes
  4. Run npm test — all tests must pass
  5. Open a PR

License

MIT

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

  • OpenClaw CLI installed and configured.
  • Language: Markdown
  • License: MIT
  • Topics: writing

FAQ

How do I install humanizer?

Run openclaw add @brandonwise/ai-humanizer in your terminal. This installs humanizer into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/brandonwise/ai-humanizer. Review commits and README documentation before installing.