
by terwox

skill-evaluator – OpenClaw Skill

skill-evaluator is an OpenClaw Skills integration for coding workflows. Evaluate Clawdbot skills for quality, reliability, and publish-readiness using a multi-framework rubric (ISO 25010, OpenSSF, Shneiderman, agent-specific heuristics). Use when asked to review, audit, evaluate, score, or assess a skill before publishing, or when checking skill quality. Runs automated structural checks and guides manual assessment across 25 criteria.

8.5k stars · 2.9k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topics: coding

Skill Snapshot

| Field | Value |
| --- | --- |
| name | skill-evaluator |
| description | Evaluate Clawdbot skills for quality, reliability, and publish-readiness using a multi-framework rubric (ISO 25010, OpenSSF, Shneiderman, agent-specific heuristics). Use when asked to review, audit, evaluate, score, or assess a skill before publishing, or when checking skill quality. Runs automated structural checks and guides manual assessment across 25 criteria. OpenClaw Skills integration. |
| owner | terwox |
| repository | terwox/skill-evaluator |
| language | Markdown |
| license | MIT |
| topics | coding |
| security | L1 |
| install | `openclaw add @terwox/skill-evaluator` |
| last updated | Feb 7, 2026 |

Maintainer

terwox

Maintains skill-evaluator in the OpenClaw Skills directory.
File Explorer (8 files)

- assets/
  - EVAL-TEMPLATE.md (1.3 KB)
- references/
  - rubric.md (11.1 KB)
- scripts/
  - eval-skill.py (23.1 KB)
- _meta.json (282 B)
- SKILL.md (3.4 KB)
SKILL.md

---
name: skill-evaluator
description: Evaluate Clawdbot skills for quality, reliability, and publish-readiness using a multi-framework rubric (ISO 25010, OpenSSF, Shneiderman, agent-specific heuristics). Use when asked to review, audit, evaluate, score, or assess a skill before publishing, or when checking skill quality. Runs automated structural checks and guides manual assessment across 25 criteria.
---

Skill Evaluator

Evaluate skills across 25 criteria using a hybrid automated + manual approach.

Quick Start

1. Run automated checks

```bash
python3 scripts/eval-skill.py /path/to/skill
python3 scripts/eval-skill.py /path/to/skill --json     # machine-readable output
python3 scripts/eval-skill.py /path/to/skill --verbose  # show all details
```

Checks: file structure, frontmatter, description quality, script syntax, dependency audit, credential scan, env var documentation.
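
The exact checks live in scripts/eval-skill.py; as a rough illustration of the shape of the frontmatter check, here is a minimal sketch using PyYAML. The function name and required-key set are assumptions for illustration, not the script's actual API:

```python
# Illustrative sketch only -- the real checks live in scripts/eval-skill.py.
from pathlib import Path

import yaml  # PyYAML, listed under Dependencies below

REQUIRED_KEYS = {"name", "description"}  # assumed minimal frontmatter keys

def check_frontmatter(skill_dir):
    """Return a list of frontmatter problems found in SKILL.md."""
    text = Path(skill_dir, "SKILL.md").read_text(encoding="utf-8")
    if not text.startswith("---"):
        return ["SKILL.md has no YAML frontmatter block"]
    parts = text.split("---", 2)  # frontmatter sits between the first two '---'
    if len(parts) < 3:
        return ["SKILL.md frontmatter is never closed with '---'"]
    data = yaml.safe_load(parts[1]) or {}
    return ["missing frontmatter key: %s" % key
            for key in sorted(REQUIRED_KEYS - set(data.keys()))]

print(check_frontmatter("/path/to/skill"))
```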

2. Manual assessment

Use the rubric at references/rubric.md to score 25 criteria across 8 categories (0–4 each, 100 total). Each criterion has concrete descriptions per score level.
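
The arithmetic is straightforward: 25 criteria at 0–4 points each gives a 0–100 total. A toy sketch of a scoring sheet (criterion names are placeholders; the real list is in references/rubric.md):

```python
# Toy scoring sheet -- names stand in for the 25 rubric criteria.
scores = {
    "Completeness": 3,
    "Correctness": 4,
    "Fault Tolerance": 2,
    # ... one entry per remaining rubric criterion, scored 0-4 ...
}
assert all(0 <= s <= 4 for s in scores.values())
total = sum(scores.values())  # with all 25 criteria filled in, the max is 100
print("%d/100" % total)
```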

3. Write the evaluation

Copy assets/EVAL-TEMPLATE.md to the skill directory as EVAL.md. Fill in automated results + manual scores.
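
That copy step, sketched in Python (paths taken from the file listing above; the destination skill path is a placeholder):

```python
# Copy the evaluation template into the skill under review as EVAL.md.
import shutil
from pathlib import Path

skill_dir = Path("/path/to/skill")  # skill being evaluated (placeholder)
shutil.copyfile("assets/EVAL-TEMPLATE.md", skill_dir / "EVAL.md")
```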

Evaluation Process

  1. Run eval-skill.py — get the automated structural score
  2. Read the skill's SKILL.md — understand what it does
  3. Read/skim the scripts — assess code quality, error handling, testability
  4. Score each manual criterion using references/rubric.md — concrete criteria per level
  5. Prioritize findings as P0 (blocks publishing) / P1 (should fix) / P2 (nice to have); see the sketch after this list
  6. Write EVAL.md in the skill directory with scores + findings
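
For step 5, it can help to keep findings as small structured records while drafting EVAL.md. A hedged sketch (the Finding record and the example findings are illustrative, not part of the skill):

```python
# Hypothetical finding record for drafting EVAL.md; not part of eval-skill.py.
from dataclasses import dataclass

@dataclass
class Finding:
    priority: str   # "P0" blocks publishing, "P1" should fix, "P2" nice to have
    criterion: str  # rubric criterion the finding falls under
    note: str       # what was observed and why it matters

findings = [
    Finding("P0", "Credentials", "hard-coded token flagged by the credential scan"),
    Finding("P2", "Learnability", "Quick Start lacks an expected-output example"),
]

# "P0" < "P1" < "P2" sorts lexicographically, so publishing blockers surface first.
for f in sorted(findings, key=lambda f: f.priority):
    print(f.priority, "[%s]" % f.criterion, "-", f.note)
```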

Categories (8 categories, 25 criteria)

| # | Category | Source Framework | Criteria |
| --- | --- | --- | --- |
| 1 | Functional Suitability | ISO 25010 | Completeness, Correctness, Appropriateness |
| 2 | Reliability | ISO 25010 | Fault Tolerance, Error Reporting, Recoverability |
| 3 | Performance / Context | ISO 25010 + Agent | Token Cost, Execution Efficiency |
| 4 | Usability — AI Agent | Shneiderman, Gerhardt-Powals | Learnability, Consistency, Feedback, Error Prevention |
| 5 | Usability — Human | Tognazzini, Norman | Discoverability, Forgiveness |
| 6 | Security | ISO 25010 + OpenSSF | Credentials, Input Validation, Data Safety |
| 7 | Maintainability | ISO 25010 | Modularity, Modifiability, Testability |
| 8 | Agent-Specific | Novel | Trigger Precision, Progressive Disclosure, Composability, Idempotency, Escape Hatches |

Interpreting Scores

| Range | Verdict | Action |
| --- | --- | --- |
| 90–100 | Excellent | Publish confidently |
| 80–89 | Good | Publishable, note known issues |
| 70–79 | Acceptable | Fix P0s before publishing |
| 60–69 | Needs Work | Fix P0 + P1 before publishing |
| <60 | Not Ready | Significant rework needed |
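
The mapping is mechanical enough to automate when post-processing --json output; a minimal sketch (this helper is an assumption for illustration, not something eval-skill.py provides):

```python
# Hypothetical helper mirroring the verdict table above.
def verdict(score):
    if score >= 90:
        return "Excellent -- publish confidently"
    if score >= 80:
        return "Good -- publishable, note known issues"
    if score >= 70:
        return "Acceptable -- fix P0s before publishing"
    if score >= 60:
        return "Needs Work -- fix P0 + P1 before publishing"
    return "Not Ready -- significant rework needed"

assert verdict(84).startswith("Good")
```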

Deeper Security Scanning

This evaluator covers the security basics (credentials, input validation, data safety). For a thorough security audit of a skill under development, consider SkillLens (`npx skilllens scan <path>`): it checks for exfiltration, code execution, persistence, privilege bypass, and prompt injection, complementing the quality focus here.

Dependencies

  • Python 3.6+ (for eval-skill.py)
  • PyYAML (pip install pyyaml) — for frontmatter parsing in automated checks
README.md

No README available.

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

- Python 3.6+ (for eval-skill.py)
- PyYAML (`pip install pyyaml`) — for frontmatter parsing in automated checks

FAQ

How do I install skill-evaluator?

Run `openclaw add @terwox/skill-evaluator` in your terminal. This installs skill-evaluator into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/terwox/skill-evaluator. Review commits and README documentation before installing.