
by dgriffin831

skill-scan – OpenClaw Skill

skill-scan is an OpenClaw Skills integration for coding workflows: a security scanner for OpenClaw skill packages. It scans skills for malicious code, evasion techniques, prompt injection, and misaligned behavior BEFORE installation. Use it to audit any skill from ClawHub or local directories.

2.6k stars · 7.8k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · coding

Skill Snapshot

name: skill-scan
description: Security scanner for OpenClaw skill packages. Scans skills for malicious code, evasion techniques, prompt injection, and misaligned behavior BEFORE installation. Use to audit any skill from ClawHub or local directories. OpenClaw Skills integration.
owner: dgriffin831
repository: dgriffin831/skill-scan
language: Markdown
license: MIT
topics: security, L1
install: openclaw add @dgriffin831/skill-scan
last updated: Feb 7, 2026

Maintainer

dgriffin831

Maintains skill-scan in the OpenClaw Skills directory.
File Explorer
138 files
.
evals
eval_runner.py
5.1 KB
rules
dangerous-patterns.json
5.9 KB
skill_scan
__init__.py
1.7 KB
alignment_analyzer.py
9.5 KB
ast_analyzer.py
27.0 KB
clawhub.py
2.4 KB
cli.py
9.6 KB
llm_analyzer.py
11.4 KB
llm_prompts.py
6.6 KB
meta_analyzer.py
14.1 KB
models.py
2.5 KB
prompt_analyzer.py
23.8 KB
reporter.py
11.7 KB
scanner.py
51.0 KB
test-fixtures
backdoor-magic-string
_expected.json
356 B
process.py
1.8 KB
SKILL.md
241 B
behavioral-multi-file-exfil
_expected.json
376 B
analyze.py
2.4 KB
collector.py
2.2 KB
encoder.py
1.6 KB
reporter.py
2.0 KB
SKILL.md
483 B
clean-skill
_expected.json
177 B
SKILL.md
332 B
weather.js
501 B
command-injection-eval
_expected.json
289 B
calculate.py
1.1 KB
SKILL.md
255 B
data-exfil-env-secrets
_expected.json
346 B
get_info.py
2.2 KB
SKILL.md
316 B
evasive-01-string-concat
_expected.json
220 B
index.js
693 B
SKILL.md
152 B
evasive-02-encoded
_expected.json
228 B
index.js
829 B
SKILL.md
117 B
evasive-03-prompt-subtle
_expected.json
228 B
SKILL.md
1.2 KB
evasive-04-timebomb
_expected.json
236 B
scheduler.js
1.1 KB
SKILL.md
130 B
evasive-05-alias-chain
_expected.json
237 B
SKILL.md
112 B
tools.js
1.0 KB
evasive-06-unicode-injection
_expected.json
233 B
SKILL.md
1.0 KB
evasive-07-sandbox-detect
_expected.json
223 B
check.js
1.6 KB
SKILL.md
123 B
evasive-08-reverse-shell
_expected.json
226 B
debug.sh
664 B
SKILL.md
118 B
evasive-09-python-pickle
_expected.json
221 B
cache.py
838 B
SKILL.md
139 B
evasive-10-roleplay
_expected.json
223 B
SKILL.md
1.0 KB
evasive-11-polyglot-json
_expected.json
246 B
config-template.json
498 B
SKILL.md
148 B
evasive-12-multi-stage
plugins
init.js
935 B
_expected.json
219 B
formatter.js
488 B
SKILL.md
125 B
legit-api-skill
_expected.json
181 B
github.js
2.0 KB
SKILL.md
473 B
malicious-skill
_expected.json
233 B
helper.js
745 B
SKILL.md
283 B
obfuscation-base64
_expected.json
324 B
process.py
1.3 KB
SKILL.md
215 B
path-traversal-reader
_expected.json
328 B
read.py
1.2 KB
SKILL.md
240 B
prompt-injection-jailbreak
_expected.json
321 B
SKILL.md
764 B
resource-exhaustion-loop
_expected.json
315 B
analyze.py
1.2 KB
SKILL.md
195 B
safe-file-validator
_expected.json
279 B
SKILL.md
337 B
validate.py
1.8 KB
safe-simple-math
_expected.json
253 B
math_ops.py
1.3 KB
SKILL.md
523 B
sql-injection-query
_expected.json
315 B
query.py
1.3 KB
SKILL.md
271 B
tests
__init__.py
conftest.py
658 B
test_alignment_analyzer.py
10.9 KB
test_ast_analyzer.py
3.8 KB
test_llm_analyzer.py
3.0 KB
test_meta_analyzer.py
14.9 KB
test_prompt_analyzer.py
4.0 KB
test_scanner.py
5.5 KB
_meta.json
277 B
CHANGELOG.md
1.5 KB
pyproject.toml
656 B
README.md
6.5 KB
SKILL.md
7.2 KB
TESTING.md
8.5 KB
SKILL.md

name: skill-scan
description: Security scanner for OpenClaw skill packages. Scans skills for malicious code, evasion techniques, prompt injection, and misaligned behavior BEFORE installation. Use to audit any skill from ClawHub or local directories.

Skill-Scan — Security Auditor for Agent Skills

Multi-layered security scanner for OpenClaw skill packages. Detects malicious code, evasion techniques, prompt injection, and misaligned behavior through static analysis and optional LLM-powered deep inspection. Run this BEFORE installing or enabling any untrusted skill.

Features

  • 6 analysis layers — pattern matching, AST/evasion, prompt injection, LLM deep analysis, alignment verification, meta-analysis
  • 60+ detection rules — execution threats, credential theft, data exfiltration, obfuscation, behavioral signatures
  • Context-aware scoring — reduces false positives for legitimate API skills
  • ClawHub integration — scan skills directly from the registry by slug
  • Multiple output modes — text report (default), --json, --compact, --quiet
  • Exit codes — 0 for safe, 1 for risky (easy scripting integration)

When to Use

MANDATORY before installing or enabling:

  • Skills from ClawHub (any skill not authored by you)
  • Skills shared by other users or teams
  • Skills from public repositories
  • Any skill package you haven't personally reviewed

RECOMMENDED for periodic audits of already-installed skills.

Quick Start

# Scan a local skill directory
skill-scan scan /path/to/skill

# Scan a skill from ClawHub before installing it
skill-scan scan-hub some-skill-slug

# Batch scan all installed skills
skill-scan batch /path/to/skills-directory

# JSON output for programmatic use
skill-scan scan-hub some-skill-slug --json

# Quiet mode (just score + verdict)
skill-scan scan-hub some-skill-slug --quiet

Risk Scoring

| Risk | Score | Action |
|------|-------|--------|
| LOW | 80-100 | Safe to install |
| MEDIUM | 50-79 | Review findings before installing |
| HIGH | 20-49 | Do NOT install — serious threats detected |
| CRITICAL | 0-19 | Do NOT install — multiple critical threats |
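For scripting, the score bands above collapse into one mapping. A minimal sketch; `level_for` is a hypothetical helper name, not part of skill-scan:

```shell
# Sketch of the score-to-risk-level bands from the table above.
# level_for is a hypothetical helper, not part of skill-scan itself.
level_for() {
  if   [ "$1" -ge 80 ]; then echo "LOW"
  elif [ "$1" -ge 50 ]; then echo "MEDIUM"
  elif [ "$1" -ge 20 ]; then echo "HIGH"
  else                       echo "CRITICAL"
  fi
}

level_for 85   # LOW
level_for 42   # HIGH
```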

Exit Codes

  • 0 — LOW risk (safe to proceed)
  • 1 — MEDIUM or higher (block installation)

LLM Options

skill-scan scan-hub slug --llm        # Always run deep analysis (layers 4-5)
skill-scan scan-hub slug --llm-auto   # LLM only if pattern scan finds MEDIUM+
skill-scan scan-hub slug --llm-only   # Skip patterns, LLM analysis only

Provider auto-detected from environment:

  • OPENAI_API_KEY -> gpt-4o-mini
  • ANTHROPIC_API_KEY -> claude-sonnet-4-5

For the Agent

When the user asks to install a skill from ClawHub, follow this workflow:

  1. Scan first — run skill-scan scan-hub <slug> BEFORE installing
  2. Check the result:
    • If LOW risk (exit code 0): proceed with installation
    • If MEDIUM risk: show findings to the user and ask whether to proceed
    • If HIGH or CRITICAL risk: block installation and show the report
  3. Report — always show the user the scan summary (score, risk level, finding count)
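The three steps above can be sketched as a shell gate. The slug and install step are placeholders; note that the sketch fails closed, so a missing or erroring scanner also blocks the install:

```shell
# Sketch of the scan-first workflow. The slug and the install step are
# placeholders. Fails closed: if skill-scan is absent or errors out,
# installation is blocked.
slug="some-skill-slug"

if skill-scan scan-hub "$slug"; then
  verdict="install"
  echo "LOW risk: ok to proceed with $slug"
  # openclaw add "@owner/$slug"   # placeholder install step
else
  verdict="blocked"
  echo "blocked: review the scan report for $slug" >&2
fi
```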

Alert Format

When a scan finds MEDIUM or higher risk:

🛡️ Skill-Scan Alert: {RISK_LEVEL} ({SCORE}/100)
Skill: {slug}
Findings: {count} issue(s) detected
Top threat: {highest severity finding title}
Action: Installation blocked pending review.

To integrate skill-scan into your agent's workflow, add ONE of the following templates to your AGENTS.md (or equivalent agent instructions file).

### Option A: Automatic Scanning (Recommended)

All ClawHub skill installs are automatically scanned before installation. No manual step needed.

````markdown
## Skill-Scan — Automatic Pre-Install Security Scanning

BEFORE installing any skill from ClawHub, you MUST scan it with skill-scan first. This is non-negotiable — never skip the scan.

### Workflow

1. When the user asks to install a skill (e.g. `clawhub install <slug>`), run the scan FIRST:
   ```bash
   skill-scan scan-hub <slug>
   ```
2. Check the exit code and output:
   - Exit code 0 (LOW risk) — proceed with `clawhub install <slug>`
   - Exit code 1 (MEDIUM risk) — show findings to the user, ask whether to proceed
   - Exit code 1 (HIGH/CRITICAL risk) — BLOCK installation, show the full report
3. Never install a HIGH or CRITICAL risk skill without explicit user override
4. Always show the scan summary to the user before proceeding with installation

### Periodic Audits

When asked to audit installed skills, batch scan the skills directory:

```bash
skill-scan batch {skillsDir}
```

### LLM Deep Analysis

For high-stakes installs or when the pattern scan returns MEDIUM, escalate to LLM analysis:

```bash
skill-scan scan-hub <slug> --llm-auto
```
````

### Option B: Manual On-Demand Scanning

Skills are only scanned when the user explicitly asks. Installation proceeds without automatic checks.

```markdown
## Skill-Scan — On-Demand Skill Security Scanning

skill-scan is available to audit skill packages for security threats. Use it when the user requests a security scan.

### Commands

- **Scan a ClawHub skill**: `skill-scan scan-hub <slug>`
- **Scan a local skill**: `skill-scan scan /path/to/skill`
- **Batch scan installed skills**: `skill-scan batch {skillsDir}`
- **Deep analysis with LLM**: add `--llm` or `--llm-auto` to any scan command

### Risk Levels

- **LOW (80-100)** — safe, no significant threats
- **MEDIUM (50-79)** — review findings, use caution
- **HIGH (20-49)** — serious threats, do not install
- **CRITICAL (0-19)** — multiple critical threats, do not install

### When the User Asks to Scan

1. Run the appropriate scan command
2. Show the scan summary (score, risk level, finding count)
3. If MEDIUM+, list the top findings with severity and description
4. Let the user decide what to do
```

Install-Time Setup

When this skill is first installed, ask the user:

How should skill-scan integrate with your workflow?

  1. Automatic (Recommended) — Every clawhub install is scanned first. HIGH/CRITICAL installs are blocked automatically.
  2. Manual — Scanning only happens when you explicitly ask for it.

Based on their choice, add the corresponding template (Option A or Option B above) to the project's AGENTS.md.

Detection Categories

Execution threats — eval(), exec(), child_process, dynamic imports

Credential theft — .env access, API keys, tokens, private keys, wallet files

Data exfiltration — fetch(), axios, requests, sockets, webhooks

Filesystem manipulation — Write/delete/rename operations

Obfuscation — Base64, hex, unicode encoding, string construction

Prompt injection — Jailbreaks, invisible characters, homoglyphs, roleplay framing, encoded instructions

Behavioral signatures — Compound patterns: data exfiltration, trojan skills, evasive malware, persistent backdoors

Requirements

  • Python 3.10+
  • httpx>=0.27 (for LLM API calls only)
  • API key only needed for --llm modes (static analysis is self-contained)

Related Skills

  • input-guard — External input scanning
  • memory-scan — Agent memory security
  • guardrails — Security policy configuration
README.md

Skill-Scan - OpenClaw Skill Security Auditor

Multi-layered security scanner for OpenClaw agent skill packages. Detects malicious code, evasion techniques, prompt injection, and misaligned behavior through static analysis and optional LLM-powered deep inspection.

Prerequisites

  • Python 3.10+ — check with python3 --version
  • pip — check with pip3 --version or python3 -m pip --version

If pip is not installed:

# Option 1: System package manager (requires sudo)
sudo apt-get install python3-pip        # Debian/Ubuntu
brew install python3                     # macOS (includes pip)

# Option 2: Bootstrap pip without sudo
python3 -m ensurepip --upgrade

Quick Start

pip install -e .
skill-scan scan /path/to/skill

Alerting (OpenClaw)

Send alert on MEDIUM+ risk using configured OpenClaw channel:

OPENCLAW_ALERT_CHANNEL=slack skill-scan scan /path/to/skill --alert

Optional target for channels that require a recipient:

OPENCLAW_ALERT_CHANNEL=slack OPENCLAW_ALERT_TO=@security skill-scan scan /path/to/skill --alert

Alert only on HIGH/CRITICAL:

OPENCLAW_ALERT_CHANNEL=slack skill-scan scan /path/to/skill --alert --alert-threshold HIGH

Scan from ClawHub

skill-scan scan-hub some-skill-slug

Check Arbitrary Text

skill-scan check "some suspicious text"

Batch Scan

skill-scan batch /path/to/skills-directory
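As an alternative to `batch`, a per-skill loop gives one verdict line per skill. A minimal sketch; `SKILLS_DIR` is a placeholder path, not a path skill-scan defines:

```shell
# Sketch: audit each installed skill individually, one verdict per line.
# SKILLS_DIR is a placeholder; point it at your skills directory.
SKILLS_DIR="${SKILLS_DIR:-./skills}"
risky=0
for dir in "$SKILLS_DIR"/*/; do
  [ -d "$dir" ] || continue               # skip if no skills found
  if skill-scan scan "$dir" --quiet; then
    echo "OK:   $dir"
  else
    echo "RISK: $dir"
    risky=$((risky + 1))
  fi
done
echo "risky skills: $risky"
```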

Analysis Layers

| Layer | Module | Purpose | When |
|-------|--------|---------|------|
| 1 | Pattern matching | Fast regex-based detection | Always |
| 2 | AST/evasion analysis | Catches obfuscation tricks | Always |
| 3 | Prompt injection | Detects social engineering in SKILL.md | Always |
| 4 | LLM deep analysis | Semantic threat understanding | `--llm` |
| 5a | Alignment verification | Code vs description matching | `--llm` |
| 5b | Meta-analysis | Finding review and correlation | `--llm` |

Risk Scoring

  • LOW (80-100) - Safe, no significant threats
  • MEDIUM (50-79) - Moderate risk, review needed
  • HIGH (20-49) - Serious threats detected
  • CRITICAL (0-19) - Multiple critical threats, do not use

Detection Categories

Execution threats - eval(), exec(), child_process, dynamic imports

Credential theft - .env access, API keys, tokens, private keys, wallet files

Data exfiltration - fetch(), axios, requests, sockets, webhooks

Filesystem manipulation - Write/delete/rename operations

Obfuscation - Base64, hex, unicode encoding, string construction

Prompt injection - Jailbreaks, invisible characters, homoglyphs, roleplay framing, encoded instructions

Behavioral signatures - Compound patterns: data exfiltration, trojan skills, evasive malware, persistent backdoors

Output Modes

skill-scan scan path/            # Formatted text report (default)
skill-scan scan path/ --json     # Raw JSON
skill-scan scan path/ --compact  # Single-line summary
skill-scan scan path/ --quiet    # Score + verdict only

LLM Options

skill-scan scan path/ --llm        # Always run layers 4-5
skill-scan scan path/ --llm-only   # Skip pattern analysis, LLM only
skill-scan scan path/ --llm-auto   # LLM only if pattern analysis finds MEDIUM+

Provider auto-detected from environment:

  • OPENAI_API_KEY -> gpt-4o-mini
  • ANTHROPIC_API_KEY -> claude-sonnet-4-5

Environment Variables

Create a .env file in the repository root with any needed keys:

| Variable | Required For | Description |
|----------|--------------|-------------|
| OPENAI_API_KEY | LLM scanning | OpenAI API key (uses gpt-4o-mini) |
| ANTHROPIC_API_KEY | LLM scanning | Anthropic API key (alternative to OpenAI) |
| PROMPTINTEL_API_KEY | MoltThreats integration | PromptIntel API key |
| OPENCLAW_ALERT_CHANNEL | Alerts | OpenClaw channel name for alerts |
| OPENCLAW_ALERT_TO | Alerts | Optional recipient/target for channels that require one |
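A minimal `.env` sketch using the variables above; the values are placeholders, and none of them are needed for static analysis:

```shell
# .env sketch (placeholder values; at most one LLM provider key needed)
OPENAI_API_KEY=sk-your-key-here
# ANTHROPIC_API_KEY=your-anthropic-key
OPENCLAW_ALERT_CHANNEL=slack
OPENCLAW_ALERT_TO=@security
```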

Static analysis requires no keys — it works out of the box.

Files

skill-scan/
├── pyproject.toml                  # Package metadata (v0.3.0)
├── TESTING.md                      # Eval approach and results
├── rules/
│   └── dangerous-patterns.json     # 60+ regex detection rules
├── skill_scan/
│   ├── cli.py                      # CLI entry point
│   ├── scanner.py                  # Core scanning engine
│   ├── models.py                   # Data classes for findings
│   ├── reporter.py                 # Report formatting
│   ├── ast_analyzer.py             # Layer 2: JS/TS evasion detection
│   ├── prompt_analyzer.py          # Layer 3: Prompt injection detection
│   ├── llm_analyzer.py             # Layer 4: LLM deep analysis
│   ├── alignment_analyzer.py       # Layer 5a: Code vs description matching
│   ├── meta_analyzer.py            # Layer 5b: Meta-analysis
│   └── clawhub.py                  # ClawHub registry integration
├── tests/                          # Unit tests
├── evals/                          # Evaluation framework
└── test-fixtures/                  # 26 test cases (safe + malicious)

Requirements

  • Python 3.10+
  • httpx>=0.27 (for LLM API calls)
  • API key only needed for --llm modes (static analysis is self-contained)
Testing

python3 -m pytest tests/ -v
python3 evals/eval_runner.py
python3 evals/eval_runner.py --llm       # With LLM layers

Static analysis results: 100% precision, 86% recall across 26 fixtures.

Exit Codes

  • 0 - LOW risk
  • 1 - MEDIUM risk
  • 2 - HIGH risk
  • 3 - CRITICAL risk
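In scripts, these codes can drive the decision directly. A sketch; `verdict_for` is a hypothetical helper, not part of skill-scan:

```shell
# Sketch: map the documented exit codes to actions.
# verdict_for is a hypothetical helper, not part of skill-scan itself.
verdict_for() {
  case "$1" in
    0) echo "LOW: safe to proceed" ;;
    1) echo "MEDIUM: review findings first" ;;
    2) echo "HIGH: do not install" ;;
    3) echo "CRITICAL: do not install" ;;
    *) echo "scanner error: fail closed" ;;
  esac
}

# Real usage: skill-scan scan /path/to/skill --quiet; verdict_for $?
verdict_for 2   # HIGH: do not install
```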

Uninstalling

1. Remove the AGENTS.md section

During installation, one of two sections was added to your workspace AGENTS.md:

  • ## Skill-Scan — Automatic Pre-Install Security Scanning (Option A), or
  • ## Skill-Scan — On-Demand Skill Security Scanning (Option B)

Delete whichever section was added.

2. Uninstall the Python package

pip uninstall skill-scan

3. Remove the skill directory

rm -rf skills/skill-scan

4. Clean up environment variables

Remove from your .env (if no other skill uses them):

  • OPENAI_API_KEY
  • ANTHROPIC_API_KEY
  • PROMPTINTEL_API_KEY
  • OPENCLAW_ALERT_CHANNEL
  • OPENCLAW_ALERT_TO

skill-scan does not create any files in the workspace outside its own directory.

Related Skills

  • input-guard - External input scanning
  • memory-scan - Agent memory security
  • guardrails - Security policy configuration

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

- Python 3.10+
- `httpx>=0.27` (for LLM API calls only)
- API key only needed for `--llm` modes (static analysis is self-contained)

FAQ

How do I install skill-scan?

Run openclaw add @dgriffin831/skill-scan in your terminal. This installs skill-scan into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/dgriffin831/skill-scan. Review commits and README documentation before installing.