
by stevengonsalvez

reflect – OpenClaw Skill

reflect is an OpenClaw Skills integration for coding workflows that enables agent self-improvement through conversation analysis. It extracts learnings from user corrections and successful patterns, then permanently encodes them into agent definitions. Philosophy: correct once, never again.

831 stars · 1.3k forks · Security: L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topic: coding

Skill Snapshot

| Field | Value |
|---|---|
| name | reflect |
| description | Self-improvement through conversation analysis. Extracts learnings from corrections and success patterns, permanently encoding them into agent definitions. Philosophy - Correct once, never again. OpenClaw Skills integration. |
| owner | stevengonsalvez |
| repository | stevengonsalvez/agent-reflect |
| language | Markdown |
| license | MIT |
| topics | coding |
| security | L1 |
| install | `openclaw add @stevengonsalvez/agent-reflect` |
| last updated | Feb 7, 2026 |

Maintainer

stevengonsalvez


Maintains reflect in the OpenClaw Skills directory.

File Explorer
11 files

  • logs
  • chat.json — 4.7 MB
  • stop.json — 1.2 KB
  • subagent_stop.json — 1.2 KB
  • user_prompt_submit.json — 1.4 KB
  • _meta.json — 281 B
  • agent_mappings.md — 7.3 KB
  • README.md — 2.5 KB
  • signal_patterns.md — 7.8 KB
  • skill.json — 1.4 KB
  • SKILL.md — 6.4 KB
SKILL.md

```yaml
name: reflect
description: >-
  Self-improvement through conversation analysis. Extracts learnings from
  corrections and success patterns, permanently encoding them into agent
  definitions. Philosophy - Correct once, never again.
version: "2.0.0"
user-invocable: true
triggers:
  - reflect
  - self-reflect
  - review session
  - what did I learn
  - extract learnings
  - analyze corrections
allowed-tools:
  - Read
  - Write
  - Edit
  - Grep
  - Glob
  - Bash
metadata:
  clawdbot:
    emoji: "🪞"
    config:
      stateDirs: ["~/.reflect"]
```

Reflect - Agent Self-Improvement Skill

Transform your AI assistant into a continuously improving partner. Every correction becomes a permanent improvement that persists across all future sessions.

Quick Reference

| Command | Action |
|---|---|
| `reflect` | Analyze conversation for learnings |
| `reflect on` | Enable auto-reflection |
| `reflect off` | Disable auto-reflection |
| `reflect status` | Show state and metrics |
| `reflect review` | Review pending learnings |

When to Use

  • After completing complex tasks
  • When user explicitly corrects behavior ("never do X", "always Y")
  • At session boundaries or before context compaction
  • When successful patterns are worth preserving

Workflow

Step 1: Scan Conversation for Signals

Analyze the conversation for correction signals and learning opportunities.

Signal Confidence Levels:

| Confidence | Triggers | Examples |
|---|---|---|
| HIGH | Explicit corrections | "never", "always", "wrong", "stop", "the rule is" |
| MEDIUM | Approved approaches | "perfect", "exactly", "that's right", accepted output |
| LOW | Observations | Patterns that worked but were not explicitly validated |

See signal_patterns.md for full detection rules.
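The scan step above can be sketched as a small classifier. This is an illustrative sketch only: the phrase lists and function name are hypothetical and do not reproduce the skill's actual signal_patterns.md rules.

```python
import re

# Hypothetical trigger phrases, loosely mirroring the confidence table above.
HIGH_PATTERNS = [r"\bnever\b", r"\balways\b", r"\bwrong\b", r"\bstop\b", r"\bthe rule is\b"]
MEDIUM_PATTERNS = [r"\bperfect\b", r"\bexactly\b", r"\bthat'?s right\b"]

def classify_signal(message: str) -> str:
    """Return a HIGH/MEDIUM/LOW confidence level for a user message."""
    text = message.lower()
    if any(re.search(p, text) for p in HIGH_PATTERNS):
        return "HIGH"
    if any(re.search(p, text) for p in MEDIUM_PATTERNS):
        return "MEDIUM"
    # Anything else is at most an unvalidated observation.
    return "LOW"

print(classify_signal("Never use var in TypeScript"))    # HIGH
print(classify_signal("Perfect, exactly what I wanted"))  # MEDIUM
```

A real scanner would also weigh context (e.g. whether the agent's output was accepted) rather than matching phrases alone.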

Step 2: Classify & Match to Target Files

Map each signal to the appropriate target:

| Category | Target Files |
|---|---|
| Code Style | code-reviewer, backend-developer, frontend-developer |
| Architecture | solution-architect, api-architect, architecture-reviewer |
| Process | CLAUDE.md, orchestrator agents |
| Domain | Domain-specific agents, CLAUDE.md |
| Tools | CLAUDE.md, relevant specialists |
| New Skill | Create new skill file |

See agent_mappings.md for mapping rules.
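The category-to-target routing can be pictured as a lookup table. This is a hedged paraphrase of the table above; the dictionary keys, the fallback behavior, and `targets_for` are assumptions, not the real agent_mappings.md logic.

```python
# Illustrative category -> target-file mapping, paraphrasing the table above.
CATEGORY_TARGETS = {
    "code_style": ["code-reviewer", "backend-developer", "frontend-developer"],
    "architecture": ["solution-architect", "api-architect", "architecture-reviewer"],
    "process": ["CLAUDE.md"],
    "domain": ["CLAUDE.md"],
    "tools": ["CLAUDE.md"],
}

def targets_for(category: str) -> list[str]:
    # Unknown categories fall back to the project-wide CLAUDE.md (an assumption).
    return CATEGORY_TARGETS.get(category, ["CLAUDE.md"])

print(targets_for("code_style"))  # ['code-reviewer', 'backend-developer', 'frontend-developer']
```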

Step 3: Check for Skill-Worthy Signals

Some learnings should become new skills rather than agent updates:

Skill-Worthy Criteria:

  • Non-obvious debugging (>10 min investigation)
  • Misleading error (root cause different from message)
  • Workaround discovered through experimentation
  • Configuration insight (differs from documented)
  • Reusable pattern (helps in similar situations)

Quality Gates (must pass all):

  • Reusable: Will help with future tasks
  • Non-trivial: Requires discovery, not just docs
  • Specific: Can describe exact trigger conditions
  • Verified: Solution actually worked
  • No duplication: Doesn't exist already
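The "must pass all" rule can be expressed as a simple conjunction over the five gates. The `Learning` fields and `passes_quality_gates` below are hypothetical names for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Learning:
    reusable: bool      # will help with future tasks
    non_trivial: bool   # required discovery, not just docs
    specific: bool      # exact trigger conditions can be described
    verified: bool      # the solution actually worked
    duplicate: bool     # already exists elsewhere

def passes_quality_gates(l: Learning) -> bool:
    """A learning becomes a new skill only if every gate passes."""
    return all([l.reusable, l.non_trivial, l.specific, l.verified, not l.duplicate])

hydration_fix = Learning(reusable=True, non_trivial=True, specific=True,
                         verified=True, duplicate=False)
print(passes_quality_gates(hydration_fix))  # True
```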

Step 4: Generate Proposals

Present findings in structured format:

````markdown
# Reflection Analysis

## Session Context
- **Date**: [timestamp]
- **Messages Analyzed**: [count]

## Signals Detected

| # | Signal | Confidence | Source Quote | Category |
|---|--------|------------|--------------|----------|
| 1 | [learning] | HIGH | "[exact words]" | Code Style |

## Proposed Changes

### Change 1: Update [agent-name]
**Target**: `[file path]`
**Section**: [section name]
**Confidence**: HIGH

```diff
+ New rule from learning
```

## Review Prompt

Apply these changes? (Y/N/modify/1,2,3)
````

Step 5: Apply with User Approval

On `Y` (approve):
  1. Apply each change using the Edit tool
  2. Commit with a descriptive message
  3. Update metrics

On `N` (reject):
  1. Discard the proposed changes
  2. Log the rejection for analysis

On `modify`:
  1. Present each change individually
  2. Allow editing before applying

On selective approval (e.g., `1,3`):
  1. Apply only the specified changes
  2. Commit the partial update
State Management

State is stored in `~/.reflect/` (configurable via `REFLECT_STATE_DIR`):

```yaml
# reflect-state.yaml
auto_reflect: false
last_reflection: "2026-01-26T10:30:00Z"
pending_reviews: []
```
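Toggling a flag like `auto_reflect` amounts to resolving the state directory and rewriting one line of the state file. This is a minimal stdlib sketch under that assumption, treating the file as flat `key: value` lines; a real implementation would use a YAML parser, and `set_auto_reflect` is a hypothetical helper name.

```python
import os
import tempfile
from pathlib import Path

def state_dir() -> Path:
    # REFLECT_STATE_DIR overrides the default ~/.reflect location.
    return Path(os.environ.get("REFLECT_STATE_DIR", "~/.reflect")).expanduser()

def set_auto_reflect(enabled: bool) -> None:
    path = state_dir() / "reflect-state.yaml"
    lines = path.read_text().splitlines() if path.exists() else ["auto_reflect: false"]
    updated = [f"auto_reflect: {str(enabled).lower()}"
               if line.startswith("auto_reflect:") else line
               for line in lines]
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("\n".join(updated) + "\n")

# Demo against a throwaway directory so we don't touch a real ~/.reflect.
os.environ["REFLECT_STATE_DIR"] = tempfile.mkdtemp()
set_auto_reflect(True)
print((state_dir() / "reflect-state.yaml").read_text())  # auto_reflect: true
```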

Metrics Tracking

```yaml
# reflect-metrics.yaml
total_sessions_analyzed: 42
total_signals_detected: 156
total_changes_accepted: 89
acceptance_rate: 78%
confidence_breakdown:
  high: 45
  medium: 32
  low: 12
most_updated_agents:
  code-reviewer: 23
  backend-developer: 18
skills_created: 5
```

Safety Guardrails

Human-in-the-Loop

  • NEVER apply changes without explicit user approval
  • Always show full diff before applying
  • Allow selective application

Incremental Updates

  • ONLY add to existing sections
  • NEVER delete or rewrite existing rules
  • Preserve original structure

Conflict Detection

  • Check if proposed rule contradicts existing
  • Warn user if conflict detected
  • Suggest resolution strategy
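One way to picture the conflict check is a toy heuristic: flag a proposed rule that shares subject matter with an existing rule but flips a never/always directive. Everything below (the stopword list, `_polarity`, `conflicts`) is an illustrative assumption, not the skill's actual detection logic.

```python
STOPWORDS = {"never", "always", "use", "the", "a", "in", "for", "to"}

def _polarity(rule: str):
    r = rule.lower()
    if "never" in r:
        return "forbid"
    if "always" in r:
        return "require"
    return None

def conflicts(existing: str, proposed: str) -> bool:
    """Warn when two rules share content words but give opposite directives."""
    shared = (set(existing.lower().split()) & set(proposed.lower().split())) - STOPWORDS
    pe, pp = _polarity(existing), _polarity(proposed)
    return bool(shared) and pe is not None and pp is not None and pe != pp

print(conflicts("Always use tabs for indentation",
                "Never use tabs for indentation"))   # True
print(conflicts("Always run tests before committing",
                "Never use var in TypeScript"))      # False
```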

Output Locations

Project-level (versioned with repo):

  • .claude/reflections/YYYY-MM-DD_HH-MM-SS.md - Full reflection
  • .claude/skills/{name}/SKILL.md - New skills

Global (user-level):

  • ~/.reflect/learnings.yaml - Learning log
  • ~/.reflect/reflect-metrics.yaml - Aggregate metrics

Examples

Example 1: Code Style Correction

User says: "Never use var in TypeScript, always use const or let"

Signal detected:

  • Confidence: HIGH (explicit "never" + "always")
  • Category: Code Style
  • Target: frontend-developer.md

Proposed change:

```diff
  ## Style Guidelines
+ * Use `const` or `let` instead of `var` in TypeScript
```

Example 2: Process Preference

User says: "Always run tests before committing"

Signal detected:

  • Confidence: HIGH (explicit "always")
  • Category: Process
  • Target: CLAUDE.md

Proposed change:

```diff
  ## Commit Hygiene
+ * Run the test suite before creating commits
```

Example 3: New Skill from Debugging

Context: Spent 30 minutes debugging a React hydration mismatch

Signal detected:

  • Confidence: HIGH (non-trivial debugging)
  • Category: New Skill
  • Quality gates: All passed

Proposed skill: react-hydration-fix/SKILL.md

Troubleshooting

No signals detected:

  • Session may not have had corrections
  • Check if using natural language corrections

Conflict warning:

  • Review the existing rule cited
  • Decide if new rule should override
  • Can modify before applying

Agent file not found:

  • Check agent name spelling
  • May need to create agent file first

README.md

Reflect - Agent Self-Improvement Skill

"Correct once, never again."

Transform your AI assistant into a continuously improving partner. The reflect skill analyzes conversations for corrections and successful patterns, permanently encoding learnings into agent definitions.

Features

  • Signal Detection: Automatically identifies corrections with confidence levels (HIGH/MEDIUM/LOW)
  • Category Classification: Routes learnings to appropriate agent files (Code Style, Architecture, Process, Domain, Tools)
  • Skill Generation: Creates new skills from non-trivial debugging discoveries
  • Metrics Tracking: Quantifies improvement with acceptance rates and statistics
  • Human-in-the-Loop: All changes require explicit approval
  • Git Integration: Full version control with easy rollback

Installation

Via ClawdHub CLI

clawdhub install reflect

Manual Installation

Copy the reflect/ folder to your skills directory:

  • Claude Code: ~/.claude/skills/reflect/
  • Clawdbot: ~/.clawdbot/skills/reflect/

Usage

Basic Reflection

Just say "reflect" or "review session" to trigger analysis:

User: reflect
Agent: [Analyzes conversation, presents learnings for approval]

Toggle Auto-Reflection

User: reflect on
Agent: Auto-reflection enabled. Will analyze before context compaction.

User: reflect off
Agent: Auto-reflection disabled.

Check Status

User: reflect status
Agent:
  Sessions analyzed: 42
  Signals detected: 156
  Changes accepted: 89 (78%)
  Skills created: 5

Review Pending

User: reflect review
Agent: [Shows low-confidence learnings awaiting validation]

How It Works

  1. Scan: Analyzes conversation for correction signals
  2. Classify: Maps signals to categories and target files
  3. Propose: Generates diffs for agent updates or new skills
  4. Review: Presents changes for user approval
  5. Apply: Commits approved changes with descriptive messages

Signal Detection

| Confidence | Triggers | Examples |
|---|---|---|
| HIGH | Explicit corrections | "never", "always", "wrong", "stop" |
| MEDIUM | Approved approaches | "perfect", "exactly", "that's right" |
| LOW | Observations | Patterns that worked, not validated |

Configuration

Set custom state directory:

export REFLECT_STATE_DIR=/path/to/state

Default locations:

  • ~/.reflect/ (portable)
  • ~/.claude/session/ (Claude Code)

License

MIT

Author

Claude Code Toolkit

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

  • OpenClaw CLI installed and configured.
  • Language: Markdown
  • License: MIT
  • Topics: coding

FAQ

How do I install reflect?

Run openclaw add @stevengonsalvez/agent-reflect in your terminal. This installs reflect into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/stevengonsalvez/agent-reflect. Review commits and README documentation before installing.