
by nextfrontierbuilds

elite-longterm-memory – OpenClaw Skill

elite-longterm-memory is an OpenClaw Skills integration for coding workflows. Ultimate AI agent memory system. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Works with Claude, Cursor, GPT, OpenClaw agents.

9.5k stars · 9.5k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topic: coding

Skill Snapshot

name: elite-longterm-memory
description: Ultimate AI agent memory system. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Works with Claude, Cursor, GPT, OpenClaw agents. OpenClaw Skills integration.
owner: nextfrontierbuilds
repository: nextfrontierbuilds/elite-longterm-memory
language: Markdown
license: MIT
topics: coding
security: L1
install: openclaw add @nextfrontierbuilds/elite-longterm-memory
last updated: Feb 7, 2026

Maintainer

nextfrontierbuilds maintains elite-longterm-memory in the OpenClaw Skills directory.
File Explorer (6 files)

.
├── bin/
│   └── elite-memory.js   4.8 KB
├── _meta.json            660 B
├── package.json          1.2 KB
├── README.md             5.5 KB
└── SKILL.md              12.3 KB
SKILL.md

---
name: elite-longterm-memory
version: 1.2.1
description: "Ultimate AI agent memory system. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Works with Claude, Cursor, GPT, OpenClaw agents."
author: NextFrontierBuilds
keywords: [memory, ai-agent, ai-coding, long-term-memory, vector-search, lancedb, git-notes, wal, persistent-context, claude, claude-code, gpt, cursor, copilot, openclaw, moltbot, vibe-coding, agentic]
metadata:
  openclaw:
    emoji: "🧠"
    requires:
      env:
        - OPENAI_API_KEY
      plugins:
        - memory-lancedb
---

Elite Longterm Memory 🧠

The ultimate memory system for AI agents. Combines seven proven approaches into one bulletproof architecture.

Never lose context. Never forget decisions. Never repeat mistakes.

Architecture Overview

┌─────────────────────────────────────────────────────────────────┐
│                    ELITE LONGTERM MEMORY                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
│  │   HOT RAM   │  │  WARM STORE │  │  COLD STORE │              │
│  │             │  │             │  │             │              │
│  │ SESSION-    │  │  LanceDB    │  │  Git-Notes  │              │
│  │ STATE.md    │  │  Vectors    │  │  Knowledge  │              │
│  │             │  │             │  │  Graph      │              │
│  │ (survives   │  │ (semantic   │  │ (permanent  │              │
│  │  compaction)│  │  search)    │  │  decisions) │              │
│  └─────────────┘  └─────────────┘  └─────────────┘              │
│         │                │                │                     │
│         └────────────────┼────────────────┘                     │
│                          ▼                                      │
│                   ┌─────────────┐                               │
│                   │  MEMORY.md  │  ← Curated long-term          │
│                   │  + daily/   │    (human-readable)           │
│                   └─────────────┘                               │
│                          │                                      │
│                          ▼                                      │
│                   ┌─────────────┐                               │
│                   │ SuperMemory │  ← Cloud backup (optional)    │
│                   │    API      │                               │
│                   └─────────────┘                               │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

The 6 Memory Layers

Layer 1: HOT RAM (SESSION-STATE.md)

From: bulletproof-memory

Active working memory that survives compaction. Write-Ahead Log protocol.

# SESSION-STATE.md — Active Working Memory

## Current Task
[What we're working on RIGHT NOW]

## Key Context
- User preference: ...
- Decision made: ...
- Blocker: ...

## Pending Actions
- [ ] ...

Rule: Write BEFORE responding. Triggered by user input, not agent memory.

Layer 2: WARM STORE (LanceDB Vectors)

From: lancedb-memory

Semantic search across all memories. Auto-recall injects relevant context.

# Auto-recall (happens automatically)
memory_recall query="project status" limit=5

# Manual store
memory_store text="User prefers dark mode" category="preference" importance=0.9

Layer 3: COLD STORE (Git-Notes Knowledge Graph)

From: git-notes-memory

Structured decisions, learnings, and context. Branch-aware.

# Store a decision (SILENT - never announce)
python3 memory.py -p $DIR remember '{"type":"decision","content":"Use React for frontend"}' -t tech -i h

# Retrieve context
python3 memory.py -p $DIR get "frontend"

Layer 4: CURATED ARCHIVE (MEMORY.md + daily/)

From: OpenClaw native

Human-readable long-term memory. Daily logs + distilled wisdom.

workspace/
├── MEMORY.md              # Curated long-term (the good stuff)
└── memory/
    ├── 2026-01-30.md      # Daily log
    ├── 2026-01-29.md
    └── topics/            # Topic-specific files

Layer 5: CLOUD BACKUP (SuperMemory)

From: supermemory

Cross-device sync. Chat with your knowledge base.

export SUPERMEMORY_API_KEY="your-key"
supermemory add "Important context"
supermemory search "what did we decide about..."

Layer 6: AUTO-EXTRACTION (Mem0) — Recommended

NEW: Automatic fact extraction

Mem0 automatically extracts facts from conversations. 80% token reduction.

npm install mem0ai
export MEM0_API_KEY="your-key"

// Then, in Node.js:
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });

// Conversations auto-extract facts
await client.add(messages, { user_id: "user123" });

// Retrieve relevant memories
const memories = await client.search(query, { user_id: "user123" });

Benefits:

  • Auto-extracts preferences, decisions, facts
  • Deduplicates and updates existing memories
  • 80% reduction in tokens vs raw history
  • Works across sessions automatically

Quick Setup

1. Create SESSION-STATE.md (Hot RAM)

cat > SESSION-STATE.md << 'EOF'
# SESSION-STATE.md — Active Working Memory

This file is the agent's "RAM" — survives compaction, restarts, distractions.

## Current Task
[None]

## Key Context
[None yet]

## Pending Actions
- [ ] None

## Recent Decisions
[None yet]

---
*Last updated: [timestamp]*
EOF

2. Enable LanceDB (Warm Store)

In ~/.openclaw/openclaw.json:

{
  "memorySearch": {
    "enabled": true,
    "provider": "openai",
    "sources": ["memory"],
    "minScore": 0.3,
    "maxResults": 10
  },
  "plugins": {
    "entries": {
      "memory-lancedb": {
        "enabled": true,
        "config": {
          "autoCapture": false,
          "autoRecall": true,
          "captureCategories": ["preference", "decision", "fact"],
          "minImportance": 0.7
        }
      }
    }
  }
}

3. Initialize Git-Notes (Cold Store)

cd ~/clawd
git init  # if not already
python3 skills/git-notes-memory/memory.py -p . sync --start

4. Verify MEMORY.md Structure

# Ensure you have:
# - MEMORY.md in workspace root
# - memory/ folder for daily logs
mkdir -p memory

5. (Optional) Setup SuperMemory

export SUPERMEMORY_API_KEY="your-key"
# Add to ~/.zshrc for persistence

Agent Instructions

On Session Start

  1. Read SESSION-STATE.md — this is your hot context
  2. Run memory_search for relevant prior context
  3. Check memory/YYYY-MM-DD.md for recent activity
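As a concrete sketch, steps 1 and 3 could be wrapped in a session-start hook along these lines. The hook itself is illustrative, not part of this package; paths follow the layout described in this skill (step 2, memory_search, runs inside the agent):

```shell
#!/bin/sh
# Illustrative session-start hook: surface the hot context before any work begins.
WORKSPACE="${WORKSPACE:-.}"
TODAY="$(date +%Y-%m-%d)"

# 1. Hot RAM: the active working memory file
if [ -f "$WORKSPACE/SESSION-STATE.md" ]; then
  cat "$WORKSPACE/SESSION-STATE.md"
fi

# 3. Recent activity: tail of today's daily log, if one exists
if [ -f "$WORKSPACE/memory/$TODAY.md" ]; then
  tail -n 20 "$WORKSPACE/memory/$TODAY.md"
fi
```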

During Conversation

  1. User gives concrete detail? → Write to SESSION-STATE.md BEFORE responding
  2. Important decision made? → Store in Git-Notes (SILENTLY)
  3. Preference expressed? → memory_store with importance=0.9

On Session End

  1. Update SESSION-STATE.md with final state
  2. Move significant items to MEMORY.md if worth keeping long-term
  3. Create/update daily log in memory/YYYY-MM-DD.md
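A minimal sketch of step 3, assuming the memory/ layout above (the summary text is a placeholder; write whatever actually happened):

```shell
#!/bin/sh
# Illustrative session-end step: append a closing entry to today's daily
# log so the next session can pick up where this one stopped.
TODAY="$(date +%Y-%m-%d)"
mkdir -p memory
printf '%s session end: %s\n' "$(date -u +%H:%M)" \
  "SESSION-STATE.md updated; significant items promoted to MEMORY.md" \
  >> "memory/$TODAY.md"
```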

Memory Hygiene (Weekly)

  1. Review SESSION-STATE.md — archive completed tasks
  2. Check LanceDB for junk: memory_recall query="*" limit=50
  3. Clear irrelevant vectors: memory_forget id=<id>
  4. Consolidate daily logs into MEMORY.md
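Step 4 can be scripted. A rough sketch, assuming the workspace layout above and a 7-day retention window (adjust paths and retention to taste):

```shell
#!/bin/sh
# Illustrative hygiene pass: fold daily logs older than 7 days into
# MEMORY.md under an "Archived" heading, then delete the originals.
mkdir -p memory
find memory -maxdepth 1 -name '20??-??-??.md' -mtime +7 | sort |
while read -r log; do
  {
    printf '\n## Archived: %s\n' "$(basename "$log" .md)"
    cat "$log"
  } >> MEMORY.md
  rm "$log"
done
```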

The WAL Protocol (Critical)

Write-Ahead Log: Write state BEFORE responding, not after.

| Trigger | Action |
| --- | --- |
| User states a preference | Write to SESSION-STATE.md → then respond |
| User makes a decision | Write to SESSION-STATE.md → then respond |
| User gives a deadline | Write to SESSION-STATE.md → then respond |
| User corrects you | Write to SESSION-STATE.md → then respond |

Why? If you respond first and crash/compact before saving, context is lost. WAL ensures durability.
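As a sketch, the write-ahead step can be as small as an append helper the agent calls before composing any reply. The function name and line format here are illustrative; only the ordering (write first, respond second) matters:

```shell
#!/bin/sh
# Illustrative WAL helper: durably record the detail first, respond after.
wal_write() {
  # Append a timestamped line to the hot-RAM file.
  printf '[%s] %s\n' "$(date -u +%H:%M)" "$1" >> SESSION-STATE.md
}

wal_write "Decision: use Tailwind, not vanilla CSS"
# ...only now does the agent generate its response.
```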

Example Workflow

User: "Let's use Tailwind for this project, not vanilla CSS"

Agent (internal):
1. Write to SESSION-STATE.md: "Decision: Use Tailwind, not vanilla CSS"
2. Store in Git-Notes: decision about CSS framework
3. memory_store: "User prefers Tailwind over vanilla CSS" importance=0.9
4. THEN respond: "Got it — Tailwind it is..."

Maintenance Commands

# Audit vector memory
memory_recall query="*" limit=50

# Clear all vectors (nuclear option)
rm -rf ~/.openclaw/memory/lancedb/
openclaw gateway restart

# Export Git-Notes
python3 memory.py -p . export --format json > memories.json

# Check memory health
du -sh ~/.openclaw/memory/
wc -l MEMORY.md
ls -la memory/

Why Memory Fails

Understanding the root causes helps you fix them:

| Failure Mode | Cause | Fix |
| --- | --- | --- |
| Forgets everything | memory_search disabled | Enable + add OpenAI key |
| Files not loaded | Agent skips reading memory | Add to AGENTS.md rules |
| Facts not captured | No auto-extraction | Use Mem0 or manual logging |
| Sub-agents isolated | Don't inherit context | Pass context in task prompt |
| Repeats mistakes | Lessons not logged | Write to memory/lessons.md |

1. Enable Semantic Search

If you have an OpenAI key, enable semantic search:

openclaw configure --section web

This enables vector search over MEMORY.md + memory/*.md files.

2. Recommended: Mem0 Integration

Auto-extract facts from conversations. 80% token reduction.

npm install mem0ai

// Then, in Node.js:
const { MemoryClient } = require('mem0ai');

const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });

// Auto-extract and store
await client.add([
  { role: "user", content: "I prefer Tailwind over vanilla CSS" }
], { user_id: "ty" });

// Retrieve relevant memories
const memories = await client.search("CSS preferences", { user_id: "ty" });

3. Better File Structure (No Dependencies)

memory/
├── projects/
│   ├── strykr.md
│   └── taska.md
├── people/
│   └── contacts.md
├── decisions/
│   └── 2026-01.md
├── lessons/
│   └── mistakes.md
└── preferences.md

Keep MEMORY.md as a summary (<5KB), link to detailed files.
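Scaffolding this layout and checking the under-5KB rule takes a few lines of plain POSIX shell. A sketch (the 5120-byte threshold encodes the rule above; everything else is just mkdir/touch):

```shell
#!/bin/sh
# Illustrative scaffold for the dependency-free layout above.
mkdir -p memory/projects memory/people memory/decisions memory/lessons
touch memory/preferences.md memory/lessons/mistakes.md MEMORY.md

# Flag MEMORY.md once the summary drifts past ~5 KB.
if [ "$(wc -c < MEMORY.md)" -gt 5120 ]; then
  echo "MEMORY.md exceeds 5KB; move detail into memory/ and link to it"
fi
```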

Immediate Fixes Checklist

| Problem | Fix |
| --- | --- |
| Forgets preferences | Add a ## Preferences section to MEMORY.md |
| Repeats mistakes | Log every mistake to memory/lessons.md |
| Sub-agents lack context | Include key context in the spawn task prompt |
| Forgets recent work | Strict daily file discipline |
| Memory search not working | Check that OPENAI_API_KEY is set |

Troubleshooting

Agent keeps forgetting mid-conversation: → SESSION-STATE.md is not being updated. Check the WAL protocol.

Irrelevant memories injected: → Disable autoCapture, increase minImportance threshold.

Memory too large, slow recall: → Run hygiene: clear old vectors, archive daily logs.

Git-Notes not persisting: → Run git notes push to sync with remote.

memory_search returns nothing: → Check the OpenAI API key (echo $OPENAI_API_KEY) and verify memorySearch is enabled in openclaw.json.
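The file-level checks above can be bundled into a quick health probe. This script is a sketch, not a shipped command; the file names are this skill's layout:

```shell
#!/bin/sh
# Illustrative health probe for the memory layers.
check() {
  if [ -e "$2" ]; then
    echo "ok       $1"
  else
    echo "MISSING  $1 ($2)"
  fi
}

check "Hot RAM (SESSION-STATE.md)" SESSION-STATE.md
check "Curated archive (MEMORY.md)" MEMORY.md
check "Daily logs (memory/)" memory

if [ -n "$OPENAI_API_KEY" ]; then
  echo "ok       OPENAI_API_KEY set"
else
  echo "MISSING  OPENAI_API_KEY (memory_search will return nothing)"
fi
```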



Built by @NextXFrontier — Part of the Next Frontier AI toolkit

README.md

Elite Longterm Memory 🧠

The ultimate memory system for AI agents. Never lose context again.



Works With

<p align="center"> <img src="https://img.shields.io/badge/Claude-AI-orange?style=for-the-badge&logo=anthropic" alt="Claude AI" /> <img src="https://img.shields.io/badge/GPT-OpenAI-412991?style=for-the-badge&logo=openai" alt="GPT" /> <img src="https://img.shields.io/badge/Cursor-IDE-000000?style=for-the-badge" alt="Cursor" /> <img src="https://img.shields.io/badge/LangChain-Framework-1C3C3C?style=for-the-badge" alt="LangChain" /> </p> <p align="center"> <strong>Built for:</strong> Clawdbot • Moltbot • Claude Code • Any AI Agent </p>

Combines 7 proven memory approaches into one bulletproof architecture:

  • ✅ Bulletproof WAL Protocol — Write-ahead logging survives compaction
  • ✅ LanceDB Vector Search — Semantic recall of relevant memories
  • ✅ Git-Notes Knowledge Graph — Structured decisions, branch-aware
  • ✅ File-Based Archives — Human-readable MEMORY.md + daily logs
  • ✅ Cloud Backup — Optional SuperMemory sync
  • ✅ Memory Hygiene — Keep vectors lean, prevent token waste
  • ✅ Mem0 Auto-Extraction — Automatic fact extraction, 80% token reduction

Quick Start

# Initialize in your workspace
npx elite-longterm-memory init

# Check status
npx elite-longterm-memory status

# Create today's log
npx elite-longterm-memory today

Architecture

┌────────────────────────────────────────────────────┐
│              ELITE LONGTERM MEMORY                 │
├────────────────────────────────────────────────────┤
│  HOT RAM          WARM STORE        COLD STORE     │
│  SESSION-STATE.md → LanceDB      → Git-Notes       │
│  (survives         (semantic       (permanent      │
│   compaction)       search)         decisions)     │
│         │              │                │          │
│         └──────────────┼────────────────┘          │
│                        ▼                           │
│                   MEMORY.md                        │
│               (curated archive)                    │
└────────────────────────────────────────────────────┘

The 5 Memory Layers

| Layer | File/System | Purpose | Persistence |
| --- | --- | --- | --- |
| 1. Hot RAM | SESSION-STATE.md | Active task context | Survives compaction |
| 2. Warm Store | LanceDB | Semantic search | Auto-recall |
| 3. Cold Store | Git-Notes | Structured decisions | Permanent |
| 4. Archive | MEMORY.md + daily/ | Human-readable | Curated |
| 5. Cloud | SuperMemory | Cross-device sync | Optional |

The WAL Protocol

Critical insight: Write state BEFORE responding, not after.

User: "Let's use Tailwind for this project"

Agent (internal):
1. Write to SESSION-STATE.md → "Decision: Use Tailwind"
2. THEN respond → "Got it — Tailwind it is..."

If you respond first and crash before saving, context is lost. WAL ensures durability.

Why Memory Fails (And How to Fix It)

| Problem | Cause | Fix |
| --- | --- | --- |
| Forgets everything | memory_search disabled | Enable + add OpenAI key |
| Repeats mistakes | Lessons not logged | Write to memory/lessons.md |
| Sub-agents isolated | No context inheritance | Pass context in task prompt |
| Facts not captured | No auto-extraction | Use Mem0 (see below) |

Mem0 Integration (Recommended)

Auto-extract facts from conversations. 80% token reduction.

npm install mem0ai
export MEM0_API_KEY="your-key"

// Then, in Node.js:
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });

// Auto-extracts facts from messages
await client.add(messages, { user_id: "user123" });

// Retrieve relevant memories  
const memories = await client.search(query, { user_id: "user123" });

Add to ~/.clawdbot/clawdbot.json:

{
  "memorySearch": {
    "enabled": true,
    "provider": "openai",
    "sources": ["memory"]
  }
}

Files Created

workspace/
├── SESSION-STATE.md    # Hot RAM (active context)
├── MEMORY.md           # Curated long-term memory
└── memory/
    ├── 2026-01-30.md   # Daily logs
    └── ...

Commands

elite-memory init      # Initialize memory system
elite-memory status    # Check health
elite-memory today     # Create today's log
elite-memory help      # Show help


Built by @NextXFrontier

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

  • OpenClaw CLI installed and configured.
  • Language: Markdown
  • License: MIT
  • Topics: coding

FAQ

How do I install elite-longterm-memory?

Run openclaw add @nextfrontierbuilds/elite-longterm-memory in your terminal. This installs elite-longterm-memory into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/nextfrontierbuilds/elite-longterm-memory. Review commits and README documentation before installing.