elite-longterm-memory · OpenClaw Skill
elite-longterm-memory is an OpenClaw Skills integration for coding workflows: an AI agent memory system combining a WAL protocol, vector search, git-notes, and cloud backup so agents never lose context. Works with Claude, Cursor, GPT, and OpenClaw agents.
Skill Snapshot
| name | elite-longterm-memory |
| description | Ultimate AI agent memory system. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Works with Claude, Cursor, GPT, OpenClaw agents. OpenClaw Skills integration. |
| owner | nextfrontierbuilds |
| repository | nextfrontierbuilds/elite-longterm-memory |
| language | Markdown |
| license | MIT |
| topics | |
| security | L1 |
| install | openclaw add @nextfrontierbuilds/elite-longterm-memory |
| last updated | Feb 7, 2026 |
Maintainer

nextfrontierbuilds
Maintains elite-longterm-memory in the OpenClaw Skills directory.
View GitHub profile

name: elite-longterm-memory
version: 1.2.1
description: "Ultimate AI agent memory system. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Works with Claude, Cursor, GPT, OpenClaw agents."
author: NextFrontierBuilds
keywords: [memory, ai-agent, ai-coding, long-term-memory, vector-search, lancedb, git-notes, wal, persistent-context, claude, claude-code, gpt, cursor, copilot, openclaw, moltbot, vibe-coding, agentic]
metadata:
  openclaw:
    emoji: "🧠"
    requires:
      env:
        - OPENAI_API_KEY
      plugins:
        - memory-lancedb
Elite Longterm Memory 🧠
The ultimate memory system for AI agents. Combines 6 proven approaches into one bulletproof architecture.
Never lose context. Never forget decisions. Never repeat mistakes.
Architecture Overview
┌───────────────────────────────────────────────────────────────────┐
│                       ELITE LONGTERM MEMORY                       │
├───────────────────────────────────────────────────────────────────┤
│                                                                   │
│   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐           │
│   │   HOT RAM   │    │ WARM STORE  │    │ COLD STORE  │           │
│   │             │    │             │    │             │           │
│   │  SESSION-   │    │   LanceDB   │    │  Git-Notes  │           │
│   │  STATE.md   │    │   Vectors   │    │  Knowledge  │           │
│   │             │    │             │    │    Graph    │           │
│   │  (survives  │    │  (semantic  │    │ (permanent  │           │
│   │ compaction) │    │   search)   │    │ decisions)  │           │
│   └──────┬──────┘    └──────┬──────┘    └──────┬──────┘           │
│          │                  │                  │                  │
│          └──────────────────┼──────────────────┘                  │
│                             ▼                                     │
│                      ┌─────────────┐                              │
│                      │  MEMORY.md  │  ← Curated long-term         │
│                      │  + daily/   │    (human-readable)          │
│                      └─────────────┘                              │
│                             │                                     │
│                             ▼                                     │
│                      ┌─────────────┐                              │
│                      │ SuperMemory │  ← Cloud backup (optional)   │
│                      │     API     │                              │
│                      └─────────────┘                              │
│                                                                   │
└───────────────────────────────────────────────────────────────────┘
The 6 Memory Layers
Layer 1: HOT RAM (SESSION-STATE.md)
From: bulletproof-memory
Active working memory that survives compaction. Write-Ahead Log protocol.
# SESSION-STATE.md – Active Working Memory
## Current Task
[What we're working on RIGHT NOW]
## Key Context
- User preference: ...
- Decision made: ...
- Blocker: ...
## Pending Actions
- [ ] ...
Rule: Write BEFORE responding. Triggered by user input, not agent memory.
Layer 2: WARM STORE (LanceDB Vectors)
From: lancedb-memory
Semantic search across all memories. Auto-recall injects relevant context.
# Auto-recall (happens automatically)
memory_recall query="project status" limit=5
# Manual store
memory_store text="User prefers dark mode" category="preference" importance=0.9
Layer 3: COLD STORE (Git-Notes Knowledge Graph)
From: git-notes-memory
Structured decisions, learnings, and context. Branch-aware.
# Store a decision (SILENT - never announce)
python3 memory.py -p $DIR remember '{"type":"decision","content":"Use React for frontend"}' -t tech -i h
# Retrieve context
python3 memory.py -p $DIR get "frontend"
Layer 4: CURATED ARCHIVE (MEMORY.md + daily/)
From: OpenClaw native
Human-readable long-term memory. Daily logs + distilled wisdom.
workspace/
├── MEMORY.md             # Curated long-term (the good stuff)
└── memory/
    ├── 2026-01-30.md     # Daily log
    ├── 2026-01-29.md
    └── topics/           # Topic-specific files
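The daily-log discipline above can be scripted. A minimal sketch, assuming the folder layout in the tree (the helper itself is hypothetical, and the section headings are an illustrative template, not mandated by the skill):

```shell
#!/bin/sh
# Hypothetical helper: create today's daily log under memory/.
# Folder names follow the tree above.
TODAY="$(date +%Y-%m-%d)"
LOG="memory/$TODAY.md"

mkdir -p memory/topics

# Scaffold the file once; later runs just report the path.
if [ ! -f "$LOG" ]; then
  printf '# Daily Log %s\n\n## Done\n\n## Decisions\n\n## Open\n' "$TODAY" > "$LOG"
fi
echo "$LOG"
```

Run it at session start so there is always a file to append to.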
Layer 5: CLOUD BACKUP (SuperMemory) – Optional
From: supermemory
Cross-device sync. Chat with your knowledge base.
export SUPERMEMORY_API_KEY="your-key"
supermemory add "Important context"
supermemory search "what did we decide about..."
Layer 6: AUTO-EXTRACTION (Mem0) – Recommended
NEW: Automatic fact extraction
Mem0 automatically extracts facts from conversations. 80% token reduction.
npm install mem0ai
export MEM0_API_KEY="your-key"
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });
// Conversations auto-extract facts
await client.add(messages, { user_id: "user123" });
// Retrieve relevant memories
const memories = await client.search(query, { user_id: "user123" });
Benefits:
- Auto-extracts preferences, decisions, facts
- Deduplicates and updates existing memories
- 80% reduction in tokens vs raw history
- Works across sessions automatically
Quick Setup
1. Create SESSION-STATE.md (Hot RAM)
cat > SESSION-STATE.md << 'EOF'
# SESSION-STATE.md – Active Working Memory
This file is the agent's "RAM" – survives compaction, restarts, distractions.
## Current Task
[None]
## Key Context
[None yet]
## Pending Actions
- [ ] None
## Recent Decisions
[None yet]
---
*Last updated: [timestamp]*
EOF
2. Enable LanceDB (Warm Store)
In ~/.openclaw/openclaw.json:
{
  "memorySearch": {
    "enabled": true,
    "provider": "openai",
    "sources": ["memory"],
    "minScore": 0.3,
    "maxResults": 10
  },
  "plugins": {
    "entries": {
      "memory-lancedb": {
        "enabled": true,
        "config": {
          "autoCapture": false,
          "autoRecall": true,
          "captureCategories": ["preference", "decision", "fact"],
          "minImportance": 0.7
        }
      }
    }
  }
}
3. Initialize Git-Notes (Cold Store)
cd ~/clawd
git init # if not already
python3 skills/git-notes-memory/memory.py -p . sync --start
4. Verify MEMORY.md Structure
# Ensure you have:
# - MEMORY.md in workspace root
# - memory/ folder for daily logs
mkdir -p memory
5. (Optional) Setup SuperMemory
export SUPERMEMORY_API_KEY="your-key"
# Add to ~/.zshrc for persistence
Agent Instructions
On Session Start
- Read SESSION-STATE.md – this is your hot context
- Run memory_search for relevant prior context
- Check memory/YYYY-MM-DD.md for recent activity
During Conversation
- User gives concrete detail? → Write to SESSION-STATE.md BEFORE responding
- Important decision made? → Store in Git-Notes (SILENTLY)
- Preference expressed? → memory_store with importance=0.9
On Session End
- Update SESSION-STATE.md with final state
- Move significant items to MEMORY.md if worth keeping long-term
- Create/update daily log in memory/YYYY-MM-DD.md
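The session-end checklist above can be sketched as a small script. File names (SESSION-STATE.md, memory/) match this skill's layout; the summary argument and the helper itself are illustrative:

```shell
#!/bin/sh
# Hypothetical session-end routine for the checklist above.
TODAY="$(date +%Y-%m-%d)"
STAMP="$(date '+%Y-%m-%d %H:%M')"
SUMMARY="${1:-Wrapped up without a summary}"

mkdir -p memory

# 1. Record final state in the hot-RAM file (>> creates it if missing).
printf '\n## Session End (%s)\n- %s\n' "$STAMP" "$SUMMARY" >> SESSION-STATE.md

# 2. Mirror the summary into today's daily log.
printf -- '- %s: %s\n' "$STAMP" "$SUMMARY" >> "memory/$TODAY.md"

echo "Session state saved; daily log: memory/$TODAY.md"
```

Promoting items from the daily log into MEMORY.md stays a manual, curated step.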
Memory Hygiene (Weekly)
- Review SESSION-STATE.md – archive completed tasks
- Check LanceDB for junk: memory_recall query="*" limit=50
- Clear irrelevant vectors: memory_forget id=<id>
- Consolidate daily logs into MEMORY.md
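The archive step can be automated. A sketch assuming daily logs live directly in memory/ as YYYY-MM-DD.md files; the 30-day cutoff and the archive/ folder name are arbitrary choices, not part of the skill:

```shell
#!/bin/sh
# Hypothetical weekly hygiene pass: sweep stale daily logs into
# memory/archive/ so active recall stays lean.
mkdir -p memory/archive

# -mtime +30 matches files untouched for more than 30 days;
# the glob matches the YYYY-MM-DD.md naming convention.
find memory -maxdepth 1 -name '????-??-??.md' -mtime +30 \
  -exec mv {} memory/archive/ \;

echo "Archived logs: $(ls memory/archive | wc -l)"
```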
The WAL Protocol (Critical)
Write-Ahead Log: Write state BEFORE responding, not after.
| Trigger | Action |
|---|---|
| User states preference | Write to SESSION-STATE.md → then respond |
| User makes decision | Write to SESSION-STATE.md → then respond |
| User gives deadline | Write to SESSION-STATE.md → then respond |
| User corrects you | Write to SESSION-STATE.md → then respond |
Why? If you respond first and crash/compact before saving, context is lost. WAL ensures durability.
Example Workflow
User: "Let's use Tailwind for this project, not vanilla CSS"
Agent (internal):
1. Write to SESSION-STATE.md: "Decision: Use Tailwind, not vanilla CSS"
2. Store in Git-Notes: decision about CSS framework
3. memory_store: "User prefers Tailwind over vanilla CSS" importance=0.9
4. THEN respond: "Got it – Tailwind it is..."
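The same workflow as a shell sketch; wal_write is a hypothetical helper name, not part of the skill's CLI:

```shell
#!/bin/sh
# The WAL rule in code: persist the fact BEFORE emitting a response,
# so it survives even if the session compacts mid-reply.
wal_write() {
  kind="$1"; fact="$2"
  printf -- '- [%s] %s: %s\n' "$(date '+%H:%M')" "$kind" "$fact" >> SESSION-STATE.md
  sync  # flush to disk before the agent is allowed to reply
}

# Steps 1-3: persist the decision first...
wal_write "Decision" "Use Tailwind, not vanilla CSS"
# Step 4: ...THEN respond.
echo "Got it - Tailwind it is."
```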
Maintenance Commands
# Audit vector memory
memory_recall query="*" limit=50
# Clear all vectors (nuclear option)
rm -rf ~/.openclaw/memory/lancedb/
openclaw gateway restart
# Export Git-Notes
python3 memory.py -p . export --format json > memories.json
# Check memory health
du -sh ~/.openclaw/memory/
wc -l MEMORY.md
ls -la memory/
Why Memory Fails
Understanding the root causes helps you fix them:
| Failure Mode | Cause | Fix |
|---|---|---|
| Forgets everything | memory_search disabled | Enable + add OpenAI key |
| Files not loaded | Agent skips reading memory | Add to AGENTS.md rules |
| Facts not captured | No auto-extraction | Use Mem0 or manual logging |
| Sub-agents isolated | Don't inherit context | Pass context in task prompt |
| Repeats mistakes | Lessons not logged | Write to memory/lessons.md |
Solutions (Ranked by Effort)
1. Quick Win: Enable memory_search
If you have an OpenAI key, enable semantic search:
openclaw configure --section web
This enables vector search over MEMORY.md + memory/*.md files.
2. Recommended: Mem0 Integration
Auto-extract facts from conversations. 80% token reduction.
npm install mem0ai
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });
// Auto-extract and store
await client.add([
{ role: "user", content: "I prefer Tailwind over vanilla CSS" }
], { user_id: "ty" });
// Retrieve relevant memories
const memories = await client.search("CSS preferences", { user_id: "ty" });
3. Better File Structure (No Dependencies)
memory/
├── projects/
│   ├── strykr.md
│   └── taska.md
├── people/
│   └── contacts.md
├── decisions/
│   └── 2026-01.md
├── lessons/
│   └── mistakes.md
└── preferences.md
Keep MEMORY.md as a summary (<5KB), link to detailed files.
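A quick way to check the size budget; the 5120-byte limit mirrors the <5KB guideline above, and the script itself is a sketch, not part of the skill:

```shell
#!/bin/sh
# Warn when MEMORY.md outgrows the ~5KB summary budget.
# The script only reads the file; it moves nothing itself.
LIMIT=5120
touch MEMORY.md  # ensure the file exists so wc has input
SIZE=$(wc -c < MEMORY.md)

if [ "$SIZE" -gt "$LIMIT" ]; then
  echo "MEMORY.md is $SIZE bytes - move detail into memory/ subfiles"
else
  echo "MEMORY.md is $SIZE bytes - within budget"
fi
```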
Immediate Fixes Checklist
| Problem | Fix |
|---|---|
| Forgets preferences | Add ## Preferences section to MEMORY.md |
| Repeats mistakes | Log every mistake to memory/lessons.md |
| Sub-agents lack context | Include key context in spawn task prompt |
| Forgets recent work | Strict daily file discipline |
| Memory search not working | Check OPENAI_API_KEY is set |
Troubleshooting
Agent keeps forgetting mid-conversation:
→ SESSION-STATE.md not being updated. Check WAL protocol.
Irrelevant memories injected:
→ Disable autoCapture, increase minImportance threshold.
Memory too large, slow recall:
→ Run hygiene: clear old vectors, archive daily logs.
Git-Notes not persisting:
→ Run git notes push to sync with remote.
memory_search returns nothing:
→ Check OpenAI API key: echo $OPENAI_API_KEY
→ Verify memorySearch enabled in openclaw.json
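A hypothetical preflight script covering the two most common causes above; the config path comes from the setup section, and the script only inspects the environment:

```shell
#!/bin/sh
# Preflight check for memory_search troubleshooting (sketch).
CONFIG="${OPENCLAW_CONFIG:-$HOME/.openclaw/openclaw.json}"

if [ -z "${OPENAI_API_KEY:-}" ]; then
  echo "WARN: OPENAI_API_KEY is not set - memory_search will return nothing"
else
  echo "OK: OPENAI_API_KEY is set"
fi

if [ -f "$CONFIG" ] && grep -q '"memorySearch"' "$CONFIG"; then
  echo "OK: memorySearch block found in $CONFIG"
else
  echo "WARN: memorySearch not configured at $CONFIG"
fi
```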
Links
- bulletproof-memory: https://clawdhub.com/skills/bulletproof-memory
- lancedb-memory: https://clawdhub.com/skills/lancedb-memory
- git-notes-memory: https://clawdhub.com/skills/git-notes-memory
- memory-hygiene: https://clawdhub.com/skills/memory-hygiene
- supermemory: https://clawdhub.com/skills/supermemory
Built by @NextXFrontier · Part of the Next Frontier AI toolkit
Elite Longterm Memory 🧠
The ultimate memory system for AI agents. Never lose context again.
Works With
<p align="center">
  <img src="https://img.shields.io/badge/Claude-AI-orange?style=for-the-badge&logo=anthropic" alt="Claude AI" />
  <img src="https://img.shields.io/badge/GPT-OpenAI-412991?style=for-the-badge&logo=openai" alt="GPT" />
  <img src="https://img.shields.io/badge/Cursor-IDE-000000?style=for-the-badge" alt="Cursor" />
  <img src="https://img.shields.io/badge/LangChain-Framework-1C3C3C?style=for-the-badge" alt="LangChain" />
</p>
<p align="center">
  <strong>Built for:</strong> Clawdbot • Moltbot • Claude Code • Any AI Agent
</p>

Combines 7 proven memory approaches into one bulletproof architecture:
- ✅ Bulletproof WAL Protocol – Write-ahead logging survives compaction
- ✅ LanceDB Vector Search – Semantic recall of relevant memories
- ✅ Git-Notes Knowledge Graph – Structured decisions, branch-aware
- ✅ File-Based Archives – Human-readable MEMORY.md + daily logs
- ✅ Cloud Backup – Optional SuperMemory sync
- ✅ Memory Hygiene – Keep vectors lean, prevent token waste
- ✅ Mem0 Auto-Extraction – Automatic fact extraction, 80% token reduction
Quick Start
# Initialize in your workspace
npx elite-longterm-memory init
# Check status
npx elite-longterm-memory status
# Create today's log
npx elite-longterm-memory today
Architecture
┌──────────────────────────────────────────────────────┐
│                ELITE LONGTERM MEMORY                 │
├──────────────────────────────────────────────────────┤
│   HOT RAM            WARM STORE         COLD STORE   │
│   SESSION-STATE.md   LanceDB            Git-Notes    │
│   (survives          (semantic          (permanent   │
│    compaction)        search)            decisions)  │
│        │                 │                  │        │
│        └─────────────────┼──────────────────┘        │
│                          ▼                           │
│                      MEMORY.md                       │
│                 (curated archive)                    │
└──────────────────────────────────────────────────────┘
The 5 Memory Layers
| Layer | File/System | Purpose | Persistence |
|---|---|---|---|
| 1. Hot RAM | SESSION-STATE.md | Active task context | Survives compaction |
| 2. Warm Store | LanceDB | Semantic search | Auto-recall |
| 3. Cold Store | Git-Notes | Structured decisions | Permanent |
| 4. Archive | MEMORY.md + daily/ | Human-readable | Curated |
| 5. Cloud | SuperMemory | Cross-device sync | Optional |
The WAL Protocol
Critical insight: Write state BEFORE responding, not after.
User: "Let's use Tailwind for this project"
Agent (internal):
1. Write to SESSION-STATE.md → "Decision: Use Tailwind"
2. THEN respond → "Got it – Tailwind it is..."
If you respond first and crash before saving, context is lost. WAL ensures durability.
Why Memory Fails (And How to Fix It)
| Problem | Cause | Fix |
|---|---|---|
| Forgets everything | memory_search disabled | Enable + add OpenAI key |
| Repeats mistakes | Lessons not logged | Write to memory/lessons.md |
| Sub-agents isolated | No context inheritance | Pass context in task prompt |
| Facts not captured | No auto-extraction | Use Mem0 (see below) |
Mem0 Integration (Recommended)
Auto-extract facts from conversations. 80% token reduction.
npm install mem0ai
export MEM0_API_KEY="your-key"
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });
// Auto-extracts facts from messages
await client.add(messages, { user_id: "user123" });
// Retrieve relevant memories
const memories = await client.search(query, { user_id: "user123" });
For Clawdbot/Moltbot Users
Add to ~/.clawdbot/clawdbot.json:
{
  "memorySearch": {
    "enabled": true,
    "provider": "openai",
    "sources": ["memory"]
  }
}
Files Created
workspace/
├── SESSION-STATE.md      # Hot RAM (active context)
├── MEMORY.md             # Curated long-term memory
└── memory/
    ├── 2026-01-30.md     # Daily logs
    └── ...
Commands
elite-memory init # Initialize memory system
elite-memory status # Check health
elite-memory today # Create today's log
elite-memory help # Show help
Built by @NextXFrontier
Permissions & Security
Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.
Requirements
- OpenClaw CLI installed and configured.
- Language: Markdown
- License: MIT
- Topics:
FAQ
How do I install elite-longterm-memory?
Run openclaw add @nextfrontierbuilds/elite-longterm-memory in your terminal. This installs elite-longterm-memory into your OpenClaw Skills catalog.
Does this skill run locally or in the cloud?
OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.
Where can I verify the source code?
The source repository is available at https://github.com/openclaw/skills/tree/main/skills/nextfrontierbuilds/elite-longterm-memory. Review commits and README documentation before installing.
