skills / openclaw / better-memory

by dvntydigital

better-memory – OpenClaw Skill

better-memory is an OpenClaw Skills integration for coding workflows. It provides semantic memory, intelligent compression, and context management for AI agents, preventing context-limit amnesia with real embeddings, priority-based compression, and identity persistence.

8.9k stars · 2.3k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topic: coding

Skill Snapshot

name          better-memory
description   Semantic memory, intelligent compression, and context management for AI agents. Prevents context limit amnesia with real embeddings, priority-based compression, and identity persistence. OpenClaw Skills integration.
owner         dvntydigital
repository    dvntydigital/better-memory
language      Markdown
license       MIT
topics        coding
security      L1
install       openclaw add @dvntydigital/better-memory
last updated  Feb 7, 2026

Maintainer

dvntydigital

Maintains better-memory in the OpenClaw Skills directory.

File Explorer (19 files)

.
├── examples/
│   └── user-experience-demo.js    4.7 KB
├── lib/
│   ├── compressor.js              7.9 KB
│   ├── context-guardian.test.js   15.1 KB
│   ├── context-monitor.js         9.4 KB
│   ├── feedback.js                5.6 KB
│   ├── index.js                   7.8 KB
│   └── memory-store.js            14.0 KB
├── scripts/
│   ├── cli.js                     8.2 KB
│   └── setup.js                   538 B
├── _meta.json                     284 B
├── HUMAN-EXPERIENCE.md            7.7 KB
├── INSTALL.md                     2.9 KB
├── package-lock.json              33.2 KB
├── package.json                   902 B
├── README.md                      6.2 KB
└── SKILL.md                       1.6 KB
SKILL.md

name: better-memory
description: Semantic memory, intelligent compression, and context management for AI agents. Prevents context limit amnesia with real embeddings, priority-based compression, and identity persistence.
homepage: https://github.com/DVNTYDIGITAL/better-memory
metadata:
  clawdbot:
    emoji: "🧠"
    requires:
      bins: []
      npm: ["@xenova/transformers", "tiktoken", "sql.js"]
    install:
      - id: npm
        kind: npm
        label: Install Better Memory dependencies
        command: "cd ~/.clawdbot/skills/better-memory && npm install"

Better Memory

Semantic memory, intelligent compression, and context management for AI agents.

What It Does

  • Stores memories with real vector embeddings (local, no API calls)
  • Semantic search via cosine similarity
  • Auto-deduplicates on store (exact + semantic)
  • Priority-based compression when approaching context limits
  • Identity persistence across sessions
  • Token-budget-aware memory retrieval
  • Configurable context limits, thresholds, and summarizer

Quick Start

import { createContextGuardian } from 'better-memory';

const cg = createContextGuardian({
  contextLimit: 128000,
  summarizer: async (text) => myLLM.summarize(text), // optional
});
await cg.initialize();

// Store (auto-deduplicates)
await cg.store('User prefers TypeScript', { priority: 9 });

// Search
const results = await cg.search('programming preferences');

// Get memories within token budget
const { memories, tokensUsed } = await cg.getRelevantContext('query', 4000);

// Compress conversation and store important parts
const { compressed } = await cg.summarizeAndStore(messages);
README.md

Better Memory

Semantic memory, intelligent compression, and context management for AI agents.

The Problem

Agents hit context limits and lose everything. Mid-conversation amnesia. No memory across sessions. System prompts eat token budgets.

The Solution

Better Memory gives agents persistent semantic memory with automatic deduplication, priority-based compression, and token-budget-aware retrieval.

  • Real vector embeddings (local, no API calls)
  • SQLite storage with binary embedding blobs
  • Auto-deduplication (exact hash + cosine similarity >0.9)
  • Multi-signal priority scoring (role + regex + semantic + length)
  • Pluggable LLM summarizer with extractive fallback
  • Memory decay (age penalty + access boost)
  • Token-budget-aware retrieval
  • Configurable everything (context limit, thresholds, data dir, encoding)

Install

npm install better-memory

Usage

Programmatic (Primary)

import { createContextGuardian } from 'better-memory';

const cg = createContextGuardian({
  dataDir: '/path/to/data',        // Default: ~/.better-memory
  contextLimit: 128000,            // Default: 128000
  encoding: 'cl100k_base',         // Default: cl100k_base
  summarizer: async (text) => {    // Optional: your LLM summarizer
    return await myLLM.summarize(text);
  },
  autoRetrieve: true,              // Auto-inject relevant memories
  autoCompress: true,              // Auto-compress at thresholds
});

await cg.initialize();

Store Memories

// Auto-deduplicates: exact match returns existing ID,
// cosine similarity >0.9 updates existing instead of creating duplicate
await cg.store('User prefers TypeScript over JavaScript', { priority: 9 });
await cg.store('Project uses PostgreSQL on AWS RDS', { priority: 8 });

Search

const results = await cg.search('database choice', { threshold: 0.5, limit: 5 });
// Returns: [{ id, content, similarity, priority, metadata, tags, created_at }]

Token-Budget Retrieval

// Get relevant memories that fit within a token budget
const { memories, tokensUsed } = await cg.getRelevantContext(
  'tech stack decisions',
  4000,  // max tokens
  { threshold: 0.5 }
);

Compress Conversations

// Score, summarize, and store a conversation chunk
const { compressed, storedCount } = await cg.summarizeAndStore(messages, {
  targetTokens: 2000,
  sessionId: 'session-123',
});

Process Messages (Auto Mode)

// Tracks tokens, auto-compresses at thresholds, auto-retrieves memories
const usage = await cg.process(userMessage, 'user');
// Returns: { used, limit, percent, remaining, status }

Identity

cg.setIdentity({ name: 'Kit', personality: 'direct, competent' });
const identity = cg.getIdentity();

Runtime Configuration

cg.configure({
  contextLimit: 200000,
  autoRetrieve: false,
});

Status

const status = cg.getStatus();
// { used, limit, percent, status, session_id, messages, compressions, ... }

const memStats = cg.getMemoryStats();
// { memory_count, db_size_bytes, db_size_mb }

Cleanup

cg.close(); // Frees tiktoken encoder + closes SQLite

CLI

better-memory status                        # Context health
better-memory search <query>                # Semantic search
better-memory store <content>               # Store a memory
better-memory identity [name] [traits...]   # Set/view identity
better-memory stats                         # Statistics
better-memory relevant <query> --budget <n> # Budget-aware retrieval
better-memory compress                      # Force compression
better-memory end-session                   # End session

# Flags
--data-dir <path>       # Data directory (default: ~/.better-memory)
--context-limit <n>     # Token limit (default: 128000)

Architecture

lib/
  index.js            # ContextGuardian class + factory exports
  memory-store.js     # SQLite vector store with dedup + decay
  compressor.js       # Multi-signal priority scoring + summarization
  context-monitor.js  # Token tracking + threshold management

Storage

SQLite database at ~/.better-memory/memories.db (configurable via dataDir).

Tables: memories (content + binary embedding blob + priority + metadata + access tracking), identity, sessions.

Embeddings

@xenova/transformers with Xenova/all-MiniLM-L6-v2 (384-dim vectors). Runs locally, no API calls. Embeddings loaded into memory for fast cosine similarity search.
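
A minimal sketch of that in-memory search step, assuming embeddings are plain float arrays; the function names here are illustrative, not the library's API:

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored memories against a query embedding, best match first.
function rankBySimilarity(queryVec, memories, threshold = 0.5) {
  return memories
    .map((m) => ({ ...m, similarity: cosineSimilarity(queryVec, m.embedding) }))
    .filter((m) => m.similarity >= threshold)
    .sort((a, b) => b.similarity - a.similarity);
}
```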

Priority Scoring

Multi-signal, not keyword matching:

  • Role base: system=7, user=6, assistant=5, tool=4
  • Regex patterns: word-boundary matching for importance indicators (+/- weight)
  • Semantic archetypes: cosine similarity to pre-computed importance embeddings
  • Length: bonus for substantive content, penalty for very short
  • Explicit: caller-provided priority passes through unchanged
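
The signal mix above might be combined like this; the weights, regex patterns, and clamping are illustrative assumptions, not the library's actual values:

```javascript
// Role base scores as listed above.
const ROLE_BASE = { system: 7, user: 6, assistant: 5, tool: 4 };

// Hypothetical word-boundary importance patterns (+/- weight).
const IMPORTANCE_PATTERNS = [
  { re: /\b(always|never|must|remember)\b/i, weight: 1.5 },
  { re: /\b(maybe|perhaps)\b/i, weight: -0.5 },
];

function scorePriority(message, explicitPriority) {
  // Explicit: caller-provided priority passes through unchanged.
  if (explicitPriority != null) return explicitPriority;

  let score = ROLE_BASE[message.role] ?? 5;
  for (const { re, weight } of IMPORTANCE_PATTERNS) {
    if (re.test(message.content)) score += weight;
  }
  // Length signal: bonus for substantive content, penalty for very short.
  if (message.content.length > 200) score += 0.5;
  else if (message.content.length < 20) score -= 1;

  return Math.max(0, Math.min(10, score));
}
```

(The semantic-archetype signal is omitted here since it needs the embedding model.)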

Deduplication

On every store():

  1. SHA-256 hash check (exact match = update existing)
  2. Cosine similarity >0.9 check (near-duplicate = update existing)

Memory Decay

effectivePriority = basePriority - (daysSinceAccess * 0.3) + min(accessCount * 0.1, 2)

Old unused memories decay. Frequently accessed memories get boosted.
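
The formula transcribes directly:

```javascript
// Direct transcription of the decay formula above.
function effectivePriority(basePriority, daysSinceAccess, accessCount) {
  return basePriority - daysSinceAccess * 0.3 + Math.min(accessCount * 0.1, 2);
}

// e.g. a priority-8 memory untouched for 10 days but accessed 5 times:
// 8 - 3 + 0.5 = 5.5 (the access boost is capped at +2)
```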

Compression

At 85% capacity (configurable):

  1. Score all messages by priority
  2. Keep high-priority (>=8) + last 5 messages
  3. Summarize medium-priority (5-7) via pluggable LLM or extractive fallback
  4. Drop low-priority (<5)
  5. Store high-priority to persistent memory
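
The pass can be sketched as follows; the thresholds are those listed above, while the summarizer is a stand-in for the pluggable LLM or extractive fallback:

```javascript
function compress(messages, summarize) {
  // Always keep the last 5 messages verbatim.
  const recent = messages.slice(-5);
  const older = messages.slice(0, -5);

  const keep = older.filter((m) => m.priority >= 8);                     // high
  const medium = older.filter((m) => m.priority >= 5 && m.priority < 8); // summarize
  // Low-priority (< 5) messages are dropped entirely.

  const summary = medium.length > 0
    ? [{ role: 'system', content: summarize(medium.map((m) => m.content).join('\n')) }]
    : [];

  return [...keep, ...summary, ...recent];
}
```

(The final step, persisting the high-priority messages to the memory store, is omitted here.)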

Configuration Options

Option              Default           Description
dataDir             ~/.better-memory  Data storage directory
contextLimit        128000            Token limit
encoding            cl100k_base       Tiktoken encoding
summarizer          null              async (text) => string LLM function
warningThreshold    0.75              Warning at 75%
compressThreshold   0.85              Auto-compress at 85%
emergencyThreshold  0.95              Emergency compress at 95%
autoRetrieve        true              Auto-inject relevant memories
autoCompress        true              Auto-compress at threshold

Dependencies

  • @xenova/transformers - Local sentence embeddings
  • sql.js - SQLite storage (WASM, no native build required)
  • tiktoken - Accurate token counting

License

MIT

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

  • OpenClaw CLI installed and configured.
  • Language: Markdown
  • License: MIT
  • Topics: coding

FAQ

How do I install better-memory?

Run openclaw add @dvntydigital/better-memory in your terminal. This installs better-memory into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/dvntydigital/better-memory. Review commits and README documentation before installing.