MemoryLayer – OpenClaw Skill

by khli01

MemoryLayer is an OpenClaw Skills integration for coding workflows: semantic memory for AI agents, with 95% token savings via vector search.

1.8k stars · 7.4k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topic: coding

Skill Snapshot

name: MemoryLayer
description: Semantic memory for AI agents. 95% token savings with vector search. OpenClaw Skills integration.
owner: khli01
repository: khli01/memorylayer
language: Markdown
license: MIT
topics: coding
security: L1
install: openclaw add @khli01/memorylayer
last updated: Feb 7, 2026

Maintainer

khli01 maintains MemoryLayer in the OpenClaw Skills directory.
File Explorer (14 files)

examples/
  agent-integration.js (4.2 KB)
  basic-usage.js (2.7 KB)
  token-savings-demo.js (6.8 KB)
python/
  memorylayer_skill.py (6.7 KB)
  requirements.txt (129 B)
_meta.json (274 B)
.gitignore (14 B)
index.js (2.7 KB)
package-lock.json (10.0 KB)
package.json (593 B)
README.md (4.7 KB)
SKILL.md (4.7 KB)
SKILL.md

---
slug: memorylayer
name: MemoryLayer
description: Semantic memory for AI agents. 95% token savings with vector search.
homepage: https://memorylayer.clawbot.hk
metadata:
  clawdbot:
    emoji: "🧠"
---

MemoryLayer

Semantic memory infrastructure for AI agents that actually scales.

Features

  • 95% Token Savings - Retrieve only relevant memories
  • Semantic Search - Find memories by meaning, not keywords
  • Sub-200ms - Lightning-fast memory retrieval
  • Multi-tenant - Isolated memory per agent instance
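To make "find by meaning, not keywords" concrete, here is a toy cosine-similarity ranking. This is purely illustrative, not MemoryLayer's actual retrieval code: real semantic search embeds text with a model, while the "embeddings" below are hand-made 3-dimensional vectors.

```python
# Illustrative only: a toy cosine-similarity ranking, NOT MemoryLayer's
# actual implementation. The vectors are hand-made stand-ins for real
# text embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

memories = {
    "User prefers dark mode UI": [0.9, 0.1, 0.2],
    "Deploy runs every Friday":  [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.25]  # pretend embedding of "UI preferences"

ranked = sorted(memories, key=lambda m: cosine(memories[m], query), reverse=True)
# The dark-mode memory ranks first even though "UI preferences" shares
# no keyword with it -- that is the point of semantic over keyword search.
```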

Setup

1. Sign up for FREE account

Visit https://memorylayer.clawbot.hk and sign up with Google. You'll get:

  • 10,000 operations/month
  • 1GB storage
  • Community support

2. Configure credentials

```shell
# Option 1: Email/Password
export MEMORYLAYER_EMAIL=your@email.com
export MEMORYLAYER_PASSWORD=your_password

# Option 2: API Key (recommended for production)
export MEMORYLAYER_API_KEY=ml_your_api_key_here
```

3. Install Python SDK (if not using skill wrapper)

```shell
pip install memorylayer
```

Usage

Basic Example

```javascript
// In your Clawdbot agent
const memory = require('memorylayer');

// Store a memory
await memory.remember(
  'User prefers dark mode UI',
  { type: 'semantic', importance: 0.8 }
);

// Search memories
const results = await memory.search('UI preferences');
console.log(results[0].content); // "User prefers dark mode UI"
```

Python Example

```python
from plugins.memorylayer import memory

# Store
memory.remember(
    "Boss prefers direct reporting with zero bullshit",
    memory_type="semantic",
    importance=0.9
)

# Search
results = memory.recall("What are Boss's preferences?")
for r in results:
    print(f"{r.relevance_score:.2f}: {r.memory.content}")
```

Token Savings

Before MemoryLayer:

```python
# Inject entire memory files
context = open('MEMORY.md').read()  # 10,500 tokens
prompt = f"{context}\n\nUser: What are my preferences?"
```

After MemoryLayer:

```python
# Inject only relevant memories
context = memory.get_context("user preferences", limit=5)  # ~500 tokens
prompt = f"{context}\n\nUser: What are my preferences?"
```

Result: 95% token reduction, $900/month savings at scale

API Reference

memory.remember(content, options)

Store a new memory.

Parameters:

  • content (string): Memory content
  • options.type (string): 'episodic' | 'semantic' | 'procedural'
  • options.importance (number): 0.0 to 1.0
  • options.metadata (object): Additional tags/data

Returns: Memory object with id

memory.search(query, limit)

Search memories semantically.

Parameters:

  • query (string): Search query (natural language)
  • limit (number): Max results (default: 10)

Returns: Array of SearchResult objects

memory.get_context(query, limit)

Get formatted context for prompt injection.

Parameters:

  • query (string): What context do you need?
  • limit (number): Max memories (default: 5)

Returns: Formatted string ready for prompt
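As a rough illustration of what a `get_context`-style helper produces, here is a hypothetical sketch. The exact string MemoryLayer returns is an assumption; only the `## Relevant Memories` header is hinted at in the README's example output.

```python
# Hypothetical sketch of a get_context-style formatter. format_context is
# NOT part of the MemoryLayer SDK; the "## Relevant Memories" header is
# the only detail taken from the documented example output.
def format_context(snippets, limit=5):
    lines = ["## Relevant Memories"]
    lines += [f"- {s}" for s in snippets[:limit]]
    return "\n".join(lines)

context = format_context(["User prefers dark mode UI", "User likes blue"])
prompt = f"{context}\n\nUser: What are my preferences?"
```

The returned string drops straight into a prompt, which is why the API hands back formatted text rather than raw result objects.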

memory.stats()

Get usage statistics.

Returns: Object with total_memories, memory_types, operations_this_month

Advanced

Memory Types

Episodic - Events and experiences

```javascript
memory.remember('Deployed MemoryLayer on 2026-02-03', { type: 'episodic' });
```

Semantic - Facts and knowledge

```javascript
memory.remember('Boss prefers concise reports', { type: 'semantic' });
```

Procedural - How-to and processes

```javascript
memory.remember('To restart server: ssh root@... && systemctl restart...', { type: 'procedural' });
```

Metadata Tagging

```javascript
memory.remember('User likes blue', {
  type: 'semantic',
  metadata: {
    category: 'preferences',
    subcategory: 'colors',
    source: 'user_profile'
  }
});
```

Usage Tracking

```javascript
const stats = await memory.stats();
console.log(`Total memories: ${stats.total_memories}`);
console.log(`Operations this month: ${stats.operations_this_month}`);
console.log(`Plan: ${stats.plan} (${stats.operations_limit}/month)`);
```

Pricing

FREE Plan (Current)

  • 10,000 operations/month
  • 1GB storage
  • Community support

Pro Plan ($99/mo)

  • 1M operations/month
  • 10GB storage
  • Email support
  • 99.9% SLA

Enterprise (Custom)

  • Unlimited operations
  • Unlimited storage
  • Dedicated support
  • Self-hosted option
  • Custom SLA
README.md

MemoryLayer ClawdBot Skill

Semantic memory for AI agents with 95% token savings.

Install with ClawdBot Homepage

🎯 What is MemoryLayer?

MemoryLayer provides semantic long-term memory for AI agents, replacing bloated file-based memory systems with efficient vector search.

The Problem:

  • Dumping entire chat history = 10,500+ tokens per request
  • Keyword search misses semantic matches
  • File-based memory doesn't scale
  • Cost: $945/month at 30K requests

The Solution:

  • Semantic search via embeddings
  • 95% token reduction (10.5K → 500 tokens)
  • <200ms retrieval
  • Cost: $45/month at 30K requests

Savings: $900/month 💰
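The figures above can be checked with simple arithmetic. The per-token price is not stated anywhere in this document; the ~$3 per million input tokens below is an assumption inferred from $945 across 315M tokens (10,500 tokens × 30K requests).

```python
# Back-of-the-envelope check of the numbers above. PRICE_PER_M is an
# assumption derived from the stated $945/month at 30K requests of
# 10,500 tokens each; it is not a documented MemoryLayer price.
REQUESTS = 30_000
PRICE_PER_M = 945 / (10_500 * REQUESTS / 1_000_000)  # about $3.00 per M tokens

def monthly_cost(tokens_per_request):
    return tokens_per_request * REQUESTS / 1_000_000 * PRICE_PER_M

before = monthly_cost(10_500)   # $945/month
after = monthly_cost(500)       # $45/month
savings = before - after        # $900/month
reduction = 1 - 500 / 10_500    # ~0.952, i.e. the "95% token reduction"
```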

🚀 Quick Start

Install

```shell
clawdbot skill install memorylayer
```

Note for developers: If cloning from GitHub, run npm install first to install dependencies.

Setup

```shell
# Sign up for FREE account at https://memorylayer.clawbot.hk
# Then configure credentials:

export MEMORYLAYER_EMAIL=your@email.com
export MEMORYLAYER_PASSWORD=your_password
```

Usage

JavaScript:

```javascript
const memory = require('memorylayer');

// Store a memory
await memory.remember(
  'User prefers dark mode UI',
  { type: 'semantic', importance: 0.8 }
);

// Search memories
const results = await memory.search('UI preferences');
console.log(results[0].content); // "User prefers dark mode UI"

// Get formatted context for prompt injection
const context = await memory.get_context('user preferences', 5);
// Returns: "## Relevant Memories\n- User prefers dark mode..."
```

Python:

```python
from memorylayer import memory

# Store
memory.remember(
    "User prefers dark mode UI",
    memory_type="semantic",
    importance=0.8
)

# Search
results = memory.recall("UI preferences")
for r in results:
    print(f"{r.relevance_score:.2f}: {r.memory.content}")
```

📊 Token Savings Example

Before MemoryLayer:

```python
# Inject entire memory files
context = open('MEMORY.md').read()  # 10,500 tokens
prompt = f"{context}\n\nUser: What are my preferences?"
```

After MemoryLayer:

```python
# Inject only relevant memories
context = memory.get_context("user preferences", limit=5)  # ~500 tokens
prompt = f"{context}\n\nUser: What are my preferences?"
```

Result: 95% token reduction, $900/month savings at scale

🌟 Features

  • Semantic Search - Find by meaning, not keywords
  • Multi-tenant - Isolated memory per agent
  • Fast - <200ms average search time
  • Memory Types - Episodic, semantic, procedural
  • FREE Plan - 10,000 operations/month
  • Dual Language - JavaScript + Python support

📖 API Reference

memory.remember(content, options)

Store a new memory.

Parameters:

  • content (string): Memory content
  • options.type (string): 'episodic' | 'semantic' | 'procedural'
  • options.importance (number): 0.0 to 1.0
  • options.metadata (object): Additional tags/data

Returns: Memory object with id

memory.search(query, limit)

Search memories semantically.

Parameters:

  • query (string): Search query (natural language)
  • limit (number): Max results (default: 10)

Returns: Array of SearchResult objects

memory.get_context(query, limit)

Get formatted context for prompt injection.

Parameters:

  • query (string): What context do you need?
  • limit (number): Max memories (default: 5)

Returns: Formatted string ready for prompt

memory.stats()

Get usage statistics.

Returns: Object with total_memories, memory_types, operations_this_month

💰 Pricing

FREE Plan

  • 10,000 operations/month
  • 1GB storage
  • Community support
  • Perfect for side projects

Pro Plan ($99/mo)

  • 1M operations/month
  • 10GB storage
  • Email support
  • 99.9% SLA

Enterprise (Custom)

  • Unlimited operations
  • Unlimited storage
  • Dedicated support
  • Self-hosted option

📝 Examples

See the examples/ directory for:

  • basic-usage.js - Simple remember + search demo
  • agent-integration.js - Agent workflow integration
  • token-savings-demo.js - Before/after ROI comparison

📄 License

MIT


Built by QuantechCo | Powered by MemoryLayer

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

  • OpenClaw CLI installed and configured.
  • Language: Markdown
  • License: MIT
  • Topics: coding

FAQ

How do I install MemoryLayer?

Run openclaw add @khli01/memorylayer in your terminal. This installs MemoryLayer into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/khli01/memorylayer. Review commits and README documentation before installing.