
context-optimizer – OpenClaw Skill

by ad2546

context-optimizer is an OpenClaw Skills integration for devops workflows. It provides advanced context management with auto-compaction and dynamic context optimization for DeepSeek's 64k context window, featuring intelligent compaction (merging, summarizing, extracting), query-aware relevance scoring, and a hierarchical memory system backed by a context archive. Optimization events are logged to chat.

295 stars · 5.1k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topic: devops

Skill Snapshot

name: context-optimizer
description: Advanced context management with auto-compaction and dynamic context optimization for DeepSeek's 64k context window. Features intelligent compaction (merging, summarizing, extracting), query-aware relevance scoring, and hierarchical memory system with context archive. Logs optimization events to chat. OpenClaw Skills integration.
owner: ad2546
repository: ad2546/context-optimizer
language: Markdown
license: MIT
topics: devops
security: L1
install: openclaw add @ad2546/context-optimizer
last updated: Feb 7, 2026

Maintainer

ad2546

Maintains context-optimizer in the OpenClaw Skills directory.

File Explorer (14 files)

.
├── examples/
│   ├── clawdbot-integration.js (5.7 KB)
│   └── simple-integration.js (8.6 KB)
├── lib/
│   └── index.js (63.8 KB)
├── scripts/
│   └── cli.js (5.2 KB)
├── _meta.json (286 B)
├── chat-logger.js (3.1 KB)
├── INSTALL.md (5.8 KB)
├── package.json (532 B)
├── README.md (4.2 KB)
├── SKILL.md (6.2 KB)
└── SUMMARY.md (3.8 KB)
SKILL.md

---
name: context-optimizer
description: Advanced context management with auto-compaction and dynamic context optimization for DeepSeek's 64k context window. Features intelligent compaction (merging, summarizing, extracting), query-aware relevance scoring, and hierarchical memory system with context archive. Logs optimization events to chat.
homepage: https://github.com/clawdbot/clawdbot
metadata:
  clawdbot:
    emoji: "🧠"
    requires:
      bins: []
      npm: ["tiktoken", "@xenova/transformers"]
    install:
      - id: npm
        kind: npm
        label: Install Context Pruner dependencies
        command: "cd ~/.clawdbot/skills/context-pruner && npm install"
---

Context Pruner

Advanced context management optimized for DeepSeek's 64k context window. Provides intelligent pruning, compression, and token optimization to prevent context overflow while preserving important information.

Key Features

  • DeepSeek-optimized: Specifically tuned for 64k context window
  • Adaptive pruning: Multiple strategies based on context usage
  • Semantic deduplication: Removes redundant information
  • Priority-aware: Preserves high-value messages
  • Token-efficient: Minimizes token overhead
  • Real-time monitoring: Continuous context health tracking

Quick Start

Auto-compaction with dynamic context:

```javascript
import { createContextPruner } from './lib/index.js';

const pruner = createContextPruner({
  contextLimit: 64000,  // DeepSeek's limit
  autoCompact: true,    // Enable automatic compaction
  dynamicContext: true, // Enable dynamic relevance-based context
  strategies: ['semantic', 'temporal', 'extractive', 'adaptive'],
  queryAwareCompaction: true, // Compact based on current query relevance
});

await pruner.initialize();

// Process messages with auto-compaction and dynamic context
const processed = await pruner.processMessages(messages, currentQuery);

// Get context health status
const status = pruner.getStatus();
console.log(`Context health: ${status.health}, Relevance scores: ${status.relevanceScores}`);

// Manual compaction when needed
const compacted = await pruner.autoCompact(messages, currentQuery);
```

Archive Retrieval (Hierarchical Memory):

```javascript
// When something isn't in current context, search the archive
const archiveResult = await pruner.retrieveFromArchive('query about previous conversation', {
  maxContextTokens: 1000,
  minRelevance: 0.4,
});

if (archiveResult.found) {
  // Add relevant snippets to current context
  const archiveContext = archiveResult.snippets.join('\n\n');
  // Use archiveContext in your prompt
  console.log(`Found ${archiveResult.sources.length} relevant sources`);
  console.log(`Retrieved ${archiveResult.totalTokens} tokens from archive`);
}
```

Auto-Compaction Strategies

  1. Semantic Compaction: Merges similar messages instead of removing them
  2. Temporal Compaction: Summarizes older conversations by time windows
  3. Extractive Compaction: Extracts key information from verbose messages
  4. Adaptive Compaction: Chooses best strategy based on message characteristics
  5. Dynamic Context: Filters messages based on relevance to current query
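
To make the adaptive option concrete, here is a minimal, hypothetical sketch of how a selector could route between the other strategies. None of these names come from lib/index.js; similarity() is a crude word-overlap stand-in for the embedding-based score the skill actually computes.

```javascript
// Illustrative only: a stand-in for the adaptive strategy's routing logic.
// similarity() approximates the library's embedding-based score with
// simple word overlap so the sketch runs on its own.
function similarity(a, b) {
  const wa = new Set(a.toLowerCase().split(/\s+/));
  const wb = new Set(b.toLowerCase().split(/\s+/));
  const shared = [...wa].filter((w) => wb.has(w)).length;
  return shared / Math.max(wa.size, wb.size);
}

function chooseStrategy(messages) {
  const avgLength =
    messages.reduce((sum, m) => sum + m.content.length, 0) / messages.length;
  const hasNearDuplicates = messages.some((m, i) =>
    messages.slice(i + 1).some((n) => similarity(m.content, n.content) > 0.85)
  );
  if (hasNearDuplicates) return 'semantic';  // merge redundant messages first
  if (avgLength > 2000) return 'extractive'; // condense verbose messages
  return 'temporal';                         // otherwise summarize old windows
}
```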

Dynamic Context Management

  • Query-aware Relevance: Scores messages based on similarity to current query
  • Relevance Decay: Relevance scores decay over time for older conversations
  • Adaptive Filtering: Automatically filters low-relevance messages
  • Priority Integration: Combines message priority with semantic relevance
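
As an illustration of how these four pieces might combine, the sketch below filters messages by a decayed, priority-weighted similarity to the current query. The weighting and formula are assumptions, not the library's internals; it reuses the word-overlap similarity() from the previous sketch in place of real embeddings.

```javascript
// Hypothetical query-aware filter: decay by age, blend in priority,
// drop anything under minRelevanceScore (cf. the Configuration section).
function filterByRelevance(messages, query, opts = {}) {
  const { relevanceDecay = 0.95, minRelevanceScore = 0.3 } = opts;
  const last = messages.length - 1;
  return messages.filter((msg, i) => {
    if (msg.role === 'system') return true;          // preserveSystem
    const age = last - i;                            // older => more decay
    const semantic = similarity(msg.content, query); // 0..1, see above
    const priority = (msg.priority ?? 5) / 10;       // normalize to 0..1
    const score = (0.7 * semantic + 0.3 * priority) * relevanceDecay ** age;
    return score >= minRelevanceScore;
  });
}
```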

Hierarchical Memory System

The context archive takes a RAM-versus-storage approach (a sketch of the flow follows this list):

  • Current Context (RAM): Limited (64k tokens), fast access, auto-compacted
  • Archive (Storage): Larger (100MB), slower but searchable
  • Smart Retrieval: When information isn't in current context, efficiently search archive
  • Selective Loading: Extract only relevant snippets, not entire documents
  • Automatic Storage: Compacted content automatically stored in archive
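
Putting the two tiers together, a request loop built on the API shown above might look like the following; only processMessages, retrieveFromArchive, and the fields they return are taken from this document, and the wrapper itself is illustrative.

```javascript
// Illustrative two-tier flow: compacted history stays in "RAM", and
// evicted material remains reachable through the archive.
async function buildPromptContext(pruner, messages, query) {
  // Tier 1: auto-compacted current context (fits the 64k window)
  const current = await pruner.processMessages(messages, query);

  // Tier 2: fetch archived snippets only when the query needs them
  const archived = await pruner.retrieveFromArchive(query, {
    maxContextTokens: 1000,
    minRelevance: 0.4,
  });
  const archiveContext = archived.found ? archived.snippets.join('\n\n') : '';

  return { current, archiveContext };
}
```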

Configuration

```javascript
{
  contextLimit: 64000, // DeepSeek's context window
  autoCompact: true, // Enable automatic compaction
  compactThreshold: 0.75, // Start compacting at 75% usage
  aggressiveCompactThreshold: 0.9, // Aggressive compaction at 90%

  dynamicContext: true, // Enable dynamic context management
  relevanceDecay: 0.95, // Relevance decays 5% per time step
  minRelevanceScore: 0.3, // Minimum relevance to keep
  queryAwareCompaction: true, // Compact based on current query relevance

  strategies: ['semantic', 'temporal', 'extractive', 'adaptive'],
  preserveRecent: 10, // Always keep last N messages
  preserveSystem: true, // Always keep system messages
  minSimilarity: 0.85, // Semantic similarity threshold

  // Archive settings
  enableArchive: true, // Enable hierarchical memory system
  archivePath: './context-archive',
  archiveSearchLimit: 10,
  archiveMaxSize: 100 * 1024 * 1024, // 100MB
  archiveIndexing: true,

  // Chat logging
  logToChat: true, // Log optimization events to chat
  chatLogLevel: 'brief', // 'brief', 'detailed', or 'none'
  chatLogFormat: '📊 {action}: {details}', // Format for chat messages

  // Performance
  batchSize: 5, // Messages to process in batch
  maxCompactionRatio: 0.5, // Maximum 50% compaction in one pass
}
```

Chat Logging

The context optimizer can log events directly to chat:

```javascript
// Example chat log messages:
// 📊 Context optimized: Compacted 15 messages → 8 (47% reduction)
// 📊 Archive search: Found 3 relevant snippets (42% similarity)
// 📊 Dynamic context: Filtered 12 low-relevance messages

// Configure logging:
const pruner = createContextPruner({
  logToChat: true,
  chatLogLevel: 'brief', // Options: 'brief', 'detailed', 'none'
  chatLogFormat: '📊 {action}: {details}',

  // Custom log handler (optional)
  onLog: (level, message, data) => {
    if (level === 'info' && data.action === 'compaction') {
      // Send to chat
      console.log(`🧠 Context optimized: ${message}`);
    }
  }
});
```

Integration with Clawdbot

Add to your Clawdbot config:

```yaml
skills:
  context-pruner:
    enabled: true
    config:
      contextLimit: 64000
      autoPrune: true
```

The pruner will automatically monitor context usage and apply appropriate pruning strategies to stay within DeepSeek's 64k limit.

README.md

Context Pruner

Advanced context management optimized for DeepSeek's 64k context window. Provides intelligent pruning, compression, and token optimization to prevent context overflow while preserving important information.

Features

  • DeepSeek-optimized: Specifically tuned for 64k context window
  • Multiple pruning strategies: Semantic, temporal, and extractive compression
  • Adaptive pruning: Different strategies based on context usage levels
  • Priority-aware: Preserves high-priority and system messages
  • Real-time monitoring: Continuous context health tracking
  • Token-efficient: Minimizes token overhead from pruning operations

Installation

```bash
# Install dependencies
npm install

# Or install globally for CLI use
npm install -g .
```

Quick Start

```javascript
import { createContextPruner } from './lib/index.js';

const pruner = createContextPruner({
  contextLimit: 64000, // DeepSeek's limit
  autoPrune: true,
  strategies: ['semantic', 'temporal', 'extractive'],
});

await pruner.initialize();

// Process messages with automatic pruning
const messages = [
  { role: 'user', content: 'Hello!', priority: 5 },
  { role: 'assistant', content: 'Hi there!', priority: 5 },
  // ... more messages
];

const processed = await pruner.processMessages(messages);

// Get status
const status = pruner.getStatus();
console.log(`Health: ${status.health}`);
console.log(`Tokens: ${status.tokens.used}/${status.tokens.limit}`);
```

CLI Usage

```bash
# Run tests
node scripts/cli.js test

# Show status
node scripts/cli.js status

# Prune a JSON file
node scripts/cli.js prune input.json output.json

# Show statistics
node scripts/cli.js stats
```

Pruning Strategies

1. Semantic Pruning

Removes semantically similar messages using embeddings. Useful for eliminating redundant information.

2. Temporal Pruning

Removes older messages first, preserving recent conversation. Configurable preservation of recent messages.

3. Extractive Compression

Summarizes groups of messages using extractive summarization. Preserves key information while reducing token count.
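
As a sketch of what semantic deduplication amounts to (the dedupe() and embed() names are illustrative, not exports of lib/index.js): keep a message only if no already-kept message is within the minSimilarity threshold.

```javascript
// Illustrative semantic deduplication. embed() is assumed to return a
// normalized embedding vector (see Performance below for one way to get
// these locally with @xenova/transformers).
function cosine(a, b) {
  let dot = 0;
  for (let i = 0; i < a.length; i++) dot += a[i] * b[i];
  return dot; // vectors assumed normalized, so dot product = cosine
}

async function dedupe(messages, embed, minSimilarity = 0.85) {
  const kept = [];
  const vectors = [];
  for (const msg of messages) {
    const v = await embed(msg.content);
    if (vectors.some((u) => cosine(u, v) >= minSimilarity)) continue; // redundant
    kept.push(msg);
    vectors.push(v);
  }
  return kept;
}
```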

Configuration Options

| Option | Default | Description |
| --- | --- | --- |
| `contextLimit` | 64000 | DeepSeek's context window size |
| `model` | 'deepseek-chat' | Model-specific optimizations |
| `warningThreshold` | 0.7 | Warn at 70% usage |
| `pruneThreshold` | 0.8 | Start pruning at 80% usage |
| `emergencyThreshold` | 0.95 | Aggressive pruning at 95% usage |
| `strategies` | ['semantic', 'temporal', 'extractive'] | Pruning strategies to use |
| `autoPrune` | true | Enable automatic pruning |
| `preserveRecent` | 10 | Always keep the last N messages |
| `preserveSystem` | true | Always keep system messages |
| `preserveHighPriority` | 8 | Priority threshold for preservation |
| `minSimilarity` | 0.85 | Semantic deduplication threshold |
| `summarizer` | null | Optional LLM summarizer function |
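
The summarizer option is the only hook in this table without an example elsewhere in the README. The sketch below wires in an LLM-backed summarizer; the callLLM() stub and the assumed signature (message array in, summary string out) are guesses to be checked against lib/index.js.

```javascript
import { createContextPruner } from './lib/index.js';

// Stand-in for whatever LLM client you use; replace with a real call.
async function callLLM(prompt) {
  return `Summary (stub): ${prompt.slice(0, 80)}...`;
}

const pruner = createContextPruner({
  contextLimit: 64000,
  // Assumed signature: receives the group of messages being compressed
  // and resolves to a replacement summary string.
  summarizer: async (messages) => {
    const text = messages.map((m) => `${m.role}: ${m.content}`).join('\n');
    return callLLM(`Summarize, keeping key facts and decisions:\n${text}`);
  },
});
```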

Integration with Clawdbot

As a Skill

  1. Copy the context-pruner folder to your Clawdbot skills directory
  2. Add to your Clawdbot config:
```yaml
skills:
  context-pruner:
    enabled: true
    config:
      contextLimit: 64000
      autoPrune: true
      strategies: ['semantic', 'temporal', 'extractive']
```

Direct Integration

```javascript
import { ClawdbotContextManager } from './examples/clawdbot-integration.js';

const contextManager = new ClawdbotContextManager();
await contextManager.initialize();

// Add messages
await contextManager.addMessage('user', 'Hello!', 6);

// Get pruned context
const context = await contextManager.getContext();

// Check status
const status = contextManager.getStatus();
```

Health Status

The pruner monitors context usage and reports health status:

  • HEALTHY: Below 70% usage
  • WARNING: 70-80% usage (mild pruning may occur)
  • PRUNE: 80-95% usage (active pruning)
  • EMERGENCY: Above 95% usage (aggressive pruning)
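
The mapping from usage to state is a simple ratio check against the thresholds in the configuration table; an illustrative version (the function itself is not part of the skill's API):

```javascript
// Illustrative mapping from token usage to the health states above.
function contextHealth(usedTokens, limit = 64000) {
  const usage = usedTokens / limit;
  if (usage >= 0.95) return 'EMERGENCY'; // aggressive pruning
  if (usage >= 0.8) return 'PRUNE';      // active pruning
  if (usage >= 0.7) return 'WARNING';    // mild pruning may occur
  return 'HEALTHY';
}

console.log(contextHealth(52000)); // ~81% of 64k -> 'PRUNE'
```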

Performance

  • Token counting: Uses tiktoken for accurate token estimation
  • Embeddings: Uses Xenova/transformers for local semantic analysis
  • Memory: Lightweight, with configurable caching
  • Speed: Optimized for real-time conversation processing
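
Both libraries are declared in the SKILL.md frontmatter. A minimal standalone sketch of typical usage follows, with the caveat that the cl100k_base encoding and the MiniLM model are common defaults assumed here, not necessarily what lib/index.js selects for deepseek-chat:

```javascript
import { get_encoding } from 'tiktoken';
import { pipeline } from '@xenova/transformers';

// Token counting (encoding choice is an assumption).
const enc = get_encoding('cl100k_base');
console.log('tokens:', enc.encode('Hello, DeepSeek!').length);
enc.free(); // tiktoken encoders hold WASM memory; free when done

// Local embeddings (model choice is an assumption).
const embed = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
const out = await embed('Hello, DeepSeek!', { pooling: 'mean', normalize: true });
console.log('embedding dims:', out.data.length); // 384 for MiniLM-L6
```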

Testing

```bash
# Run the test suite
npm test

# Or directly
node lib/index.test.js
```

License

MIT

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

  • OpenClaw CLI installed and configured.
  • Language: Markdown
  • License: MIT
  • Topics: devops


FAQ

How do I install context-optimizer?

Run openclaw add @ad2546/context-optimizer in your terminal. This installs context-optimizer into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/ad2546/context-optimizer. Review commits and README documentation before installing.