
by manvinder01

langcache – OpenClaw Skill

langcache is an OpenClaw Skills integration for AI/ML workflows. This skill should be used when the user asks to "enable semantic caching", "cache LLM responses", "reduce API costs", "speed up AI responses", "configure LangCache", "search the semantic cache", "store responses in cache", or mentions Redis LangCache, semantic similarity caching, or LLM response caching. It provides integration with Redis LangCache, a managed service for semantic caching of prompts and responses.

9.2k stars · 4.2k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topics: ai, ml

Skill Snapshot

name: langcache
description: This skill should be used when the user asks to "enable semantic caching", "cache LLM responses", "reduce API costs", "speed up AI responses", "configure LangCache", "search the semantic cache", "store responses in cache", or mentions Redis LangCache, semantic similarity caching, or LLM response caching. Provides integration with Redis LangCache managed service for semantic caching of prompts and responses. OpenClaw Skills integration.
owner: manvinder01
repository: manvinder01/openclaw-langcache
language: Markdown
license: MIT
topics: ai, ml
security: L1
install: openclaw add @manvinder01/openclaw-langcache
last updated: Feb 7, 2026

Maintainer

manvinder01

Maintains langcache in the OpenClaw Skills directory.
File Explorer
10 files

.
├── examples/
│   ├── agent-integration.py (15.4 KB)
│   └── basic-caching.sh (1.8 KB)
├── references/
│   ├── api-reference.md (4.9 KB)
│   └── best-practices.md (5.6 KB)
├── scripts/
│   └── langcache.sh (14.7 KB)
├── _meta.json (314 B)
└── SKILL.md (5.6 KB)
SKILL.md

---
name: langcache
description: This skill should be used when the user asks to "enable semantic caching", "cache LLM responses", "reduce API costs", "speed up AI responses", "configure LangCache", "search the semantic cache", "store responses in cache", or mentions Redis LangCache, semantic similarity caching, or LLM response caching. Provides integration with Redis LangCache managed service for semantic caching of prompts and responses.
version: 1.0.0
tools: Read, Bash, WebFetch
---

Redis LangCache Semantic Caching

This skill integrates Redis LangCache, a fully-managed semantic caching service, into OpenClaw workflows. LangCache stores LLM prompts and responses, returning cached results for semantically similar queries to reduce costs and latency.

Prerequisites

Before using LangCache, ensure the following environment variables are configured:

LANGCACHE_HOST=<your-langcache-host>
LANGCACHE_CACHE_ID=<your-cache-id>
LANGCACHE_API_KEY=<your-api-key>

Store these in ~/.openclaw/secrets.env or configure them in the OpenClaw settings.

Core Operations

Search for Cached Response

Before calling an LLM, check if a semantically similar response exists:

./scripts/langcache.sh search "What is semantic caching?"

With similarity threshold (0.0-1.0, higher = stricter match):

./scripts/langcache.sh search "What is semantic caching?" --threshold 0.95

With attribute filtering:

./scripts/langcache.sh search "What is semantic caching?" --attr "model=gpt-5"
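The threshold's effect can be illustrated with plain cosine similarity. This is a sketch only: LangCache computes similarity server-side with its own embedding model, so `cosine_similarity` and `is_hit` here are hypothetical stand-ins that show why a higher threshold means a stricter match.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_hit(score, threshold=0.9):
    # Higher threshold -> stricter match -> fewer (but safer) cache hits.
    return score >= threshold

query_vec = [0.9, 0.1, 0.0]
cached_vec = [1.0, 0.0, 0.0]
score = cosine_similarity(query_vec, cached_vec)  # ~0.994 for this near-identical pair
print(is_hit(score, threshold=0.90))   # passes a 0.90 threshold
print(is_hit(score, threshold=0.999))  # fails a stricter 0.999 threshold
```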

Store New Response

After receiving an LLM response, cache it for future use:

./scripts/langcache.sh store "What is semantic caching?" "Semantic caching stores responses based on meaning similarity..."

With attributes for filtering/organization:

./scripts/langcache.sh store "prompt" "response" --attr "model=gpt-5" --attr "user_id=123"

Delete Cached Entries

By entry ID:

./scripts/langcache.sh delete --id "<entry-id>"

By attributes:

./scripts/langcache.sh delete --attr "user_id=123"

Flush Cache

Clear all entries (use with caution):

./scripts/langcache.sh flush

Integration Pattern

The recommended pattern for integrating LangCache into agent workflows:

1. Receive user prompt
2. Search LangCache for similar cached response
3. If cache hit (similarity >= threshold):
   - Return cached response immediately
   - Log cache hit for observability
4. If cache miss:
   - Call LLM API
   - Store prompt + response in LangCache
   - Return LLM response
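The steps above can be sketched in a few lines of Python. This is a minimal mock, not the real integration: an in-memory dict stands in for the LangCache service, `similarity` is a toy exact-match placeholder for embedding search, and `call_llm` is a hypothetical stand-in for your LLM client.

```python
def similarity(a: str, b: str) -> float:
    """Toy similarity: exact match only. LangCache uses embeddings instead."""
    return 1.0 if a.strip().lower() == b.strip().lower() else 0.0

cache: dict[str, str] = {}  # prompt -> response

def call_llm(prompt: str) -> str:
    return f"LLM answer for: {prompt}"

def answer(prompt: str, threshold: float = 0.9) -> tuple[str, bool]:
    """Return (response, cache_hit) following the search-then-store pattern."""
    # Steps 1-3: search the cache; on a hit, return the cached response
    for cached_prompt, cached_response in cache.items():
        if similarity(prompt, cached_prompt) >= threshold:
            return cached_response, True  # log this hit for observability
    # Step 4: cache miss -> call the LLM, store the pair, return the response
    response = call_llm(prompt)
    cache[prompt] = response
    return response, False

first, hit1 = answer("What is semantic caching?")   # miss: calls the LLM and stores
second, hit2 = answer("what is semantic caching?")  # hit: served from the cache
```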

Default Caching Policy

This policy is enforced automatically. All cache operations MUST respect these rules.

CACHEABLE (white-list)

| Category | Examples | Threshold |
|---|---|---|
| Factual Q&A | "What is X?", "How does Y work?" | 0.90 |
| Definitions / docs / help text | API docs, command help, explanations | 0.90 |
| Command explanations | "What does git rebase do?" | 0.92 |
| Reusable reply templates | "polite no", "follow-up", "scheduling", "intro" | 0.88 |
| Style transforms | "make this warmer/shorter/firmer" | 0.85 |
| Generic communication scripts | negotiation templates, professional responses | 0.88 |

NEVER CACHE (hard blocks)

These patterns are blocked at the code level - cache operations will refuse to store them.

| Category | Patterns to Detect | Reason |
|---|---|---|
| Temporal info | today, tomorrow, this week, deadline, ETA, "in X minutes", appointments, schedules | Stale immediately |
| Credentials | API keys, tokens, passwords, OTP, 2FA codes, secrets | Security risk |
| Identifiers | phone numbers, emails, addresses, account IDs, order numbers, message IDs, chat IDs, JIDs | Privacy / PII |
| Personal context | names + relationships, private history, "who said what", specific conversations | Privacy / context-dependent |

Detection Patterns

The following regex patterns trigger a hard block:

# Temporal
\b(today|tomorrow|tonight|yesterday)\b
\b(this|next|last)\s+(week|month|year|monday|tuesday|...)\b
\b(in\s+\d+\s+(minutes?|hours?|days?))\b
\b(deadline|eta|appointment|schedule[d]?)\b

# Credentials
\b(api[_-]?key|token|password|secret|otp|2fa)\b
\b(bearer|auth[orization]*)\s+\S+

# Identifiers
\b\d{10,}\b                          # phone numbers, long IDs
\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+   # emails
\b(order|account|message|chat)[_-]?id\b

# Personal context
\b(my\s+(wife|husband|partner|friend|boss|mom|dad|brother|sister))\b
\b(said\s+to\s+me|told\s+me|between\s+us)\b
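A hard-block check along these lines is straightforward to implement. The sketch below mirrors the documented categories with a subset of the patterns above; the exact pattern set shipped in `scripts/langcache.sh` may differ.

```python
import re

# Subset of the documented hard-block patterns, one list per category.
BLOCKED_PATTERNS = [
    # Temporal
    r"\b(today|tomorrow|tonight|yesterday)\b",
    r"\b(in\s+\d+\s+(minutes?|hours?|days?))\b",
    r"\b(deadline|eta|appointment|scheduled?)\b",
    # Credentials
    r"\b(api[_-]?key|token|password|secret|otp|2fa)\b",
    # Identifiers
    r"\b\d{10,}\b",                         # phone numbers, long IDs
    r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+",  # emails
    # Personal context
    r"\b(my\s+(wife|husband|partner|friend|boss|mom|dad|brother|sister))\b",
]

def is_cacheable(text: str) -> bool:
    """Refuse to cache any text matching a hard-block pattern."""
    return not any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS)

print(is_cacheable("What is semantic caching?"))      # safe to cache
print(is_cacheable("My meeting is tomorrow at 3pm"))  # temporal -> blocked
print(is_cacheable("here is my api_key"))             # credential -> blocked
```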

Attribute Strategies

Use attributes to partition the cache:

  • model: LLM model used (useful when switching models)
  • category: factual, template, style, command
  • skill: Which skill generated the response
  • version: API or prompt version
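Attribute partitioning amounts to an equality filter over per-entry metadata, mirroring the `--attr "model=gpt-5"` flag above. The entry shape below is a hypothetical illustration, not LangCache's actual storage format.

```python
# Each cached entry carries an attribute dict used for partitioning.
entries = [
    {"prompt": "What is X?", "response": "X is ...",
     "attrs": {"model": "gpt-5", "category": "factual"}},
    {"prompt": "What is X?", "response": "X is ... (older)",
     "attrs": {"model": "gpt-4", "category": "factual"}},
]

def filter_by_attrs(entries, **wanted):
    """Keep entries whose attributes include every requested key=value pair."""
    return [e for e in entries
            if all(e["attrs"].get(k) == v for k, v in wanted.items())]

# Only the gpt-5 entry survives, so a model switch never serves stale answers.
matches = filter_by_attrs(entries, model="gpt-5")
```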

Search Strategies

LangCache supports two search strategies:

  • semantic (default): Vector similarity matching
  • exact: Case-insensitive exact match

Combine both for hybrid search:

./scripts/langcache.sh search "prompt" --strategy "exact,semantic"
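The hybrid order matters: the exact (case-insensitive) lookup runs first because it is cheap and unambiguous, and semantic matching only runs on an exact miss. In this sketch, `difflib` string similarity is a toy placeholder for LangCache's embedding search.

```python
import difflib

cache = {"what is semantic caching?": "Caching by meaning similarity."}

def search(prompt: str, threshold: float = 0.8):
    """Return (response, strategy) using exact-then-semantic search."""
    key = prompt.strip().lower()
    # Strategy 1: exact, case-insensitive
    if key in cache:
        return cache[key], "exact"
    # Strategy 2: semantic fallback (toy string similarity as a placeholder)
    for cached_key, response in cache.items():
        score = difflib.SequenceMatcher(None, key, cached_key).ratio()
        if score >= threshold:
            return response, "semantic"
    return None, "miss"
```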

Observability

Monitor cache performance:

  • Track hit/miss ratios
  • Log similarity scores for hits
  • Alert on high miss rates (may indicate threshold too high)
  • Review stored entries periodically for relevance


README.md

No README available.

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

Before using LangCache, ensure the following environment variables are configured:

```bash
LANGCACHE_HOST=<your-langcache-host>
LANGCACHE_CACHE_ID=<your-cache-id>
LANGCACHE_API_KEY=<your-api-key>
```

Store these in `~/.openclaw/secrets.env` or configure them in the OpenClaw settings.

FAQ

How do I install langcache?

Run `openclaw add @manvinder01/openclaw-langcache` in your terminal. This installs langcache into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/manvinder01/openclaw-langcache. Review commits and README documentation before installing.