
by okaris

llm-models – OpenClaw Skill

llm-models is an OpenClaw Skills integration for coding workflows.

5.6k stars · 4.4k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topic: coding

Skill Snapshot

name: llm-models
description: OpenClaw Skills integration.
owner: okaris
repository: okaris/inference-sh (path: llm-models)
language: Markdown
license: MIT
topics: coding
security: L1
install: openclaw add @okaris/inference-sh:llm-models
last updated: Feb 7, 2026

Maintainer

okaris

Maintains llm-models in the OpenClaw Skills directory.
File Explorer

1 file: llm-models/SKILL.md (3.4 KB)
SKILL.md

---
name: llm-models
description: |
  Access Claude, Gemini, Kimi, GLM and 100+ LLMs via inference.sh CLI using OpenRouter.
  Models: Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5, Gemini 3 Pro, Kimi K2, GLM-4.6, Intellect 3.
  One API for all models with automatic fallback and cost optimization.
  Use for: AI assistants, code generation, reasoning, agents, chat, content generation.
  Triggers: claude api, openrouter, llm api, claude sonnet, claude opus, gemini api, kimi, language model, gpt alternative, anthropic api, ai model api, llm access, chat api, claude alternative, openai alternative
allowed-tools: Bash(infsh *)
---

LLM Models via OpenRouter

Access 100+ language models via inference.sh CLI.

Quick Start

curl -fsSL https://cli.inference.sh | sh && infsh login

# Call Claude Sonnet
infsh app run openrouter/claude-sonnet-45 --input '{"prompt": "Explain quantum computing"}'
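Inline single-quoted JSON breaks as soon as the prompt itself contains quotes. A minimal sketch of a safer pattern, using Python's json module to handle the escaping (the variable names are illustrative, and the final infsh call is commented out because it requires a logged-in CLI):

```shell
# Prompts with embedded quotes break inline '{"prompt": "..."}' JSON.
# Let Python's json module do the escaping instead.
PROMPT='Explain "quantum computing" in one paragraph'
INPUT=$(python3 -c 'import json, sys; print(json.dumps({"prompt": sys.argv[1]}))' "$PROMPT")
echo "$INPUT"

# Then pass the pre-built JSON to the CLI (requires `infsh login`):
# infsh app run openrouter/claude-sonnet-45 --input "$INPUT"
```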

Available Models

Model | App ID | Best For
--- | --- | ---
Claude Opus 4.5 | openrouter/claude-opus-45 | Complex reasoning, coding
Claude Sonnet 4.5 | openrouter/claude-sonnet-45 | Balanced performance
Claude Haiku 4.5 | openrouter/claude-haiku-45 | Fast, economical
Gemini 3 Pro | openrouter/gemini-3-pro-preview | Google's latest
Kimi K2 Thinking | openrouter/kimi-k2-thinking | Multi-step reasoning
GLM-4.6 | openrouter/glm-46 | Open-source, coding
Intellect 3 | openrouter/intellect-3 | General purpose
Any Model | openrouter/any-model | Auto-selects best option
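The table above can also be used programmatically. As a sketch, `pick_model` is a hypothetical convenience function (not part of the infsh CLI) that maps a task label to an app ID, falling back to the auto-selecting any-model app:

```shell
# Hypothetical helper: map a task label to an OpenRouter app ID
# from the model table. Unknown tasks fall back to any-model,
# which auto-selects a cost-effective option.
pick_model() {
  case "$1" in
    reasoning|coding) echo "openrouter/claude-opus-45" ;;
    balanced)         echo "openrouter/claude-sonnet-45" ;;
    fast|cheap)       echo "openrouter/claude-haiku-45" ;;
    agent|planning)   echo "openrouter/kimi-k2-thinking" ;;
    *)                echo "openrouter/any-model" ;;
  esac
}

pick_model coding   # prints openrouter/claude-opus-45
pick_model fast     # prints openrouter/claude-haiku-45

# Usage with the CLI (requires `infsh login`):
# infsh app run "$(pick_model coding)" --input '{"prompt": "..."}'
```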

Search LLM Apps

infsh app list --search "openrouter"
infsh app list --search "claude"

Examples

Claude Opus (Best Quality)

infsh app run openrouter/claude-opus-45 --input '{
  "prompt": "Write a Python function to detect palindromes with comprehensive tests"
}'

Claude Sonnet (Balanced)

infsh app run openrouter/claude-sonnet-45 --input '{
  "prompt": "Summarize the key concepts of machine learning"
}'

Claude Haiku (Fast & Cheap)

infsh app run openrouter/claude-haiku-45 --input '{
  "prompt": "Translate this to French: Hello, how are you?"
}'

Kimi K2 (Thinking Agent)

infsh app run openrouter/kimi-k2-thinking --input '{
  "prompt": "Plan a step-by-step approach to build a web scraper"
}'

Any Model (Auto-Select)

# Automatically picks the most cost-effective model
infsh app run openrouter/any-model --input '{
  "prompt": "What is the capital of France?"
}'

With System Prompt

infsh app sample openrouter/claude-sonnet-45 --save input.json

# Edit input.json:
# {
#   "system": "You are a helpful coding assistant",
#   "prompt": "How do I read a file in Python?"
# }

infsh app run openrouter/claude-sonnet-45 --input input.json
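Instead of sampling and hand-editing input.json, the same system + prompt template shown in the comment above can be written out directly with a heredoc. A minimal sketch (the run command is commented out since it requires an authenticated infsh session):

```shell
# Write the system + prompt input from the template above to a file.
cat > input.json <<'EOF'
{
  "system": "You are a helpful coding assistant",
  "prompt": "How do I read a file in Python?"
}
EOF

# Sanity-check that the file parses as JSON before sending it.
python3 -m json.tool input.json > /dev/null && echo "input.json ok"

# infsh app run openrouter/claude-sonnet-45 --input input.json
```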
Use Cases

  • Coding: Generate, review, debug code
  • Writing: Content, summaries, translations
  • Analysis: Data interpretation, research
  • Agents: Build AI-powered workflows
  • Chat: Conversational interfaces
Related Skills

# Full platform skill (all 150+ apps)
npx skills add inference-sh/skills@inference-sh

# Web search (combine with LLMs for RAG)
npx skills add inference-sh/skills@web-search

# Image generation
npx skills add inference-sh/skills@ai-image-generation

# Video generation
npx skills add inference-sh/skills@ai-video-generation

Browse all apps: infsh app list

README.md

No README available.

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

  • OpenClaw CLI installed and configured.
  • Language: Markdown
  • License: MIT
  • Topics: coding

FAQ

How do I install llm-models?

Run openclaw add @okaris/inference-sh:llm-models in your terminal. This installs llm-models into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/okaris/inference-sh. Review commits and README documentation before installing.