
by simantak-dabhade

tinyfish – OpenClaw Skill

tinyfish is an OpenClaw Skills integration for coding workflows. It uses the TinyFish/Mino web agent to scrape websites, extract structured data, and automate browser actions from natural-language goals. Use it when you need to extract data from websites, handle bot-protected sites, or automate web tasks.

7.1k stars · 8.4k forks · Security L1
Updated Feb 7, 2026 · Created Feb 7, 2026 · Topic: coding

Skill Snapshot

name: tinyfish
description: Use TinyFish/Mino web agent to extract/scrape websites, extract data, and automate browser actions using natural language. Use when you need to extract/scrape data from websites, handle bot-protected sites, or automate web tasks. OpenClaw Skills integration.
owner: simantak-dabhade
repository: simantak-dabhade/tinyfish-web-agent
language: Markdown
license: MIT
topics:
security: L1
install: openclaw add @simantak-dabhade/tinyfish-web-agent
last updated: Feb 7, 2026

Maintainer

simantak-dabhade

Maintains tinyfish in the OpenClaw Skills directory.
File Explorer (4 files)

  • scripts/extract.py (2.6 KB)
  • _meta.json (476 B)
  • SKILL.md (3.0 KB)
SKILL.md

name: tinyfish
description: Use TinyFish/Mino web agent to extract/scrape websites, extract data, and automate browser actions using natural language. Use when you need to extract/scrape data from websites, handle bot-protected sites, or automate web tasks.

TinyFish Web Agent

Requires: MINO_API_KEY environment variable

Best Practices

  1. Specify JSON format: Always describe the exact structure you want returned
  2. Parallel calls: When extracting from multiple independent sites, make separate parallel calls instead of combining into one prompt

Basic Extract/Scrape

Extract data from a page. Specify the JSON structure you want:

import json
import os

import requests

# Start an extraction run; the endpoint streams progress as Server-Sent Events.
response = requests.post(
    "https://mino.ai/v1/automation/run-sse",
    headers={
        "X-API-Key": os.environ["MINO_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "url": "https://example.com",
        "goal": "Extract product info as JSON: {\"name\": str, \"price\": str, \"in_stock\": bool}",
    },
    stream=True,
)

# Each SSE payload arrives as a "data: {...}" line; the final COMPLETE
# event carries the extracted data in "resultJson".
for line in response.iter_lines():
    if line:
        line_str = line.decode("utf-8")
        if line_str.startswith("data: "):
            event = json.loads(line_str[6:])
            if event.get("type") == "COMPLETE" and event.get("status") == "COMPLETED":
                print(json.dumps(event["resultJson"], indent=2))

Multiple Items

Extract lists of data with explicit structure:

json={
    "url": "https://example.com/products",
    "goal": "Extract all products as JSON array: [{\"name\": str, \"price\": str, \"url\": str}]",
}

Stealth Mode

For bot-protected sites:

json={
    "url": "https://protected-site.com",
    "goal": "Extract product data as JSON: {\"name\": str, \"price\": str, \"description\": str}",
    "browser_profile": "stealth",
}

Proxy

Route through specific country:

json={
    "url": "https://geo-restricted-site.com",
    "goal": "Extract pricing data as JSON: {\"item\": str, \"price\": str, \"currency\": str}",
    "browser_profile": "stealth",
    "proxy_config": {
        "enabled": True,
        "country_code": "US",
    },
}

Output

Results are in event["resultJson"] when event["type"] == "COMPLETE" and event["status"] == "COMPLETED".
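The COMPLETE-event handling from the examples above can be factored into a small reusable helper. A minimal sketch, assuming the "data: " SSE line framing shown earlier; the names parse_sse_events and final_result are hypothetical, not part of the skill:

```python
import json

def parse_sse_events(lines):
    """Yield decoded JSON payloads from SSE 'data: ...' lines."""
    for line in lines:
        if isinstance(line, bytes):
            line = line.decode("utf-8")
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])

def final_result(lines):
    """Return resultJson from the terminal COMPLETE event, or None if absent."""
    for event in parse_sse_events(lines):
        if event.get("type") == "COMPLETE" and event.get("status") == "COMPLETED":
            return event.get("resultJson")
    return None
```

In practice you would pass response.iter_lines() from the streaming request directly to final_result.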

Parallel Extraction

When extracting from multiple independent sources, make separate parallel API calls instead of combining into one prompt:

Good - Parallel calls:

# Compare pizza prices - run these simultaneously
call_1 = extract("https://pizzahut.com", "Extract pizza prices as JSON: [{\"name\": str, \"price\": str}]")
call_2 = extract("https://dominos.com", "Extract pizza prices as JSON: [{\"name\": str, \"price\": str}]")

Bad - Single combined call:

# Don't do this - less reliable and slower
extract("https://pizzahut.com", "Extract prices from Pizza Hut and also go to Dominos...")

Each independent extraction task should be its own API call. This is faster (parallel execution) and more reliable.
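One way to issue those independent calls concurrently is Python's standard ThreadPoolExecutor. A sketch under stated assumptions: extract_fn stands in for a wrapper around the run-sse request shown earlier, and extract_all is a hypothetical helper, not part of the skill's API:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_all(tasks, extract_fn, max_workers=4):
    """Run independent (url, goal) extractions in parallel.

    extract_fn(url, goal) should wrap one Mino run-sse call and
    return its parsed resultJson.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(extract_fn, url, goal) for url, goal in tasks]
        # Collect results in the same order the tasks were submitted.
        return [f.result() for f in futures]
```

Because each site runs on its own worker thread, total wall time is roughly that of the slowest single extraction rather than the sum of all of them.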

README.md

No README available.

Permissions & Security

Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.

Requirements

  • OpenClaw CLI installed and configured.
  • Language: Markdown
  • License: MIT
  • Topics:

FAQ

How do I install tinyfish?

Run openclaw add @simantak-dabhade/tinyfish-web-agent in your terminal. This installs tinyfish into your OpenClaw Skills catalog.

Does this skill run locally or in the cloud?

OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.

Where can I verify the source code?

The source repository is available at https://github.com/openclaw/skills/tree/main/skills/simantak-dabhade/tinyfish-web-agent. Review commits and README documentation before installing.