portable-tools – OpenClaw Skill (4.8k★)
portable-tools is an OpenClaw Skills integration for security workflows. Build cross-device tools without hardcoding paths or account names.
Skill Snapshot

| Field | Value |
| --- | --- |
| name | portable-tools |
| description | Build cross-device tools without hardcoding paths or account names; OpenClaw Skills integration. |
| owner | tunaissacoding |
| repository | tunaissacoding/portable-tools |
| language | Markdown |
| license | MIT |
| topics | |
| security | L1 |
| install | `openclaw add @tunaissacoding/portable-tools` |
| last updated | Feb 7, 2026 |
Maintainer: tunaissacoding

name: portable-tools
description: Build cross-device tools without hardcoding paths or account names
Portable Tools - Cross-Device Development Methodology
Methodology for building tools that work across different devices, naming schemes, and configurations. Based on lessons from an OAuth refresher debugging session (2026-01-23).
Core Principle
Never assume your device is the only device.
Your local setup is just one of many possible configurations. Build for the general case, not the specific instance.
The Three Questions (Before Writing Code)
1. "What varies between devices?"
Before writing any code that reads configuration, data, or credentials:
Ask:
- File paths? (macOS vs Linux, different home dirs)
- Account names? (user123 vs default vs oauth)
- Service names? (slight variations in spelling/capitalization)
- Data structure? (different versions, different formats)
- Environment? (different shells, different tools available)
Example from OAuth refresher:
- ❌ Assumed: Account is always "claude"
- ✅ Reality: Could be "claude", "Claude Code", "default", etc.
Action: List variables, make them configurable or auto-discoverable
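The action above can be sketched as a handful of overridable defaults. Every name and value below (`SERVICE`, `ACCOUNT`, `CONFIG_FILE`) is illustrative, not taken from the real tool:

```shell
# Every device-dependent value becomes a variable with an environment
# override and a sensible default, instead of a hardcoded literal.
SERVICE="${SERVICE:-MyApp Credentials}"                        # keychain service name
ACCOUNT="${ACCOUNT:-claude}"                                   # may be "Claude Code", "default", ...
CONFIG_FILE="${CONFIG_FILE:-$HOME/.config/myapp/config.json}"  # path varies per OS/user

echo "service=$SERVICE account=$ACCOUNT config=$CONFIG_FILE"
```

Anything a user might need to change is now one environment variable away, and the defaults still work for the common case.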
2. "How do I prove this works?"
Before claiming success:
Require:
- Concrete BEFORE state (exact values)
- Concrete AFTER state (exact values)
- Proof they're different (side-by-side comparison)
Example from OAuth refresher:
```
BEFORE:
- Access Token: POp5z1fi...eSN9VAAA
- Expires: 1769189639000

AFTER:
- Access Token: 01v0RrFG...eOE9QAA ✅ Different
- Expires: 1769190268000 ✅ Extended
```
Action: Always show data transformation with real values
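A minimal way to enforce this rule in a script is to capture both states and compare them. `TOKEN_FILE`, `get_token`, and `refresh_token` below are stand-ins for illustration only, not the real tool's operations:

```shell
# Capture BEFORE and AFTER states and prove they differ.
TOKEN_FILE=$(mktemp)
echo "old-token-POp5z1fi" > "$TOKEN_FILE"

get_token()     { cat "$TOKEN_FILE"; }                  # stand-in for the real read
refresh_token() { echo "new-token-01v0RrFG" > "$TOKEN_FILE"; }  # stand-in for the real refresh

BEFORE=$(get_token)
refresh_token
AFTER=$(get_token)

echo "BEFORE: $BEFORE"
echo "AFTER:  $AFTER"
if [ "$BEFORE" != "$AFTER" ]; then
  echo "✅ Values are different"
else
  echo "❌ Values unchanged - refresh did nothing"
fi
```

The comparison is the proof: the script itself fails loudly when the "success" it claims did not actually change anything.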
3. "What happens when it breaks?"
Before pushing to production:
Test:
- Wrong configuration (intentionally break config)
- Missing data (remove expected fields)
- Multiple entries (ambiguous case)
- Edge cases (empty values, special characters)
Example from OAuth refresher:
- Test with `keychain_account: "wrong-name"` → Fallback should work
- Test with incomplete keychain data → Should fail gracefully with a helpful error
Action: Test failure modes, not just happy path
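These failure modes can be exercised with a few lines of deliberate breakage. The `read_token` function below is a hypothetical stand-in that parses with `grep`/`cut` rather than the real tool's `jq` calls:

```shell
# Hypothetical reader under test: extract a refresh token from a JSON file.
read_token() {
  local file="$1" token
  token=$(grep -o '"refresh":"[^"]*"' "$file" 2>/dev/null | head -n1 | cut -d'"' -f4)
  if [ -z "$token" ]; then
    echo "error: no refresh token in $file" >&2
    return 1
  fi
  printf '%s\n' "$token"
}

# Happy path: well-formed config
good=$(mktemp); printf '%s' '{"tokens":{"refresh":"abc"}}' > "$good"
read_token "$good"

# Failure mode: missing field must fail loudly, not return an empty value
bad=$(mktemp); printf '%s' '{}' > "$bad"
if read_token "$bad" 2>/dev/null; then
  echo "BUG: accepted broken config"
else
  echo "✓ rejected broken config"
fi
```

The failure-mode branch is the interesting one: a reader that silently returns empty on malformed input is exactly the bug described in the OAuth postmortem below.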
Mandatory Patterns
Pattern 1: Explicit Over Implicit
❌ Wrong:

```bash
# Ambiguous - returns first match
security find-generic-password -s "Service" -w
```

✅ Correct:

```bash
# Explicit - returns specific entry
security find-generic-password -s "Service" -a "account" -w
```
Rule: If a command can be ambiguous, make it explicit.
Pattern 2: Validate Before Use
❌ Wrong:

```bash
DATA=$(read_config)
USE_VALUE="$DATA"  # Hope it's valid
```

✅ Correct:

```bash
DATA=$(read_config)
if ! validate_structure "$DATA"; then
  error "Invalid data structure"
fi
USE_VALUE="$DATA"
```
Rule: Never assume data has expected structure.
Pattern 3: Fallback Chains
❌ Wrong:

```bash
ACCOUNT="claude"  # Hardcoded
```

✅ Correct:

```bash
# Try configured → Try common → Error with help
ACCOUNT="${CONFIG_ACCOUNT:-}"
if ! has_data "$ACCOUNT"; then
  ACCOUNT=""  # clear it so the final check fires if no fallback matches
  for fallback in "claude" "default" "oauth"; do
    if has_data "$fallback"; then
      ACCOUNT="$fallback"
      break
    fi
  done
fi
[[ -z "$ACCOUNT" ]] && error "No account found. Tried: ..."
```
Rule: Provide automatic fallbacks for common variations.
Pattern 4: Helpful Errors
❌ Wrong:

```bash
[[ -z "$TOKEN" ]] && error "No token"
```

✅ Correct:

```bash
[[ -z "$TOKEN" ]] && error "No token found
Checked:
  - Config: $CONFIG_FILE
  - Field: $FIELD_NAME
  - Expected: { \"tokens\": { \"refresh\": \"...\" } }
Verify with:
  jq '.tokens' \"$CONFIG_FILE\""
```
Rule: Error messages should help user diagnose and fix.
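A minimal `error` helper in this spirit might look like the sketch below; the body and the demo values are illustrative, not the real helper:

```shell
# Print a multi-line diagnostic to stderr and exit nonzero, so every
# failure carries enough context to debug.
error() {
  printf 'ERROR: %s\n' "$*" >&2
  exit 1
}

# Demo in a subshell so this example keeps running after the "failure":
( TOKEN=""
  [ -n "$TOKEN" ] || error "No token found
Checked:
  - Config: \$HOME/.config/app.json (illustrative path)
  - Field: .tokens.refresh"
) && status=0 || status=$?
echo "helper exited with status $status"   # prints: helper exited with status 1
```

Because every failure path funnels through one function, upgrading error quality later (adding a "Verify with:" line, say) is a one-place change.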
Debugging Methodology (Patrick's Approach)
Step 1: Get Exact Data
Don't ask: "Is it broken?"
Ask: "What exact values do you see? How many entries exist? Which one has the data?"
Example:

```
# Vague
"Check keychain"

# Specific
"Run: security find-generic-password -l 'Service' | grep 'acct'"
"Tell me: 1. How many entries  2. Which has tokens  3. Last modified"
```
Step 2: Prove With Concrete Examples
Don't say: "It should work now"
Show: "Here's the BEFORE token (POp5z...), here's AFTER (01v0R...), they're different"
Template:

```
BEFORE:
- Field1: <exact_value>
- Field2: <exact_value>

AFTER:
- Field1: <new_value> ✅ Changed
- Field2: <new_value> ✅ Changed

PROOF: Values are different
```
Step 3: Think Cross-Device Immediately
Don't think: "Works on my machine"
Think: "What if their setup differs in [X]?"
Checklist:
- Different account names?
- Different file paths?
- Different tools/versions?
- Different permissions?
- Different data formats?
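The "different tools/versions" item on the checklist above can be guarded with a tiny dependency check at startup. This is a sketch; the `require` name and tool list are illustrative:

```shell
# Fail early and clearly if a required CLI tool is missing on this device.
require() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "missing dependency: $1 (install it or adjust PATH)" >&2
    return 1
  }
}

# Illustrative usage: check a universally available tool.
require sh && echo "sh available"
```

Checking dependencies up front turns a confusing mid-run failure into a one-line diagnosis on the first run.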
Pre-Flight Checklist (Before Publishing)
Discovery Phase
- List all external dependencies (files, commands, services)
- Document what each dependency provides
- Identify which parts could vary between devices
Implementation Phase
- Make variations configurable (with sensible defaults)
- Add validation for each input
- Build fallback chains for common variations
- Add a `--dry-run` or `--test` mode
Testing Phase
- Test with correct config → Should work
- Test with wrong config → Should fallback or fail gracefully
- Test with missing data → Should give helpful error
- Test with multiple entries → Should handle ambiguity
Documentation Phase
- Document default assumptions
- Document how to verify local setup
- Document common variations and how to handle them
- Include data flow diagram
- Add troubleshooting section
Real-World Example: OAuth Refresher
Original (Broken)
```bash
# Assumes single entry, no validation, no fallback
KEYCHAIN_DATA=$(security find-generic-password -s "Service" -w)
REFRESH_TOKEN=$(echo "$KEYCHAIN_DATA" | jq -r '.refreshToken')
# Use token (hope it's valid)
```
Problems:
- Returns first alphabetical match (wrong entry)
- No validation (could be empty/malformed)
- No fallback (fails if account name differs)
Fixed (Portable)
```bash
# Explicit account with validation and fallback
validate_data() {
  echo "$1" | jq -e '.claudeAiOauth.refreshToken' > /dev/null 2>&1
}

# Try configured account
DATA=$(security find-generic-password -s "$SERVICE" -a "$ACCOUNT" -w 2>&1)
if validate_data "$DATA"; then
  log "✓ Using account: $ACCOUNT"
else
  log "⚠ Trying fallback accounts..."
  for fallback in "claude" "Claude Code" "default"; do
    DATA=$(security find-generic-password -s "$SERVICE" -a "$fallback" -w 2>&1)
    if validate_data "$DATA"; then
      ACCOUNT="$fallback"
      log "✓ Found data in: $fallback"
      break
    fi
  done
fi

if [[ -z "$DATA" ]] || ! validate_data "$DATA"; then
  error "No valid data found
Tried accounts: $ACCOUNT, claude, Claude Code, default
Verify with: security find-generic-password -l '$SERVICE'"
fi

REFRESH_TOKEN=$(echo "$DATA" | jq -r '.claudeAiOauth.refreshToken')
```
Improvements:
- ✅ Explicit account parameter
- ✅ Validates data structure
- ✅ Automatic fallback to common names
- ✅ Helpful error with verification command
Common Anti-Patterns
Anti-Pattern 1: "Works On My Machine"
```bash
FILE="/Users/patrick/.config/app.json"  # Hardcoded path
```
Fix: Use $HOME, detect OS, or make configurable
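The fix can be as small as deriving the path from the platform. The directory names below are common conventions used for illustration, not the tool's actual paths:

```shell
# Derive the config location instead of hardcoding a username or OS layout.
case "$(uname -s)" in
  Darwin) CONFIG_DIR="$HOME/Library/Application Support/myapp" ;;
  Linux)  CONFIG_DIR="${XDG_CONFIG_HOME:-$HOME/.config}/myapp" ;;
  *)      CONFIG_DIR="$HOME/.myapp" ;;   # conservative fallback
esac
FILE="${APP_CONFIG:-$CONFIG_DIR/app.json}"   # still overridable via APP_CONFIG
echo "config file: $FILE"
```

The `case` runs once; everything downstream uses `$FILE` and never mentions an OS again.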
Anti-Pattern 2: "Hope It's There"
```bash
TOKEN=$(cat config.json | jq -r '.token')
# What if .token doesn't exist? Script continues with an empty value
```

Fix: Validate before using

```bash
TOKEN=$(jq -r '.token // empty' config.json)
[[ -z "$TOKEN" ]] && error "No token in config"
```
Anti-Pattern 3: "First Match Is Right"
```bash
# If multiple entries exist, which one?
ENTRY=$(find_entry "service")
```

Fix: Be explicit or enumerate all

```bash
ENTRY=$(find_entry "service" "account")  # Specific
# OR
ALL=$(find_all_entries "service")
for entry in $ALL; do
  validate_and_use "$entry"
done
```
Anti-Pattern 4: "Silent Failures"
```bash
process_data || true  # Ignore errors
```

Fix: Fail loudly with context

```bash
process_data || error "Failed to process
Data: $DATA
Expected: { ... }
Check: command_to_verify"
```
Integration With Existing Workflows
With sprint-plan.md
Add to testing section:

```markdown
## Cross-Device Testing
- [ ] Test with different account names
- [ ] Test with wrong config values
- [ ] Test with missing data
- [ ] Document fallback behavior
```
With PRIVACY-CHECKLIST.md
Add before publishing:

```markdown
## Portability Check
- [ ] No hardcoded paths (use $HOME, detect OS)
- [ ] No hardcoded names (use config or fallback)
- [ ] Validation on all inputs
- [ ] Helpful errors for common issues
```
With skill-creator
When building new skills:
- List what varies between devices
- Make it configurable or auto-discoverable
- Test with wrong config
- Document troubleshooting
Quick Reference Card
Before writing code:
- What varies between devices?
- How do I prove this works?
- What happens when it breaks?
Mandatory patterns:
- Explicit over implicit
- Validate before use
- Fallback chains
- Helpful errors
Testing:
- Correct config → Works
- Wrong config → Fallback or helpful error
- Missing data → Clear diagnostic
Documentation:
- Data flow diagram
- Common variations
- Troubleshooting guide
Success Criteria
A tool is portable when:
- ✅ Works on different devices without modification
- ✅ Auto-discovers common variations in setup
- ✅ Fails gracefully with actionable error messages
- ✅ Can be debugged by reading the error output
- ✅ Documentation covers "what if my setup differs"
Test: Give it to someone with a different setup. If they need to ask you questions, the tool isn't portable yet.
Origin Story
This methodology emerged from debugging the OAuth refresher (2026-01-23):
- Script read wrong keychain entry (didn't specify account)
- Assumed single entry existed (multiple did)
- No validation (used empty data)
- No fallback (failed on different account names)
Patrick's approach:
- Asked for exact data (how many entries, which has tokens)
- Demanded proof (show BEFORE/AFTER tokens)
- Thought cross-device (what if naming differs?)
Result: Tool went from single-device/broken to universal/production-ready.
Key insight: The bugs weren't in the logic - they were in the assumptions.
When To Use This Skill
Use when:
- Building tools that read system configuration
- Working with keychains, credentials, environment variables
- Creating scripts that run on multiple machines
- Publishing skills to ClawdHub (others will use them)
Apply:
- Before implementing: Answer the three questions
- During implementation: Use mandatory patterns
- Before testing: Run pre-flight checklist
- After testing: Document variations and troubleshooting
Remember: Your device is just one case. Build for the general case.
Portable Tools
Build tools that work on everyone's device, not just yours.
The problem: Your code works perfectly on your machine but breaks on others because of hardcoded paths, account names, or missing validation.
This skill: A proven methodology for building tools that auto-discover setup variations and work universally.
📋 Requirements
- bash
- Basic understanding of shell scripts
⚡ What It Does
Your tools work on any device without manual configuration.
This skill provides:
- Three discovery questions to ask before coding
- Four core patterns for universal tools
- An automated pre-publish checklist
- Real-world examples from debugging sessions
Learned from fixing tools that failed on other devices despite working perfectly locally.
🚀 Installation
```bash
clawdhub install portable-tools
```
🔧 How It Works
Result: You build tools once and they work everywhere.
Before writing any code, answer three questions:

1. What varies between devices?
   - Account names, file paths, data formats, OS versions?
2. How do I prove this works?
   - Can I show BEFORE/AFTER with real values from different setups?
3. What happens when it breaks?
   - Have I tested with wrong config, missing data, edge cases?
Then implement using four core patterns:
Pattern 1: Be Explicit
Don't assume. Pass parameters explicitly.
```bash
# ❌ Ambiguous
find_entry "service"

# ✅ Explicit
find_entry "service" "account"
```
Pattern 2: Validate First
Check structure before use.
```bash
DATA=$(read_config)
validate "$DATA" || error "Invalid structure"
```
Pattern 3: Build Fallbacks
Try alternatives when configured value fails.
```bash
for candidate in "$configured" "default" "fallback"; do
  if has_data "$candidate"; then break; fi
done
```
Pattern 4: Give Helpful Errors
Show what you tried and how to verify.
```bash
error "Not found
Tried: $attempts
Verify with: command_to_check"
```
Before publishing, run the automated checklist:

```bash
bash ~/clawd/skills/portable-tools/pre-publish-checklist.sh /path/to/your/code
```

Output:

```
✅ No hardcoded paths
✅ Has validation patterns
⚠️ Errors could be more helpful
```
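The shipped pre-publish-checklist.sh is not reproduced here; as a hypothetical sketch, such a check might simply grep for patterns that usually mean a hardcoded, device-specific value:

```shell
# Hypothetical sketch of a portability check: warn when a file contains
# patterns that typically indicate a hardcoded home directory.
flag() {
  local pattern="$1" msg="$2" file="$3"
  if grep -qE "$pattern" "$file" 2>/dev/null; then
    echo "⚠️  $msg"
  else
    echo "✅ $msg: none found"
  fi
}

# Illustrative target file containing a deliberate violation.
target=$(mktemp)
printf '%s\n' 'FILE="/Users/patrick/.config/app.json"' > "$target"

flag '/Users/[A-Za-z0-9_-]+' "hardcoded macOS home path" "$target"
flag '/home/[A-Za-z0-9_-]+'  "hardcoded Linux home path" "$target"
```

Pattern-based checks like this are shallow but cheap; they catch the "works on my machine" class of bug before anyone else hits it.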
Success metric: Someone with a different setup can use your tool without asking you questions.
📚 Additional Information
Everything below is optional. The four patterns above handle most cases.
This section contains:
- Full methodology guide
- Real-world case studies
- Integration examples
You don't need to read this to start building portable tools.
<details> <summary><b>Full Methodology Guide</b></summary> <br>
See SKILL.md for complete patterns and anti-patterns.
Key topics covered:
- Discovery phase (list what varies)
- Implementation patterns in detail
- Testing phase (wrong config, missing data, edge cases)
- Pre-flight checklist breakdown
- Common anti-patterns to avoid
The problem:
Script assumed single keychain entry. Read the wrong one.
Root cause:
Didn't ask "what varies between devices" upfront.
Symptoms:
- Worked on one device
- Failed on others with "no refresh token found"
- Actually had TWO keychain entries (metadata vs full data)
Fix applied:
- Made account name explicit (not ambiguous)
- Validated data structure before use
- Built automatic fallback to common names
- Added helpful errors with verification commands
Result: Works across all device setups without configuration.
See full postmortem in SKILL.md.
With sprint-plan.md
Add to testing section:

```markdown
## Cross-Device Testing
- [ ] Test with different account names
- [ ] Test with wrong config
- [ ] Test with missing data
```
With PRIVACY-CHECKLIST.md
Before publishing:

```markdown
## Portability Check
- [ ] No hardcoded paths
- [ ] Validation on all inputs
- [ ] Helpful errors
```
With publisher
Answer the three questions upfront when designing new skills.
</details>

License
MIT
Permissions & Security
Security level L1: Low-risk skills with minimal permissions. Review inputs and outputs before running in production.
Requirements
- OpenClaw CLI installed and configured.
- Language: Markdown
- License: MIT
- Topics:
FAQ
How do I install portable-tools?
Run `openclaw add @tunaissacoding/portable-tools` in your terminal. This installs portable-tools into your OpenClaw Skills catalog.
Does this skill run locally or in the cloud?
OpenClaw Skills execute locally by default. Review the SKILL.md and permissions before running any skill.
Where can I verify the source code?
The source repository is available at https://github.com/openclaw/skills/tree/main/skills/tunaissacoding/portable-tools. Review commits and README documentation before installing.
