Daily Featured Skills Count
[Chart] 04/07: 3,909 · 04/08: 3,920 · 04/09: 3,927 · 04/10: 3,966 · 04/11: 4,007 · 04/12: 4,057 · 04/13: 4,069
♾️ Free & Open Source 🛡️ Secure & Worry-Free

Import Skills

tombelieber
from GitHub · Development & Coding
  • 📄 SKILL.md

claude-view

Monitor and query Claude Code sessions — list sessions, search conversations, check costs, view AI fluency score, see live running agents. Use when the user asks about their Claude Code usage, costs, session history, or running agents.

---

## You operate the `claude-view` HTTP API

**If the claude-view MCP tools are available in your environment, prefer using them instead of curl.** This skill is the fallback for environments without MCP support.

claude-view runs a local server on port 47892 (or `$CLAUDE_VIEW_PORT`). All endpoints return JSON (camelCase field names). Base URL: `http://localhost:47892`

## Resolving the server

1. Check if running: `curl -sf http://localhost:47892/api/health`
2. If not running, tell user: `npx claude-view`

## Endpoints

| Intent | Method | Endpoint | Key Params |
|--------|--------|----------|------------|
| List sessions | GET | `/api/sessions` | `?limit`, `?q`, `?filter`, `?sort`, `?offset`, `?branches`, `?models`, `?time_after`, `?time_before` |
| Get session detail | GET | `/api/sessions/{id}` | — |
| Search sessions | GET | `/api/search` | `?q` (required), `?limit`, `?offset`, `?scope` |
| Dashboard stats | GET | `/api/stats/dashboard` | `?project`, `?branch`, `?from`, `?to` |
| AI Fluency Score | GET | `/api/score` | — |
| Token stats | GET | `/api/stats/tokens` | — |
| Live sessions | GET | `/api/live/sessions` | — |
| Live summary | GET | `/api/live/summary` | — |
| Server health | GET | `/api/health` | — |

## Reading responses

All responses are JSON with camelCase field names. Key shapes:

**Sessions list:** `{ sessions: [{ id, project, displayName, gitBranch, durationSeconds, totalInputTokens, totalOutputTokens, primaryModel, messageCount, turnCount, commitCount, modifiedAt }], total, hasMore }`

**Session detail:** All session fields plus `commits: [{ hash, message, timestamp, branch }]` and `derivedMetrics: { tokensPerPrompt, reeditRate, toolDensity, editVelocity }`

**Search:** `{ query, totalSessions, totalMatches, elapsedMs,

0 42 11 days ago · Uploaded
spences10
from GitHub · Development & Coding
  • 📁 .claude-plugin/
  • 📄 SKILL.md

analytics

Query Claude Code session analytics from ccrecall database. Use when user asks about token usage, session history, or wants to analyze their Claude Code usage patterns.

0 27 8 days ago · Uploaded
jackccrawford
from GitHub · Tools & Productivity
  • 📄 SKILL.md

Clawmark

Your next session starts cold. No memory of what you built, what broke, what you decided. Every signal you write is a gift to that future session. The richer the signal, the less time re-learning.

0 18 13 days ago · Uploaded
punt-labs
from GitHub · Research & Analysis
  • 📁 references/
  • 📄 SKILL.md

prfaq

This skill should be used when the user asks to "write a PR/FAQ", "prfaq", "working backwards", "product discovery", "evaluate a product idea", "press release FAQ", "test product value", "revise prfaq", "update prfaq", "add research to prfaq", "add FAQs", "run a meeting", "review meeting", "hive meeting", "autonomous meeting", "consensus meeting", "stress test my prfaq", "go/no-go decision", "should we build this", "vote on prfaq", or wants to use the Amazon Working Backwards process to evaluate whether a product or feature is worth building.

---

# Working Backwards: PR/FAQ

## Purpose

Guide the user through the Amazon Working Backwards process to produce a professional PR/FAQ document. The output is a LaTeX file that compiles to a polished PDF suitable for executive review and product decision-making. The process forces clarity about customer value, surfaces risks early, and creates a shared artifact for go/no-go decisions.

## When to Use

- Evaluating whether a new product or feature is worth building
- Forcing specificity on a vague product idea
- Preparing a product pitch for leadership review
- Testing whether a team truly understands the customer problem
- Structuring a go/no-go decision with an auditable artifact

## Revise Mode

Before starting the full workflow, check if a `prfaq.tex` file already exists in the project root (or the path the user specifies). If it does, enter **revise mode** instead of starting from scratch.

1. **Read the existing document.** Parse the `.tex` file to understand what's already written — the press release, FAQs, and risk assessment.
2. **Ask what to revise.** Present the user with the sections found and ask what they want to improve. Common revision goals:
   - **Refine the product** — sharpen the problem statement, solution, or differentiation based on new thinking
   - **Incorporate research** — thread new primary data (customer interviews, market analysis, survey results) into existing sections. Run Phase 0 research discovery to find

0 17 13 days ago · Uploaded
nrwl
from GitHub · Ops & Delivery
  • 📄 SKILL.md
  • 📄 SKILL.md.meta.json

await-polygraph-ci

Wait for CI to settle across all repos in a Polygraph session, then report results and investigate failures. USE WHEN user says "await polygraph", "wait for polygraph ci", "polygraph ci status", "check polygraph ci", "watch polygraph session", "monitor polygraph".

0 12 6 days ago · Uploaded
brightdata
from GitHub · Development & Coding
  • 📄 SKILL.md

brd-browser-debug

Debug Bright Data Scraping Browser sessions using the Browser Sessions API. Use this skill when the user encounters a Bright Data browser session error, puppeteer stack trace, failed scraper run, or asks about session bandwidth, duration, captchas, or connection issues. Also use when a Bright Data scraper produces unexpected results such as empty data, 0 items found, missing products, or fewer results than expected — session data can reveal whether the issue is network/proxy-side (blocks, captchas, redirects, timeouts) or client-side (selectors, extraction logic). Triggers on phrases like 'why did my session fail', 'debug my bright data session', 'check my scraping browser sessions', 'how much bandwidth did my scraper use', 'got 0 results', 'found 0', 'scraper returned empty', 'scraper not working', 'script didn't work', or when a Bright Data error code or brd.superproxy.io stack trace appears in the conversation. Requires BRIGHTDATA_API_KEY environment variable.

0 12 13 days ago · Uploaded
dr5hn
from GitHub · Testing & Security
  • 📄 SKILL.md

ccm

Claude Code Manager — manage accounts, sessions, environments, and optimize token usage. Use when the user mentions switching Claude accounts, cleaning up sessions, environment snapshots, disk usage, token optimization, Claude Code health check, orphaned sessions, orphaned processes, tmp files, MCP audit, project bindings, session search, token usage history, account reorder, profiles, isolated, concurrent sessions, watch, rate limit, auto-switch, dashboard, session archive, setup wizard, recover, usage dashboard, usage compare, claudeignore, permission rules, statusline, status bar, or says "ccm", "doctor", "clean cache", "clean tmp", "session list", "session search", "env snapshot", "bind", "unbind", "reorder", "usage history", "init", "permissions audit", "statusline", "ccm watch", "ccm profiles", "ccm setup", "ccm recover".

0 11 12 days ago · Uploaded
selftune-dev
from GitHub · Tools & Productivity
  • 📁 agents/
  • 📁 assets/
  • 📁 references/
  • 📄 settings_snippet.json
  • 📄 SKILL.md

selftune

Self-improving skills toolkit that watches real agent sessions, detects missed triggers, grades execution quality, and evolves skill descriptions to match how users actually talk. Use when grading sessions, generating evals, evolving skill descriptions or routing tables, checking skill health, viewing the dashboard, ingesting sessions from other platforms, or running autonomous improvement loops. Make sure to use this skill whenever the user mentions skill improvement, skill performance, skill triggers, skill evolution, skill health, undertriggering, overtriggering, session grading, or wants to know how their skills are doing — even if they don't say "selftune" explicitly.

0 10 13 days ago · Uploaded
buildingopen
from GitHub · Tools & Productivity
  • 📄 SKILL.md

agents

Scan running Claude sessions to see what other agents are working on. Use when asked "what are the other agents doing", "check other sessions", "what's running", "scan agents", "who's working on what", or before picking up new work to avoid overlap.

---

# Agents: Scan Running Claude Sessions

Runs `scan.sh` to inspect all tmux sessions running Claude and report what each is doing.

## Usage

```bash
bash ~/.claude/skills/agents/scripts/scan.sh            # all sessions
bash ~/.claude/skills/agents/scripts/scan.sh floom      # only floom/* sessions
bash ~/.claude/skills/agents/scripts/scan.sh openpaper  # only openpaper/* sessions
```

## What It Shows

0 7 7 days ago · Uploaded
REMvisual
from GitHub · Tools & Productivity
  • 📄 skill.md

handoff

Create a structured session handoff when context is running low or work is pausing. Deep context mining, self-validation, multi-file splitting. Captures everything the next session needs.

0 9 13 days ago · Uploaded

Skill File Structure Sample (Reference)

skill-sample/
├─ SKILL.md              ⭐ Required: skill entry doc (purpose / usage / examples / deps)
├─ manifest.sample.json  ⭐ Recommended: machine-readable metadata (index / validation / autofill)
├─ LICENSE.sample        ⭐ Recommended: license & scope (open source / restriction / commercial)
├─ scripts/
│  └─ example-run.py     ✅ Runnable example script for quick verification
├─ assets/
│  ├─ example-formatting-guide.md  🧩 Output conventions: layout / structure / style
│  └─ example-template.tex         🧩 Templates: quickly generate standardized output
└─ references/           🧩 Knowledge base: methods / guides / best practices
   ├─ example-ref-structure.md     🧩 Structure reference
   ├─ example-ref-analysis.md      🧩 Analysis reference
   └─ example-ref-visuals.md       🧩 Visual reference
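
As a rough sketch, the sample layout above can be scaffolded from the shell; every folder and file name below simply mirrors the tree and is a placeholder rather than a required name.

```bash
# Scaffold the sample skill layout shown above (all names are placeholders)
mkdir -p skill-sample/scripts skill-sample/assets skill-sample/references
touch skill-sample/SKILL.md skill-sample/manifest.sample.json skill-sample/LICENSE.sample
touch skill-sample/scripts/example-run.py
touch skill-sample/assets/example-formatting-guide.md skill-sample/assets/example-template.tex
touch skill-sample/references/example-ref-structure.md \
      skill-sample/references/example-ref-analysis.md \
      skill-sample/references/example-ref-visuals.md
```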

More Agent Skills specs and Anthropic docs: https://agentskills.io/home

SKILL.md Requirements

├─ ⭐ Required: YAML Frontmatter (must be at top)
│  ├─ ⭐ name                 : unique skill name, follow naming convention
│  └─ ⭐ description          : include trigger keywords for matching
│
├─ ✅ Optional: Frontmatter extension fields
│  ├─ ✅ license              : license identifier
│  ├─ ✅ compatibility        : runtime constraints when needed
│  ├─ ✅ metadata             : key-value fields (author/version/source_url...)
│  └─ 🧩 allowed-tools        : tool whitelist (experimental)
│
└─ ✅ Recommended: Markdown body (progressive disclosure)
   ├─ ✅ Overview / Purpose
   ├─ ✅ When to use
   ├─ ✅ Step-by-step
   ├─ ✅ Inputs / Outputs
   ├─ ✅ Examples
   ├─ 🧩 Files & References
   ├─ 🧩 Edge cases
   ├─ 🧩 Troubleshooting
   └─ 🧩 Safety notes
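
To make the frontmatter rules above concrete, here is a minimal SKILL.md sketch written from the shell. The skill name, description wording, license, and metadata values are all hypothetical; only name and description are required, and the body sections follow the recommended outline.

```bash
# Write a minimal SKILL.md with the required frontmatter (all values below are hypothetical)
mkdir -p ~/.claude/skills/my-example-skill
cat > ~/.claude/skills/my-example-skill/SKILL.md <<'EOF'
---
name: my-example-skill
description: Summarize CSV files. Use when the user asks to "summarize a CSV", "analyze a spreadsheet", or "describe this dataset".
license: MIT
metadata:
  author: your-name
  version: 0.1.0
---

## Overview
Explain what the skill does and what output it produces.

## When to use
List the situations and trigger phrases that should activate the skill.

## Step-by-step
1. Read the input file.
2. Compute summary statistics.
3. Report the results in a short table.
EOF
```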

Why SkillWink?

Skill files are scattered across GitHub and communities, difficult to search, and hard to evaluate. SkillWink organizes open-source skills into a searchable, filterable library you can directly download and use.

We provide keyword search, version updates, multi-metric ranking (downloads / likes / comments / updates), and open SKILL.md standards. You can also discuss usage and improvements on skill detail pages.

Keyword Search · Version Updates · Multi-Metric Ranking · Open Standard · Discussion

Quick Start:

Import/download skills (.zip/.skill), then place them locally (see the sketch after the paths below):

~/.claude/skills/ (Claude Code)

~/.codex/skills/ (Codex CLI)

One SKILL.md can be reused across tools.
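
A minimal placement sketch, assuming a downloaded archive named my-example-skill.zip whose root is a my-example-skill/ folder (both names are placeholders):

```bash
# Unpack a downloaded skill into the Claude Code skills folder (archive name is hypothetical)
mkdir -p ~/.claude/skills
unzip my-example-skill.zip -d ~/.claude/skills/
ls ~/.claude/skills/my-example-skill/SKILL.md   # SKILL.md should sit at the top of the skill folder

# Reuse the same skill with Codex CLI by copying it into that tool's skills folder
mkdir -p ~/.codex/skills
cp -r ~/.claude/skills/my-example-skill ~/.codex/skills/
```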

FAQ

Everything you need to know: what skills are, how they work, how to find/import them, and how to contribute.

1. What are Agent Skills?

A skill is a reusable capability package, usually including SKILL.md (purpose/IO/how-to) and optional scripts/templates/examples.

Think of it as a plugin playbook + resource bundle for AI assistants/toolchains.

2. How do Skills work?

Skills use progressive disclosure: load brief metadata first, load the full docs only when needed, then execute according to the skill's guidance.

This keeps agents lightweight while preserving enough context for complex tasks.

3. How can I quickly find the right skill?

Use these three together:

  • Semantic search: describe your goal in natural language.
  • Multi-filtering: category/tag/author/language/license.
  • Sort by downloads/likes/comments/updated to find higher-quality skills.

4. Which import methods are supported?

  • Upload archive: .zip / .skill (recommended)
  • Upload skills folder
  • Import from GitHub repository

Note: for all import methods, the file size should stay within 10MB.
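
For the archive route, a minimal packaging sketch (the folder name is a placeholder; check that the result stays within the 10MB limit):

```bash
# Package a local skill folder as a .zip for upload (folder name is hypothetical)
zip -r my-example-skill.zip my-example-skill/   # include SKILL.md plus any scripts/, assets/, references/
du -h my-example-skill.zip                      # the archive should stay within 10MB
```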

5. How to use in Claude / Codex?

Typical paths (may vary by local setup):

  • Claude Code: ~/.claude/skills/
  • Codex CLI: ~/.codex/skills/

One SKILL.md can usually be reused across tools.

6. Can one skill be shared across tools?

Yes. Most skills are standardized docs + assets, so they can be reused wherever the format is supported.

Example: retrieval + writing + automation scripts as one workflow.

7. Are these skills safe to use?

Some skills come from public GitHub repositories and some are uploaded by SkillWink creators. Always review code before installing and own your security decisions.

8. Why does it not work after import?

Most common reasons (see the quick checks after this list):

  • Wrong folder path or nested one level too deep
  • Invalid/incomplete SKILL.md fields or format
  • Dependencies missing (Python/Node/CLI)
  • Tool has not reloaded skills yet
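
A few quick checks against that list, assuming the default Claude Code path and a skill folder named my-example-skill (both placeholders; adjust to your setup):

```bash
# Verify the most common import problems listed above
ls ~/.claude/skills/                                    # the skill folder should sit directly here, not nested deeper
ls ~/.claude/skills/my-example-skill/SKILL.md           # SKILL.md must exist at the root of the skill folder
head -n 10 ~/.claude/skills/my-example-skill/SKILL.md   # frontmatter must start with --- and define name and description
python3 --version; node --version                       # confirm any runtimes the skill depends on are installed
```

If everything looks right, restart the tool's session so it rescans the skills folder.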

9. Does SkillWink include duplicates/low-quality skills?

We try to avoid that. Use ranking + comments to surface better skills:

  • Duplicate skills: compare differences (speed/stability/focus)
  • Low-quality skills: regularly cleaned up