Daily Featured Skills Count
04/06: 3,840 · 04/07: 3,909 · 04/08: 3,920 · 04/09: 3,927 · 04/10: 3,966 · 04/11: 4,007 · 04/12: 4,027
♾️ Free & Open Source 🛡️ Secure & Worry-Free

Import Skills

agentevals-dev
from GitHub Tools & Productivity
  • 📁 evals/
  • 📄 SKILL.md

eval

Evaluate and score agent behavior against a golden reference. Use this skill whenever the user wants to run evaluation, check pass/fail status, understand metric scores, compare sessions for regressions, validate agent behavior, or score a trace from a file or a live session. Trigger on phrases like "eval this trace", "check my agent output", "did my agent do the right thing", "compare runs", "did my agent regress", "score session X", "evaluate against golden", "run evals". Works with both local trace files and live streaming sessions.

---

Evaluate agent behavior and explain what the scores mean.

## Determine the input type

First, figure out what to evaluate:

- **Trace file(s)** — user mentions a `.json` or `.jsonl` file path → use `evaluate_traces`
- **Sessions vs golden** — user has multiple live sessions and wants regression testing → use `evaluate_sessions`
- **Single live session** — user wants to score one session against a golden eval set → guide them to use `evaluate_sessions` with one session as golden

## Evaluating trace files

1. Get the file path(s). Check the extension: `.jsonl` → `trace_format: "otlp-json"` | `.json` → `"jaeger-json"` (default). See the sketch after this description.
2. Ask if they have a golden eval set JSON. For `tool_trajectory_avg_score` (the default metric), an eval set is required — it provides the expected tool call sequence to compare against. If they don't have one yet, explain this and suggest starting with `hallucinations_v1`, or ask if they want to create a golden set from a reference run first.
3. Call `evaluate_traces` with the file(s), format, and eval set.
4. Present results as a score table (see Score interpretation below) and explain failures.

## Evaluating sessions (regression testing)

This workflow requires the server to be running with the `--dev` flag (which enables WebSocket and session streaming). Plain `agentevals serve` will not have sessions. If you get a connection error from any tool below, tell the user:

```bash
uv run agentevals serve --dev
```
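
The extension rule in step 1 can be applied mechanically. A minimal shell sketch (the variable names are illustrative, not part of the skill):

```bash
# Pick the trace format from the file extension, mirroring the rule above:
# .jsonl -> otlp-json, .json -> jaeger-json (the default).
case "$TRACE_FILE" in
  *.jsonl) TRACE_FORMAT="otlp-json" ;;
  *.json)  TRACE_FORMAT="jaeger-json" ;;
esac
```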

Uploaded 11 days ago
callstackincubator
from GitHub Tools & Productivity
  • 📁 references/
  • 📄 SKILL.md

react-devtools

React DevTools CLI for AI agents. Use when the user asks you to debug a React or React Native app at runtime, inspect component props/state/hooks, diagnose render performance, profile re-renders, find slow components, or understand why something re-renders. Triggers include "why does this re-render", "inspect the component", "what props does X have", "profile the app", "find slow components", "debug the UI", "check component state", "the app feels slow", or any React runtime debugging task.

Uploaded 7 days ago
CleverCloud
from GitHub Ops & Delivery
  • 📁 references/
  • 📄 SKILL.md

clever-tools

CLI to deploy and manage applications, add-ons, and configurations on Clever Cloud PaaS. Use when the user needs to deploy apps, view logs, manage environment variables, configure domains, or interact with Clever Cloud services.

Uploaded 9 days ago
piotrski
from GitHub Tools & Productivity
  • 📁 references/
  • 📄 SKILL.md

react-devtools

React DevTools CLI for AI agents. Use when the user asks you to debug a React or React Native app at runtime, inspect component props/state/hooks, diagnose render performance, profile re-renders, find slow components, or understand why something re-renders. Triggers include "why does this re-render", "inspect the component", "what props does X have", "profile the app", "find slow components", "debug the UI", "check component state", "the app feels slow", or any React runtime debugging task.

Uploaded 10 days ago
dcb
from GitHub Development & Coding
  • 📁 references/
  • 📄 SKILL.md

entity-rename

Batch rename Home Assistant entities to follow a consistent naming convention. Discovers entities, proposes renames, executes via HA API, and updates all YAML/TypeScript references automatically. Trigger phrases: "rename entities", "fix entity names", "standardize entity IDs", "entity rename", "clean up names".

---

# Entity Rename Skill

Rename Home Assistant entities to follow a consistent `domain.{room}_{descriptor}` convention (e.g., `light.kitchen_ceiling`). Updates all YAML automations, scripts, dashboard code, and .storage/ configs automatically.

See `references/naming-convention.md` for the full naming convention. See `docs/solutions/tooling/entity-rename-lessons.md` for safety rules from production use.

## Step 0: Prerequisites

Uploaded 8 days ago
bidewio
from GitHub Content & Multimedia
  • 📁 rules/
  • 📄 AGENTS.md
  • 📄 README.md
  • 📄 SKILL.md

vercel-composition-patterns

React composition patterns that scale. Use when refactoring components with boolean prop proliferation, building flexible component libraries, or designing reusable APIs. Triggers on tasks involving compound components, render props, context providers, or component architecture. Includes React 19 API changes.

Uploaded 6 days ago
blacktwist
from GitHub Content & Multimedia
  • 📁 evals/
  • 📄 SKILL.md

audience-growth-tracker-sms

When the user wants to track follower growth, understand what drives new followers, or analyze audience development. Also use when the user mentions 'follower growth,' 'followers,' 'audience growth,' 'gaining followers,' 'losing followers,' 'who follows me,' or 'grow my audience.' Uses BlackTwist follower data when available. For post-level metrics, see performance-analyzer-sms. For content patterns, see content-pattern-analyzer-sms.

Uploaded 7 days ago

Skill File Structure Sample (Reference)

skill-sample/
├─ SKILL.md              ⭐ Required: skill entry doc (purpose / usage / examples / deps)
├─ manifest.sample.json  ⭐ Recommended: machine-readable metadata (index / validation / autofill)
├─ LICENSE.sample        ⭐ Recommended: license & scope (open source / restriction / commercial)
├─ scripts/
│  └─ example-run.py     ✅ Runnable example script for quick verification
├─ assets/
│  ├─ example-formatting-guide.md  🧩 Output conventions: layout / structure / style
│  └─ example-template.tex         🧩 Templates: quickly generate standardized output
└─ references/           🧩 Knowledge base: methods / guides / best practices
   ├─ example-ref-structure.md     🧩 Structure reference
   ├─ example-ref-analysis.md      🧩 Analysis reference
   └─ example-ref-visuals.md       🧩 Visual reference
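
To scaffold this sample layout quickly, a sketch (the names simply mirror the tree above):

```bash
# Create the sample skill layout shown above.
mkdir -p skill-sample/scripts skill-sample/assets skill-sample/references
touch skill-sample/SKILL.md skill-sample/manifest.sample.json skill-sample/LICENSE.sample
```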

More Agent Skills specs and Anthropic docs: https://agentskills.io/home

SKILL.md Requirements

├─ ⭐ Required: YAML Frontmatter (must be at top)
│  ├─ ⭐ name                 : unique skill name, follow naming convention
│  └─ ⭐ description          : include trigger keywords for matching
│
├─ ✅ Optional: Frontmatter extension fields
│  ├─ ✅ license              : license identifier
│  ├─ ✅ compatibility        : runtime constraints when needed
│  ├─ ✅ metadata             : key-value fields (author/version/source_url...)
│  └─ 🧩 allowed-tools        : tool whitelist (experimental)
│
└─ ✅ Recommended: Markdown body (progressive disclosure)
   ├─ ✅ Overview / Purpose
   ├─ ✅ When to use
   ├─ ✅ Step-by-step
   ├─ ✅ Inputs / Outputs
   ├─ ✅ Examples
   ├─ 🧩 Files & References
   ├─ 🧩 Edge cases
   ├─ 🧩 Troubleshooting
   └─ 🧩 Safety notes
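
Putting these together, a minimal conforming SKILL.md might look like this (every value below is illustrative):

```markdown
---
name: example-skill
description: Does X for Y. Trigger phrases: "do X", "run X on this file".
license: MIT
metadata:
  author: your-handle
  version: 0.1.0
---

## Overview
One paragraph on what the skill does and why.

## When to use
Bullet the situations and trigger phrases.

## Step-by-step
1. Gather inputs.
2. Run the workflow.
3. Report outputs.
```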

Why SkillWink?

Skill files are scattered across GitHub and community sites, which makes them difficult to search and hard to evaluate. SkillWink organizes open-source skills into a searchable, filterable library you can download and use directly.

We provide keyword search, version updates, multi-metric ranking (downloads / likes / comments / updates), and open SKILL.md standards. You can also discuss usage and improvements on skill detail pages.

Keyword Search · Version Updates · Multi-Metric Ranking · Open Standard · Discussion

Quick Start:

Import or download skills (.zip / .skill), then place them locally:

~/.claude/skills/ (Claude Code)

~/.codex/skills/ (Codex CLI)

One SKILL.md can be reused across tools.
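
For example, unpacking a downloaded archive for Claude Code (the archive name is a placeholder; Codex CLI works the same way with ~/.codex/skills/):

```bash
# Unzip a downloaded skill into the Claude Code skills directory.
mkdir -p ~/.claude/skills
unzip my-skill.zip -d ~/.claude/skills/my-skill
```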

FAQ

Everything you need to know: what skills are, how they work, how to find/import them, and how to contribute.

1. What are Agent Skills?

A skill is a reusable capability package, usually including SKILL.md (purpose/IO/how-to) and optional scripts/templates/examples.

Think of it as a plugin playbook + resource bundle for AI assistants/toolchains.

2. How do Skills work?

Skills use progressive disclosure: the agent loads brief metadata first, loads the full docs only when needed, and then executes following their guidance.

This keeps agents lightweight while preserving enough context for complex tasks.
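
As a sketch of the "metadata first" idea (not how any particular tool implements it), the frontmatter block between the first two `---` lines can be read without loading the body:

```bash
# Print only the frontmatter block of SKILL.md.
awk '/^---$/ { n++; next } n == 1' SKILL.md
```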

3. How can I quickly find the right skill?

Use these three together:

  • Semantic search: describe your goal in natural language.
  • Multi-filtering: category/tag/author/language/license.
  • Sort by downloads/likes/comments/updated to find higher-quality skills.

4. Which import methods are supported?

  • Upload archive: .zip / .skill (recommended)
  • Upload skills folder
  • Import from GitHub repository

Note: for all import methods, the file size should stay within 10 MB.
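
For example, to package a folder and confirm it stays under the limit (folder name taken from the sample layout above):

```bash
# Zip a skill folder and check the archive size before uploading.
zip -r skill-sample.zip skill-sample/
du -h skill-sample.zip
```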

5. How to use in Claude / Codex?

Typical paths (may vary by local setup):

  • Claude Code: ~/.claude/skills/
  • Codex CLI: ~/.codex/skills/

One SKILL.md can usually be reused across tools.

6. Can one skill be shared across tools?

Yes. Most skills are standardized docs + assets, so they can be reused wherever the format is supported.

Example: retrieval + writing + automation scripts as one workflow.

7. Are these skills safe to use?

Some skills come from public GitHub repositories and some are uploaded by SkillWink creators. Always review code before installing and own your security decisions.

8. Why does it not work after import?

Most common reasons:

  • Wrong folder path or nested one level too deep (see the check below)
  • Invalid/incomplete SKILL.md fields or format
  • Dependencies missing (Python/Node/CLI)
  • Tool has not reloaded skills yet
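
A quick check for the first two causes, assuming the Claude Code default path:

```bash
# Each skill folder must contain SKILL.md directly, not one level deeper.
for d in ~/.claude/skills/*/; do
  [ -f "${d}SKILL.md" ] && echo "ok: $d" || echo "missing SKILL.md: $d"
done
```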

9. Does SkillWink include duplicates/low-quality skills?

We try to avoid that. Use ranking + comments to surface better skills:

  • Duplicate skills: compare differences (speed/stability/focus)
  • Low-quality skills: regularly cleaned up