Daily Featured Skills Count (04/05–04/11): 3,626 · 3,840 · 3,909 · 3,920 · 3,927 · 3,966 · 4,007

Import Skills

nowork-studio · from GitHub · Tools & Productivity
  • 📁 evals/
  • 📁 references/
  • 📄 SKILL.md

ads-audit

Google Ads account audit and business context setup. Run this first — it gathers business information, analyzes account health, and saves context that all other ads skills reuse. Trigger on "audit my ads", "ads audit", "set up my ads", "onboard", "account overview", "how's my account", "ads health check", "what should I fix in my ads", or when the user is new to AdsAgent and hasn't run an audit before. Also trigger proactively when other ads skills detect that business-context.json is missing.

0 · 95 · Uploaded 11 days ago
LastEld · from GitHub · Testing & Security
  • 📁 agents/
  • 📁 references/
  • 📄 SKILL.md

colibri-audit-memory

Bridge audit trails and memory frames for comprehensive session recording. Greek: ζ (zeta) — Decision Trail, η (eta) — Proof Store. Use when recording audit sessions, creating memory bundles, linking audit trails to memory, or finalizing session proofs with memory archives.

0 · 11 · Uploaded 1 day ago
davidlee · from GitHub · Testing & Security
  • 📄 SKILL.md

audit-change

Canonical reconciliation runsheet for AUD artefacts. Create or update the audit, disposition every finding, reconcile specs/contracts, and hand off to closure only when audit state supports it.

0 · 18 · Uploaded 11 days ago
neverinfamous · from GitHub · Tools & Productivity
  • 📁 config/
  • 📁 workflows/
  • 📄 SKILL.md

github-commander

Structured workflows for triaging GitHub issues, reviewing PRs, sprinting through milestones, and running security/quality/performance audits, with configurable validation gates, auto-detected security scanning, journal audit trails, and human-in-the-loop checkpoints. Use this skill whenever you are working on a GitHub issue, reviewing or submitting a PR, running any kind of code audit, updating dependencies, or working through a milestone. Also use when the user mentions issue numbers, PR numbers, milestone names, or asks you to "fix", "triage", "audit", "review", or "update deps".

---

# GitHub Commander

Structured, configurable workflows that teach AI agents to triage GitHub issues, review PRs, and sprint through milestones. Every action is journaled for full audit trails, and human-in-the-loop checkpoints keep you in control.

Every step journals its results because this creates a searchable audit trail: future sessions can find exactly what was tried, what passed, and what failed, without the human needing to remember or repeat context.

## When to Load

0 · 13 · Uploaded 11 days ago
zeyuzhangzyz · from GitHub · Testing & Security
  • 📄 SKILL.md

oss-audit

Audit an existing repository or paper-code release for open-source hardening gaps across correctness, maintainability, testability, security, performance, observability, and documentation. Use when the user says "audit this repo", "harden this project", "open source readiness", or wants a prioritized file-level report before changing code.

0 · 7 · Uploaded 4 days ago
whylineee · from GitHub · Testing & Security
  • 📄 SKILL.md

security-hardening

Use when reviewing code security, finding vulnerabilities, testing exploitability, hardening implementation details, and validating that fixes are stable and production-safe. Keywords: security audit, vuln scan, hardening, threat model, secure coding, dependency audit, SAST, secrets, path traversal, command injection, SSRF, XSS, CSRF, authz, authn.

0 · 7 · Uploaded 5 days ago
jalaalrd · from GitHub · Testing & Security
  • 📄 README.md
  • 📄 SKILL.md

full-stack-audit

Comprehensive website and web app audit covering security, UX, performance, accessibility, SEO, compliance, and revenue protection. Use this skill whenever the user asks to audit, review, check, or score a website or web application. Also use when the user says 'full-stack audit', 'UX audit', 'security audit', 'launch checklist', 'is my site ready to launch', 'check my site', 'review my code for issues', 'what did I miss', or any variation of wanting a comprehensive quality review before or after launch. This skill catches the issues that AI-built and vibe-coded sites consistently get wrong: client-side paywalls, exposed database tables, missing security headers, broken mobile layouts, and trust gaps that kill conversion. Triggers even if the user only asks about one area (e.g., 'check my security') because problems compound across categories.

0 · 8 · Uploaded 9 days ago
berrzebb · from GitHub · Development & Coding
  • 📄 SKILL.md

quorum:audit

Run a quorum audit manually — trigger consensus review, re-run failed audits, test audit prompts, or force a specific provider. Use when the hook-based auto-trigger didn't fire, or you want explicit control. Triggers on 'run audit', 'audit again', 'review my code', 'check evidence'.

0 · 7 · Uploaded 8 days ago
13luiz · from GitHub · Tools & Productivity
  • 📁 data/
  • 📁 evals/
  • 📁 examples/
  • 📄 README.md
  • 📄 README.zh.md
  • 📄 skill.json

harness-engineering-guide

Audit, design, and implement AI agent harnesses for any codebase. A harness is the constraints, feedback loops, and verification systems surrounding AI coding agents; improving it is the highest-leverage way to improve AI code quality. Three modes: Audit (scorecard), Implement (set up components), Design (full strategy). Use whenever the user mentions harness engineering, agent guardrails, AI coding quality, AGENTS.md, CLAUDE.md setup, agent feedback loops, entropy management, AI code review, vibe coding quality, harness audit, harness score, AI slop, agent-first engineering. Also trigger when users want to understand why AI agents produce bad code, make their repo work better with AI agents, set up CI/CD for agent workflows, design verification systems, or scale AI-assisted development. Proactively suggest when discussing AI code drift or controlling AI-generated code quality.

---

# Harness Engineering Guide

You are a harness engineering consultant. Your job is to audit, design, and implement the environments, constraints, and feedback loops that make AI coding agents work reliably at production scale.

**Core Insight**: Agent = Model + Harness. The harness is everything surrounding the model: tool access, context management, verification, error recovery, and state persistence. Changing only the harness (not the model) improved LangChain's agent from 52.8% to 66.5% on Terminal Bench 2.0.

## Pre-Assessment Gate

Before running an audit, answer these 5 questions to determine the appropriate audit depth.

1. Is the project expected to live beyond 1 month?
2. Will AI agents modify this codebase going forward?
3. Does the project have (or plan to have) >500 LOC?
4. Has there been at least one instance of AI-generated code causing problems?
5. Is there more than one contributor (human or agent)?

| "Yes" Count | Route | What You Get |
|-------------|-------|--------------|
| **4-5** | **Full Audit** | All 45 items scored across 8 dimensions. Detailed report with improvement

0 · 6 · Uploaded 9 days ago
KilimcininKorOglu · from GitHub · Testing & Security
  • 📁 subcommands/
  • 📄 SKILL.md

bug-report

This skill MUST be invoked when the user asks for systematic bug analysis, or any focused audit such as "api audit", "auditcodex", "cache audit", "disaster recovery", "error review", "feature flags audit", "integration security", "observability audit", "queue audit", "release discipline", "serialization audit", "session audit", "tech debt", "tenant isolation", "test review", "upload security", "ai code audit", "dead code", any security vulnerability scan such as "sql injection", "xss", "rce", "ssrf", "xxe", "access control", "path traversal", "file upload", "ssti", "graphql injection", "business logic", "missing auth", or "security recon", or a FULL security sweep such as "güvenlik taraması" (Turkish for "security scan"), "security scan", "full security scan", "run all security scans", or "security sweep". Use `/bug-report` for general scans, `/bug-report <subcommand>` for domain-specific audits, and `/bug-report security-sweep` to run all security scans in parallel. All modes write verified findings to BUG-REPORT.md using the shared report contract.

0 · 5 · Uploaded 10 days ago

Skill File Structure Sample (Reference)

skill-sample/
├─ SKILL.md              ⭐ Required: skill entry doc (purpose / usage / examples / deps)
├─ manifest.sample.json  ⭐ Recommended: machine-readable metadata (index / validation / autofill)
├─ LICENSE.sample        ⭐ Recommended: license & scope (open source / restriction / commercial)
├─ scripts/
│  └─ example-run.py     ✅ Runnable example script for quick verification
├─ assets/
│  ├─ example-formatting-guide.md  🧩 Output conventions: layout / structure / style
│  └─ example-template.tex         🧩 Templates: quickly generate standardized output
└─ references/           🧩 Knowledge base: methods / guides / best practices
   ├─ example-ref-structure.md     🧩 Structure reference
   ├─ example-ref-analysis.md      🧩 Analysis reference
   └─ example-ref-visuals.md       🧩 Visual reference

More on the Agent Skills spec (Anthropic docs): https://agentskills.io/home

SKILL.md Requirements

├─ ⭐ Required: YAML Frontmatter (must be at top)
│  ├─ ⭐ name                 : unique skill name, follow naming convention
│  └─ ⭐ description          : include trigger keywords for matching
│
├─ ✅ Optional: Frontmatter extension fields
│  ├─ ✅ license              : license identifier
│  ├─ ✅ compatibility        : runtime constraints when needed
│  ├─ ✅ metadata             : key-value fields (author/version/source_url...)
│  └─ 🧩 allowed-tools        : tool whitelist (experimental)
│
└─ ✅ Recommended: Markdown body (progressive disclosure)
   ├─ ✅ Overview / Purpose
   ├─ ✅ When to use
   ├─ ✅ Step-by-step
   ├─ ✅ Inputs / Outputs
   ├─ ✅ Examples
   ├─ 🧩 Files & References
   ├─ 🧩 Edge cases
   ├─ 🧩 Troubleshooting
   └─ 🧩 Safety notes
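
Putting the required and recommended pieces together, a minimal SKILL.md might look like the sketch below. The skill name, description, and metadata values are illustrative placeholders, not a real skill:

```markdown
---
name: pdf-extractor
description: Extract text and tables from PDF files. Trigger on "extract pdf",
  "read this pdf", or when the user uploads a .pdf file.
license: MIT
metadata:
  author: example-author
  version: 0.1.0
---

# pdf-extractor

## Overview
Extracts text and tables from PDF files using the bundled scripts.

## When to use
When the user asks to read, extract, or summarize a PDF.

## Step-by-step
1. Run `scripts/extract.py <file.pdf>` to get raw text.
2. Apply the formatting conventions in `assets/`.

## Examples
"Extract the tables from report.pdf as CSV."
```

The frontmatter fields above the second `---` are what matching engines read first; the body below it follows the progressive-disclosure sections listed above.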

Why SkillWink?

Skill files are scattered across GitHub and communities, making them difficult to search and hard to evaluate. SkillWink organizes open-source skills into a searchable, filterable library you can download and use directly.

We provide keyword search, version updates, multi-metric ranking (downloads / likes / comments / updates), and open SKILL.md standards. You can also discuss usage and improvements on skill detail pages.


Quick Start:

Import or download skills (.zip / .skill), then place them locally:

~/.claude/skills/ (Claude Code)

~/.codex/skills/ (Codex CLI)

One SKILL.md can be reused across tools.
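
As a sketch, assuming the default paths above and a hypothetical skill named `my-skill` (the SKILL.md content here is a placeholder, not a real skill):

```shell
# Place a skill under Claude Code's default skills path.
# "my-skill" and its SKILL.md content are hypothetical placeholders.
mkdir -p ~/.claude/skills/my-skill
cat > ~/.claude/skills/my-skill/SKILL.md <<'EOF'
---
name: my-skill
description: Placeholder skill illustrating the install layout.
---
EOF

# Reuse the same skill in Codex CLI by symlinking instead of copying:
mkdir -p ~/.codex/skills
ln -sfn ~/.claude/skills/my-skill ~/.codex/skills/my-skill
```

The symlink keeps one copy of the skill on disk while both tools see it, which is one way a single SKILL.md gets reused across tools.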

FAQ

Everything you need to know: what skills are, how they work, how to find/import them, and how to contribute.

1. What are Agent Skills?

A skill is a reusable capability package, usually including SKILL.md (purpose/IO/how-to) and optional scripts/templates/examples.

Think of it as a plugin playbook + resource bundle for AI assistants/toolchains.

2. How do Skills work?

Skills use progressive disclosure: load brief metadata first, load full docs only when needed, then execute according to their guidance.

This keeps agents lightweight while preserving enough context for complex tasks.
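
The loading order can be sketched in shell, assuming the conventional `---`-delimited YAML frontmatter; the skill file created here is a hypothetical example:

```shell
# Progressive disclosure sketch: inspect only the brief metadata first,
# and load the full document only when the skill matches the request.
# The skill file below is a hypothetical fixture.
cat > /tmp/SKILL.md <<'EOF'
---
name: demo-audit
description: Audit a repo for hardening gaps. Trigger on "audit this repo".
---
# demo-audit
Full step-by-step instructions live down here...
EOF

# Step 1: extract just the YAML frontmatter (the lines between the
# first pair of '---' delimiters).
frontmatter() { awk '/^---$/{n++; next} n==1' "$1"; }

# Step 2: match the user's request against the metadata alone.
if frontmatter /tmp/SKILL.md | grep -qi 'audit'; then
  # Step 3: only now load the full body for execution guidance.
  cat /tmp/SKILL.md
fi
```

Only the two frontmatter lines are read during matching; the body costs context only for skills that actually fire.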

3. How can I quickly find the right skill?

Use these three together:

  • Semantic search: describe your goal in natural language.
  • Multi-filtering: category/tag/author/language/license.
  • Sort by downloads/likes/comments/updated to find higher-quality skills.

4. Which import methods are supported?

  • Upload archive: .zip / .skill (recommended)
  • Upload skills folder
  • Import from GitHub repository

Note: the file size limit for all import methods is 10 MB.

5. How to use in Claude / Codex?

Typical paths (may vary by local setup):

  • Claude Code: ~/.claude/skills/
  • Codex CLI: ~/.codex/skills/

One SKILL.md can usually be reused across tools.

6. Can one skill be shared across tools?

Yes. Most skills are standardized docs plus assets, so they can be reused wherever the format is supported.

Example: retrieval + writing + automation scripts as one workflow.

7. Are these skills safe to use?

Some skills come from public GitHub repositories and some are uploaded by SkillWink creators. Always review code before installing and own your security decisions.

8. Why does it not work after import?

Most common reasons:

  • Wrong folder path or nested one level too deep
  • Invalid/incomplete SKILL.md fields or format
  • Dependencies missing (Python/Node/CLI)
  • Tool has not reloaded skills yet
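
The first two checks can be scripted. A sketch assuming the Claude Code layout described earlier; `demo-skill` is a hypothetical fixture created only to show the happy path:

```shell
# check_skill: diagnose the two most common import failures for one skill
# folder: wrong nesting, and missing required frontmatter fields.
check_skill() {
  dir="$1"
  # SKILL.md must sit directly inside the skill folder, not one level deeper.
  [ -f "$dir/SKILL.md" ] || { echo "SKILL.md not at top level of $dir"; return 1; }
  # Both required frontmatter fields must appear near the top of the file.
  for field in name description; do
    head -n 20 "$dir/SKILL.md" | grep -q "^$field:" \
      || { echo "missing '$field' in frontmatter"; return 1; }
  done
  echo "looks OK; if the tool still ignores it, reload or restart the tool"
}

# Demo on a hypothetical, freshly created skill folder:
mkdir -p /tmp/demo-skill
printf -- '---\nname: demo-skill\ndescription: demo\n---\n' > /tmp/demo-skill/SKILL.md
check_skill /tmp/demo-skill
```

Run it against your own folder (e.g. `check_skill ~/.claude/skills/<name>`); dependency issues and stale tool state still need manual checking.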

9. Does SkillWink include duplicates/low-quality skills?

We try to avoid that. Use ranking + comments to surface better skills:

  • Duplicate skills: compare differences (speed/stability/focus)
  • Low-quality skills: regularly cleaned up