Daily Featured Skills Count (04/06–04/12): 3,840 · 3,909 · 3,920 · 3,927 · 3,966 · 4,007 · 4,027
♾️ Free & Open Source 🛡️ Secure & Worry-Free

Import Skills

kayba-ai
from GitHub · Tools & Productivity
  • 📁 stage-1-api-analysis/
  • 📁 stage-2-domain-context/
  • 📁 stage-3-metrics/
  • 📄 SKILL.md

kayba-pipeline

End-to-end agent evaluation and improvement pipeline. Takes a traces folder and optional HITL flag, then orchestrates sub-agents through 7 stages — each stage is its own skill invoked by a dedicated sub-agent. Trigger when the user says "run the pipeline", "kayba pipeline", "evaluate and fix", "full eval", "analyze traces and fix", or provides a traces folder with intent to improve their agent.

0 · 2.1K · Uploaded 11 days ago
fakechris
from GitHub · Data & AI
  • 📄 SKILL.md

obsidian-vault-pipeline

Automate knowledge-base organization with the Obsidian Vault Pipeline.

**Trigger scenarios:**
- The user says "run the WIGS workflow", "organize the Obsidian Vault", "process the knowledge base"
- The user says "extract Evergreen", "update MOC", "run the Pipeline"
- The user mentions "organize notes", "knowledge management", "process bookmarks"
- The user says "QA", "quality check", "check consistency"

**Vault location:** Defaults to the current working directory as the vault root, or can be set explicitly with `--vault-dir`. Use this skill immediately whenever the user mentions keywords such as Obsidian, knowledge management, WIGS, Pipeline, Evergreen, or MOC.

---

# Obsidian Vault Pipeline Skill

## Overview

This skill helps the user run the Obsidian Vault Pipeline, an automated knowledge-management workflow.

## Installation

```bash
pip install obsidian-vault-pipeline
```

## Vault Location

The pipeline detects the vault location automatically (in priority order):

1. **Current working directory** - `cwd` is used by default
2. **`--vault-dir` flag** - explicit override
3. **Environment variable** - `VAULT_DIR`

**Best practice:**

```bash
cd /path/to/my-vault  # enter the vault directory
ovp --check           # check the environment
ovp --full            # run the full pipeline
```

## Available Commands

| Command | Description |
|------|------|
| `ovp --check` | Check the environment configuration |
| `ovp --init` | Initialize the configuration (interactive) |
| `ovp --full` | Run the full pipeline |
| `ovp-article --process-inbox` | Process articles in 50-Inbox/01-Raw/ |
| `ovp-evergreen --recent 7` | Extract Evergreen notes from the last 7 days |
| `ovp-moc --scan` | Scan and update MOC indexes |
| `ovp-quality --recent 7` | Run the quality check |

## Standard Operating Procedure

### 1. First use

```bash
# Enter the vault directory
cd my-vault

# Check the environment
ovp --check

# If it reports missing configuration, run the initializer
ovp --init
```

### 2. Daily processing

```bash
# Drop new articles into 50-Inbox/01-Raw/
cp article.md my-vault/50-Inbox/01-Raw/

# Run the pipeline
ovp --full
```

### 3. WIGS integrity check

```bash
# 5-layer consistency check
./60-Logs/scripts/check-consistency.sh

# Auto-repair low-risk issues
./60-Logs/scripts/repair.sh --auto
```

## Configuration File

Create a `.env` in the vault root:

```bash
AUTO_VAULT_API_KEY=your_api_key
AUTO_VAULT_API_BASE=https://api.minimaxi.com/anthropic
AUTO_VAULT_MODEL=minimax/MiniMax-M2.5
```

## Trigger-Word Mapping

| User says | Command to run |
|--------|----------|
| "run the WIGS workflow" | `./60-Logs/scripts/check-consistency.sh` |
| "organize Obsidian" | `ovp --full` |
| "process articles" | `ovp-article --process-inbox` |
| "extract Evergreen" | `ovp-evergreen --recent 7` |
| "update MOC" | `ovp-moc --scan` |
| "quality check" | `ovp-quality --recent 7` |
| "check consistency" | `./60-Logs/scripts/check-consistency.sh` |

## Processing Flow

```
50-Inbox/01-Raw/ → ovp-article → 20-Areas/深度解读
20-Areas/ →
```

0 · 87 · Uploaded 7 days ago
NomaDamas
from GitHub · Data & AI
  • 📁 references/
  • 📁 scripts/
  • 📄 SKILL.md

autorag-query

Query AutoRAG-Research pipeline results using natural language. Converts questions to SQL, executes safely (SELECT-only), returns formatted results. Auto-detects DB connection from configs/db.yaml or env vars. Use for pipeline comparison, metrics analysis, token usage.
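The SELECT-only safety described above can be sketched as a simple guard. This is a hypothetical re-implementation of the idea, not the skill's actual code, and it assumes a SQLite database for illustration (the skill itself reads its connection from `configs/db.yaml` or env vars):

```python
import sqlite3

def run_readonly(sql: str, db_path: str):
    """Execute a query only if it is a single SELECT statement."""
    stripped = sql.strip().rstrip(";")
    # Reject stacked statements and anything that is not a SELECT
    if ";" in stripped or not stripped.lower().startswith("select"):
        raise ValueError("only single SELECT statements are allowed")
    # Open the database read-only, so even a bypassed check cannot write
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(stripped).fetchall()
    finally:
        conn.close()
```

Opening the connection in `mode=ro` gives defense in depth: the string check blocks obvious writes, and the read-only connection blocks anything the check misses.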

0 · 91 · Uploaded 11 days ago
uw-ssec
from GitHub · Data & AI
  • 📁 references/
  • 📄 SKILL.md

download-script-dev

This skill should be used when the user asks to "develop a download script", "debug data download", "fix download error", "create data pipeline template", "download template", "GAIA data pipeline", "download from S3", "access Zarr store", "cloud data access", or mentions specific data source names like "CONUS404", "HRRR", "WRF", "PRISM", "Stage IV", "USGS", "ORNL", "DEM", "Synoptic", or "IRIS" in the context of downloading or processing data. Provides templates, configuration validation, and debugging guidance for hydroclimatological data download scripts used in the GAIA project.

0 · 20 · Uploaded 9 days ago
aescaffre
from GitHub · Tools & Productivity
  • 📁 reference/
  • 📄 SKILL.md

pixinsight-pipeline

Automated deep sky astrophotography processing with PixInsight. Use when processing astronomical images (nebulae, galaxies, star clusters) through the full pipeline: channel combination, calibration, stretching, Ha/narrowband injection, star handling, and final adjustments. Covers HaRGB, HaLRGB, and LRGB workflows. Drives PixInsight's PJSR scripting engine via a Node.js file-based IPC bridge.

---

# PixInsight Deep Sky Pipeline

## Overview

Config-driven, branching pipeline that processes linear astronomical masters into publication-quality deep sky images. The pipeline is a Node.js script (`scripts/run-pipeline.mjs`) that sends PJSR commands to PixInsight via file-based IPC (`~/.pixinsight-mcp/bridge/`).

## Quick Start — New Target

1. **Prepare data** — Stack your subs in WBPP. Place linear masters (`.xisf`) in one folder.
2. **Create config** — Copy `editor/default-config.json`, or use the web editor (`node editor/server.mjs`).
3. **Set file paths** — Fill in `files.R`, `files.G`, `files.B`, `files.Ha`, `files.L` (if applicable), `files.outputDir`, `files.targetName`.
4. **Choose workflow**:
   - **HaRGB** (no luminance): disable `l_stretch`, `l_nxt`, `l_bxt`, `lrgb_combine`
   - **HaLRGB** (with luminance): enable lum branch steps + `lrgb_combine`
   - **LRGB** (no Ha): set `files.Ha` to `""`, disable `ha_sxt`, `ha_stretch`, `ha_curves`, `ha_ghs`, `ha_inject`. Pipeline auto-detects `hasHa` and skips Ha file opening/cloning.
   - **RGB only** (no Ha, no L): set `files.Ha` and `files.L` to `""`, disable Ha + lum branch steps
5. **Open PixInsight** — Start PixInsight with the PJSR watcher script loaded.
6. **Run** — `node scripts/run-pipeline.mjs --config path/to/config.json`
7. **Iterate** — Review JPEG previews at each step. Adjust params in config. Re-run.

## Pipeline Architecture

### Branches

| Branch | Label | Color | Forks After | Merges At |
|--------|-------|-------|-------------|-----------|
| `main` | RGB | blue | — | — |
| `stars` | Stars | yellow | `sxt` | `star_add` |

0 · 10 · Uploaded 12 days ago
zinan92
from GitHub · Content & Multimedia
  • 📁 capabilities/
  • 📁 docs/
  • 📁 examples/
  • 📄 .gitignore
  • 📄 capability.json
  • 📄 cli.js

videocut

AI-powered video editing — transcribe, autocut, subtitle, hook, clip, cover, speed, pipeline. Read this to know WHEN and HOW to use videocut.

0 · 8 · Uploaded 6 days ago

Skill File Structure Sample (Reference)

skill-sample/
├─ SKILL.md              ⭐ Required: skill entry doc (purpose / usage / examples / deps)
├─ manifest.sample.json  ⭐ Recommended: machine-readable metadata (index / validation / autofill)
├─ LICENSE.sample        ⭐ Recommended: license & scope (open source / restriction / commercial)
├─ scripts/
│  └─ example-run.py     ✅ Runnable example script for quick verification
├─ assets/
│  ├─ example-formatting-guide.md  🧩 Output conventions: layout / structure / style
│  └─ example-template.tex         🧩 Templates: quickly generate standardized output
└─ references/           🧩 Knowledge base: methods / guides / best practices
   ├─ example-ref-structure.md     🧩 Structure reference
   ├─ example-ref-analysis.md      🧩 Analysis reference
   └─ example-ref-visuals.md       🧩 Visual reference

More on the Agent Skills spec (Anthropic docs): https://agentskills.io/home

SKILL.md Requirements

├─ ⭐ Required: YAML Frontmatter (must be at top)
│  ├─ ⭐ name                 : unique skill name, follow naming convention
│  └─ ⭐ description          : include trigger keywords for matching
│
├─ ✅ Optional: Frontmatter extension fields
│  ├─ ✅ license              : license identifier
│  ├─ ✅ compatibility        : runtime constraints when needed
│  ├─ ✅ metadata             : key-value fields (author/version/source_url...)
│  └─ 🧩 allowed-tools        : tool whitelist (experimental)
│
└─ ✅ Recommended: Markdown body (progressive disclosure)
   ├─ ✅ Overview / Purpose
   ├─ ✅ When to use
   ├─ ✅ Step-by-step
   ├─ ✅ Inputs / Outputs
   ├─ ✅ Examples
   ├─ 🧩 Files & References
   ├─ 🧩 Edge cases
   ├─ 🧩 Troubleshooting
   └─ 🧩 Safety notes
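Put together, a minimal SKILL.md that satisfies the required and recommended items above might look like this. It is a sketch, and every field value (name, description, author, version, file names) is hypothetical:

```markdown
---
name: example-skill
description: Summarize CSV reports. Trigger on "summarize report", "CSV summary", "report digest".
license: MIT
metadata:
  author: your-handle
  version: 0.1.0
---

# example-skill

## Overview
One paragraph on what the skill does and why.

## When to use
Bullet the trigger phrases and situations.

## Step-by-step
1. Read the input CSV.
2. Produce the summary (e.g. via a bundled `scripts/` helper).

## Inputs / Outputs
- Input: a CSV file path
- Output: a Markdown summary
```

Note how the `description` packs in trigger keywords: that is what the agent matches against before loading the rest of the document.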

Why SkillWink?

Skill files are scattered across GitHub and communities, difficult to search, and hard to evaluate. SkillWink organizes open-source skills into a searchable, filterable library you can directly download and use.

We provide keyword search, version updates, multi-metric ranking (downloads / likes / comments / updates), and open SKILL.md standards. You can also discuss usage and improvements on skill detail pages.


Quick Start:

Import/download skills (.zip/.skill), then place locally:

~/.claude/skills/ (Claude Code)

~/.codex/skills/ (Codex CLI)

One SKILL.md can be reused across tools.

FAQ

Everything you need to know: what skills are, how they work, how to find/import them, and how to contribute.

1. What are Agent Skills?

A skill is a reusable capability package, usually including SKILL.md (purpose/IO/how-to) and optional scripts/templates/examples.

Think of it as a plugin playbook + resource bundle for AI assistants/toolchains.

2. How do Skills work?

Skills use progressive disclosure: the agent loads brief metadata first, loads the full docs only when needed, then executes according to the skill's guidance.

This keeps agents lightweight while preserving enough context for complex tasks.
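The two-stage loading can be sketched as follows. This is a toy illustration of the idea, not any tool's real loader, and it assumes the simple `key: value` frontmatter layout described above:

```python
from pathlib import Path

def read_frontmatter(path: Path) -> dict:
    """Stage 1: load only the brief metadata between the '---' fences."""
    lines = path.read_text().splitlines()
    assert lines[0] == "---", "frontmatter must open the file"
    end = lines.index("---", 1)  # closing fence
    meta = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

def read_full_doc(path: Path) -> str:
    """Stage 2: load the complete body only when the skill is actually used."""
    return path.read_text().split("---", 2)[2].strip()
```

An agent would scan `read_frontmatter` over every installed skill to build a cheap index, and call `read_full_doc` only for the one skill whose description matches the task.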

3. How can I quickly find the right skill?

Use these three together:

  • Semantic search: describe your goal in natural language.
  • Multi-filtering: category/tag/author/language/license.
  • Sort by downloads/likes/comments/updated to find higher-quality skills.

4. Which import methods are supported?

  • Upload archive: .zip / .skill (recommended)
  • Upload skills folder
  • Import from GitHub repository

Note: for all import methods, the file size must be within 10 MB.

5. How to use in Claude / Codex?

Typical paths (may vary by local setup):

  • Claude Code: ~/.claude/skills/
  • Codex CLI: ~/.codex/skills/

One SKILL.md can usually be reused across tools.

6. Can one skill be shared across tools?

Yes. Most skills are standardized docs + assets, so they can be reused wherever the format is supported.

Example: retrieval + writing + automation scripts as one workflow.

7. Are these skills safe to use?

Some skills come from public GitHub repositories and some are uploaded by SkillWink creators. Always review code before installing and own your security decisions.

8. Why does it not work after import?

Most common reasons:

  • Wrong folder path or nested one level too deep
  • Invalid/incomplete SKILL.md fields or format
  • Dependencies missing (Python/Node/CLI)
  • Tool has not reloaded skills yet
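The first two causes in this list can be ruled out mechanically. Here is a small hypothetical checker, not part of any tool, that flags a nested skill folder and missing required frontmatter fields:

```python
from pathlib import Path

def diagnose(skill_dir: Path) -> list[str]:
    """Report the most common import problems for one skill folder."""
    problems = []
    skill_md = skill_dir / "SKILL.md"
    if not skill_md.is_file():
        # Common mistake: archive unpacked one level too deep
        nested = list(skill_dir.glob("*/SKILL.md"))
        if nested:
            problems.append(f"SKILL.md is nested one level too deep: {nested[0]}")
        else:
            problems.append("SKILL.md not found")
        return problems
    text = skill_md.read_text()
    # Frontmatter must open the file and declare name + description
    if not text.startswith("---"):
        problems.append("missing YAML frontmatter at top of SKILL.md")
    for field in ("name:", "description:"):
        if field not in text:
            problems.append(f"missing required field {field!r}")
    return problems
```

If this returns an empty list, the remaining suspects are missing dependencies or a tool that has not reloaded its skills yet.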

9. Does SkillWink include duplicates/low-quality skills?

We try to avoid that. Use ranking + comments to surface better skills:

  • Duplicate skills: compare their differences (speed/stability/focus)
  • Low-quality skills: regularly cleaned up