This skill should be used when the user asks to "test the harness", "run integration tests", "validate features with real API", "test with real model calls", "run agent loop tests", "verify end-to-end", or needs to verify OpenHarness features on a real codebase with actual LLM calls.
Create a temporary real project and prove a prove_it feature works (or doesn't) end-to-end. Builds a disposable git repo, writes a focused config, runs real dispatches through the installed or local prove_it, and produces a human-readable session transcript. Use when you need to prove a feature, reproduce a bug, or validate a fix against a real project — not just unit tests.

---

# Prove a feature works (or doesn't)

Build a throwaway project and exercise a prove_it feature through the real dispatcher pipeline. The output is a human-readable transcript the user can read to confirm the system works end-to-end.

## What "prove" means — read this first

**Proving a feature means watching the feature do its actual job, not just watching the dispatcher accept a config and return a decision.**

If the feature is a reviewer that detects dead code, you must:

1. Create a project that **contains dead code** → run the reviewer → see it **catch** the dead code
2. Create a project that **has no dead code** → run the reviewer → see it **pass clean**

If the feature is a task that validates API design, you must:

1. Write an API file with **real design violations** → see the task **reject** it
2. Write a clean API file → see the task **approve** it

If the feature is a when-condition gate, you must:

1. Run with the condition **unmet** → see the task **get skipped**
2. Run with the condition **met** → see the task **actually execute and produce its real output**
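As a sketch of the "prove both directions" pattern for the dead-code case, assuming a hypothetical `prove-it` CLI invoked as `prove-it run --config prove_it.yaml` and a hypothetical config shape (the real invocation and config will differ), a throwaway harness might look like:

```python
import subprocess
import tempfile
from pathlib import Path

# Hypothetical invocation; substitute the real prove_it entry point.
PROVE_IT_CMD = ["prove-it", "run", "--config", "prove_it.yaml"]

DEAD = "def used():\n    return 1\n\ndef never_called():  # dead code\n    return 2\n"
CLEAN = "def used():\n    return 1\n"

def run_case(name: str, source: str) -> str:
    """Build a disposable git repo containing `source`, run the reviewer, return a transcript."""
    with tempfile.TemporaryDirectory() as tmp:
        repo = Path(tmp)
        (repo / "app.py").write_text(source)
        # Hypothetical config: a single dead-code reviewer, nothing else.
        (repo / "prove_it.yaml").write_text("reviewers:\n  - dead_code\n")
        for cmd in (["git", "init", "-q"],
                    ["git", "add", "-A"],
                    ["git", "-c", "user.email=t@example.com", "-c", "user.name=t",
                     "commit", "-qm", "seed"]):
            subprocess.run(cmd, cwd=repo, check=True)
        result = subprocess.run(PROVE_IT_CMD, cwd=repo, capture_output=True, text=True)
        return f"=== {name} ===\n{result.stdout}{result.stderr}"

# Both directions, or it isn't proof: flag the dirty repo AND pass the clean one.
print(run_case("contains dead code (expect: caught)", DEAD))
print(run_case("no dead code (expect: pass clean)", CLEAN))
```

The two calls matter because of the symmetry: a reviewer that flags both repos is just noisy, and one that passes both is inert; the transcript must show one catch and one clean pass.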
Bug confirmation and reproduction. Use when: (1) a bug found by model checking needs code-level validation, (2) a bug must be reproduced in the real system to confirm it is not a false positive, (3) you need to assess whether a TLA+ counterexample maps to a real, triggerable scenario.
Skill files are scattered across GitHub and communities, difficult to search, and hard to evaluate. SkillWink organizes open-source skills into a searchable, filterable library you can directly download and use.
We provide keyword search, version updates, multi-metric ranking (downloads / likes / comments / updates), and open SKILL.md standards. You can also discuss usage and improvements on skill detail pages.
Sort by downloads/likes/comments/updated to find higher-quality skills.
4. Which import methods are supported?
Upload archive: .zip / .skill (recommended)
Upload skills folder
Import from GitHub repository
Note: for all import methods, the file size must be within 10MB.
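For the archive route, a minimal packaging sketch, assuming your skill lives in a local `my-skill/` folder with `SKILL.md` at its top level (folder name and layout are illustrative):

```python
import shutil
from pathlib import Path

skill_dir = Path("my-skill")  # folder with SKILL.md at its top level
assert (skill_dir / "SKILL.md").exists(), "SKILL.md must sit at the folder root"

# Archives the folder's contents, so SKILL.md lands at the zip root.
archive = shutil.make_archive(str(skill_dir), "zip", root_dir=skill_dir)

# Stay under the 10MB limit noted above.
size_mb = Path(archive).stat().st_size / 1_000_000
print(f"{archive}: {size_mb:.2f} MB")
```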
5. How to use in Claude / Codex?
Typical paths (may vary by local setup):
Claude Code: ~/.claude/skills/
Codex CLI: ~/.codex/skills/
One SKILL.md can usually be reused across tools.
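A minimal install sketch, assuming the default paths above and a local skill folder named `my-skill/` (both are illustrative and may vary by setup):

```python
import shutil
from pathlib import Path

skill = Path("my-skill")  # must contain SKILL.md at its top level

for dest_root in (Path.home() / ".claude" / "skills",
                  Path.home() / ".codex" / "skills"):
    dest_root.mkdir(parents=True, exist_ok=True)
    # Copy the folder itself, not its contents, so the tool sees
    # ~/.claude/skills/my-skill/SKILL.md rather than a nested copy.
    shutil.copytree(skill, dest_root / skill.name, dirs_exist_ok=True)
    print(f"installed -> {dest_root / skill.name}")
```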
6. Can one skill be shared across tools?
Yes. Most skills are standardized docs + assets, so they can be reused wherever the format is supported.
Example: retrieval + writing + automation scripts as one workflow.
7. Are these skills safe to use?
Some skills come from public GitHub repositories and some are uploaded by SkillWink creators. Always review the code before installing; you own your security decisions.
8. Why does it not work after import?
Most common reasons:
Wrong folder path or nested one level too deep
Invalid/incomplete SKILL.md fields or format
Dependencies missing (Python/Node/CLI)
Tool has not reloaded skills yet
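A quick self-check sketch covering the first two causes, assuming SKILL.md carries YAML frontmatter with `name` and `description` fields (required fields may vary by tool):

```python
from pathlib import Path

def check_skill(folder: Path) -> list[str]:
    """Flag the common import failures: bad path, extra nesting, broken frontmatter."""
    problems = []
    md = folder / "SKILL.md"
    if not md.exists():
        nested = sorted(folder.glob("*/SKILL.md"))
        problems.append(f"nested one level too deep: {nested[0]}" if nested
                        else "SKILL.md not found at the folder root")
        return problems
    text = md.read_text(encoding="utf-8")
    if not text.startswith("---"):
        problems.append("missing frontmatter block (--- ... ---)")
    for field in ("name", "description"):
        if f"{field}:" not in text:
            problems.append(f"frontmatter field missing: {field}")
    return problems

issues = check_skill(Path.home() / ".claude" / "skills" / "my-skill")
print("\n".join(issues) if issues else "looks OK; try reloading the tool")
```

If this finds nothing wrong, the remaining causes above (missing dependencies, tool not reloaded) are the next things to check.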
9. Does SkillWink include duplicates/low-quality skills?
We try to avoid that. Use ranking + comments to surface better skills.