Delegate a coding task directly to OpenCode without a planning phase. **Trigger this skill whenever the user's request starts with or contains a direct delegation verb** — "delegate to opencode", "delegate this to opencode", "use opencode to X", "have opencode do X", "just use opencode", "run this through opencode", "ask opencode to X", "hand this to opencode" — REGARDLESS of how large or complex the task itself is. The user's explicit instruction to delegate is authoritative; do not second-guess it based on task size, scope, or complexity. Also trigger when the user names a specific OpenCode-supported model (GLM-5.1, GPT-5, Gemini, Kimi, DeepSeek, Claude via opencode, local Ollama) and asks to use it. If the user wants Claude to plan first before delegating ("plan and implement", "plan this and then delegate", "design first then implement", "think through the approach"), use the `opencode-implement` skill instead — but only if the user's phrasing explicitly asks for planning. "Delegate" without "plan" means direct delegation, even for big tasks.
- 📁 .github/
- 📁 _layouts/
- 📁 assets/
- 📄 .gitignore
- 📄 _config.yml
- 📄 CHANGELOG.md
Tailor the student's resume, generate a cover letter, and build an interview-prep bundle for a specific job posting. Runs in the student's working directory so it can cite evidence from their actual project files (capstone, notebooks, READMEs, PDFs). Outputs ATS-friendly PDFs in ./applications/<company-slug>-<date>/. Also investigates its own output when students report issues — if a student says the PDF looks wrong (missing content, stretched photo, weird section order, anything off), follow the "Debugging this skill" playbook at the bottom of this file to match the symptom against known failure modes and draft a bug report for the maintainer.
- 📁 agents/
- 📁 evals/
- 📁 references/
- 📄 SKILL.md
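The output convention above (`./applications/<company-slug>-<date>/`) can be sketched as a small helper. The slug rules here (lowercase, non-alphanumerics collapsed to hyphens) are an assumption for illustration, not the skill's documented behavior:

```python
import re
import datetime

def application_dir(company: str, date: datetime.date) -> str:
    """Build an ./applications/<company-slug>-<date>/ output path.

    Slugging rules are assumed: lowercase the company name and
    collapse every run of non-alphanumeric characters to a hyphen.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", company.lower()).strip("-")
    return f"./applications/{slug}-{date.isoformat()}/"

print(application_dir("Acme Corp.", datetime.date(2024, 5, 1)))
# → ./applications/acme-corp-2024-05-01/
```

Using ISO dates keeps the per-application folders sorted chronologically in a plain directory listing.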
**Dra. Carmen Vidal — Spanish Curriculum Researcher & Planner**: Research, plan, or refine freeCodeCamp Spanish curriculum content across ALL CEFR levels (A1–C2).
Google Ads: Manage campaigns, ad groups, ads, keywords, budgets, and performance analysis.
Reconstruct high-level past events (decisions, commitments, timelines) from /chronicle chapters. Use for longer-term recall beyond what <context_summary> paragraphs cover — the arc of past discussions, decisions made, and what happened in recent weeks.
- 📁 .claude/
- 📁 db/
- 📁 docs/
- 📄 .gitattributes
- 📄 .gitignore
- 📄 CHANGELOG.md
Use this skill whenever the user asks anything about Higgsfield AI — writing or refining video/image prompts, choosing a model (Kling, Sora 2, Veo, Wan, Seedance, Minimax Hailuo, DoP, Soul, Nano Banana, Seedream, Flux, GPT Image, etc.), camera controls, named motion presets, Soul ID character consistency, Cinema Studio 2.5/3.0, Vibe Motion, troubleshooting failed generations, credit optimization, Photodump, or any mention of higgsfield.ai. Also trigger on generic "write me a video prompt" or "make me an AI video prompt" requests when Higgsfield is the user's configured platform.
Cross-industry operations compliance read from chunk-level scene, security, logistics, and attendance JSON.
- 📁 .github/
- 📄 BCOS.md
- 📄 config.json
- 📄 CONTRIBUTING.md
Access the Elyan Labs GPU compute marketplace — V100, RTX 5070, and POWER8 inference endpoints via x402 USDC micropayments.
- 📁 architecture-ownership/
- 📁 codex-analysis/
- 📁 codex-sandbox/
- 📄 SKILL.md
Build, critique, and iterate high-converting marketing or product landing pages using React + Vite + TypeScript + Tailwind and shadcn/ui components, with all icons sourced from Iconify. Use when the user asks for a landing page, sales page, signup page, CRO improvements, above-the-fold vs below-the-fold structure, hero + CTA copy, section order, or wants production-ready shadcn + Vite code.
Test TUI (Text User Interface) applications using tmux. Use this skill when you need to automate testing of terminal-based applications by sending keystrokes and capturing pane output.
- 📁 examples/
- 📁 references/
- 📁 scripts/
- 📄 .gitignore
- 📄 CONTRIBUTING.md
- 📄 LICENSE
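The keystroke-send / pane-capture loop described above can be sketched by building the tmux command lines up front. The session name and the keys sent are illustrative assumptions; the sketch only constructs the commands (using real tmux subcommands: `new-session`, `send-keys`, `capture-pane`, `kill-session`) rather than executing them:

```python
import shlex

def tmux_script(session: str, keys: list[str]) -> list[str]:
    """Build the tmux command lines for one keystroke-driven test run.

    Session name and keystrokes are caller-supplied assumptions; each
    returned string is a shell command to run in order.
    """
    cmds = [f"tmux new-session -d -s {session}"]  # start a detached session
    for key in keys:
        # send one keystroke/string followed by Enter
        cmds.append(f"tmux send-keys -t {session} {shlex.quote(key)} Enter")
    cmds.append(f"tmux capture-pane -t {session} -p")  # dump pane text to stdout
    cmds.append(f"tmux kill-session -t {session}")     # clean up the session
    return cmds

script = tmux_script("tui-test", ["htop", "q"])
print("\n".join(script))
```

In a real test you would run each line via a shell, sleep briefly between `send-keys` and `capture-pane` so the TUI has time to redraw, and assert on the captured pane output.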
Git Context Controller (GCC) v2 — Lean agent memory backed by real git. Stores hash + intent + optional decision notes instead of verbose markdown. Auto-bridges to aiyoucli vector memory when available. Dual mode: git-backed (lean index.yaml) or standalone (markdown fallback). Triggers on /gcc commands or natural language like 'commit this progress', 'branch to try an alternative', 'merge results', 'recover context'.
- 📁 references/
- 📁 scripts/
- 📄 README.md
- 📄 SKILL.md
Set up hierarchical Intent Layer (AGENTS.md files) for codebases. Use when initializing a new project, adding context infrastructure to an existing repo, user asks to set up AGENTS.md, add intent layer, make agents understand the codebase, or scaffolding AI-friendly project documentation.

---

# Intent Layer

Hierarchical AGENTS.md infrastructure so agents navigate codebases like senior engineers.

## Core Principle

**Only ONE root context file.** CLAUDE.md and AGENTS.md should NOT coexist at project root. Child AGENTS.md in subdirectories are encouraged for complex subsystems.

## Workflow

```
1. Detect state
   scripts/detect_state.sh /path/to/project
   → Returns: none | partial | complete

2. Route
   none/partial → Initial setup (steps 3-5)
   complete     → Maintenance (step 6)

3. Measure [gate - show table first]
   scripts/analyze_structure.sh /path/to/project
   scripts/estimate_tokens.sh /path/to/each/source/dir

4. Decide
   No root file  → Ask: CLAUDE.md or AGENTS.md?
   Has root file → Add Intent Layer section + child nodes if needed

5. Execute
   Use references/templates.md for structure
   Use references/node-examples.md for real-world patterns
```