- 📁 evals/
- 📁 references/
- 📄 SKILL.md
Google Ads account audit and business context setup. Run this first — it gathers business information, analyzes account health, and saves context that all other ads skills reuse. Trigger on "audit my ads", "ads audit", "set up my ads", "onboard", "account overview", "how's my account", "ads health check", "what should I fix in my ads", or when the user is new to AdsAgent and hasn't run an audit before. Also trigger proactively when other ads skills detect that business-context.json is missing.
- 📁 agents/
- 📁 references/
- 📄 SKILL.md
Bridge audit trails and memory frames for comprehensive session recording. Greek: ζ (zeta) — Decision Trail, η (eta) — Proof Store. Use when recording audit sessions, creating memory bundles, linking audit trails to memory, or finalizing session proofs with memory archives.
Canonical reconciliation runsheet for AUD artefacts. Create or update the audit, disposition every finding, reconcile specs/contracts, and hand off to closure only when audit state supports it.
- 📁 config/
- 📁 workflows/
- 📄 SKILL.md
Structured workflows for triaging GitHub issues, reviewing PRs, sprinting through milestones, and running security/quality/performance audits — with configurable validation gates, auto-detected security scanning, journal audit trails, and human-in-the-loop checkpoints. Use this skill whenever you are working on a GitHub issue, reviewing or submitting a PR, running any kind of code audit, updating dependencies, or working through a milestone. Also use when the user mentions issue numbers, PR numbers, milestone names, or asks you to "fix", "triage", "audit", "review", or "update deps".

---

# GitHub Commander

Structured, configurable workflows that teach AI agents to triage GitHub issues, review PRs, and sprint through milestones. Every action is journaled for full audit trails, and human-in-the-loop checkpoints keep you in control.

Every step journals its results because that creates a searchable audit trail: future sessions can find exactly what was tried, what passed, and what failed, without the human needing to remember or repeat context.

## When to Load
Audit an existing repository or paper-code release for open-source hardening gaps across correctness, maintainability, testability, security, performance, observability, and documentation. Use when the user says "audit this repo", "harden this project", "open source readiness", or wants a prioritized file-level report before changing code.
Use when reviewing code security, finding vulnerabilities, testing exploitability, hardening implementation details, and validating that fixes are stable and production-safe. Keywords: security audit, vuln scan, hardening, threat model, secure coding, dependency audit, SAST, secrets, path traversal, command injection, SSRF, XSS, CSRF, authz, authn.
Comprehensive website and web app audit covering security, UX, performance, accessibility, SEO, compliance, and revenue protection. Use this skill whenever the user asks to audit, review, check, or score a website or web application. Also use when the user says 'full-stack audit', 'UX audit', 'security audit', 'launch checklist', 'is my site ready to launch', 'check my site', 'review my code for issues', 'what did I miss', or any variation of wanting a comprehensive quality review before or after launch. This skill catches the issues that AI-built and vibe-coded sites consistently get wrong: client-side paywalls, exposed database tables, missing security headers, broken mobile layouts, and trust gaps that kill conversion. Triggers even if the user only asks about one area (e.g., 'check my security') because problems compound across categories.
Use when the user asks for a bug audit of a project or component.
Run a quorum audit manually — trigger consensus review, re-run failed audits, test audit prompts, or force a specific provider. Use when the hook-based auto-trigger didn't fire, or you want explicit control. Triggers on 'run audit', 'audit again', 'review my code', 'check evidence'.
- 📁 data/
- 📁 evals/
- 📁 examples/
- 📄 README.md
- 📄 README.zh.md
- 📄 skill.json
Audit, design, and implement AI agent harnesses for any codebase. A harness is the constraints, feedback loops, and verification systems surrounding AI coding agents — improving it is the highest-leverage way to improve AI code quality. Three modes: Audit (scorecard), Implement (set up components), Design (full strategy). Use whenever the user mentions harness engineering, agent guardrails, AI coding quality, AGENTS.md, CLAUDE.md setup, agent feedback loops, entropy management, AI code review, vibe coding quality, harness audit, harness score, AI slop, agent-first engineering. Also trigger when users want to understand why AI agents produce bad code, make their repo work better with AI agents, set up CI/CD for agent workflows, design verification systems, or scale AI-assisted development. Proactively suggest when discussing AI code drift or controlling AI-generated code quality.

---

# Harness Engineering Guide

You are a harness engineering consultant. Your job is to audit, design, and implement the environments, constraints, and feedback loops that make AI coding agents work reliably at production scale.

**Core Insight**: Agent = Model + Harness. The harness is everything surrounding the model: tool access, context management, verification, error recovery, and state persistence. Changing only the harness (not the model) improved LangChain's agent from 52.8% to 66.5% on Terminal Bench 2.0.

## Pre-Assessment Gate

Before running an audit, answer these 5 questions to determine the appropriate audit depth.

1. Is the project expected to live beyond 1 month?
2. Will AI agents modify this codebase going forward?
3. Does the project have (or plan to have) >500 LOC?
4. Has there been at least one instance of AI-generated code causing problems?
5. Is there more than one contributor (human or agent)?

| "Yes" Count | Route | What You Get |
|-------------|-------|--------------|
| **4-5** | **Full Audit** | All 45 items scored across 8 dimensions. Detailed report with improvement
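The routing logic of the pre-assessment gate above can be sketched in a few lines. This is a minimal illustration, not the skill's actual implementation: only the 4-5 "Full Audit" route is specified in the text, so the label used for lower counts here is a placeholder assumption.

```python
# Sketch of the pre-assessment gate: count "yes" answers to the five
# questions and route to an audit depth. Only the 4-5 -> Full Audit
# mapping comes from the table above; the fallback label is hypothetical.

QUESTIONS = [
    "Is the project expected to live beyond 1 month?",
    "Will AI agents modify this codebase going forward?",
    "Does the project have (or plan to have) >500 LOC?",
    "Has AI-generated code caused at least one problem?",
    "Is there more than one contributor (human or agent)?",
]

def route_audit(answers: list[bool]) -> str:
    """Map five yes/no answers to an audit depth."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("expected one answer per question")
    yes_count = sum(answers)
    if yes_count >= 4:
        # Full Audit: all 45 items scored across 8 dimensions.
        return "Full Audit"
    # Lighter routes are not specified in the source text.
    return "Lighter audit (route unspecified)"

print(route_audit([True, True, True, True, False]))  # -> Full Audit
```

A real gate would likely return richer route metadata (item counts, report depth), but the yes-count threshold is the whole decision.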
Audit BMAD source files for file-reference convention violations using parallel Haiku subagents. Use when the user requests an "audit file references" pass for a skill, workflow, or task.
- 📁 subcommands/
- 📄 SKILL.md
This skill MUST be invoked when the user asks for systematic bug analysis, or any focused audit such as "api audit", "auditcodex", "cache audit", "disaster recovery", "error review", "feature flags audit", "integration security", "observability audit", "queue audit", "release discipline", "serialization audit", "session audit", "tech debt", "tenant isolation", "test review", "upload security", "ai code audit", "dead code", any security vulnerability scan such as "sql injection", "xss", "rce", "ssrf", "xxe", "access control", "path traversal", "file upload", "ssti", "graphql injection", "business logic", "missing auth", or "security recon", or a FULL security sweep such as "güvenlik taraması" (Turkish for "security scan"), "security scan", "full security scan", "run all security scans", or "security sweep". Use `/bug-report` for general scans, `/bug-report <subcommand>` for domain-specific audits, and `/bug-report security-sweep` to run all security scans in parallel. All modes write verified findings to BUG-REPORT.md using the shared report contract.