Query and browse evaluation results stored in MLflow. Use when the user wants to look up runs by invocation ID, compare metrics across models, fetch artifacts (configs, logs, results), or set up the MLflow MCP server. ALWAYS triggers on mentions of MLflow, experiment results, run comparison, invocation IDs in the context of results, or MLflow MCP setup.
- 📁 references/
- 📁 scripts/
- 📄 SKILL.md
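For concreteness, here is a minimal sketch of the lookup flow the MLflow skill describes: searching runs by an invocation ID, comparing a metric, and fetching an artifact. The tracking URI, experiment name, tag name, metric name, and artifact path are all assumptions for illustration, not values the skill defines.

```python
import mlflow
from mlflow.tracking import MlflowClient

mlflow.set_tracking_uri("http://localhost:5000")   # adjust to your tracking server
client = MlflowClient()

# Find runs tagged with a given invocation ID (tag name is an assumption).
runs = mlflow.search_runs(
    experiment_names=["evaluation"],                 # assumed experiment name
    filter_string="tags.invocation_id = 'abc123'",   # assumed tag and value
)

if not runs.empty:
    # Compare a metric across the matching runs (metric name is an assumption).
    print(runs[["run_id", "metrics.accuracy"]])
    # Fetch an artifact (e.g. a config file) from the first matching run.
    local_path = client.download_artifacts(runs.iloc[0]["run_id"], "config.yaml")
    print("downloaded to", local_path)
```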
Query AutoRAG-Research pipeline results using natural language. Converts questions to SQL, executes safely (SELECT-only), returns formatted results. Auto-detects DB connection from configs/db.yaml or env vars. Use for pipeline comparison, metrics analysis, token usage.
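A minimal sketch of the two behaviors mentioned above, under stated assumptions: connection details come from configs/db.yaml with an environment-variable fallback, and only a single SELECT statement is ever executed. The YAML key, env var, Postgres driver, and table/column names are illustrative guesses, not the skill's actual schema.

```python
import os
import re
import yaml
import psycopg2   # a Postgres connection is assumed here

def load_dsn(config_path="configs/db.yaml"):
    # Prefer the YAML config, fall back to an environment variable.
    if os.path.exists(config_path):
        with open(config_path) as f:
            return yaml.safe_load(f)["dsn"]    # "dsn" key is an assumption
    return os.environ["DATABASE_URL"]          # env var name is an assumption

def run_readonly(sql: str):
    # Refuse multi-statement input and anything that is not a SELECT.
    if ";" in sql.strip().rstrip(";") or not re.match(r"(?is)^\s*select\b", sql):
        raise ValueError("only single SELECT statements are allowed")
    with psycopg2.connect(load_dsn()) as conn, conn.cursor() as cur:
        cur.execute(sql)
        return cur.fetchall()

# e.g. compare token usage across pipelines (table and columns are assumptions)
print(run_readonly(
    "SELECT pipeline_name, SUM(total_tokens) FROM runs GROUP BY pipeline_name"
))
```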
Architecture Decision Records with evaluate and design modes. Document significant technical decisions with context, alternatives, and consequences.
Add two numbers and echo the result.
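A minimal sketch of the described behavior in Python (the skill's own implementation language is not shown here):

```python
# Minimal sketch: take two numbers, add them, and echo the result.
a, b = 2, 3
print(f"{a} + {b} = {a + b}")
```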
- 📄 LICENSE
- 📄 README.md
- 📄 SimulationsInferentialMistakes.R
Statistical quality checklist for movement science and neuroscience research. Auto-triggers when analyzing data, interpreting results, running statistics, writing results sections, or reviewing analysis code. Based on Makin & Orban de Xivry (2019, eLife).
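One item from that checklist lends itself to a short worked example: concluding that two effects differ because one is statistically significant and the other is not, instead of testing the difference directly. The sketch below uses simulated data; the group labels and effect sizes are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, 30)   # little to no effect in controls
patients = rng.normal(0.5, 1.0, 30)  # modest effect in patients

# Misleading approach: two separate one-sample tests against zero.
p_control = stats.ttest_1samp(control, 0.0).pvalue
p_patients = stats.ttest_1samp(patients, 0.0).pvalue
print(f"control vs 0: p={p_control:.3f}, patients vs 0: p={p_patients:.3f}")

# Better: test the group difference directly.
p_diff = stats.ttest_ind(patients, control).pvalue
print(f"patients vs control: p={p_diff:.3f}")
```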
Analyze deal pipeline health and predict outcomes.
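A hedged sketch of the kind of summary such an analysis might produce, using a hypothetical CRM export; the column names and the crude win-rate heuristic are assumptions, not the skill's actual model.

```python
import pandas as pd

# Hypothetical CRM export: stage, deal size, and outcome (None = still open).
deals = pd.DataFrame({
    "stage": ["Prospect", "Proposal", "Negotiation", "Closed", "Closed"],
    "amount": [10_000, 25_000, 40_000, 30_000, 15_000],
    "is_won": [None, None, None, True, False],
})

# Open pipeline value by stage, plus historical win rate as a crude predictor.
open_value = deals[deals["is_won"].isna()].groupby("stage")["amount"].sum()
win_rate = deals["is_won"].dropna().astype(float).mean()
print(open_value)
print(f"historical win rate: {win_rate:.0%}")
```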
Pipeline gates documentation and troubleshooting. Auto-triggers on: 'gate failed', 'agent blocked', 'missing artifact', 'verification failed', 'scope=', 'claude-gates', 'pipeline ordering', 'writing an agent', 'verification: field', 'conditions: field', 'Result: PASS', 'Result: FAIL', 'Result: REVISE', 'Result: CONVERGED', 'SubagentStop', 'SubagentStart', 'gate chain', 'plan-gate', 'pipeline-block'.
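Purely for illustration, here is a hypothetical gate check in the spirit of the triggers above (missing artifact, scope, Result: PASS/FAIL). This is not the claude-gates implementation; the directory layout, scope value, and function shape are all assumptions.

```python
from pathlib import Path

def verify_artifact(scope: str, artifact: str) -> str:
    # e.g. a plan-gate in scope "plan" might require plan.md before later agents run
    path = Path("artifacts") / scope / artifact          # layout is assumed
    if path.exists():
        return "Result: PASS"
    return f"Result: FAIL (missing artifact: {path})"

print(verify_artifact("plan", "plan.md"))
```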