Behavioral guidelines to reduce common LLM coding mistakes. Use when writing, reviewing, or refactoring code to avoid overcomplication, make surgical changes, surface assumptions, and define verifiable success criteria.
Use when a user asks to initialize a domain flywheel from natural language context, especially when environment details are incomplete or mixed with execution assumptions.
- 📄 agent-mistakes.md
- 📄 README.md
- 📄 silent-bugs.md
Use this skill whenever a researcher wants to test, validate, stress-test, or falsify a research idea or hypothesis — especially in AI/ML/deep learning. Trigger on phrases like "I have an idea," "would this work," "test this hypothesis," "sanity check my idea," "what's wrong with this idea," "review my results," "is this publishable," "why isn't this working," or any request to evaluate the feasibility, novelty, or correctness of a research concept.
- 📁 assets/
- 📁 commands/
- 📁 docs/
- 📄 .gitignore
- 📄 .skillshare-meta.json
- 📄 audit-permissions
A strategic thinking partner for Claude Code that separates deciding from building. Challenges assumptions, compares approaches, and hands execution a ready-to-run prompt in a fresh session. Handles skill routing, context handoff, and memory management.
Core methodology for AI-powered scientific hypothesis generation. Auto-loaded when discovering connections, generating hypotheses, exploring cross-disciplinary links. Includes facet recombination, adversarial prompting, evolutionary refinement, groundedness checking, and hypothesis card format.
- 📁 docs/
- 📁 examples/
- 📁 prompts/
- 📄 .gitattributes
- 📄 .gitignore
- 📄 LICENSE
PoggioAI/MSc research pipeline: hypothesis to paper in ≤10 steers. Runs persona debate, adversarial lit review, parallel theory+experiment tracks, and editorial quality gates.
Policy for non-semantic refactors that keep math meaning unchanged while making Lean/Mathlib code easier to review and harder to break: minimal imports, scoped assumptions, localized `classical`, proof tidying, lint fixes, perf/typeclass risk control, and PR splitting.
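As a minimal sketch of the "localized `classical`" point above (the theorem name and statement are illustrative, not taken from any particular Mathlib file), the idea is to invoke `classical` inside the single proof that needs decidability rather than `open Classical` at file scope:

```lean
import Mathlib.Tactic

-- Hypothetical example: `classical` is scoped to this one proof,
-- so classical decidability instances do not leak into the rest
-- of the file and later proofs stay constructive where possible.
theorem double_neg_elim (p : Prop) (h : ¬¬p) : p := by
  classical            -- localized, not `open Classical` at the top of the file
  by_cases hp : p      -- case split needs a Decidable instance
  · exact hp
  · exact absurd hp h
```

Keeping the tactic local is exactly the kind of non-semantic change the policy describes: the proved statement is unchanged, but the file is easier to review and less likely to break downstream proofs via instance pollution.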