Build a ForgeCAD model while actively hunting for API friction — missing helpers, awkward patterns, bad defaults, verbose boilerplate. Use when asked to dogfood, stress-test the API, or build a model with the goal of improving ForgeCAD.
- 📁 references/
- 📁 scripts/
- 📄 SKILL.md
Generate a persistent .nexus-map/ knowledge base that lets any AI session instantly understand a codebase's architecture, systems, dependencies, and change hotspots. Use when starting work on an unfamiliar repository, onboarding with AI-assisted context, preparing for a major refactoring initiative, or enabling reliable cold-start AI sessions across a team. Produces INDEX.md, systems.md, concept_model.json, git_forensics.md and more. Requires shell execution and Python 3.10+. For ad-hoc file queries or instant impact analysis during active development, use nexus-query instead.
- 📁 evals/
- 📁 install/
- 📁 references/
- 📄 .gitignore
- 📄 CHANGELOG.md
- 📄 CONTRIBUTING.md
Detect and clean up AI-sounding boilerplate in Chinese and English text. Use for rewrite requests such as "remove the AI flavor", "write like a human", "make it more natural", or "don't sound like a template"; adjust the intensity to the scenario while preserving facts, terminology, and register.
- 📁 agents/
- 📁 assets/
- 📁 references/
- 📄 SKILL.md
Inspect external prediction model implementations and adapt them to EasyTSF task contracts. Use when the user provides model code, class definitions, forward logic, or config fragments and wants Codex to classify the target task as `sequence_prediction`, `graph_prediction`, or `grid_prediction`, determine the current repository fit, and produce either a direct adaptation plan or a repository extension plan.
Create a pull request for EmbodiChain following the project's PR template and conventions.
- 📁 reference/
- 📄 README.md
- 📄 SKILL.md
Offensive AI security testing and exploitation framework. Systematically tests LLM applications for OWASP Top 10 vulnerabilities including prompt injection, model extraction, data poisoning, and supply chain attacks. Integrates with pentest workflows to discover and exploit AI-specific threats.
- 📁 .github/
- 📁 agents/
- 📁 assets/
- 📄 README.md
- 📄 README_CN.md
- 📄 SKILL.md
Use when user requests diagrams, flowcharts, architecture charts, or visualizations. Also use proactively when explaining systems with 3+ components, complex data flows, or relationships that benefit from visual representation. Generates .drawio XML files and exports to PNG/SVG/PDF locally using the native draw.io desktop CLI.
Add assessment annotations to a Semiont resource — flag scheduling risks, dangers, inaccuracies, logical gaps, or other evaluative concerns using AI-assisted or manual assessment.
- 📁 references/
- 📄 embed.go
- 📄 SKILL.md
Investigate incidents, debug performance issues, analyze logs, and manage observability resources in Dynatrace using the dtctl CLI. Use this skill whenever the user asks about error rates, latency spikes, service health, crash-looping pods, web vitals, SLO status, open problems, root cause analysis, log patterns, trace analysis, or building dashboards — even if they don't mention Dynatrace by name. Also covers DQL queries, workflow management, notebook and dashboard creation, settings configuration, and any operations against a Dynatrace environment.
A deterministic thinking partner that challenges assumptions and applies mental models to sharpen decisions, solve problems, and think more clearly. Use this skill whenever a user says "help me think through X", "challenge my thinking", "what am I missing", "apply mental models to this", "play devil's advocate", "stress test this idea", "poke holes in my plan", "help me decide between X and Y", "what are the second-order effects", "I'm stuck on a decision", names any specific model (SWOT, first principles, inversion, pre-mortem, etc.), or asks for structured reasoning on any ambiguous, high-stakes, or complex problem. Also trigger when the user seems uncertain, is rationalizing, or is asking "am I thinking about this right?" Even casual phrases like "what do you think about..." on non-trivial topics should trigger this skill.

---

# Thinking Partner

A deterministic thinking partner that challenges assumptions and applies mental models to help users think better and more clearly. Not a lecture — a sparring session.

## Core Philosophy

Good thinking is an active achievement, not a default state. The goal is not to tell the user what to think, but to sharpen *how* they think by:

1. **Challenging assumptions** — Surface hidden beliefs the user is treating as facts
2. **Applying mental models** — Select and deploy the right thinking frameworks for the situation
3. **Detecting orientation capture** — Notice when thinking serves comfort instead of truth
4. **Maintaining productive tension** — Hold complexity open long enough to find real insight

You are not a yes-machine. You are not an interrogator. You are a thinking partner: respectful, direct, genuinely curious, and willing to push back.

## When This Triggers

- "Help me think through X"
- "Challenge my thinking / assumptions"
- "What am I missing?"
- "Apply [any model name] to this"
- "Play devil's advocate"
- "Stress test this idea / plan"
- "Help me decide between X and Y"
- "What are the second-order effects?"
- "Am I thinking about this right?"
- 📁 assets/
- 📁 references/
- 📄 SKILL.md
Use this skill when working with Salesforce Agent Script — the scripting language for authoring Agentforce agents using the Atlas Reasoning Engine. Triggers include: creating, modifying, or comprehending Agent Script agents; working with AiAuthoringBundle files or .agent files; designing topic graphs or flow control; producing or updating an Agent Spec; validating Agent Script or diagnosing compilation errors; previewing agents or debugging behavioral issues; deploying, publishing, activating, or deactivating agents; deleting or renaming agents; authoring AiEvaluationDefinition test specs or running agent tests. This skill teaches Agent Script from scratch — AI models have zero prior training data on this language. Do NOT use for Apex development, Flow building, Prompt Template authoring, Experience Cloud configuration, or general Salesforce CLI tasks unrelated to Agent Script.
Research and compare Apple products to help decide if they're worth buying. Use when the user: (1) asks whether to buy a Mac, iPhone, iPad, Apple Watch, or AirPods; (2) wants to compare models; (3) seeks a buying recommendation; (4) mentions an Apple product model name or number (e.g. "iPhone 17", "MacBook Pro M4", "iPad Air 13"); (5) uses a comparison pattern like "X vs Y" where X or Y is an Apple product (e.g. "iPhone 17 vs iPhone 17e", "MacBook Air vs MacBook Pro"); (6) asks about upgrading, waiting, or which model to choose.