Build against the memories.sh SDK packages in application code. Use when working with `@memories.sh/core` or `@memories.sh/ai-sdk`, including: (1) Initializing `MemoriesClient`, (2) Reading, writing, searching, or editing memories from backend code, route handlers, workers, or scripts, (3) Integrating memories with the Vercel AI SDK via `memoriesMiddleware`, `memoriesTools`, `preloadContext`, or `createMemoriesOnFinish`, (4) Choosing and applying `tenantId` / `userId` / `projectId` scoping, (5) Managing SDK skill files or working with the management APIs, or (6) Debugging memories SDK usage in TypeScript or JavaScript applications. Use `memories-cli` for CLI workflows, `memories-mcp` for MCP setup, and `memories-dev` for monorepo internals.
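The scoping idea in point (4) can be sketched in TypeScript. This is a hedged illustration only: the class name `MemoriesClient` comes from the description above, but the constructor options and the `create` / `search` method shapes below are assumptions, not the confirmed SDK API, so a local stub stands in for `import { MemoriesClient } from "@memories.sh/core"`.

```typescript
// Hypothetical scope shape: tenantId is the hard boundary,
// userId/projectId narrow memories further (names from the skill text;
// field semantics here are illustrative assumptions).
interface Scope {
  tenantId: string;   // organization boundary
  userId?: string;    // per-end-user memories
  projectId?: string; // per-project memories
}

interface Memory {
  id: number;
  content: string;
  scope: Scope;
}

// Local stand-in for the real client so this sketch runs without the SDK.
class MemoriesClientStub {
  private store: Memory[] = [];
  private nextId = 1;
  constructor(private defaultScope: Scope) {}

  // Write a memory under the default scope (or an explicit override).
  create(content: string, scope: Scope = this.defaultScope): Memory {
    const memory = { id: this.nextId++, content, scope };
    this.store.push(memory);
    return memory;
  }

  // Naive substring search, restricted to the same tenant and user.
  search(query: string, scope: Scope = this.defaultScope): Memory[] {
    return this.store.filter(
      (m) =>
        m.scope.tenantId === scope.tenantId &&
        m.scope.userId === scope.userId &&
        m.content.includes(query),
    );
  }
}

// Usage: memories written for one user are invisible to another.
const client = new MemoriesClientStub({ tenantId: "acme", userId: "u1" });
client.create("prefers dark mode");
const hits = client.search("dark");
const other = client.search("dark", { tenantId: "acme", userId: "u2" });
```

The design point the stub demonstrates is that every read and write carries a scope, so choosing `tenantId` / `userId` / `projectId` up front determines what later searches can see.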
Scaffold and build Splunk custom visualizations using Canvas 2D
Autonomously optimize any Claude Code skill by running it repeatedly, scoring outputs against binary evals, mutating the prompt, and keeping improvements. Based on Karpathy's autoresearch methodology. Use when the user says "optimize this skill", "improve this skill", "run autoresearch on", "make this skill better", "self-improve skill", "benchmark skill", "eval my skill", or "run evals on". Outputs: an improved SKILL.md, a results log, and a changelog of every mutation tried.
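The run → score → mutate → keep loop described above can be sketched as a simple hill climb. Everything here is a toy stand-in: the real skill executes Claude Code on each candidate prompt, whereas `runSkill` below just echoes the prompt so the loop is runnable.

```typescript
type BinaryEval = (output: string) => boolean;

// Stand-in for actually executing the skill under test.
const runSkill = (prompt: string): string => prompt;

// Score = fraction of binary evals that pass.
const score = (prompt: string, evals: BinaryEval[]): number => {
  const out = runSkill(prompt);
  return evals.filter((check) => check(out)).length / evals.length;
};

// Hill climb: try each mutation, keep it only if the score improves,
// and log every attempt (the "changelog of every mutation tried").
const optimize = (
  prompt: string,
  mutations: string[],
  evals: BinaryEval[],
): { prompt: string; log: string[] } => {
  let best = prompt;
  let bestScore = score(best, evals);
  const log: string[] = [`baseline ${bestScore.toFixed(2)}`];
  for (const m of mutations) {
    const candidate = `${best}\n${m}`;
    const s = score(candidate, evals);
    log.push(`${m} -> ${s.toFixed(2)} ${s > bestScore ? "kept" : "rejected"}`);
    if (s > bestScore) {
      best = candidate;
      bestScore = s;
    }
  }
  return { prompt: best, log };
};

const improved = optimize(
  "You are a helpful assistant.",
  ["Always cite sources.", "Be concise."],
  [(o) => o.includes("cite"), (o) => o.includes("concise")],
);
// improved.prompt now contains both kept mutations
```

Binary evals matter here: because each check is pass/fail, the score is an unambiguous fraction, which makes "keep only strict improvements" a well-defined rule.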
You MUST use this before any creative work: creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements, and design before implementation.
- 📄 inspect.sh
- 📄 README.md
- 📄 README.zh-TW.md
Show installed Claude Code skills, plugins, hooks, MCP servers, and commands in a browser dashboard
- 📁 assets/
- 📁 references/
- 📁 scripts/
- 📄 SKILL.md
Use this when you need to EVALUATE, IMPROVE, or OPTIMIZE an existing LLM agent's output quality, including improving tool-selection accuracy or answer quality, reducing costs, or fixing issues where the agent gives wrong or incomplete responses. Evaluates agents systematically using MLflow evaluation with datasets, scorers, and tracing. IMPORTANT: Always also load the instrumenting-with-mlflow-tracing skill before starting any work. Covers the end-to-end evaluation workflow or individual components (tracing setup, dataset creation, scorer definition, evaluation execution).
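The dataset → scorers → evaluation-execution shape named above can be sketched generically. This is not the MLflow API; the dataset, agent, and scorers below are local stand-ins chosen only to show how per-scorer aggregate metrics fall out of the workflow.

```typescript
interface Example { input: string; expected: string }
type Scorer = (output: string, expected: string) => number; // 0..1

// Tiny evaluation dataset (illustrative).
const dataset: Example[] = [
  { input: "2+2", expected: "4" },
  { input: "capital of France", expected: "Paris" },
];

// Stand-in for the agent under evaluation.
const agent = (input: string): string =>
  input === "2+2" ? "4" : "I do not know";

// Two simple scorers: exact match and "gave any answer".
const exactMatch: Scorer = (out, exp) => (out === exp ? 1 : 0);
const answered: Scorer = (out) => (out.trim().length > 0 ? 1 : 0);

// Evaluation execution: mean score per scorer over the dataset.
const evaluate = (
  data: Example[],
  scorers: Record<string, Scorer>,
): Record<string, number> => {
  const totals: Record<string, number> = {};
  for (const [name, scorer] of Object.entries(scorers)) {
    const sum = data.reduce(
      (acc, ex) => acc + scorer(agent(ex.input), ex.expected),
      0,
    );
    totals[name] = sum / data.length;
  }
  return totals;
};

const report = evaluate(dataset, { exactMatch, answered });
// report.exactMatch === 0.5, report.answered === 1
```

Separating scorer definition from evaluation execution, as here, is what lets the same dataset be re-scored cheaply when the agent or a scorer changes.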
Summarize webpage(s) into clear key points.
- 📁 docs/
- 📁 scripts/
- 📄 SKILL.md
Provides local documentation for the Claude Agent SDK (formerly Claude Code SDK).
Canonical reconciliation runsheet for AUD artefacts. Create or update the audit, disposition every finding, reconcile specs/contracts, and hand off to closure only when audit state supports it.
Analytical memo generation tool. During or after coding, the researcher simply says whatever is on their mind, such as a code, a passage of data, a puzzle, or a sense that "something is here", and the skill automatically generates a structured analytical memo and saves it locally as a Markdown file. Applicable to thematic analysis (TA), grounded theory (GT), and qualitative research methods generally. Difference from memo-coach: with analytic-memo, the AI drafts the analytical content; with memo-coach, the researcher writes it themselves and the AI only asks probing questions (dedicated to procedural grounded theory). Triggers when the user says "write a memo", "record my analytical thinking", "write a memo note", "analysis notes", "help me note down this idea", "this code is interesting", "there seems to be something here", "this is worth recording", "what this interviewee said is strange", or otherwise expresses any analytical intuition worth capturing during coding or thematic analysis.

---

# Analytical Memo

The analytical memo is the core tool for capturing analytical momentum in qualitative research. Charmaz (2014) defines memoing as an ongoing intellectual conversation between the researcher and the data, not a formal exercise of filling in classification tables.

Design principle of this skill: **the researcher only needs to speak their thoughts; the tool handles probing and structuring**.

## Startup: gather required information

On trigger, collect only two pieces of information (the skill infers the rest):

1. **Trigger content**: the code fragment, category name, raw-data passage, preliminary idea, or puzzle the user entered (use the user's original wording as-is; do not ask them to reorganize or classify it)
2. **Save path** (optional): if none is given, default to `~/Documents/research-memos/`

If the user has already provided research background (topic, research questions) earlier in the conversation, reuse it; do not ask again.

---

## Internal routing logic (invisible to the user)

Based on the user's input, automatically choose an analytical direction, **without exposing this decision to the user**:

**→ Concept deepening** (input is a single code or category, with a description or question)
Probe: What are this concept's core meaning and boundaries? Under what conditions does it become more salient or fade? How does it connect to, or stand in tension with, existing theoretical concepts? What theoretical claim does it hint at?

**→ Relational hypothesis** (input involves two or more concepts and contains relation words: relationship, influence, cause, connection, between)
Probe: What is the nature of this relationship (causal, conditional, parallel, oppositional)? What direct evidence exists in the data? Under what circumstances does it hold or break down (boundary conditions)?

**→ Negative case** (input contains contrast signals: but, exception, does not fit, instead, strange, contradiction, unlike the others)
Probe: Is this a genuine counterexample, or does it reveal a boundary condition? Does an existing category or theoretical hypothesis need revision, and in what direction?

**→ Reflexivity** (input contains researcher self-reference: I feel, I worry, whether I, my position, I notice myself)
Probe: Which of the researcher's assumptions or emotions may have shaped this analysis? What does this reflection imply for theoretical sampling or study design?

**→ Integrated elaboration** (input mixes several signals, or the signal is unclear)
First anchor the core of the idea in one sentence, then elaborate along the dominant analytical direction.

---

## Development-stage judgment (after Birks, Chapman & Francis, 2008)

Based on the research progress the user describes, automatically tag the stage in the file frontmatter:

- `preliminary`: the researcher is in early open coding; ideas stay close to the data and are impressionistic
- `interim`: cross-category thinking begins; connections between concepts are forming
- `advanced`: involves a core category, theoretical propositions, or the overall theoretical framework

Decision cues:

- "just started coding" / "first interview" → preliminary
- mentions relations among several categories / "starting to see patterns" → interim
- mentions a core category / "theoretical framework" / "saturation" → advanced
- cannot tell → leave blank; do not force a value

---

## Memo generation

Generate the analytical content in the following structure (shown in the conversation and also written to the file):

### File frontmatter

```yaml
---
Assesses whether an existing Python, bash, or hybrid pipeline is a good fit for Seamless (content-addressed caching, reproducible execution, local-to-cluster scaling). Triggers when wrapping scripts or functions without rewriting them, avoiding recomputation, comparing workflow frameworks (vs Snakemake, Nextflow, CWL, Airflow, Prefect), migrating a pipeline, or setting up remote/HPC execution. Covers direct/delayed decorators, seamless-run CLI, nesting, module inclusion, scratch/witness patterns, deep checksums, and execution backends (local, jobserver, daskserver). Provides safe guidance on remote execution and determinism — avoids naive "copy code to server" suggestions.
Run comprehensive pre-press preflight checks on Adobe Illustrator documents using illustrator-mcp tools. Detects print-critical issues (RGB in CMYK, broken links, low-res images, white overprint, text not outlined), text consistency problems (dummy text, notation variations), and PDF/X compliance. Use when user asks to check a document before printing, submission, or handoff — or mentions "preflight", "pre-press check", "print check", "submission check".