- 📁 language/
- 📁 util/
- 📄 index.rs
- 📄 lang.rs
- 📄 main.rs
cx
Prefer cx over reading files. Escalate: overview → symbols → definition/references → Read tool.
Query 50 Indonesian government APIs and data sources — BPJPH halal certification, BPOM food safety, OJK financial legality, BPS statistics, BMKG weather/earthquakes, Bank Indonesia exchange rates, IDX stocks, CKAN open data portals, pasal.id (third-party law MCP). Use when building apps with Indonesian government data, scraping .go.id websites, checking halal certification, verifying company legality, looking up financial entity status, or connecting to Indonesian MCP servers. Includes ready-to-run Python patterns, CSRF handling, CKAN API usage, and IP blocking workarounds.

---

# Querying Indonesian Government Data 🇮🇩

STARTER_CHARACTER = 🇮🇩

Route the user's intent to the right child reference, then follow its patterns.

## Router

| User intent | Load reference | Quick pattern |
|------------|---------------|---------------|
| Halal certification, halal product check | [references/bpjph-halal.md](references/bpjph-halal.md) | `POST cmsbl.halal.go.id/api/search/data_penyelia` JSON, no auth |
| Food/drug/cosmetic registration, BPOM | [references/bpom-products.md](references/bpom-products.md) | Session + CSRF → `POST cekbpom.pom.go.id/produk-dt` |
| Is this fintech/investment legal, OJK | [references/ojk-legality.md](references/ojk-legality.md) | `GET sikapiuangmu.ojk.go.id/FrontEnd/AlertPortal/Search` |
| Weather in Indonesia, earthquake, tsunami | [references/bmkg-weather.md](references/bmkg-weather.md) | `GET data.bmkg.go.id/DataMKG/TEWS/autogempa.json` |
| GDP, inflation, population, trade stats | [references/bps-statistics.md](references/bps-statistics.md) | `GET webapi.bps.go.id/v1/api/...` (free API key) |
| USD/IDR exchange rate, BI Rate | [references/bank-indonesia.md](references/bank-indonesia.md) | Scrape `bi.go.id/id/statistik/informasi-kurs/` |
| Indonesian law, regulation, specific pasal | [references/pasal-id-law.md](references/pasal-id-law.md) | MCP (third-party): `claude mcp add --transport http pasal-id ...` |
| Government datasets on any topic | [refere
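The router's BMKG row can be exercised with a few lines of Python. A minimal sketch using only the standard library; the payload field names (`Infogempa`, `gempa`, `Magnitude`, `Wilayah`, ...) are assumptions based on BMKG's published schema and should be verified against a live response:

```python
"""Sketch of the BMKG latest-earthquake pattern from the router above."""
import json
from urllib.request import urlopen

AUTOGEMPA_URL = "https://data.bmkg.go.id/DataMKG/TEWS/autogempa.json"

def parse_autogempa(payload: dict) -> dict:
    """Extract the fields most apps need from the autogempa payload."""
    g = payload["Infogempa"]["gempa"]
    return {
        "time": g.get("Tanggal", "") + " " + g.get("Jam", ""),
        "magnitude": float(g["Magnitude"]),
        "depth": g.get("Kedalaman"),
        "region": g.get("Wilayah"),
    }

def latest_earthquake() -> dict:
    """Fetch and parse the live feed (no auth required)."""
    with urlopen(AUTOGEMPA_URL, timeout=10) as resp:
        return parse_autogempa(json.load(resp))

if __name__ == "__main__":
    # Offline demo with a sample payload shaped like the live response.
    sample = {"Infogempa": {"gempa": {
        "Tanggal": "01 Jan 2025", "Jam": "00:00:00 WIB",
        "Magnitude": "5.1", "Kedalaman": "10 km", "Wilayah": "Banda Sea",
    }}}
    print(parse_autogempa(sample))
```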
Legal document redaction/restoration tool — intelligently replaces and redacts sensitive information in legal documents, or restores a redacted draft back to the original using comparison terms. <examples> - Help me redact this contract - I need to redact this legal document - Generate a redacted version of the contract document - Replace the sensitive information in this legal instrument - Create a redacted version of the contract - Help me restore the redacted draft to the original - Restore the reviewed draft using the comparison terms </examples>

---

# Legal Document Redaction

Intelligently replaces and redacts sensitive information in legal documents, producing a redacted version that can be shared externally. After the redacted draft has been reviewed externally, it can be restored to the original using the comparison terms.

## Core Features

### Redaction
- **Multiple redaction types**: names, dates, prices, file names, project names, bank account numbers, case numbers, and more
- **Custom redaction types**: create custom types (e.g. "contract name", "product model") and batch-enter exact-match content
- **Batch mode**: uploading multiple files automatically enters batch mode; unified numbering keeps replacements consistent across files
- **Rule settings**: enable or disable any of the 16 built-in redaction categories to control recognition scope
- **Smart replacement**: identifies roles from context (e.g. buyer vs. seller company)
- **Live preview**: redacted content is highlighted in yellow
- **Format preservation**: fully preserves the original formatting (paragraphs, tables, fonts)
- **Whitelist/blacklist management**: precisely control redaction behavior for specific content; the blacklist supports recording project type
- **Priority order**: blacklist > whitelist > redaction categories (built-in + custom)
- **Conflict detection**: when adding an item to a list, automatically checks whether it already exists in another list
- **Debug mode**: detailed log output for troubleshooting

### Restoration
- **Automated restoration**: automatically restores 【X】 markers to the original text using the comparison terms
- **Batch restoration**: restores multiple files at once, auto-matches file pairs, and downloads results as a ZIP archive
- **Review traces preserved**: tracked changes, comments, and other review marks in the document survive restoration
- **Run-level replacement**: precise replacement that does not affect the formatting of surrounding content

## Usage

### HTML Offline Tool (recommended)

#### Redaction Mode

**Single file:**
1. Open `assets/index.html` and select "Redaction Mode"
2. Drag and drop, or select, a single docx file to upload
3. Sensitive content is recognized automatically and the result is previewed
4. Manually edit the redaction items
5. Export the redacted file and the comparison .md document

**Batch:**
1. Upload multiple docx files; batch mode starts automatically
2. Unified recognition: identical content gets identical replacement text
3. File switching: switch between files via the list pane
4. Synchronized editing: deleting or adding redaction items propagates to all files
5. Export: each file generates its own `{文件名}_比对.md` comparison file

#### Restoration Mode

**Single file:**
1. Open `assets/index.html` and select "Restoration Mode"
2. Upload the redacted draft (a docx with review traces)
3. Upload the matching comparison .md file
4. Click "Run Restoration"; the restored file downloads automatically

**Batch (4-step flow):**
1. **Upload files**: upload multiple redacted drafts plus multiple comparison .md files
2. **Confirm pairing**: the system auto-matches by filename; manual adjustment is supported
3. **Run restoration**: batch processing with a progress bar
4. **Download results**: packaged as a ZIP

### Python Scripts

```bash
# Install dependency
pip install python-docx

# Redact
python scripts/redact.py input.docx data/rules.json -o output.docx

# Restore (preserves tracked changes and comments)
python scripts/restore.py redacted.docx mapping.md -o restored.docx
```

## Detailed Documentation
- **Workflow**: [references/workflow.md](references/workflow.md)
- **Rule pattern library**: [references/patterns.md](references/patterns.md)
- **Data formats**: [references/data-formats.md](references/data-formats.md)
- **Script usage**: [scripts/README.md](scripts/README.md)
- **HTML usage**: [assets/README.md](assets/README.md)

## Version History
- **v1.5.0 (2026-03-29) Context-menu integration + unified Python/HTML recognition**:
  - **Windows context menu**: right-click a .docx file to "Open with redaction tool", "One-click redact", or "One-click restore"; integrated via the registry, no administrator rights required
  - **macOS context menu**: implemented via an Automator Quick Action, Fi
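The restoration step maps 【X】 markers back to original text using the comparison .md file. A minimal sketch of that marker-replacement logic in pure Python; the real scripts operate on docx runs via python-docx, and the mapping-file line format shown here is an assumption for illustration:

```python
import re

def parse_mapping(md_text: str) -> dict:
    """Parse lines like '【公司A】: Acme Pte Ltd' into a marker -> original map.
    The exact comparison-file format is an assumption for illustration."""
    mapping = {}
    for line in md_text.splitlines():
        m = re.match(r"(【[^】]+】)\s*[::]\s*(.+)", line.strip())
        if m:
            mapping[m.group(1)] = m.group(2)
    return mapping

def restore_text(text: str, mapping: dict) -> str:
    """Replace every marker with its original value; longest markers first
    so one marker cannot partially clobber another."""
    for marker in sorted(mapping, key=len, reverse=True):
        text = text.replace(marker, mapping[marker])
    return text
```

In the real tool this replacement would run per docx run, which is what keeps tracked changes and formatting intact.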
Entry point for the requirements-analysis phase; aggregates scoring, follow-up questioning, and scope-determination rules, and loads references/assets/scripts on demand.
Write, review, debug, and explain PLECS C-Script code for custom control blocks in PLECS simulations. Use this skill whenever the user asks about C-Script, wants to implement a custom block in PLECS, needs help with PLECS macros (InputSignal, OutputSignal, ContState, DiscState, ZCSignal, etc.), asks about sample time configuration, state variables, zero-crossing detection, user parameters, or needs to port controller C code into a PLECS simulation. Trigger even if the user just mentions "PLECS block", "custom block", "C-Script", or "cscript".

---

# PLECS C-Script Skill

You are an expert on PLECS C-Script custom control blocks. When this skill is active, generate correct, well-structured C-Script code that integrates cleanly with the PLECS solver.

For the full macro reference, see [references/macros.md](references/macros.md). For complete worked examples, see [references/examples.md](references/examples.md).

If the user is editing or generating a `.plecs` file, load [references/plecs-file-format.md](references/plecs-file-format.md) and [references/cscript.plecs](references/cscript.plecs) for the complete file format and a working reference model. (CAUTION: if not required or edited directly, DO NOT GENERATE .plecs files.)

---

*ALWAYS READ ALL LINES OF THIS DOCUMENT SKILL.MD before making changes.*

# C-Script Architecture

## Block Setup Parameters

These are configured in the **Setup** tab of the C-Script block dialog before writing any code.

### `Number of inputs`

Defines the number and width of input ports.

| Value | Effect |
|---|---|
| `n` (scalar integer) | Single input port accepting a scalar signal |
| `[n1, n2, ...]` (vector) | Multiple input ports; port `i` accepts a signal of width `ni` |
| `-1` | Dynamic sizing: width determined by connected signal |

> **Format note:** In the PLECS dialog both comma-separated (`[2, 3]`) and space-separated (`[2 3]`) forms are accepted. Inside `.plecs` files the space-separated form is used (e.g. `"[2 3]"`).
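As a minimal illustration of the macros this skill covers, here is a one-line Output Function sketch. It is not taken from the PLECS manual, and it assumes `Number of inputs = 1` and `Number of outputs = 1` in the Setup tab:

```c
/* Output Function Code: a minimal sketch, not from the PLECS manual.
   Passes the scalar input at port 0 through a fixed gain of 2.
   Assumes Setup: Number of inputs = 1, Number of outputs = 1. */
OutputSignal(0, 0) = 2.0 * InputSignal(0, 0);
```

Code like this runs inside the C-Script block's Output Function section; it is not standalone C, since `InputSignal`/`OutputSignal` are macros supplied by PLECS.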
Expert FinOps guidance covering cloud, AI, and SaaS technology spend. Includes AI cost management, GenAI capacity planning, Anthropic billing, AWS (EC2, Bedrock, Savings Plans, CUR, commitment strategy), Azure (reservations, Savings Plans, AHB, OpenAI PTUs, portfolio liquidity), GCP (Vertex AI, Compute Engine, BigQuery), tagging governance, SaaS management (SAM, licence optimisation, SMPs, shadow IT), AI coding tools (Cursor, Claude Code, Copilot, Windsurf, Codex), ITAM, Databricks, Snowflake, OCI, and GreenOps. Use for any query about technology cost, commitment portfolio management, rightsizing, cost allocation, SaaS sprawl, AI dev tool spend, or connecting spend to business value. Built by OptimNow.

---

# FinOps - Expert Guidance

> Built by OptimNow. Grounded in hands-on enterprise delivery, not abstract frameworks.

---

## How to use this skill

This skill covers cloud, AI, SaaS, and adjacent technology spend domains. Read `references/optimnow-methodology.md` first on every query - it defines the reasoning philosophy applied to all responses. Then load the domain reference that matches the query.

### Domain routing

| Query topic | Load reference |
|---|---|
| AI costs, LLM inference, token economics, agentic cost patterns, AI ROI, AI cost allocation, GPU cost attribution, RAG harness costs | `references/finops-for-ai.md` |
| AI investment governance, AI Investment Council, stage gates, incremental funding, AI value management, AI practice operations | `references/finops-ai-value-management.md` |
| GenAI capacity planning, provisioned vs shared capacity, traffic shape, spillover, throughput units | `references/finops-genai-capacity.md` |
| AWS billing, EC2 rightsizing, RIs, Savings Plans, commitment strategy, portfolio liquidity, phased purchasing, CUR, Cost Explorer, EDP negotiation, RDS cost management, database commitments | `references/finops-aws.md` |
| AWS Bedrock billing, Bedrock provisioned throughput, model unit pricing, Bedrock batch inference | `referenc
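Commitment-strategy questions (RIs, Savings Plans) usually reduce to a break-even calculation. A minimal sketch with illustrative, non-published rates; the function name and the numbers are hypothetical:

```python
"""Break-even utilization for an hourly commitment vs. on-demand.
Rates below are illustrative placeholders, not published cloud pricing."""

def breakeven_utilization(on_demand_hourly: float, committed_hourly: float) -> float:
    """Fraction of committed hours you must actually use before the
    commitment beats paying on-demand for only the hours you used.
    (Commitment cost is fixed per hour; on-demand scales with usage.)"""
    return committed_hourly / on_demand_hourly

# Example: committing at a 30% discount means you need >70% utilization
# of the committed hours before the commitment pays off.
u = breakeven_utilization(on_demand_hourly=1.00, committed_hourly=0.70)
```

Below that utilization, unused committed hours cost more than the discount saves, which is why portfolio liquidity and phased purchasing matter.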
This skill should be used when the user asks to "check Slack", "triage my Slack", "check my messages", "Slack summary", "what did I miss on Slack", or invokes /slack or /messages. Scans Eric's Slack workspace for recent messages, DMs, threads, and mentions — classifies by priority tier, and offers reply drafting. References porres-family-assistant for contacts context.

---

# Slack Triage Skill

## Overview

Scan Eric's Slack workspace for recent messages, classify them into three priority tiers, and offer to draft replies for urgent items. This skill mirrors the email-triage pattern but adapted for Slack's channel-based, threaded communication model.

This skill does NOT maintain its own contact data — it reads from the porres-family-assistant skill as the canonical source for people context.

## Available Slack MCP Tools

The Slack connector (https://mcp.slack.com/mcp) provides these tools:

| Tool | Purpose |
|------|---------|
| `slack_read_channel` | Read recent messages from a specific channel |
| `slack_search_public_and_private` | Search across all accessible channels |
| `slack_search_users` | Find users by name or email |
| `slack_search_channels` | Find channels by name or topic |

## Step 0 — Load Context (runtime references)

Before scanning, read these files from the family assistant to establish priority context:

| File | What it provides |
|------|-----------------|
| `shared/skills/porres-family-assistant/references/family-members.md` | Family names — helps identify personal messages from family members |
| `shared/skills/porres-family-assistant/references/email-aliases.md` | Alias routing — email/Slack identity overlap |

**Load only these two.** Don't load insurance, medical, or finance unless a specific message requires that context.

Also load `references/workspace-config.md` from this skill for channel priority mappings (once Eric configures it).

## Step 1 — Scan Workspace

Use the Slack MCP tools to gather recent activity. Run these searches in parallel:
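The three-tier classification step can be sketched as a small pure function. The rules and the FAMILY set below are hypothetical placeholders; the real priorities come from the referenced family-members and workspace-config files:

```python
"""Sketch of the three-tier triage classifier. Rules are illustrative."""
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str
    is_dm: bool
    mentions_me: bool

# Stand-in for names loaded from references/family-members.md.
FAMILY = {"alice", "bob"}

def classify(msg: Message) -> int:
    """Return tier 1 (urgent), 2 (needs a reply), or 3 (FYI)."""
    if msg.sender.lower() in FAMILY or "urgent" in msg.text.lower():
        return 1          # family or explicitly urgent -> draft a reply now
    if msg.is_dm or msg.mentions_me:
        return 2          # directed at Eric -> needs a reply
    return 3              # ambient channel chatter -> summary only
```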
Analyzes a codebase and generates animated HTML architecture reports — beautiful, bespoke visualizations with interactive animated diagrams showing how the system works. Use this skill whenever the user asks to "visualize the codebase", "explain the architecture", "generate a diagram", "show how the code flows", "create an architecture diagram", "animate the data flow", "explain this repo visually", "show me how this works", or "generate an architecture report".

---

# Codebase Visualizer

Analyzes a codebase and produces beautiful, self-contained HTML architecture outputs with animated flow diagrams.

===================================================================
OUTPUT MODES
===================================================================

/archflow          → Full architecture report (default)
/archflow-diagram  → Animated diagram only (legacy, self-contained)
/archflow-slides   → Slide deck presentation

===================================================================
WORKFLOW — FULL REPORT (default: /archflow)
===================================================================

1. ANALYZE
   Read references/analysis.md → scan the codebase
   Read references/layouts.md → decide the diagram layout pattern

2. THINK (commit to a visual direction before coding)
   Read references/design-system.md → CSS patterns library
   Read references/libraries.md → fonts, Mermaid, CDN imports
   Read references/design-qa.md → quality gates
Checks AGENTS.md and SKILL.md files against the actual codebase for drift. Surfaces references to packages, directories, commands, or patterns that no longer match reality.
Use this skill to execute a shaped Package within a build session. Implements the full building process: orient on the codebase, pick a first piece (core/small/novel), integrate vertically with TDD, discover and map scopes, track progress with hill charts, and scope hammer when capacity runs low. For web projects, verifies with browser automation. Writes handover documents for multi-session continuity. Only use after a Package has Shape Go approval. Use when the user says "/build NNN" or "let's build feature NNN" or "start building NNN".

---

# Shape Up: Build

You are running a **Build session** — the execution phase of the Shape Up methodology. Building turns a shaped Package into deployed software within a fixed appetite.

> **Reference Index** — Read only what you need, when you need it.
>
> | File | Contains | When to read |
> |------|----------|-------------|
> | `references/02-building-process.md` | Full building methodology: orientation, vertical integration, scopes, shipping | **Read now** — core to this skill |
> | `references/05-hill-chart-protocol.md` | Hill chart model, uphill/downhill phases, stuck scope protocol | **Read now** — needed for progress tracking |
> | `references/04-scope-hammering-rules.md` | Scope cutting decision framework, must-have vs nice-to-have | **Read at Step 6** when capacity gets tight |
> | `references/07-pitfalls.md` | Three critical failure modes | Read if scopes are stuck or work feels undershaped |
> | `references/00-glossary.md` | Shape Up terminology definitions | Read if you encounter an unfamiliar term |
> | `references/01-shaping-process.md` | How shaping works | Read if the Package seems incomplete or unclear |
> | `references/03-pitch-template.md` | Package format (5 ingredients) | Read if you need to interpret the Package structure |
> | `references/06-agent-workflow-guide.md` | Full pipeline overview, agent decision rules | Read if reactive work conflicts with build |
> | `references/08-framing.md` | Framing methodol
Practical diffusion-model engineering: architectures, training, inference, memory optimization. Use for any task involving diffusion models: designing or modifying an architecture (UNet/DiT/Flow/Flux), selecting and configuring schedulers/samplers, fine-tuning (LoRA/DreamBooth/full fine-tune), memory optimization (AMP/checkpointing/ZeRO/FSDP/quantization), swapping or fusing text encoders (CLIP/Qwen), working with Diffusers, debugging diffusion pipelines, quality evaluation (FID/CLIPScore/LPIPS), latent diffusion, VAE, guidance/CFG, rectified flow, Stable Diffusion, SDXL, Flux. Also use for questions about GPU memory when training generative models, text-to-image pipelines, ControlNet, multi-encoder fusion, and WebDataset.

---

# Diffusion Engineering Skill

## Quick Orientation

The three engineering decisions with the biggest impact on quality/speed/cost:

1. **Where diffusion runs** → pixels (expensive) or latent space (the LDM/SD family, the practical choice)
2. **Denoiser backbone** → UNet (classic, simpler) or Transformer/DiT/Flow (scales better)
3. **Sampling control** → scheduler, number of steps, guidance_scale: these often buy more than editing the network

---

## Reference files — read per task

| Topic | File | When to read |
|---|---|---|
| Architectures and data flow | `references/architectures.md` | DDPM/SDE/LDM/DiT/Flux/VAE/SDXL, pipeline diagram |
| Schedulers and guidance | `references/samplers.md` | DDIM/Euler/Heun/DPM-Solver/PNDM, CFG, prediction_type |
| Training and fine-tuning | `references/training.md` | Losses/objectives, LoRA/DreamBooth/full FT, hyperparameters |
| Memory and distributed training | `references/memory.md` | AMP, checkpointing, ZeRO, FSDP, quantization, FP8 |
| Text encoders and data | `references/encoders-data.md` | CLIP/Qwen/multi-encoder, tokenization, data pipeline |
| Evaluation and troubleshooting | `references/eval-debug.md` | FID/CLIPScore/LPIPS, common failures and fixes, licenses |

---

## Quick checklist: "I'm building/modifying diffusion"

- [ ] **Backbo
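Since guidance_scale is called out as one of the highest-leverage knobs, here is a framework-free sketch of the classifier-free guidance (CFG) combination it controls. Real pipelines apply this per sampling step to noise-prediction tensors rather than plain lists:

```python
"""Classifier-free guidance combination step, framework-free sketch."""

def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """eps = eps_uncond + s * (eps_cond - eps_uncond).
    s = 1 disables guidance; larger s pushes the sample harder
    toward the conditioning (the prompt), at the cost of diversity."""
    return [u + guidance_scale * (c - u)
            for u, c in zip(eps_uncond, eps_cond)]
```

Note this is why a batch with CFG costs roughly two forward passes per step (conditional + unconditional), which matters for both latency and GPU memory.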
skill-sample/
├─ SKILL.md                          ⭐ Required: skill entry doc (purpose / usage / examples / deps)
├─ manifest.sample.json              ⭐ Recommended: machine-readable metadata (index / validation / autofill)
├─ LICENSE.sample                    ⭐ Recommended: license & scope (open source / restriction / commercial)
├─ scripts/
│  └─ example-run.py                 ✅ Runnable example script for quick verification
├─ assets/
│  ├─ example-formatting-guide.md    🧩 Output conventions: layout / structure / style
│  └─ example-template.tex           🧩 Templates: quickly generate standardized output
└─ references/                       🧩 Knowledge base: methods / guides / best practices
   ├─ example-ref-structure.md       🧩 Structure reference
   ├─ example-ref-analysis.md        🧩 Analysis reference
   └─ example-ref-visuals.md         🧩 Visual reference
More Agent Skills specs in the Anthropic docs: https://agentskills.io/home
├─ ⭐ Required: YAML Frontmatter (must be at top)
│  ├─ ⭐ name        : unique skill name, follow naming convention
│  └─ ⭐ description : include trigger keywords for matching
│
├─ ✅ Optional: Frontmatter extension fields
│  ├─ ✅ license       : license identifier
│  ├─ ✅ compatibility : runtime constraints when needed
│  ├─ ✅ metadata      : key-value fields (author/version/source_url...)
│  └─ 🧩 allowed-tools : tool whitelist (experimental)
│
└─ ✅ Recommended: Markdown body (progressive disclosure)
   ├─ ✅ Overview / Purpose
   ├─ ✅ When to use
   ├─ ✅ Step-by-step
   ├─ ✅ Inputs / Outputs
   ├─ ✅ Examples
   ├─ 🧩 Files & References
   ├─ 🧩 Edge cases
   ├─ 🧩 Troubleshooting
   └─ 🧩 Safety notes
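The required-field rules above can be checked mechanically before publishing a skill. A minimal sketch; the line-based parsing is deliberately naive (not a full YAML parser) and the function name is illustrative:

```python
"""Sketch: verify a SKILL.md has the required frontmatter fields."""

REQUIRED = ("name", "description")

def check_frontmatter(skill_md: str) -> list:
    """Return a list of problems; an empty list means the required
    fields are present in a frontmatter block at the top of the file."""
    lines = skill_md.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["frontmatter must start at the top with '---'"]
    keys = set()
    for line in lines[1:]:
        if line.strip() == "---":      # closing delimiter
            break
        if ":" in line:
            keys.add(line.split(":", 1)[0].strip())
    return [f"missing required field: {k}" for k in REQUIRED if k not in keys]
```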
Skill files are scattered across GitHub and communities, difficult to search, and hard to evaluate. SkillWink organizes open-source skills into a searchable, filterable library you can directly download and use.
We provide keyword search, version updates, multi-metric ranking (downloads / likes / comments / updates), and open SKILL.md standards. You can also discuss usage and improvements on skill detail pages.
Quick Start:
Import/download skills (.zip/.skill), then place locally:
~/.claude/skills/ (Claude Code)
~/.codex/skills/ (Codex CLI)
One SKILL.md can be reused across tools.
Everything you need to know: what skills are, how they work, how to find/import them, and how to contribute.
A skill is a reusable capability package, usually including SKILL.md (purpose/IO/how-to) and optional scripts/templates/examples.
Think of it as a plugin playbook + resource bundle for AI assistants/toolchains.
Skills use progressive disclosure: load brief metadata first, load the full docs only when needed, then execute following the skill's guidance.
This keeps agents lightweight while preserving enough context for complex tasks.
Use these three together:
Note: for all import methods, file size should stay within 10 MB.
Typical paths (may vary by local setup):
One SKILL.md can usually be reused across tools.
Yes. Most skills are standardized docs + assets, so they can be reused wherever the format is supported.
Example: retrieval + writing + automation scripts as one workflow.
Some skills come from public GitHub repositories and some are uploaded by SkillWink creators. Always review code before installing and own your security decisions.
Most common reasons:
We try to avoid that. Use ranking + comments to surface better skills: