AI-powered autonomous data extraction that navigates complex sites and returns structured JSON. Use this skill when the user wants structured data from websites, needs to extract pricing tiers, product listings, directory entries, or any data as JSON with a schema. Triggers on "extract structured data", "get all the products", "pull pricing info", "extract as JSON", or when the user provides a JSON schema for website data. More powerful than simple scraping for multi-page structured extraction.
- 📄 LICENSE
- 📄 README.md
- 📄 SKILL.md
Use when running promptminder or promptminder-agent commands, setting PROMPTMINDER_TOKEN, passing --team for workspace scoping, handling JSON error output on stderr such as "Missing token" or HTTP 401, or using the agent wrapper with dot-notation actions and --input JSON.
Validate, format, and convert YAML/JSON files using the fast-yaml (`fy`) tool. Triggers on: 'validate yaml', 'format yaml', 'lint yaml', 'check yaml syntax', 'convert yaml to json', 'convert json to yaml', 'yaml formatter', 'fix yaml formatting', 'json to yaml'. Supports bidirectional YAML ↔ JSON conversion and the YAML 1.2.2 spec, with parallel processing for batch operations.
- 📁 assets/
- 📁 references/
- 📁 scripts/
- 📄 SKILL.md
A skill that automatically generates Pufu (プ譜, "project score") JSON data from existing documents. When to use: (1) generating a pufu from PowerPoint, Excel, PDF, or text files (2) analyzing multiple documents to extract pufu elements (3) automatically creating a pufu from project-related materials (4) exporting to a JSON format usable by the pufu editor (pufu-editor) (5) generating a multi-phase pufu for a project with multiple steps over time (6) generating the pufu as plan/review pairs, with even-numbered phases serving as review phases.

# Pufu Generator (プ譜ジェネレーター)

Automatically generates pufu-editor-compatible JSON data from existing documents (pptx, xlsx, pdf, docx, txt, md). Generates not only single-phase pufu but also multi-phase pufu in chronological order. Even-numbered phases are automatically structured as review phases.

## Workflow

```
Input files → read & analyze → generate pufu JSON → show summary → (optional) image generation
```

**Processing flow:**

1. **Read & analyze**: Read the input files directly and extract each pufu element
   - Supported formats: pptx, xlsx, pdf, docx, txt, md
   - If the source has chronological steps, detect the phases and process in multi-phase mode
2. **Generate pufu JSON**: Generate pufu-editor-compatible JSON from the extracted elements and save it to a file
   - Single phase: `ProjectScoreModel` format
   - Multiple phases: `ProjectScoreMap` format (individual JSON per phase is also output)
3. **Show summary**: Display an overview of the generated results to the user
4. **Image generation** (optional): Convert the pufu to a PNG image with a Playwright script

> **Note**: Claude itself performs the file reading, element extraction, and integration directly.
> Python scripts are used only to produce the final artifacts (JSON formatting and image capture).

### Working directory layout

Store each step's artifacts in folders. Create the directories when processing starts.

```
{work_dir}/
├── 01_analysis/              # Step 1: results of reading & analysis
│   └── analysis.json         # analysis of the extracted elements
├── 02_output/                # Step 2: pufu JSON (final artifacts)
│   ├── pufu.json             # single phase
│   ├── pufu_all_phases.json  # multiple phases (ProjectScoreMap)
│   ├── pufu_phase1.json      # multiple phases (individual phase)
│   ├── pufu_phase2.json
│   └── ...
└── 03_image/                 # Step 4: images (optional)
    ├── pufu.png              # single phase
    ├── pufu_phase1.png       # multiple phases (per phase)
    ├── pufu_phase2.png
    └── ...
```

**Format of 01_analysis/analysis.json (single phase):**

```json
{
  "source_files": ["project_plan.pdf"],
  "mode": "single",
  "gainingGoal": "extracted gaining-goal text",
  "winCondition": "extracted win-condition text",
  "purposes": [
    {
      "text": "intermediate-objective text",
      "measures": [
        {"text": "measure text", "color": "red"}
      ]
    }
  ],
  "elements": {
    "people": "extracted text",
    "money": "extracted text",
    "time": "extracted text",
    "quality": "extracted text",
    "businessScheme": "extracted text",
    "environment": "extracted text",
    "rival": "extracted text",
    "foreignEnemy": "extracted text"
  }
}
```

**Format of 01_analysis/analysis.json (multiple phases):**

```json
{
  "source_files": ["p
Processes Firefox bookmark exports (JSON) to organize links by category, generate summaries, and produce a visual HTML feed. Activate when the user mentions "bookmarks", "bookmark curator", "organizar bookmarks", "exportei os bookmarks", or "bookmark feed".

---

# Bookmark Curator

Process Firefox bookmark JSON exports into organized, categorized outputs: a structured markdown file for the training-mentor Skill and a visual HTML feed for browsing.

## Input

Firefox bookmark JSON export. Default location: `~/Downloads/bookmarks-YYYY-MM-DD.json` (or ask the user for the filename).

If the file is not found, ask the user to export:

> Firefox > Bookmarks > Manage Bookmarks > Import and Backup > Backup > Save as JSON

## Processing Pipeline

### Step 0: Check Progress

Read `references/progress.md` (in this Skill's folder). This file tracks which URLs have already been processed. If it doesn't exist, create it.

Compare all bookmark URLs from the JSON against the processed list. Only process new URLs not yet in the list.
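The Step 0 URL diff above can be sketched in Python. This is a minimal sketch rather than part of the Skill itself; it assumes the standard Firefox backup layout (folder nodes carry a `children` array, bookmark entries carry a `uri` field) and a `progress.md` that lists one processed URL per line:

```python
import json
from pathlib import Path

def extract_urls(node):
    """Recursively collect bookmark URLs from a Firefox backup JSON node."""
    urls = []
    if "uri" in node:
        urls.append(node["uri"])
    for child in node.get("children", []):
        urls.extend(extract_urls(child))
    return urls

def new_urls(bookmarks_path, progress_path):
    """Return bookmark URLs not yet listed in the progress file."""
    root = json.loads(Path(bookmarks_path).read_text(encoding="utf-8"))
    progress = Path(progress_path)
    processed = set()
    if progress.exists():
        processed = {line.strip()
                     for line in progress.read_text(encoding="utf-8").splitlines()
                     if line.strip()}
    return [url for url in extract_urls(root) if url not in processed]
```

Only the URLs returned by `new_urls` need to flow into the later categorization steps; everything else was handled in a previous run.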
Create or modify tools (.json + .sh pairs) and skills (SKILL.md files) and hot-reload them into the active conversation using reload_capabilities. Use when you want to build a new capability, extend yourself with a new tool, fix an existing tool, create or update a skill, or build a complete application (web server, API, data pipeline, CLI) — all without restarting.

---

## Overview

You can extend yourself at runtime. New tools and skills take effect immediately via `reload_capabilities` — no session restart required.

Session-scoped capabilities live in `core/` inside your session directory:

- **Tools**: `core/tools/<name>.json` (schema) + `core/tools/<name>.sh` (implementation)
- **Skills**: `core/skills/<name>/SKILL.md` (frontmatter + body)

**There is no limit on what a tool can do.** The shell script can call Python, Node.js, any language or binary on the system. Build first, use immediately.

---

## Building a tool

1. Write the JSON schema to `core/tools/<name>.json`
2. Write the shell implementation to `core/tools/<name>.sh`
3. `chmod +x core/tools/<name>.sh`
4. Call `reload_capabilities`
5. Test by calling the tool

### JSON schema template

```json
{
  "name": "my_tool",
  "description": "What this tool does. Be specific — the model reads this to decide when to call it.",
  "input_schema": {
    "type": "object",
    "properties": {
      "arg1": {"type": "string", "description": "Description of arg1."}
    },
    "required": ["arg1"]
  }
}
```

### Shell tool contract

- All kwargs arrive as a JSON object on **stdin**
- Write result to **stdout**
- Exit 0 = success; non-zero = error (stderr returned as error message)
- Timeout: 30 seconds

```bash
#!/usr/bin/env bash
python3 << 'PYEOF'
import sys, json
args = json.load(sys.stdin)
result = args['arg1'].upper()
print(result)
PYEOF
```

### Building full applications

Since `.sh` can do anything, tools can build and drive complete applications:

**Persistent background process (e.g., web server)**

```bash
#!/usr/bin/env bash
PORT=$(python3 -c "import
This skill should be used when interacting with the beankeeper accounting system via the `bk` CLI. Use when the user asks to "record a transaction", "post an entry", "check a balance", "generate a report", "create a company", "set up accounts", "import bank statements", "set a budget", "compare budget vs actual", "reconcile entries", "verify the ledger", "export data", "attach a receipt", or any financial bookkeeping task. Also use when piping structured JSON output from `bk` into other tools or agents.

---

# Beankeeper (`bk`) -- Double-Entry Accounting CLI

Beankeeper is a double-entry accounting system operated entirely through the `bk` command-line interface. It stores data in a local SQLite database (optionally encrypted via SQLCipher). All output supports three formats: human-readable tables, machine-readable JSON, and CSV.

## Core Concepts

- **Double-entry**: Every transaction has balanced debits and credits. Total debits always equal total credits.
- **Companies**: Multi-tenant -- each company has its own chart of accounts and ledger. Specified via `--company SLUG` or the `BEANKEEPER_COMPANY` env var.
- **Accounts**: Five types: `asset`, `liability`, `equity`, `revenue`, `expense`. Each has a code (e.g. `1000`) and a normal balance direction (debit or credit).
- **Amounts**: Always specified in **major units** (dollars, not cents) on the CLI. Stored internally as minor units (cents). Example: `2500` means $2,500.00.
- **Append-only ledger**: Transactions cannot be edited or deleted after posting. Corrections are made via reversing entries.
- **Idempotency**: Use `--reference KEY` with `--on-conflict skip` for safe retry of duplicate posts.

For detailed accounting concepts and account types, see [`references/accounting.md`](references/accounting.md).

## Agent Integration

**Always use `--json` for programmatic access.** Every JSON response uses a uniform envelope:

```json
{
  "ok": true,
  "meta": { "command": "...", "company": "...", "timestamp": "..." },
  "data": { ...