Burp Suite scanning via MCP tools — passive traffic analysis, active payload testing, OOB verification, and vulnerability reporting using Burp's proxy, HTTP sender, Collaborator, and scanner APIs. Use when the user has Burp Suite running with the AI Agent MCP server and wants to scan, test, or analyze web traffic through an AI coding assistant (Claude Code, Gemini CLI, Codex, etc.).
Diagnostic guide for troubleshooting rulii rules that aren't firing, returning unexpected PASS/FAIL/SKIP, or throwing exceptions — covers bindings, types, scope, tracing, and validation violations
Deep Abstract Syntax Tree analysis for understanding code structure, dependencies, impact analysis, and pattern detection at the structural level across multiple programming languages
Create structured implementation plan in docs/plans/
- 📁 skill_search/
- 📄 SKILL.md
> Semantically searches 105K+ skill cards for the best-matching skill. Zero dependencies, built-in default API endpoint, works out of the box.
Creates and maintains Figma Code Connect template files that map Figma components to code snippets. Use when the user mentions Code Connect, Figma component mapping, design-to-code translation, or asks to create/update .figma.js files.
- 📁 scripts/
- 📄 package.json
- 📄 SKILL.md
Guardian service that monitors the active runtime agent's state and automatically restarts it if it stops. Use when checking agent liveness or understanding the auto-restart mechanism.
Standardize branch creation and PR submission for the GGA project.
CLI to estimate the memory required to load Safetensors or GGUF model weights from the Hugging Face Hub for inference.
Build and deploy Statespace apps — Markdown-based web applications that agents interact with over HTTP. TRIGGER when: user asks to create a Statespace app, add tools/components to markdown, deploy with `statespace` CLI, or connect agents to a Statespace endpoint. DO NOT TRIGGER when: general markdown editing, static site generators, or unrelated CLI tools.
- 📁 references/
- 📁 scripts/
- 📄 .security-scan-passed
- 📄 SKILL.md
Transcribes audio and video files to text using Qwen3-ASR. Supports two modes — local MLX inference on macOS Apple Silicon (no API key, 15-27x realtime) and a remote API via vLLM or OpenAI-compatible endpoints. Auto-detects the platform and recommends the best path. Triggers when the user wants to transcribe recordings, convert audio/video to text, do speech-to-text, or mentions ASR, Qwen ASR, 转录, 语音转文字, 录音转文字. Also triggers for meeting recordings, lectures, interviews, podcasts, screen recordings, or any audio/video file the user wants converted to text.