Design, plan, and analyze A/B tests with statistical rigor. Use when the user asks about A/B testing, split testing, experiment design, statistical significance, sample size calculation, test duration, multivariate testing, or conversion experiments. Trigger phrases include "A/B test", "split test", "experiment", "statistical significance", "sample size", "test duration", "which version wins", "conversion experiment", "hypothesis test", "variant testing".
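  For illustration, a minimal sketch of the kind of per-variant sample size calculation this skill covers, using statsmodels; the baseline rate, minimum detectable effect, and library choice are assumptions, not part of the skill itself:

  ```python
  # Hypothetical sketch: per-variant sample size for a two-proportion A/B test.
  # Baseline rate, minimum detectable effect, alpha, and power are illustrative.
  from statsmodels.stats.power import NormalIndPower
  from statsmodels.stats.proportion import proportion_effectsize

  baseline = 0.10          # current conversion rate
  mde = 0.02               # minimum detectable lift (absolute)
  effect = proportion_effectsize(baseline + mde, baseline)

  n_per_variant = NormalIndPower().solve_power(
      effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
  )
  print(f"~{int(n_per_variant)} visitors needed per variant")
  ```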
Interacts with live Hunk diff review sessions via CLI. Inspects review focus, navigates files and hunks, reloads session contents, and adds inline review comments. Use when the user has a Hunk session running or wants to review diffs interactively.
Guides interactive module design via Q&A before writing. Use when the user wants to design a module, class, or feature together, or when they say "/spec-design".
- 📄 prepare_visual_script.rb
- 📄 SKILL.md
Adds visual descriptions to transcripts by extracting and analyzing video frames with ffmpeg. Creates a visual transcript with periodic visual descriptions of the video clip. Use when all files have an audio transcript (transcript) but do not yet have a visual transcript (visual_transcript).
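  As an illustration, a hedged sketch of the periodic frame extraction this workflow relies on; the interval, output pattern, and use of a subprocess call are assumptions, not the actual behavior of prepare_visual_script.rb:

  ```python
  # Hypothetical sketch: pull one frame every 30 seconds from a clip with ffmpeg,
  # so each frame can later be described and merged into a visual transcript.
  import subprocess
  from pathlib import Path

  def extract_frames(video: str, out_dir: str, every_seconds: int = 30) -> None:
      Path(out_dir).mkdir(parents=True, exist_ok=True)
      subprocess.run(
          [
              "ffmpeg", "-i", video,
              "-vf", f"fps=1/{every_seconds}",   # one frame per interval
              str(Path(out_dir) / "frame_%04d.jpg"),
          ],
          check=True,
      )

  extract_frames("clip.mp4", "frames")
  ```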
- 📁 references/
- 📁 scripts/
- 📄 convert_cookies.py
- 📄 requirements.txt
- 📄 SKILL.md
Automatically generates AI videos using the Seedance 2.0 model in Jianying (剪映/小云雀). Supports four modes: text-to-video (T2V), image-to-video (I2V), reference-video generation (V2V), and clip extension (Extend). Use this skill when the user wants to generate AI videos, create short clips with the Seedance model, do style transfer based on a reference image or video, or extend an existing result. Requires cookies.json login credentials to be configured in advance.
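  For context, a hedged sketch of the kind of cookie handling convert_cookies.py implies: loading a cookies.json export into an HTTP session. The file layout and the use of requests are assumptions about one possible approach, not the skill's actual format:

  ```python
  # Hypothetical sketch: attach cookies from a browser-style cookies.json export
  # to a requests session; the field names assume a typical export format.
  import json
  import requests

  def session_from_cookies(path: str = "cookies.json") -> requests.Session:
      session = requests.Session()
      with open(path, encoding="utf-8") as f:
          for cookie in json.load(f):
              session.cookies.set(
                  cookie["name"], cookie["value"], domain=cookie.get("domain")
              )
      return session

  session = session_from_cookies()
  ```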
- 📁 demo-workflows/
- 📄 .gitignore
- 📄 api.md
- 📄 batch-operations.md
Portable ComfyUI workflow and API guidance for any install. Use when building, validating, or troubleshooting ComfyUI image/video workflows, discovering available nodes/models via /object_info, wiring loaders/encoders/VAEs/LoRAs correctly, submitting jobs through the REST or WebSocket APIs, training LoRAs with ComfyUI, or adapting a workflow to an unknown user machine without assuming specific checkpoints, paths, hardware, or custom nodes.
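  For example, a minimal sketch of the discovery and submission pattern the guidance covers: querying /object_info and posting an API-format workflow to /prompt on a local ComfyUI server. The host, port, and workflow payload are placeholders:

  ```python
  # Hypothetical sketch: list available node classes, then submit a workflow
  # (API-format JSON) to a local ComfyUI instance over its REST endpoints.
  import json
  import urllib.request

  BASE = "http://127.0.0.1:8188"  # assumed default ComfyUI address

  def get_object_info() -> dict:
      with urllib.request.urlopen(f"{BASE}/object_info") as resp:
          return json.load(resp)

  def queue_prompt(workflow: dict) -> dict:
      data = json.dumps({"prompt": workflow}).encode("utf-8")
      req = urllib.request.Request(
          f"{BASE}/prompt", data=data, headers={"Content-Type": "application/json"}
      )
      with urllib.request.urlopen(req) as resp:
          return json.load(resp)

  nodes = get_object_info()
  print(f"{len(nodes)} node classes available")
  ```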
Adapt designs to work across different screen sizes, devices, contexts, or platforms. Implements breakpoints, fluid layouts, and touch targets. Use when the user mentions responsive design, mobile layouts, breakpoints, viewport adaptation, or cross-device compatibility.
Use before any creative work or significant changes. Activates on "brainstorm", "let's brainstorm", "deep analysis", "analyze this feature", "think through", "help me design", "explore options for", or when user asks for thorough analysis of changes, features, or architectural decisions. Guides collaborative dialogue to turn ideas into designs through one-at-a-time questions, approach exploration, and incremental validation.
- 📁 scripts/
- 📄 remotion-video.md
- 📄 round-video-character.md
- 📄 SKILL.md
Complete workflow for creating talking character videos with lipsync and captions. Use when creating AI character videos, talking avatars, narrated content, or social media character content with voiceover.
- 📁 scripts/
- 📄 LICENSE
- 📄 pyproject.toml
- 📄 README.md
Use when the user asks to explain, break down, or help them understand technical concepts (AI, ML, or other technical topics). Makes complex ideas accessible through plain English and narrative structure. Use the provided scripts to transcribe videos.
Create new skills from documents, tutorials, or examples. Use when user wants to create a skill from learning materials or existing content.
- 📁 scripts/
- 📄 README.md
- 📄 skill.md
Reads source legal documents (PDFs, images, scans via OCR), triages by importance, summarizes each document, classifies by type, and produces a structured index with metadata — the foundational skill for all legal document work.
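  As a rough illustration, a hedged sketch of the ingest step: OCR on a scanned page plus a minimal metadata record for the index. The use of pdf2image and pytesseract, and the record fields, are assumptions about one possible implementation:

  ```python
  # Hypothetical sketch: OCR the first page of a scanned PDF and build a minimal
  # index record; library choices and field names are illustrative only.
  from pdf2image import convert_from_path
  import pytesseract

  def index_record(pdf_path: str) -> dict:
      first_page = convert_from_path(pdf_path, first_page=1, last_page=1)[0]
      text = pytesseract.image_to_string(first_page)
      return {
          "path": pdf_path,
          "preview": text[:500],      # excerpt used for triage and classification
          "doc_type": None,           # filled in by the classification step
          "importance": None,         # filled in by the triage step
      }

  record = index_record("contract_scan.pdf")
  print(record["preview"][:200])
  ```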