- 📁 references/
- 📁 scripts/
- 📄 SKILL.md
VectCutAPI is a cloud-based API that provides programmatic control over CapCut/JianYing (剪映) for professional video editing. Use this skill when users need to: (1) Create video draft projects programmatically, (2) Add video/audio/image materials with precise control, (3) Add text, subtitles, and captions, (4) Apply effects, transitions, and animations, (5) Add keyframe animations, (6) Process videos in batch, (7) Generate AI-powered videos, (8) Integrate with n8n workflows, (9) Build MCP video editing agents. The API supports HTTP REST and MCP protocols, works with both CapCut (international) and JianYing (China), and provides web previews without requiring a download.
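The HTTP REST surface mentioned above can be exercised with a plain POST. A minimal sketch follows; the `/create_draft` path and the `width`/`height` field names are illustrative assumptions, not documented API — consult the skill's references/ for the real endpoints:

```python
# Sketch of a VectCutAPI-style draft-creation request.
# Endpoint path and field names are hypothetical placeholders.

def build_draft_request(base_url: str, width: int, height: int) -> dict:
    """Assemble the URL and JSON body for creating a video draft."""
    return {
        "url": f"{base_url}/create_draft",   # hypothetical endpoint
        "json": {"width": width, "height": height},
    }

req = build_draft_request("https://api.example.com", 1920, 1080)
# A real call would then be: requests.post(req["url"], json=req["json"])
```

Keeping the request assembly separate from the network call makes the payload easy to unit-test and reuse from n8n or an MCP agent.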
- 📁 references/
- 📁 scripts/
- 📄 .security-scan-passed
- 📄 SKILL.md
Transcribes audio and video files to text using Qwen3-ASR. Supports two modes — local MLX inference on macOS Apple Silicon (no API key, 15-27x realtime) and remote API via vLLM/OpenAI-compatible endpoints. Auto-detects platform and recommends the best path. Triggers when the user wants to transcribe recordings, convert audio/video to text, do speech-to-text, or mentions ASR, Qwen ASR, 转录, 语音转文字, 录音转文字. Also triggers for meeting recordings, lectures, interviews, podcasts, screen recordings, or any audio/video file the user wants converted to text.
Generate AI videos from text prompts using the HeyGen API. Use when: (1) Generating videos from text descriptions, (2) Creating AI-generated video clips for content production, (3) Image-to-video generation with a reference image, (4) Choosing between video generation providers (VEO, Kling, Sora, Runway, Seedance), (5) Working with HeyGen's /v1/workflows/executions endpoint for video generation.
You cannot access video content on your own; use Cerul to search for what was said, shown, or presented in tech talks, podcasts, conference presentations, and earnings calls. Use when a user asks about what someone said, wants video evidence, or needs citations from talks and interviews.
Collection of agent skills for Helios video engine. Use when working with programmatic video creation, browser-native animations, or Helios compositions. Install individual skills by path for specific capabilities.
- 📁 references/
- 📁 scripts/
- 📁 workflows/
- 📄 SKILL.md
AI video & audio summarizer. Summarize YouTube videos, Bilibili videos, podcasts, TikTok, Twitter/X, Xiaohongshu, and any online video or audio. Use when the user wants to summarize a video, extract transcripts/subtitles, get chapter-by-chapter summaries, or understand video content quickly.
- 📁 references/
- 📁 scripts/
- 📄 .gitignore
- 📄 LICENSE
- 📄 README.es.md
Professional-grade virtual film director and prompt engineer for Seedance 2.0 (即梦). Transforms vague ideas into cinematic, production-ready video prompts with Hollywood-caliber shot design. Covers every workflow — text-to-video, image-to-video, multi-modal references, video extension, character swap, dialogue-driven short films, and music-synced edits. Ships with a cinematography dictionary (50+ safe camera-move phrases), a director style library (Villeneuve, Wes Anderson, Shinkai, Wuxia & more), a 3-layer lighting & quality-anchor system that kills the "plastic AI look," and built-in Python auto-validation so every prompt passes before delivery. Supports bilingual output (Chinese/English) with smart >15 s auto-segmentation for long-form storytelling. Trigger words: Seedance, Shot Design, AI video, storyboard, video prompt, short film, ad video, film prompt, cinematic prompt, generate a video, make a clip, shoot a scene, video script, vlog script, create video prompt, music video, product video, drone shot, camera movement, 即梦, 视频提示词, 分镜, 帮我写个视频, 帮我拍, 做个视频脚本, 写一段视频, 生成视频, 视频文案, 短视频, 拍一个, 做分镜, 视频脚本, AI视频, 抖音视频, 短片脚本, 广告视频, 宣传片, 产品视频, vlog, 运镜, 镜头设计.
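The >15 s auto-segmentation mentioned above can be sketched as a greedy split; the even fill-then-remainder strategy here is an assumption about how the skill chunks long-form stories:

```python
def segment_duration(total_s: float, max_s: float = 15.0) -> list[float]:
    """Split a long-form prompt's runtime into <=15 s segments,
    one per generated clip. Strategy: fill full segments, then
    put the remainder last (an assumed, not documented, policy)."""
    if total_s <= max_s:
        return [total_s]
    chunks = []
    remaining = total_s
    while remaining > 0:
        chunks.append(min(max_s, remaining))
        remaining -= max_s
    return chunks

# segment_duration(40) yields three segments: 15, 15, 10
```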
Downloads embedded videos from web pages. Fetches the page, identifies the video hosting service (Vimeo, YouTube, etc.), resolves the correct embed/player URL, and downloads using yt-dlp. Handles private/unlisted videos that require referer headers or embed URLs. Use this skill when someone says "download this video", "save this video", "grab the video from this page", "rip this video", or provides a URL and asks to download media from it. Also trigger when someone pastes a URL to a page with an embedded video and wants the video file locally.
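The referer-header handling described above maps directly onto yt-dlp's options dict. `http_headers` and `outtmpl` are real yt-dlp option keys; the helper function itself is just a convenience sketch:

```python
def embed_download_opts(page_url: str, out_dir: str) -> dict:
    """Build a yt-dlp options dict for an embedded video whose host
    requires a Referer header (e.g. private/unlisted Vimeo embeds)."""
    return {
        "http_headers": {"Referer": page_url},   # many embeds return 403 without this
        "outtmpl": f"{out_dir}/%(title)s.%(ext)s",
    }

opts = embed_download_opts("https://example.com/article", "downloads")
# A real run would be:
#   import yt_dlp
#   with yt_dlp.YoutubeDL(opts) as ydl:
#       ydl.download([embed_url])
```

Setting the Referer to the page that hosts the embed, rather than the player URL itself, is what unlocks most "private" Vimeo embeds.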
NVIDIA DeepStream SDK 9.0 development with Python pyservicemaker API. Use when building video analytics pipelines, GStreamer-based video processing, TensorRT inference integration, object detection/tracking, or Kafka/message broker integration.
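A typical DeepStream analytics pipeline of the kind described above chains decode, batching, inference, tracking, and on-screen display. The element names below are standard DeepStream plugins; this gst-launch-style string is a simplified sketch (real pipelines link sources to nvstreammux request pads), and it does not show the pyservicemaker wrapper API:

```python
def analytics_pipeline(uri: str, config: str) -> str:
    """Compose a gst-launch-style description of a basic DeepStream
    pipeline: decode -> batch -> TensorRT infer -> track -> draw."""
    elements = [
        f"uridecodebin uri={uri}",
        "nvstreammux batch-size=1 width=1280 height=720",
        f"nvinfer config-file-path={config}",  # TensorRT object detection
        "nvtracker",                            # multi-object tracking
        "nvdsosd",                              # draw boxes and labels
        "nveglglessink",
    ]
    return " ! ".join(elements)
```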
- 📁 docs/
- 📁 remotion-standup/
- 📁 scripts/
- 📄 .gitignore
- 📄 README.md
- 📄 REMOTION_VOICEOVER.md
Automated video editing skill for talk/vlog/standup videos. Use when: cutting video, splitting video into sentences, merging video clips, extracting audio, transcribing speech, auto-editing oral presentation videos, combining selected sentence clips into a final video, generating video cover/thumbnail with title, B-roll cutaway editing, persistent video overlay/watermark, blinking REC indicator, ending title cards, multi-source audio mixing, generating voiceover videos with Remotion (audio-only to video with animated visuals/subtitles). Requires ffmpeg and whisper. Remotion workflow additionally requires Node.js and npm.
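The sentence-splitting step above comes down to repeated ffmpeg cuts at transcript timestamps. A sketch of one such cut, built as an argument list (the `-ss`/`-t` flags and `-c copy` are standard ffmpeg usage; the helper itself is illustrative):

```python
def cut_cmd(src: str, start: float, end: float, dst: str) -> list[str]:
    """Build an ffmpeg command extracting one sentence clip
    [start, end) from a talk video. Stream copy (-c copy) avoids
    re-encoding, so cuts land on the nearest keyframes."""
    return [
        "ffmpeg", "-y",
        "-ss", str(start),       # seek before the input: fast
        "-i", src,
        "-t", str(end - start),  # clip duration, not absolute end
        "-c", "copy",
        dst,
    ]

cmd = cut_cmd("talk.mp4", 12.5, 18.0, "clips/s01.mp4")
# A real run: subprocess.run(cmd, check=True)
```

For frame-accurate cuts (at the cost of speed), re-encode instead of `-c copy`.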
This skill should be used when the user asks to "generate video prompts", "create Seedance prompts", "write video descriptions", mentions "Seedance", "seedance", "即梦", "即梦平台", "视频提示词", "视频生成", "AI视频", "短剧", "广告视频", "视频延长", "生成图片", "文生图", "图生图", "图生视频", "文生视频", or discusses video prompt engineering, AI video generation, or Seedance 2.0 workflows. It also handles requests to create, edit, or manipulate images and videos using the dreamina CLI tool.
Watch a tutorial or demo video and generate a Claude Code skill from it. Activated when the user says "create a skill from this video" or similar.