video-editing
Verified by affaan-m · Updated Apr 16, 2026
AI-assisted video editing workflows for cutting, structuring, and augmenting real footage. Covers the full pipeline from raw capture through FFmpeg, Remotion, ElevenLabs, fal.ai, and final polish in Descript or CapCut. Use when the user wants to edit video, cut footage, create vlogs, or build video content.
See It In Action
Both assets generated. Here's the complete deliverable:
6. Generated Assets
| Asset | URL | Usage |
|---|---|---|
| 🖼️ End card visual | End card image | Last 3 seconds + thumbnail base |
| 🎵 Background music | BG track (90s) | Ducked under dialogue, full volume on transitions + end card |
Mix the background music under dialogue:
# Duck music to -18 dB under speech; ride it back up for transitions and the end card during final polish
ffmpeg -i teaser_captioned.mp4 -i bg_music.wav \
-filter_complex "[1:a]volume=-18dB[bg];[0:a][bg]amix=inputs=2:duration=first:dropout_transition=3[aout]" \
-map 0:v -map "[aout]" -c:v copy -c:a aac teaser_final.mp4
7. Complete Storyboard
| Time | Segment | Visual | Overlay | Audio |
|---|---|---|---|---|
| 0:00–0:03 | — | Fade in from black | Launchpad logo + "3.0" | Music intro |
| 0:03–0:29 | A | Dex presenting (screen share of deck) | Caption: "Every PR → live preview in 45 seconds" | Dex VO + ducked music |
| 0:29–0:55 | B | Dex presenting stats | Animated stat callouts: 62%↓ · 22 min · <1% failures | Dex VO + ducked music |
| 0:55–1:09 | C (trimmed) | Terminal → Launchpad dashboard → preview URL loads | Caption: "38 seconds. Full stack." | Dex VO + music rises on "Done" |
| 1:09–1:19 | D (trimmed) | PM clicking preview → comment appears on GitHub | Caption: "Feedback → code, zero context-switch" | Dex VO + ducked music |
| 1:19–1:24 | E (trimmed) | Analytics dashboard | Caption: "73% drop-off caught before merge" | Dex VO |
| 1:24–1:27 | F | Dex saying promo code | Text flash: LAUNCH30 | Dex VO |
| 1:27–1:30 | End card | Generated end card visual | "Try Launchpad 3.0 Free → launchpad.dev" | Music up, fade out |
8. Export Specs for Ads
Once assembly and polish are done in Descript/CapCut, export these platform variants:
| Platform | Aspect | Resolution | Max Length | File |
|---|---|---|---|---|
| YouTube Pre-roll | 16:9 | 1920×1080 | 90s | teaser_yt.mp4 |
| LinkedIn Feed | 16:9 | 1920×1080 | 90s | teaser_li.mp4 |
| TikTok/Reels | 9:16 | 1080×1920 | 90s | teaser_vertical.mp4 |
| X/Twitter | 16:9 | 1280×720 | 90s | teaser_x.mp4 |
# Vertical reframe for TikTok/Reels
ffmpeg -i teaser_final.mp4 -vf "crop=ih*9/16:ih,scale=1080:1920" teaser_vertical.mp4
# Twitter-optimized
ffmpeg -i teaser_final.mp4 -vf "scale=1280:720" -b:v 5M teaser_x.mp4
9. Execution Checklist (to hit 3pm)
- Get raw recording file → drop as webinar_raw.mp4
- Run FFmpeg cuts → 6 segments in /segments
- Concatenate + normalize audio → teaser_normalized.mp4
- Transcribe (Whisper, word-level) → generate .srt file (see the sketch after this list)
- Burn in captions (FFmpeg, or do it in CapCut for animated pop)
- Add logo overlay (Remotion or CapCut, persistent top-left)
- Add stat callouts on segments B + C (Remotion or motion text in CapCut)
- Mix background music at -18dB under speech
- Append end card (3 seconds, generated image + CTA text)
- Final polish in Descript/CapCut: pacing, transitions, color
- Export all platform variants
- Ship by 3pm ✅
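For the transcribe-and-caption steps, a minimal sketch assuming the openai-whisper CLI is installed (skip the burn step if captions will be animated in CapCut):
# Word-level transcript, written to teaser_normalized.srt
whisper teaser_normalized.mp4 --model small --word_timestamps True --output_format srt --output_dir .
# Burn the captions into the video
ffmpeg -i teaser_normalized.mp4 -vf "subtitles=teaser_normalized.srt" -c:a copy teaser_captioned.mp4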
All the cut commands, code, and assets are ready. Just need the raw recording file to execute. Want me to generate any additional variants (e.g., a 30-second hard cut, or a different end card)?
External Tools
| Tool | Type |
|---|---|
| fal-ai | mcp |
| ffmpeg | binary |
| node | binary |
| npx | binary |
Permissions
| Scope | Description |
|---|---|
| filesystem:read | |
| filesystem:write | |
| process:spawn | |
| network:outbound | |
| env:read | |
SKILL.md
Video Editing
AI-assisted editing for real footage. Not generation from prompts. Editing existing video fast.
When to Activate
- User wants to edit, cut, or structure video footage
- Turning long recordings into short-form content
- Building vlogs, tutorials, or demo videos from raw capture
- Adding overlays, subtitles, music, or voiceover to existing video
- Reframing video for different platforms (YouTube, TikTok, Instagram)
- User says "edit video", "cut this footage", "make a vlog", or "video workflow"
Core Thesis
AI video editing is useful when you stop asking it to create the whole video and start using it to compress, structure, and augment real footage. The value is not generation. The value is compression.
The Pipeline
Screen Studio / raw footage
→ Claude / Codex
→ FFmpeg
→ Remotion
→ ElevenLabs / fal.ai
→ Descript or CapCut
Each layer has a specific job. Do not skip layers. Do not try to make one tool do everything.
Layer 1: Capture (Screen Studio / Raw Footage)
Collect the source material:
- Screen Studio: polished screen recordings for app demos, coding sessions, browser workflows
- Raw camera footage: vlog footage, interviews, event recordings
- Desktop capture via VideoDB: session recording with real-time context (see the videodb skill)
Output: raw files ready for organization.
Layer 2: Organization (Claude / Codex)
Use Claude Code or Codex to:
- Transcribe and label: generate transcript, identify topics and themes
- Plan structure: decide what stays, what gets cut, what order works
- Identify dead sections: find pauses, tangents, repeated takes
- Generate edit decision list: timestamps for cuts, segments to keep
- Scaffold FFmpeg and Remotion code: generate the commands and compositions
Example prompt:
"Here's the transcript of a 4-hour recording. Identify the 8 strongest segments
for a 24-minute vlog. Give me FFmpeg cut commands for each segment."
This layer is about structure, not final creative taste.
Layer 3: Deterministic Cuts (FFmpeg)
FFmpeg handles the boring but critical work: splitting, trimming, concatenating, and preprocessing.
Extract segment by timestamp
# Stream copy (-c copy) is fast but snaps to keyframes; drop it for frame-accurate cuts
ffmpeg -i raw.mp4 -ss 00:12:30 -to 00:15:45 -c copy segment_01.mp4
Batch cut from edit decision list
#!/bin/bash
# cuts.txt: start,end,label
mkdir -p segments
while IFS=, read -r start end label; do
  # -nostdin stops ffmpeg from consuming the loop's stdin
  ffmpeg -nostdin -i raw.mp4 -ss "$start" -to "$end" -c copy "segments/${label}.mp4"
done < cuts.txt
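A hypothetical cuts.txt matching that start,end,label format (timestamps are placeholders):
00:12:30,00:15:45,segment_01
00:42:10,00:45:02,segment_02
01:18:05,01:21:40,segment_03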
Concatenate segments
# Create file list
for f in segments/*.mp4; do echo "file '$f'"; done > concat.txt
ffmpeg -f concat -safe 0 -i concat.txt -c copy assembled.mp4
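The concat demuxer requires every segment to share codec, resolution, and timebase. If they differ, the fallback is re-encoding through the concat filter; a sketch for two inputs:
ffmpeg -i seg1.mp4 -i seg2.mp4 \
  -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" assembled.mp4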
Create proxy for faster editing
ffmpeg -i raw.mp4 -vf "scale=960:-2" -c:v libx264 -preset ultrafast -crf 28 proxy.mp4
Extract audio for transcription
# 16 kHz mono PCM, the input format Whisper expects
ffmpeg -i raw.mp4 -vn -ac 1 -acodec pcm_s16le -ar 16000 audio.wav
Normalize audio levels
ffmpeg -i segment.mp4 -af loudnorm=I=-16:TP=-1.5:LRA=11 -c:v copy normalized.mp4
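loudnorm is more accurate run twice: once to measure, once to apply. A sketch; the measured_* values below are placeholders to be copied from the pass-1 JSON output:
# Pass 1: measure only (stats print as JSON on stderr)
ffmpeg -i segment.mp4 -af loudnorm=I=-16:TP=-1.5:LRA=11:print_format=json -f null -
# Pass 2: apply, feeding back the measured values from pass 1
ffmpeg -i segment.mp4 -af loudnorm=I=-16:TP=-1.5:LRA=11:measured_I=-23.0:measured_TP=-4.1:measured_LRA=9.7:measured_thresh=-33.4 -c:v copy normalized.mp4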
Layer 4: Programmable Composition (Remotion)
Remotion turns editing problems into composable code. Use it for things that traditional editors make painful:
When to use Remotion
- Overlays: text, images, branding, lower thirds
- Data visualizations: charts, stats, animated numbers
- Motion graphics: transitions, explainer animations
- Composable scenes: reusable templates across videos
- Product demos: annotated screenshots, UI highlights
Basic Remotion composition
import React from "react";
import { AbsoluteFill, Sequence, staticFile, Video } from "remotion";
export const VlogComposition: React.FC = () => {
return (
<AbsoluteFill>
{/* Main footage */}
<Sequence from={0} durationInFrames={300}>
<Video src="/segments/intro.mp4" />
</Sequence>
{/* Title overlay */}
<Sequence from={30} durationInFrames={90}>
<AbsoluteFill style={{
justifyContent: "center",
alignItems: "center",
}}>
<h1 style={{
fontSize: 72,
color: "white",
textShadow: "2px 2px 8px rgba(0,0,0,0.8)",
}}>
The AI Editing Stack
</h1>
</AbsoluteFill>
</Sequence>
{/* Next segment */}
<Sequence from={300} durationInFrames={450}>
<Video src="/segments/demo.mp4" />
</Sequence>
</AbsoluteFill>
);
};
Render output
npx remotion render src/index.ts VlogComposition output.mp4
See the Remotion docs for detailed patterns and API reference.
Layer 5: Generated Assets (ElevenLabs / fal.ai)
Generate only what you need. Do not generate the whole video.
Voiceover with ElevenLabs
import os
import requests

voice_id = "YOUR_VOICE_ID"  # placeholder: pick a voice from your ElevenLabs library

resp = requests.post(
f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
headers={
"xi-api-key": os.environ["ELEVENLABS_API_KEY"],
"Content-Type": "application/json"
},
json={
"text": "Your narration text here",
"model_id": "eleven_turbo_v2_5",
"voice_settings": {"stability": 0.5, "similarity_boost": 0.75}
}
)
with open("voiceover.mp3", "wb") as f:
f.write(resp.content)
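To lay the generated narration over a segment, a plain remux is usually enough (a sketch; -shortest trims to the shorter of the two streams):
# Replace a segment's audio with the generated voiceover
ffmpeg -i demo.mp4 -i voiceover.mp3 -map 0:v -map 1:a -c:v copy -c:a aac -shortest demo_vo.mp4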
Music and SFX with fal.ai
Use the fal-ai-media skill for:
- Background music generation
- Sound effects (ThinkSound model for video-to-audio)
- Transition sounds
Generated visuals with fal.ai
Use for insert shots, thumbnails, or b-roll that doesn't exist:
generate(model_name: "fal-ai/nano-banana-pro", input: {
"prompt": "professional thumbnail for tech vlog, dark background, code on screen",
"image_size": "landscape_16_9"
})
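A generated still can be turned into a short clip with a silent audio track, so it concatenates cleanly with the rest of the edit (a sketch; end_card.png stands in for the generated image):
# 3-second end card from a still, with silent stereo audio for clean concat
ffmpeg -loop 1 -i end_card.png -f lavfi -i anullsrc=r=44100:cl=stereo -t 3 \
  -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest end_card.mp4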
VideoDB generative audio
If VideoDB is configured:
# coll is a VideoDB collection, e.g. coll = videodb.connect().get_collection()
voiceover = coll.generate_voice(text="Narration here", voice="alloy")
music = coll.generate_music(prompt="lo-fi background for coding vlog", duration=120)
sfx = coll.generate_sound_effect(prompt="subtle whoosh transition")
Layer 6: Final Polish (Descript / CapCut)
The last layer is human. Use a traditional editor for:
- Pacing: adjust cuts that feel too fast or slow
- Captions: auto-generated, then manually cleaned
- Color grading: basic correction and mood
- Final audio mix: balance voice, music, and SFX levels
- Export: platform-specific formats and quality settings
This is where taste lives. AI clears the repetitive work. You make the final calls.
Social Media Reframing
Different platforms need different aspect ratios:
| Platform | Aspect Ratio | Resolution |
|---|---|---|
| YouTube | 16:9 | 1920x1080 |
| TikTok / Reels | 9:16 | 1080x1920 |
| Instagram Feed | 1:1 | 1080x1080 |
| X / Twitter | 16:9 or 1:1 | 1280x720 or 720x720 |
Reframe with FFmpeg
# 16:9 to 9:16 (center crop)
ffmpeg -i input.mp4 -vf "crop=ih*9/16:ih,scale=1080:1920" vertical.mp4
# 16:9 to 1:1 (center crop)
ffmpeg -i input.mp4 -vf "crop=ih:ih,scale=1080:1080" square.mp4
Reframe with VideoDB
# Smart reframe (AI-guided subject tracking)
reframed = video.reframe(start=0, end=60, target="vertical", mode=ReframeMode.smart)
Scene Detection and Auto-Cut
FFmpeg scene detection
# Detect scene changes (threshold 0.3 = moderate sensitivity)
ffmpeg -i input.mp4 -vf "select='gt(scene,0.3)',showinfo" -vsync vfr -f null - 2>&1 | grep showinfo
Silence detection for auto-cut
# Find silent segments (useful for cutting dead air)
ffmpeg -i input.mp4 -af silencedetect=noise=-30dB:d=2 -f null - 2>&1 | grep silence
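The silencedetect log can be reduced to a machine-readable list of silent intervals, which Claude (or a script) can then invert into keep-segments. A sketch that assumes the standard log format:
# Silent intervals as start,end CSV
ffmpeg -i input.mp4 -af silencedetect=noise=-30dB:d=2 -f null - 2>&1 \
  | grep -oE 'silence_(start|end): [0-9.]+' \
  | awk '{ if ($1 ~ /start/) s = $2; else print s "," $2 }' > silences.csv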
Highlight extraction
Use Claude to analyze transcript + scene timestamps:
"Given this transcript with timestamps and these scene change points,
identify the 5 most engaging 30-second clips for social media."
What Each Tool Does Best
| Tool | Strength | Weakness |
|---|---|---|
| Claude / Codex | Organization, planning, code generation | Not the creative taste layer |
| FFmpeg | Deterministic cuts, batch processing, format conversion | No visual editing UI |
| Remotion | Programmable overlays, composable scenes, reusable templates | Learning curve for non-devs |
| Screen Studio | Polished screen recordings immediately | Only screen capture |
| ElevenLabs / fal.ai | Voice, narration, music, SFX | Not the center of the workflow |
| Descript / CapCut | Final pacing, captions, polish | Manual, not automatable |
Key Principles
- Edit, don't generate. This workflow is for cutting real footage, not creating from prompts.
- Structure before style. Get the story right in Layer 2 before touching anything visual.
- FFmpeg is the backbone. Boring but critical. Where long footage becomes manageable.
- Remotion for repeatability. If you'll do it more than once, make it a Remotion component.
- Generate selectively. Only use AI generation for assets that don't exist, not for everything.
- Taste is the last layer. AI clears repetitive work. You make the final creative calls.
Related Skills
- fal-ai-media: AI image, video, and audio generation
- videodb: server-side video processing, indexing, and streaming
- content-engine: platform-native content distribution
FAQ
What does video-editing do?
AI-assisted video editing workflows for cutting, structuring, and augmenting real footage. Covers the full pipeline from raw capture through FFmpeg, Remotion, ElevenLabs, fal.ai, and final polish in Descript or CapCut. Use when the user wants to edit video, cut footage, create vlogs, or build video content.
When should I use video-editing?
Use it when you need a repeatable workflow for editing real footage: cutting long recordings into short-form content, adding overlays, captions, or music, and reframing for different platforms.
What does video-editing output?
In the evaluated run it produced a text response: FFmpeg cut commands, a storyboard, export specs, and an execution checklist.
How do I install or invoke video-editing?
npx skills add https://github.com/affaan-m/everything-claude-code --skill video-editing
Which agents does video-editing support?
Claude Code
What tools, channels, or permissions does video-editing need?
It uses fal-ai (MCP), ffmpeg, node, and npx; the output channel is text; permissions include filesystem:read, filesystem:write, process:spawn, network:outbound, and env:read.
Is video-editing safe to install?
Static analysis marked this skill as medium risk; review side effects and permissions before enabling it.
How is video-editing different from an MCP or plugin?
A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.
About video-editing
When to use video-editing
You need to turn long recordings into shorter videos or highlight clips. You want repeatable video edits with FFmpeg and code-driven overlays via Remotion. You need to reframe, normalize, or preprocess footage before final manual polish.
When video-editing is not the right choice
You want fully AI-generated videos from prompts rather than editing real footage. You need a purely manual GUI-first editing workflow without command-line or code tooling.
What it produces
Produces a text response: edit plans, FFmpeg commands, Remotion compositions, and platform export specs.
Install
npx skills add https://github.com/affaan-m/everything-claude-code --skill video-editing
Invoke: Ask Claude Code to use video-editing for the task.