interview-coach
Verified by dbhat93 · Updated Apr 16, 2026
Full job search coaching system — JD decoding, resume, storybank, mock interviews, transcript analysis, comp negotiation. 23 commands, persistent state.
See It In Action
Let me start by exploring your project files to find your notes, then I'll fire up the interview coaching system.
Let me read your notes in detail, then build your coaching plan.
Alright Alex, I've read everything. Here's the full picture:
- Today at 3pm PT: Stripe comp call with Maria Santos
- Tomorrow (April 14): Cloudflare final round — same 4x45min format you just went through at Datadog
- Datadog post-mortem: Clear patterns to fix — freezing on scale, unstructured STAR, weak articulation of decisions you actually understand
Let me give you both: the salary script for today, and the overnight prep plan for tomorrow.
PART 1: Stripe Comp Call — Today at 3pm
Your leverage position
- Cloudflare final round tomorrow (range $170K-$230K)
- You're actively in final rounds — this is real leverage
- Stripe L3/L4 Payments Infrastructure, SF Bay Area
- Your target: $180K-$220K base + equity. Current: $155K (don't volunteer this)
The 3 questions Maria will ask and your scripts:
Q1: "What are your compensation expectations?"
"I'm targeting a total compensation package in the range of $190K to $220K base, plus meaningful equity. I've been focused on finding the right team and technical challenge — Payments Infrastructure is exactly the kind of systems work I want to do — so I'm flexible on the exact structure if the overall package is competitive."
Why this works: You anchor at the upper half of your range, signal flexibility on structure (gives them room to load equity), and redirect to fit.
Q2: "Do you have competing offers or timelines?"
"I'm in final rounds at another company this week, and I expect to have a decision timeline in the next 7-10 days. I'd love to keep Stripe in sync with that — I don't want to rush, but I also want to be transparent about where I am in the process."
Why this works: Creates urgency without naming Cloudflare (you don't have an offer yet, so don't claim one). "7-10 days" gives a concrete window.
Q3: "What's your current compensation?"
If she asks directly (she shouldn't — asking about salary history is prohibited in California — but some recruiters still fish):
"I'd prefer to focus on the market rate for this role and what I'd bring to the team. I'm confident we can find something that works for both sides."
Three rules for the call:
- Never say your current salary. California law is on your side.
- Never say "I'd take less for the right opportunity." You won't need to — you're a strong candidate.
- Ask her: "What's the comp band for this role at L3 vs L4?" — this tells you how they're leveling you.
PART 2: Cloudflare Final Round Prep — Tomorrow
Your Datadog post-mortem identified 5 failure patterns. Cloudflare's loop is the same format. Here's how to fix each one overnight.
Pattern 1: Freeze on scale/numbers (System Design, scored 4/10)
The fix: Pre-load 5 capacity estimation anchors
Memorize these for Cloudflare's context (50M+ req/sec, 300+ data centers):
| What | Number | Mental model |
|---|---|---|
| 1 req/sec | ~86K req/day | ~2.5M/month |
| 1M req/sec | ~1KB each = 1GB/sec throughput | |
| 50M req/sec (Cloudflare scale) | ~50GB/sec, ~4.3T req/day | |
| Single Kafka partition | ~10K msg/sec write throughput | |
| Single Postgres node | ~5K-20K TPS depending on query complexity | |
Tomorrow's system design opening move (do this FIRST, before drawing anything):
"Before I jump into components, let me do some quick math on scale to anchor the design..."
Then spend 2-3 minutes on napkin math. This is exactly what you skipped at Datadog.
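As a sanity check, the anchors in the table above chain together into the headline numbers. A minimal sketch of the napkin math, assuming ~1KB per request and 50M req/sec:

```go
package main

import "fmt"

func main() {
	const (
		reqPerSec   = 50_000_000 // assumed Cloudflare-scale traffic
		bytesPerReq = 1_000      // assumed ~1KB average request
		secPerDay   = 86_400
	)

	// Throughput: 50M req/sec * 1KB/req = ~50 GB/sec
	gbPerSec := float64(reqPerSec*bytesPerReq) / 1e9

	// Daily volume: 50M req/sec * 86,400 sec/day = ~4.3T req/day
	trillionReqPerDay := float64(reqPerSec) * secPerDay / 1e12

	fmt.Printf("~%.0f GB/sec, ~%.1fT req/day\n", gbPerSec, trillionReqPerDay)
	// prints: ~50 GB/sec, ~4.3T req/day
}
```

Practicing the chain itself (rate × size, rate × seconds-per-day) matters more than the exact figures — interviewers want to see the method said out loud.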
Pattern 2: Know it but can't explain it (Architecture, scored 5/10)
The fix: Practice the "WHY sandwich" for your Paysail Kafka pipeline tonight
For every component in your architecture, prep this structure:
- What it does (1 sentence)
- Why you chose it over the alternative (1 sentence)
- What breaks if you remove it (1 sentence)
Example for your notification pipeline:
"We route through Kafka rather than direct API calls because payment events need guaranteed delivery and the notification channels (email/SMS/push) have different latency profiles. If we removed Kafka, a slow SMS provider would back-pressure the payment processing pipeline."
Prep 3 "what would you change" answers that aren't "use different tech":
- Better schema evolution (you'd add a schema registry)
- Per-channel dead letter queues instead of a shared one
- Better consumer group rebalancing with cooperative sticky assignor
Pattern 3: Unstructured behavioral stories (scored 5/10)
The fix: Prep 4 stories in strict 90-second STAR format
Tight structure: Situation (15 sec) → Task (10 sec) → Action (45 sec) → Result (20 sec)
Here are 4 stories from your resume, reformatted:
Story 1 — Disagreement/Technical decision (MongoDB vs Postgres)
- S: "At DataDash, our team was evaluating databases for a new multi-tenant analytics feature."
- T: "Tech lead wanted MongoDB for schema flexibility. I believed PostgreSQL with row-level security was the better fit."
- A: "I built a proof of concept showing Postgres RLS handled tenant isolation with zero app-level code, wrote a one-page comparison doc on query patterns, and presented trade-offs at our design review."
- R: "Team chose Postgres. Multi-tenant queries ran 60% faster than our previous approach, and we avoided a class of data isolation bugs."
Story 2 — Failure/Outage
- S: "I deployed a payments service change that caused a 23-minute outage on the checkout flow."
- T: "Needed to restore service immediately and prevent recurrence."
- A: "Rolled back within 8 minutes, then spent the next week building automated canary deploys and adding latency-based circuit breakers. I also wrote the incident runbook that reduced our MTTR from 45 to 12 minutes."
- R: "We haven't had an unplanned checkout outage since. The runbook is now used by the whole on-call rotation."
Story 3 — Impact at scale (Rails→Go migration)
- S/T: "Paysail's checkout p99 was 800ms on a monolithic Rails API — it was hurting conversion."
- A: "I led the migration of the hot path to Go microservices, starting with the payment authorization flow. Designed the strangler fig pattern so we could migrate incrementally without a big bang."
- R: "P99 dropped from 800ms to 120ms. We did it over 6 weeks with zero downtime."
Story 4 — Why Cloudflare (DO THIS RESEARCH TONIGHT)
- Know these: Workers, R2, D1, Queues, the recent Magic WAN and Hyperdrive launches
- Connect to your experience: "I've been building event-driven systems at Paysail on top of Kafka — Cloudflare Queues solves the same problem at the edge, which is a fascinating architectural shift I want to be part of."
Pattern 4: Go concurrency gaps (Coding, scored 6/10)
Tonight, review these 4 things:
- `sync.Mutex` vs `sync.RWMutex` — RWMutex when reads >> writes (like rate limiter checks)
- `sync.Map` — only useful when keys are stable and written once, read many. NOT a general concurrent map.
- Channels for fan-out/fan-in — know the `select` + `context.Done()` pattern for cancellation
- `atomic` package — for simple counters, cheaper than a mutex. `atomic.Int64` for rate limiter counters.
If you get the distributed rate limiter question again: Redis + sliding window with ZRANGEBYSCORE + ZADD in a Lua script for atomicity. Prep this as a 3-minute explanation.
Pattern 5: Company research (scored poorly on "Why Datadog?")
Cloudflare-specific talking points to memorize:
- 300+ data centers, 50M+ req/sec — one of the largest edge networks
- Edge Platform team = the infrastructure that everything else runs on
- Recent products: Workers (serverless at edge), R2 (S3-compatible without egress fees), D1 (SQLite at edge), Queues
- Their engineering blog posts about how they handle DDoS at scale
- Your hook: "I've been building payment systems that need exactly-once processing at 500K events/day. Cloudflare operates at 50M requests per second — I want to learn how to build systems at that scale and bring my event-driven architecture experience to the Edge Platform."
Tonight's Schedule
| Time | Activity | Duration |
|---|---|---|
| 3:00 PM | Stripe comp call (use scripts above) | 20 min |
| 4:00 PM | Capacity estimation drills — practice napkin math out loud | 45 min |
| 5:00 PM | Architecture walkthrough — practice your Paysail Kafka pipeline with WHY sandwiches | 45 min |
| 6:00 PM | Break / dinner | 60 min |
| 7:00 PM | Behavioral stories — say all 4 stories out loud, time each to 90 sec | 30 min |
| 7:30 PM | Cloudflare research — blog posts, recent products, "Why Cloudflare" answer | 30 min |
| 8:00 PM | Go concurrency review — mutex/RWMutex/channels/atomic, distributed rate limiter script | 45 min |
| 9:00 PM | One full mock system design (pick "Design a CDN" or "Design edge-based rate limiting") — do capacity math FIRST | 45 min |
| 10:00 PM | Done. Sleep. | |
Want me to run a mock interview for any of the 4 rounds, drill your STAR stories, or role-play the Stripe comp call before 3pm?
Permissions
| Scope | Description |
|---|---|
| filesystem:read | Read your notes, resume, and saved coaching state |
| filesystem:write | Persist coaching_state.md and generated prep materials across sessions |
SKILL.md
Interview Coach
Overview
A persistent, adaptive coaching system for the full job search lifecycle.
Not a question bank — an opinionated system that tracks your patterns,
scores your answers, and gets sharper the more you use it. State persists
in coaching_state.md across sessions so you always pick up where you left off.
Install
npx skills add dbhat93/job-search-os
Then type /coach → kickoff.
When to Use This Skill
- Use when starting a job search and need a structured system
- Use when preparing for a specific interview (company research, mock, hype)
- Use when you want to analyze a past interview transcript
- Use when negotiating an offer or handling comp questions on recruiter screens
- Use when building or maintaining a storybank of interview-ready stories
What It Covers
- JD decoding — six lenses, fit verdict, recruiter questions to ask
- Resume + LinkedIn — ATS audit, bullet rewrites, platform-native optimization
- Mock interviews — behavioral, system design, case, panel, technical formats
- Transcript analysis — paste from Otter/Zoom/Grain, auto-detected format
- Storybank — STAR stories with earned secrets, retrieval drills, portfolio optimization
- Comp + negotiation — pre-offer scripting, offer analysis, exact negotiation scripts
- 23 total commands across the full search lifecycle
Examples
Example 1: Start your job search
/coach
kickoff
The coach asks for your resume, target role, and timeline — then builds your profile and gives you a prioritized action plan.
Example 2: Prep for a specific company
/coach
prep Stripe Senior PM
Runs company research, generates a role-specific prep brief, and queues up mock interview questions tailored to Stripe's process.
Example 3: Analyze an interview transcript
/coach
analyze
Paste a raw transcript from Otter, Zoom, or any tool. The coach auto-detects the format, scores each answer across five dimensions, and gives you a drill plan targeting your specific gaps.
Example 4: Handle a comp question
/coach
salary
Coaches you through the recruiter screen "what are your salary expectations?" moment with a defensible range and exact scripts.
Source
FAQ
What does interview-coach do?
Full job search coaching system — JD decoding, resume, storybank, mock interviews, transcript analysis, comp negotiation. 23 commands, persistent state.
When should I use interview-coach?
Use it when you need a repeatable workflow that produces a text response.
What does interview-coach output?
In the evaluated run it produced a text response.
How do I install or invoke interview-coach?
Ask the agent to use this skill when the task matches its documented workflow.
Which agents does interview-coach support?
Agent support is inferred from the source, but not explicitly declared.
What tools, channels, or permissions does interview-coach need?
It uses no extra tools; its output channel is text; it requires the filesystem:read and filesystem:write permissions.
Is interview-coach safe to install?
Static analysis marked this skill as medium risk; review side effects and permissions before enabling it.
How is interview-coach different from an MCP or plugin?
A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.
Does interview-coach outperform not using a skill?
About interview-coach
When to use interview-coach
When you want structured guidance for an ongoing job search with persistent context. When preparing for a specific interview and needing tailored prep materials or mock coaching. When reviewing interview transcripts or practicing compensation and negotiation conversations.
When interview-coach is not the right choice
When you need direct integrations with hiring platforms, messaging tools, or external applicant tracking systems. When you want automated job applications or real-time company data fetching as the primary function.
What it produces
Produces a text response.