
write-a-prd

Verified

by mattpocock · Updated Apr 16, 2026

claude-opus-4-6
4.4K in · 4.7K out
$0.79 · 3m 52s

Create a PRD through user interview, codebase exploration, and module design, then submit it as a GitHub issue. Use when the user wants to write a PRD, create a product requirements document, or plan a new feature.

prd · product-planning · github · requirements · feature-design

See It In Action

1 turn · claude
The project codebase is in your current working directory. Explore the files to understand the code before making changes. Need a PRD in our app repo as a GitHub issue for onboarding experiment guardrails before sprint planning this afternoon

The remote repo acme-co/onboarding-app doesn't seem to exist on GitHub (or the authenticated account doesn't have access). Let me try creating the issue anyway — if the repo isn't reachable, I'll draft the full issue body so you can paste it.

The acme-co/onboarding-app repo isn't accessible from the authenticated GitHub account, so gh issue create can't reach it. Two options:

  1. If you have push access under a different account/token, re-auth with gh auth login and I'll re-run the command.
  2. Otherwise, here's the full issue ready to paste — title, labels, and body all follow your repo's PRD template.

Title: [PRD] Onboarding Experiment Guardrails
Labels: prd, planning


Problem Statement

The signup-onboarding-v2 experiment produced conflicting conversion metrics across dashboards (+8.4% Product vs −3.1% Engineering), blocking sprint planning (SEV-2 incident). Root causes:

  1. Non-deterministic assignment for anonymous users — assignOnboardingVariant falls back to Math.random() when userId is absent, generating duplicate/inconsistent exposure events.
  2. Misaligned denominators — Product uses signup_completed users; Engineering uses raw experiment_exposure session count (includes retries, bots, pre-signup sessions).
  3. No rollout guardrails — rollout jumped 20% → 50% with no soak time, no metric-agreement gate, and no auto-pause conditions.

Solution

A guardrails layer for onboarding experiments covering three areas:

1. Deterministic assignment (src/onboarding/experimentAssignment.ts)

  • Replace the Math.random() anonymous fallback with a deterministic hash of sessionId (already on AnalyticsEvent); see the sketch after this list.
  • Deduplicate exposure events at the module boundary.
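
A minimal sketch of what the deterministic fallback could look like. The FNV-1a hash, the 0-99 bucketing, and the 50/50 split are illustrative assumptions, not decisions from the PRD; the requirement is only that assignment be a stable function of sessionId.

```ts
// Hypothetical sketch of deterministic assignment from sessionId.
type Variant = 'control' | 'treatment';

// FNV-1a (32-bit): stable across calls, processes, and deploys,
// unlike the Math.random() fallback it replaces.
function hashSessionId(sessionId: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < sessionId.length; i++) {
    hash ^= sessionId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0;
}

function assignVariant(sessionId: string, rolloutPercent: number): Variant | null {
  const bucket = hashSessionId(sessionId) % 100;
  // Sessions outside the rollout percentage are not enrolled at all.
  if (bucket >= rolloutPercent) return null;
  // Enrolled sessions split evenly between control and treatment.
  return bucket % 2 === 0 ? 'control' : 'treatment';
}
```

Because the same sessionId always lands in the same bucket, repeated calls can no longer emit conflicting exposure events, which also makes deduplication at the module boundary straightforward.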

2. Canonical metric definition (src/analytics/onboardingMetrics.ts)

  • Add canonicalConversionSummary — denominator: unique users with both signup_completed and experiment_exposure; numerator: the subset who also have onboarding_completed.
  • Add metricDiscrepancyCheck that warns when the absolute difference between the canonical and legacy conversion rates exceeds a configurable threshold (default 2 pp). A sketch of both functions follows this list.
  • Deprecate productConversionSummary and engineeringConversionSummary (keep them for backward compatibility).
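
To make the canonical definition concrete, here is a sketch assuming hypothetical event and summary shapes; the real types in src/analytics/onboardingMetrics.ts may differ.

```ts
// Assumed shapes, invented for illustration.
interface AnalyticsEvent {
  userId: string | null;
  sessionId: string;
  name: 'experiment_exposure' | 'signup_completed' | 'onboarding_completed';
}

interface ConversionSummary {
  denominator: number; // unique users with both signup and exposure events
  numerator: number;   // the subset that also completed onboarding
  rate: number;
}

function canonicalConversionSummary(events: AnalyticsEvent[]): ConversionSummary {
  const usersWith = (name: AnalyticsEvent['name']) =>
    new Set(events.filter(e => e.name === name && e.userId !== null).map(e => e.userId!));
  const exposed = usersWith('experiment_exposure');
  const signedUp = usersWith('signup_completed');
  const completed = usersWith('onboarding_completed');

  // Requiring both signup and exposure excludes retries, bots, and
  // pre-signup sessions by construction.
  const denominator = [...signedUp].filter(id => exposed.has(id));
  const numerator = denominator.filter(id => completed.has(id));
  return {
    denominator: denominator.length,
    numerator: numerator.length,
    rate: denominator.length === 0 ? 0 : numerator.length / denominator.length,
  };
}

// Flags a discrepancy when canonical and legacy rates diverge by more
// than `thresholdPp` percentage points (default 2 pp, per the PRD).
function metricDiscrepancyCheck(canonicalRate: number, legacyRate: number, thresholdPp = 2): boolean {
  return Math.abs(canonicalRate - legacyRate) * 100 > thresholdPp;
}
```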

3. Rollout controls (modify assignment + new src/onboarding/rolloutGuardrails.ts)

  • RolloutStage type with allowed increments: 5% → 10% → 20% → 50% → 100%.
  • 24-hour minimum soak time between stages.
  • evaluateGuardrails → returns pause | hold | advance based on: error rate > 1%, canonical conversion drop > 5 pp vs control, event-pipeline lag > 30 min, metric discrepancy above threshold.
  • Rollout advances require an approver field (eng manager or PM). A sketch of the evaluation function follows this list.
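
A type-level sketch of the guardrail evaluation as a pure function. The conditions and the pause/hold/advance actions come from the PRD; the input field names are assumptions.

```ts
// Allowed rollout stages per the PRD: 5% → 10% → 20% → 50% → 100%.
type RolloutStage = 5 | 10 | 20 | 50 | 100;

// Assumed input shape; how these metrics are gathered is out of scope.
interface GuardrailInputs {
  errorRate: number;               // fraction, 0..1
  conversionDropPp: number;        // canonical drop vs control, in pp
  pipelineLagMinutes: number;
  metricDiscrepancy: boolean;      // output of metricDiscrepancyCheck
  msSinceLastStageChange: number;  // injected, not read from a wall clock
  soakTimeMs: number;              // 24-hour minimum between stages
  approver?: string;               // eng manager or PM
}

interface GuardrailResult {
  action: 'pause' | 'hold' | 'advance';
  reasons: string[];
}

function evaluateGuardrails(inputs: GuardrailInputs): GuardrailResult {
  // Any tripped guardrail pauses the rollout; all reasons are reported.
  const reasons: string[] = [];
  if (inputs.errorRate > 0.01) reasons.push('error rate > 1%');
  if (inputs.conversionDropPp > 5) reasons.push('canonical conversion drop > 5 pp vs control');
  if (inputs.pipelineLagMinutes > 30) reasons.push('event-pipeline lag > 30 min');
  if (inputs.metricDiscrepancy) reasons.push('metric discrepancy above threshold');
  if (reasons.length > 0) return { action: 'pause', reasons };

  // No guardrail tripped: hold until soak time elapses and an approver signs off.
  if (inputs.msSinceLastStageChange < inputs.soakTimeMs) {
    return { action: 'hold', reasons: ['24-hour soak time not yet elapsed'] };
  }
  if (!inputs.approver) {
    return { action: 'hold', reasons: ['advance requires an approver'] };
  }
  return { action: 'advance', reasons: [] };
}
```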

User Stories

  1. As a PM, I want a single canonical conversion metric so that dashboard choice cannot flip a launch decision.
  2. As an engineer, I want deterministic variant assignment for anonymous sessions so that exposure counts are stable and auditable.
  3. As a data analyst, I want an automated discrepancy check so that silent denominator drift is caught before rollout advances.
  4. As a QA engineer, I want explicit guardrail thresholds and soak-time rules so I can write launch-readiness checks with clear pass/fail criteria.
  5. As an engineering manager, I want rollout advances gated on guardrail evaluation so ramp decisions are traceable and reversible.

Implementation Decisions

  • Modify src/onboarding/experimentAssignment.ts — deterministic anonymous assignment via sessionId hash; AssignmentContext gains required sessionId.
  • Modify src/analytics/onboardingMetrics.ts — add canonical summary + discrepancy check; deprecate legacy functions.
  • Create src/onboarding/rolloutGuardrails.ts — RolloutStage, evaluateGuardrails, soak-time enforcement, approver gate.
  • ExperimentConfig gains rolloutStage, lastStageChangeMs, soakTimeMs.
  • New GuardrailResult type: { action: 'pause' | 'hold' | 'advance'; reasons: string[] }.
  • Guardrail evaluation is a pure function (no side effects). Orchestration that acts on results is out of scope.
  • All thresholds are config fields — tunable without code changes (see the config sketch below).
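
For illustration, the ExperimentConfig additions might read as below; only the fields named in this list come from the PRD, and the threshold field names are assumptions.

```ts
// Fields added to ExperimentConfig per the PRD; existing fields elided.
interface ExperimentConfig {
  rolloutStage: 5 | 10 | 20 | 50 | 100;
  lastStageChangeMs: number;        // epoch ms of the last stage change
  soakTimeMs: number;               // e.g. 24 * 60 * 60 * 1000

  // Tunable guardrail thresholds (the "no code changes" decision);
  // names are assumptions, default values are the PRD's.
  maxErrorRate: number;             // 0.01
  maxConversionDropPp: number;      // 5
  maxPipelineLagMinutes: number;    // 30
  discrepancyThresholdPp: number;   // 2
}
```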

Testing Decisions

  • Node built-in test runner (node --test); an example test follows this list.
  • experimentAssignment.ts — deterministic output for same sessionId, eligibility filtering, no Math.random path remains.
  • onboardingMetrics.ts — canonical summary correctness, discrepancy check triggers at threshold, deprecated functions still correct.
  • rolloutGuardrails.ts — stage ordering, soak-time rejection, each guardrail condition independently triggers pause, happy-path advance.
  • Inject timestamps; no wall-clock dependency.
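
An example of what one launch-readiness test could look like with the built-in runner, reusing the evaluateGuardrails sketch above (the import path is assumed):

```ts
import { test } from 'node:test';
import assert from 'node:assert/strict';
// Path assumed from the module layout proposed above.
import { evaluateGuardrails } from '../src/onboarding/rolloutGuardrails';

test('holds when soak time has not elapsed', () => {
  // Timestamps are injected as plain numbers; no wall-clock reads.
  const result = evaluateGuardrails({
    errorRate: 0.001,
    conversionDropPp: 0,
    pipelineLagMinutes: 5,
    metricDiscrepancy: false,
    msSinceLastStageChange: 60 * 60 * 1000, // 1 hour since last stage change
    soakTimeMs: 24 * 60 * 60 * 1000,        // 24-hour minimum soak
    approver: 'eng-manager',
  });
  assert.equal(result.action, 'hold');
});
```

Run with `node --test` after compiling, or directly via a TypeScript-aware loader.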

Out of Scope

  • Onboarding UX redesign.
  • Analytics vendor migration.
  • Orchestration layer that acts on guardrail results.
  • Non-onboarding experiments.

Further Notes

  • Incident ref: docs/incidents/2026-04-signup-ab-metric-mismatch.md
  • Stakeholder input: docs/stakeholders/sprint-planning-notes.md
  • Open question deferred to retro: whether synthetic event replay is required before ramping past 20%.

The PRD is built directly from the incident report, the sprint planning notes, and the actual code issues (the Math.random() fallback in assignment, the divergent denominator logic in metrics). Once you sort out repo access, I can create the issue with one command — just let me know.


External Tools

Tool: github
Type: mcp

Permissions

Scope
  • filesystem:read
  • network:outbound

SKILL.md

This skill will be invoked when the user wants to create a PRD. You may skip steps if you don't consider them necessary.

  1. Ask the user for a long, detailed description of the problem they want to solve and any potential ideas for solutions.

  2. Explore the repo to verify their assertions and understand the current state of the codebase.

  3. Interview the user relentlessly about every aspect of this plan until you reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one.

  4. Sketch out the major modules you will need to build or modify to complete the implementation. Actively look for opportunities to extract deep modules that can be tested in isolation.

A deep module (as opposed to a shallow module) is one which encapsulates a lot of functionality in a simple, testable interface which rarely changes.

Check with the user that these modules match their expectations. Check with the user which modules they want tests written for.

  5. Once you have a complete understanding of the problem and solution, use the template below to write the PRD. The PRD should be submitted as a GitHub issue.
<prd-template>

Problem Statement

The problem that the user is facing, from the user's perspective.

Solution

The solution to the problem, from the user's perspective.

User Stories

A LONG, numbered list of user stories. Each user story should be in the format of:

  1. As an <actor>, I want a <feature>, so that <benefit>
<user-story-example>
  1. As a mobile bank customer, I want to see the balance on my accounts, so that I can make better informed decisions about my spending
</user-story-example>

This list of user stories should be extremely extensive and cover all aspects of the feature.

Implementation Decisions

A list of implementation decisions that were made. This can include:

  • The modules that will be built/modified
  • The interfaces of those modules that will be modified
  • Technical clarifications from the developer
  • Architectural decisions
  • Schema changes
  • API contracts
  • Specific interactions

Do NOT include specific file paths or code snippets. They may end up being outdated very quickly.

Testing Decisions

A list of testing decisions that were made. Include:

  • A description of what makes a good test (only test external behavior, not implementation details)
  • Which modules will be tested
  • Prior art for the tests (i.e. similar types of tests in the codebase)

Out of Scope

A description of the things that are out of scope for this PRD.

Further Notes

Any further notes about the feature.

</prd-template>

FAQ

What does write-a-prd do?

Create a PRD through user interview, codebase exploration, and module design, then submit it as a GitHub issue. Use when the user wants to write a PRD, create a product requirements document, or plan a new feature.

When should I use write-a-prd?

Use it when you need a repeatable workflow that produces a text report.

What does write-a-prd output?

In the evaluated run, it produced a text report.

How do I install or invoke write-a-prd?

Ask the agent to use this skill when the task matches its documented workflow.

Which agents does write-a-prd support?

Agent support is inferred from the source, but not explicitly declared.

What tools, channels, or permissions does write-a-prd need?

It uses the github MCP tool; its channel is text; its permissions are filesystem:read and network:outbound.

Is write-a-prd safe to install?

Static analysis marked this skill as medium risk; review side effects and permissions before enabling it.

How is write-a-prd different from an MCP or plugin?

A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.

Does write-a-prd outperform not using a skill?

About write-a-prd

When to use write-a-prd

  • When a team wants to turn a feature idea into a detailed PRD with implementation and testing guidance.
  • When you need the agent to inspect the existing codebase before documenting a proposed feature.
  • When the final PRD should be tracked publicly in the repository as a GitHub issue.

When write-a-prd is not the right choice

  • When you only need a local draft document and do not want anything posted to GitHub.
  • When there is no repository context or no intention to capture the result as an issue.

What it produces

Produces a text report: the PRD itself, submitted as a GitHub issue.