## Problem Statement

The `signup-onboarding-v2` experiment produced conflicting conversion metrics across dashboards (+8.4% on the Product dashboard vs −3.1% on the Engineering dashboard), blocking sprint planning and triggering a SEV-2 incident. Root causes:

1. Non-deterministic assignment for anonymous users — `assignOnboardingVariant` falls back to `Math.random()` when `userId` is absent, generating duplicate/inconsistent exposure events.
2. Misaligned denominators — Product uses `signup_completed` users; Engineering uses raw `experiment_exposure` session count (includes retries, bots, pre-signup sessions).
3. No rollout guardrails — rollout jumped 20% → 50% with no soak time, no metric-agreement gate, and no auto-pause conditions.

## Solution

A guardrails layer for onboarding experiments covering three areas:

### 1. Deterministic assignment (`src/onboarding/experimentAssignment.ts`)
- Replace the `Math.random()` anonymous fallback with a deterministic hash of `sessionId` (already on `AnalyticsEvent`).
- Deduplicate exposure events at the module boundary.
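A minimal sketch of the deterministic fallback, assuming an FNV-1a hash over `sessionId`; the hash choice and 50/50 split are illustrative, not the final design:

```typescript
type Variant = 'control' | 'treatment';

// FNV-1a 32-bit hash: the same sessionId always maps to the same integer,
// so repeat calls within a session cannot disagree (unlike Math.random()).
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

function assignAnonymousVariant(sessionId: string): Variant {
  // Normalize the hash into [0, 1) and bucket; deterministic per session.
  const bucket = fnv1a(sessionId) / 0x1_0000_0000;
  return bucket < 0.5 ? 'control' : 'treatment';
}
```

Because assignment is a pure function of `sessionId`, duplicate exposure events from the same session carry the same variant and can be safely deduplicated downstream.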

### 2. Canonical metric definition (`src/analytics/onboardingMetrics.ts`)
- Add `canonicalConversionSummary` — denominator: unique users with `signup_completed` ∩ `experiment_exposure`; numerator: subset who also have `onboarding_completed`.
- Add `metricDiscrepancyCheck` that warns when canonical vs legacy absolute conversion-rate diff exceeds a configurable threshold (default 2 pp).
- Deprecate `productConversionSummary` and `engineeringConversionSummary` (keep for backward compat).
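The canonical definition above can be sketched as follows; the `AnalyticsEvent` shape shown here is an assumption (only `sessionId` is confirmed elsewhere in this doc), and the 2 pp default mirrors the decision above:

```typescript
interface AnalyticsEvent {
  userId: string; // assumed field; real event schema may differ
  name: 'signup_completed' | 'experiment_exposure' | 'onboarding_completed';
}

function canonicalConversionSummary(events: AnalyticsEvent[]) {
  const usersWith = (n: AnalyticsEvent['name']) =>
    new Set(events.filter((e) => e.name === n).map((e) => e.userId));
  const exposed = usersWith('experiment_exposure');
  const signedUp = usersWith('signup_completed');
  // Denominator: unique users with signup_completed ∩ experiment_exposure.
  const denominator = Array.from(signedUp).filter((u) => exposed.has(u));
  // Numerator: the subset that also completed onboarding.
  const completed = usersWith('onboarding_completed');
  const numerator = denominator.filter((u) => completed.has(u));
  return {
    denominator: denominator.length,
    numerator: numerator.length,
    rate: denominator.length ? numerator.length / denominator.length : 0,
  };
}

// Warn when canonical vs legacy conversion rates diverge by more than
// thresholdPp percentage points (default 2 pp, per the decision above).
function metricDiscrepancyCheck(
  canonicalRate: number,
  legacyRate: number,
  thresholdPp = 2,
): boolean {
  return Math.abs(canonicalRate - legacyRate) * 100 > thresholdPp;
}
```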

### 3. Rollout controls (modify assignment + new `src/onboarding/rolloutGuardrails.ts`)
- `RolloutStage` type with allowed increments: 5% → 10% → 20% → 50% → 100%.
- 24-hour minimum soak time between stages.
- `evaluateGuardrails` → returns `pause | hold | advance` based on: error rate > 1%, canonical conversion drop > 5 pp vs control, event-pipeline lag > 30 min, metric discrepancy above threshold.
- Rollout advances require an `approver` field (eng manager or PM).
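A sketch of `evaluateGuardrails` as a pure function; the input field names below are assumptions, while the thresholds match the conditions listed above:

```typescript
type GuardrailAction = 'pause' | 'hold' | 'advance';
interface GuardrailResult { action: GuardrailAction; reasons: string[] }

interface GuardrailInputs {
  errorRate: number;            // 0..1
  conversionDropPp: number;     // canonical conversion drop vs control, in pp
  pipelineLagMin: number;       // event-pipeline lag in minutes
  discrepancyPp: number;        // canonical vs legacy diff, in pp
  discrepancyThresholdPp: number;
  soakElapsedMs: number;        // injected, not read from the wall clock
  soakTimeMs: number;
}

function evaluateGuardrails(m: GuardrailInputs): GuardrailResult {
  const reasons: string[] = [];
  if (m.errorRate > 0.01) reasons.push('error rate > 1%');
  if (m.conversionDropPp > 5) reasons.push('conversion drop > 5 pp vs control');
  if (m.pipelineLagMin > 30) reasons.push('event-pipeline lag > 30 min');
  if (m.discrepancyPp > m.discrepancyThresholdPp) {
    reasons.push('metric discrepancy above threshold');
  }
  // Any breached condition pauses the rollout.
  if (reasons.length > 0) return { action: 'pause', reasons };
  // Healthy metrics but soak time not yet elapsed: hold at the current stage.
  if (m.soakElapsedMs < m.soakTimeMs) {
    return { action: 'hold', reasons: ['soak time not elapsed'] };
  }
  return { action: 'advance', reasons: [] };
}
```

Keeping this pure (metrics in, decision out) is what makes each ramp decision traceable: the orchestration layer, which is out of scope here, only has to log the inputs and the returned `reasons`.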

## User Stories

1. As a **PM**, I want a single canonical conversion metric so that dashboard choice cannot flip a launch decision.
2. As an **engineer**, I want deterministic variant assignment for anonymous sessions so that exposure counts are stable and auditable.
3. As a **data analyst**, I want an automated discrepancy check so that silent denominator drift is caught before rollout advances.
4. As a **QA engineer**, I want explicit guardrail thresholds and soak-time rules so I can write launch-readiness checks with clear pass/fail criteria.
5. As an **engineering manager**, I want rollout advances gated on guardrail evaluation so ramp decisions are traceable and reversible.

## Implementation Decisions

- Modify `src/onboarding/experimentAssignment.ts` — deterministic anonymous assignment via `sessionId` hash; `AssignmentContext` gains required `sessionId`.
- Modify `src/analytics/onboardingMetrics.ts` — add canonical summary + discrepancy check; deprecate legacy functions.
- Create `src/onboarding/rolloutGuardrails.ts` — `RolloutStage`, `evaluateGuardrails`, soak-time enforcement, approver gate.
- `ExperimentConfig` gains `rolloutStage`, `lastStageChangeMs`, `soakTimeMs`.
- New `GuardrailResult` type: `{ action: 'pause' | 'hold' | 'advance'; reasons: string[] }`.
- Guardrail evaluation is a pure function (no side effects). Orchestration that acts on results is out of scope.
- All thresholds are config fields — tunable without code changes.
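The type decisions above might look like the following; only `rolloutStage`, `lastStageChangeMs`, `soakTimeMs`, and `GuardrailResult` are specified in this doc, so the remaining fields are illustrative:

```typescript
// Allowed stages, in ramp order: 5% → 10% → 20% → 50% → 100%.
type RolloutStage = 5 | 10 | 20 | 50 | 100;
const ALLOWED_STAGES: RolloutStage[] = [5, 10, 20, 50, 100];

interface GuardrailResult { action: 'pause' | 'hold' | 'advance'; reasons: string[] }

interface ExperimentConfig {
  experimentId: string;           // illustrative existing field
  rolloutStage: RolloutStage;     // new: current stage percentage
  lastStageChangeMs: number;      // new: epoch ms of the last stage change
  soakTimeMs: number;             // new: minimum dwell time per stage
  discrepancyThresholdPp: number; // tunable without code changes
}

// Next allowed stage, or null at 100%; skipping stages is not permitted.
function nextStage(current: RolloutStage): RolloutStage | null {
  const i = ALLOWED_STAGES.indexOf(current);
  return i >= 0 && i < ALLOWED_STAGES.length - 1 ? ALLOWED_STAGES[i + 1] : null;
}
```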

## Testing Decisions

- Node built-in test runner (`node --test`).
- `experimentAssignment.ts` — deterministic output for same `sessionId`, eligibility filtering, no `Math.random` path remains.
- `onboardingMetrics.ts` — canonical summary correctness, discrepancy check triggers at threshold, deprecated functions still correct.
- `rolloutGuardrails.ts` — stage ordering, soak-time rejection, each guardrail condition independently triggers `pause`, happy-path `advance`.
- Inject timestamps; no wall-clock dependency.

## Out of Scope

- Onboarding UX redesign.
- Analytics vendor migration.
- Orchestration layer that acts on guardrail results.
- Non-onboarding experiments.

## Further Notes

- Incident ref: `docs/incidents/2026-04-signup-ab-metric-mismatch.md`
- Stakeholder input: `docs/stakeholders/sprint-planning-notes.md`
- Open question deferred to retro: whether synthetic event replay is required before ramping past 20%.
