# STAN Paper Draft -- Verification Report

**Date**: 2026-04-11
**Status**: DRAFT -- requires human verification before submission

---

## VERIFIED (from repo data)

These claims are directly supported by code and result files in the repository:

| Claim | Source | Status |
|-------|--------|--------|
| Architecture: dual spectral+temporal branches with cross-attention gate | `src/model.py` | VERIFIED |
| SpectralBranch: FFT -> top-k selection -> MHA -> interpolation | `src/model.py:26-64` | VERIFIED |
| TemporalBranch: 3-layer causal Transformer with GELU, 4x FFN | `src/model.py:67-91` | VERIFIED |
| CrossAttentionGate: bidirectional cross-attn + sigmoid gate | `src/model.py:94-117` | VERIFIED |
| EVT calibrator: GEV fit on 90th percentile tail, q=0.995 | `src/model.py:120-158` | VERIFIED |
| NT-Xent contrastive loss with tau=0.07 | `src/model.py:212-242` | VERIFIED |
| Training: MSE + lambda_cl * NT-Xent, lambda_cl=0.5 | `src/train.py:123-157` | VERIFIED |
| Augmentations: Gaussian jitter (std=0.01) + freq masking (10%) | `src/train.py:46-81` | VERIFIED |
| Hyperparameters: D=128, h=4, L=3, k=16, W=100, bs=64, lr=1e-4, epochs=50 | `src/train.py:160-179` | VERIFIED |
| All F1/precision/recall numbers in Tables 1-3 | `results/benchmarks.json` | VERIFIED |
| All ablation numbers in Tables 4-5 | `results/ablations.json` | VERIFIED |
| Efficiency: 2.8M params, 1.2ms/window, 410MB GPU | `results/ablations.json` (efficiency section) | VERIFIED |
| Point-adjust F1 with delay=7 implementation | `src/train.py:84-120` | VERIFIED |
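For reference, the point-adjust protocol with a detection-delay limit (the `delay=7` row above) works as follows: a ground-truth anomaly segment counts as detected only if an alarm fires within the first `delay` points of the segment; if so, predictions over the whole segment are set to 1 before computing point-wise F1. The sketch below is an illustrative reconstruction of this standard protocol, not a copy of the code at `src/train.py:84-120` -- verify the repo implementation matches before citing it in the paper.

```python
def point_adjust(pred, label, delay=7):
    """If any alarm fires within the first `delay` points of a ground-truth
    anomaly segment, mark the entire segment as detected."""
    pred = list(pred)
    n = len(label)
    i = 0
    while i < n:
        if label[i] == 1:
            j = i
            while j < n and label[j] == 1:
                j += 1  # segment is label[i:j]
            if any(pred[k] for k in range(i, min(i + delay, j))):
                for k in range(i, j):
                    pred[k] = 1
            i = j
        else:
            i += 1
    return pred

def f1_score(pred, label):
    """Point-wise F1 over binary sequences."""
    tp = sum(1 for p, l in zip(pred, label) if p and l)
    fp = sum(1 for p, l in zip(pred, label) if p and not l)
    fn = sum(1 for p, l in zip(pred, label) if not p and l)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

Note the delay cutoff: an alarm that first fires 8+ points into a segment does not trigger adjustment, which is the property distinguishing this protocol from vanilla point-adjust (cf. the Kim et al. 2022 critique cited below).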

## WEB-VERIFIED CITATIONS (added beyond original 8)

These 8 additional citations were found via web search and confirmed to exist.
BibTeX was NOT fetched programmatically from DOI -- spot-check all fields before camera-ready:

| # | Key | Paper | Verified Via |
|---|-----|-------|-------------|
| 1 | `park2018multimodal` | Park et al. 2018, LSTM-VAE (IEEE RA-L) | IEEE Xplore, DOI confirmed |
| 2 | `tuli2022tranad` | Tuli et al. 2022, TranAD (VLDB) | ACM DL, arXiv 2201.07284 |
| 3 | `siffer2017anomaly` | Siffer et al. 2017, EVT for AD (KDD) | ACM DL, DOI confirmed |
| 4 | `hundman2018detecting` | Hundman et al. 2018, MSL/SMAP (KDD) | ACM DL, DOI confirmed |
| 5 | `mathur2016swat` | Mathur & Tippenhauer 2016, SWaT | Web search confirmed |
| 6 | `abdulaal2021psm` | Abdulaal et al. 2021, PSM (KDD) | Web search confirmed |
| 7 | `kim2022towards` | Kim et al. 2022, PA critique (AAAI) | AAAI proceedings confirmed |
| 8 | `wu2021autoformer` | Wu et al. 2021, Autoformer (NeurIPS) | NeurIPS proceedings confirmed |

**NOTE on Park et al. 2018**: The repo generically references "Park et al. 2018" for LSTM-VAE. We used the Park et al. IEEE RA-L 2018 paper on LSTM-based VAE. If a different Park 2018 paper was intended, update the citation.

## CLAIMS THAT CANNOT BE VERIFIED FROM REPO ALONE

| Claim | Issue | Action Needed |
|-------|-------|---------------|
| "new state-of-the-art on all 5 benchmarks" | Baseline numbers in benchmarks.json are self-reported, not independently verified against original papers | Cross-check baseline F1 numbers against published results in each baseline's paper |
| Parameter count 2.8M | Stated in ablations.json but no automated parameter counting in code | Run `sum(p.numel() for p in model.parameters())` on the actual model |
| Inference time 1.2ms/window | No profiling script in repo | Reproduce timing benchmark on A100 |
| GPU memory 410MB | No memory profiling in repo | Reproduce memory measurement |
| "32% fewer parameters than Anomaly Transformer" | Relies on self-reported 4.1M for AT; verify against original paper | Check Xu et al. 2022 for reported param count |
| Single-seed results (seed=42) | No multi-seed experiments | Run 3-5 seeds to report mean +/- std |
| Baseline results for LSTM-VAE, OmniAnomaly | Unclear whether these numbers come from our re-implementation or from the original papers | State in the paper whether baselines were re-run or copied |
| SWaT improvement attributed to "periodic patterns" | Interpretive claim, not empirically tested | Consider adding per-dataset gate analysis (alpha values) |
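The parameter-count and timing checks above can be scripted along these lines. This is a hedged sketch against a generic `nn.Module`: the actual model class and its constructor arguments live in `src/model.py` and are not assumed here; run it against the real model before updating the paper's efficiency numbers.

```python
import time
import torch
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    """Total trainable parameter count (the check proposed for the 2.8M claim)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

@torch.no_grad()
def time_per_window(model: nn.Module, window: torch.Tensor,
                    warmup: int = 10, iters: int = 100) -> float:
    """Mean forward-pass latency in ms/window; synchronizes CUDA if applicable."""
    model.eval()
    for _ in range(warmup):
        model(window)
    if window.is_cuda:
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(window)
    if window.is_cuda:
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1e3

@torch.no_grad()
def peak_gpu_mb(model: nn.Module, window: torch.Tensor) -> float:
    """Peak GPU memory (MB) of one forward pass; requires a CUDA model/input."""
    torch.cuda.reset_peak_memory_stats()
    model(window)
    return torch.cuda.max_memory_allocated() / 2**20
```

For the 1.2ms and 410MB claims specifically, the measurement must be repeated on the same hardware stated in the paper (A100); CPU numbers are not comparable.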

## EXISTING CITATION ISSUES (original references.bib)

| Citation | Issue |
|----------|-------|
| `chen2020simclr` | Author listed as "Norber" -- the SimCLR co-author is Mohammad Norouzi; fix the misspelled name |
| All 8 original citations | BibTeX was provided in repo but was not fetched from DOI; spot-check against original sources |

## STRUCTURAL NOTES FOR SUBMISSION

- **Page count**: Main body fits within 9 pages (verified via compilation). References + appendix add ~2 pages.
- **Line numbers**: Enabled (review mode). Switch to `\usepackage[final]{neurips_2026}` for camera-ready.
- **Anonymous**: Author block shows "Anonymous Author(s)". Add real authors for camera-ready.
- **Figure 1**: NOT YET INCLUDED. NeurIPS reviewers expect a compelling Figure 1 (architecture diagram). This is the highest-priority addition.
- **Error bars**: Missing throughout. Single-seed results are a known reviewer concern. Run multi-seed if time permits.
- **Code release**: Checklist states "code will be released upon acceptance" -- confirm this commitment.

## PRIORITY ACTIONS BEFORE SUBMISSION

1. **[CRITICAL]** Create Figure 1 (architecture diagram) -- reviewers look here first
2. **[CRITICAL]** Cross-check baseline F1 numbers against original papers
3. **[HIGH]** Spot-check all 16 BibTeX entries against DOI/publisher (especially fields for the 8 web-verified ones)
4. **[HIGH]** Fix Chen et al. 2020 author name (repo bib has "Norber"; the SimCLR co-author is Mohammad Norouzi)
5. **[MEDIUM]** Run multi-seed experiments (3-5 seeds) for error bars
6. **[MEDIUM]** Verify parameter count with `sum(p.numel() ...)` and timing with A100 profiler
7. **[LOW]** Consider adding qualitative examples (anomaly visualizations)
