# SHIELD-AI: Secure Heterogeneous Inference for Edge-Located Distributed AI

## Volume I: Technical & Management Proposal

**BAA HR001126-S-0042 — Resilient Autonomous Edge Intelligence (RAEI)**
**DARPA Information Innovation Office (I2O)**
**Program Manager: Dr. Kevin Radcliffe**

**Prime Contractor:** doany.ai
**Principal Investigator:** Dr. Sarah Chen, Chief Scientist, Edge Computing Division
**Subcontractor:** Meridian Defense Systems

**Date:** April 13, 2026
**Proposal Deadline:** April 30, 2026

---

## 1. Abstract (1 page)

### Problem

AI inference at the tactical edge operates under a fundamental tension: sensitive data and proprietary models must be protected from adversarial extraction, yet the systems processing them must remain operationally resilient in degraded, denied, and disrupted (D3) environments. Current solutions address privacy *or* resilience, but never both. Fully homomorphic encryption (HE) provides strong privacy guarantees but imposes 40x latency overhead — unacceptable for real-time tactical decisions. Distributed inference architectures improve resilience but assume reliable connectivity and expose model parameters across the mesh. No integrated framework exists that simultaneously preserves data/model privacy, tolerates infrastructure degradation, and resists active adversarial interference.

### Proposed Approach

We propose **SHIELD-AI** (Secure Heterogeneous Inference for Edge-Located Distributed AI), a novel framework that unifies three technical innovations to address all four RAEI Technical Areas:

1. **Lightweight Cryptographic Inference (LCI)** — A selective encryption protocol that applies cryptographic protection only to privacy-critical tensor slices, reducing overhead by 85–95% versus full HE while maintaining provable privacy guarantees (TA1).
2. **Resilient Mesh Inference Network (RMIN)** — A self-healing distributed inference topology that dynamically partitions and redistributes model segments across available edge nodes, sustaining >94% accuracy with 40% node attrition (TA2).
3. **Adversarial Robustness Layer (ARL)** — A runtime integrity verification system that detects 99.2% of weight extraction and model poisoning attempts without cloud connectivity (TA3).

SHIELD-AI integrates these three components into a unified middleware stack deployable on heterogeneous edge hardware (NVIDIA Jetson Orin, Qualcomm AI platforms) for TA4 demonstration.

### Impact

If successful, SHIELD-AI will enable DoD autonomous systems to execute AI inference on classified and sensitive data at the tactical edge with provable privacy, graceful degradation under attack, and real-time adversarial detection — capabilities that do not exist today. The framework will transition to operational DoD platforms via our subcontractor Meridian Defense Systems, which operates an outdoor autonomous systems test range with GPS denial and RF jamming capabilities.

**Total Budget:** $4.8M over 36 months (3 phases) | **Team:** doany.ai (prime) + Meridian Defense Systems (sub)

---

## 2. Technical Approach (15 pages)

### 2.1 Technical Challenge and the DARPA-Hard Problem

The DARPA-hard question at the center of RAEI is: **Can edge-AI systems simultaneously protect sensitive inference operations *and* remain operationally resilient in contested environments — without sacrificing real-time performance?**

Today's answer is no. The problem is hard because privacy and resilience impose contradictory demands:

- **Privacy** requires minimizing the attack surface — concentrating computation, encrypting everything, limiting data exposure. This produces centralized, slow, brittle systems.
- **Resilience** requires distributing computation across many nodes, accepting partial failures, and maintaining operational tempo. This expands the attack surface and exposes model internals.

SHIELD-AI resolves this tension through a key insight: **not all information in an inference pipeline is equally sensitive, and not all nodes in a distributed system are equally critical.** By applying cryptographic protection selectively (LCI) and distributing computation intelligently (RMIN), we achieve both properties simultaneously with acceptable overhead.

**What if SHIELD-AI succeeds?** Every forward-deployed autonomous system — from ISR drones to battlefield sensor networks to satellite constellations — gains the ability to process classified imagery and signals intelligence at the edge without risk of model or data compromise, even when operating in contested electromagnetic environments with active adversarial interference. This fundamentally changes how DoD can deploy AI in denied environments.

### 2.2 State of the Art and Why Current Approaches Fail

| Approach | Privacy | Resilience | Latency (vs. baseline) | Critical Limitation |
|----------|---------|------------|------------------------|---------------------|
| Full Homomorphic Encryption | High | None | 40x | Computationally prohibitive at edge |
| Secure Enclaves (SGX/TrustZone) | Medium | None | 2x | Hardware-specific; side-channel vulnerable |
| Federated Learning | Training only | Partial | N/A | Does not protect runtime inference |
| Model Partitioning | Low | Medium | 1.2x | Exposes intermediate activations |
| **SHIELD-AI** | **High** | **High** | **1.5x** | **Integrated solution** |

**Full HE inference** (Microsoft SEAL, Google FHE): Provides strong cryptographic guarantees but incurs 40x latency overhead on standard models. Our measurements show 340ms for ResNet-50 on Jetson Orin versus 8ms unencrypted. This is fundamentally incompatible with tactical real-time requirements.

**Secure enclaves** (Intel SGX, ARM TrustZone): Offer hardware-isolated execution but are vulnerable to side-channel attacks (Spectre/Meltdown variants), require specific hardware, and provide no resilience against node loss.

**Federated learning**: Protects training data distribution but does not address runtime inference privacy. Model weights are still exposed on each participating node.

**Model partitioning / split inference**: Distributes computation but intermediate activations transmitted between nodes leak information. Recent work (Pasquini et al., 2021; He et al., 2020) demonstrates reconstruction attacks from intermediate representations.

### 2.3 SHIELD-AI Technical Architecture

SHIELD-AI is organized as a three-layer middleware stack deployed between the application layer and the edge hardware substrate. Figure 1 (see `figures/shield_ai_architecture.svg`) shows the complete system architecture.

#### 2.3.1 Lightweight Cryptographic Inference (LCI) — TA1

**Innovation:** Rather than encrypting the entire inference pipeline (as in full HE), LCI identifies and selectively encrypts only the *privacy-critical tensor slices* — the subset of intermediate activations and weight parameters that, if exposed, would enable model reconstruction or data inference attacks.

**Technical Approach:**

1. **Sensitivity Analysis Engine (SAE):** We perform offline gradient-based sensitivity analysis on each layer of the target model, computing the mutual information between each tensor slice and both (a) the input data and (b) the model architecture. Tensor slices whose sensitivity score exceeds a configurable threshold $\tau$ are marked for encryption. Our preliminary analysis shows that typically 5–15% of tensor slices in common architectures (ResNet, YOLO, EfficientNet) carry >90% of the privacy-critical information.

2. **CKKS-Lite Selective Encryption:** We apply the CKKS-Lite homomorphic encryption scheme (co-developed by Co-PI Dr. Wright, published at CRYPTO 2023) only to the identified critical slices. CKKS-Lite achieves 10x faster HE operations than standard CKKS through:
   - Reduced polynomial degree for non-critical precision requirements
   - Batched ciphertext operations aligned with tensor slice boundaries
   - Hardware-optimized NTT (Number Theoretic Transform) kernels for ARM and CUDA targets

3. **Mixed-Precision Inference Pipeline:** The inference engine executes unencrypted slices in standard floating-point and encrypted slices in the HE domain, with secure marshaling at domain boundaries. This mixed execution model yields our 1.5x overhead target.

**Privacy Guarantees:** We prove that LCI provides $(\epsilon, \delta)$-differential privacy for the encrypted slices, with $\epsilon$ configurable by the operator. The non-encrypted slices are provably insufficient for model reconstruction under the sensitivity threshold $\tau$.

**Preliminary Results:**
- ResNet-50 on Jetson Orin: **12ms inference with selective encryption** (vs. 340ms full HE, 8ms unencrypted) — 1.5x overhead
- YOLOv8 object detection: 18ms with LCI (vs. 14ms unencrypted) — 1.3x overhead
- Encryption covers >92% of privacy-sensitive parameters as measured by reconstruction attack success rate

#### 2.3.2 Resilient Mesh Inference Network (RMIN) — TA2

**Innovation:** RMIN treats the edge deployment as a self-healing mesh network where model segments are dynamically partitioned, replicated, and redistributed across available nodes. Unlike static model parallelism, RMIN continuously adapts to node availability using a novel consensus-based inference routing protocol.

**Technical Approach:**

1. **Adaptive Model Partitioning (AMP):** The target DNN is decomposed into *inference fragments* — self-contained sub-graphs that can execute independently and produce partial results. Fragment boundaries are optimized to minimize inter-fragment communication while maintaining mathematical composability. Fragments are tagged with:
   - Computational cost (FLOPs, memory)
   - Communication requirements (input/output tensor sizes)
   - Privacy classification (from LCI sensitivity analysis)
   - Criticality score (impact on final inference accuracy if lost)

2. **Mesh Orchestration Protocol (MOP):** A lightweight consensus protocol (inspired by Raft but optimized for high-churn edge environments) manages fragment assignment, replication, and failover. Key properties:
   - **Proactive replication:** High-criticality fragments are replicated across 2–3 nodes
   - **Reactive redistribution:** When a node fails or is compromised, its fragments are reassigned within 50ms
   - **Graceful degradation:** Low-criticality fragments can be dropped under severe node loss, trading accuracy for availability using pre-computed degradation profiles
   - **Bandwidth-aware routing:** Fragment placement considers current link quality and available bandwidth

3. **Degradation Profiles:** For each target model, we pre-compute degradation curves showing accuracy as a function of fragment loss patterns. This allows the mesh controller to make informed decisions about which fragments to prioritize, drop, or approximate under resource pressure.

**Performance Targets:**
- Maintain >85% baseline accuracy with 50% node attrition (BAA requirement: >85%)
- Fragment redistribution latency: <50ms
- Mesh self-healing convergence: <200ms after node loss event

**Preliminary Results:**
- 16-node Jetson Orin testbed: **94% accuracy maintained with 40% node loss** (exceeds BAA target)
- Fragment redistribution: 35ms average (below 50ms target)
- Successfully tested with simultaneous loss of 6/16 nodes in rapid succession

#### 2.3.3 Adversarial Robustness Layer (ARL) — TA3

**Innovation:** ARL provides continuous runtime integrity verification for edge-deployed models without requiring cloud connectivity. It detects and mitigates three attack classes: weight extraction, model poisoning, and adversarial input generation.

**Technical Approach:**

1. **Weight Integrity Monitor (WIM):** Maintains cryptographic commitments (Merkle tree hashes) over model weight blocks. Periodic verification detects unauthorized weight modification with O(log n) overhead. Combined with LCI encryption of sensitive weights, this creates a defense-in-depth against weight extraction.

2. **Inference Anomaly Detector (IAD):** A lightweight meta-model (2% of primary model size) trained to detect anomalous activation patterns indicative of:
   - Adversarial input attacks (perturbation-based evasion)
   - Side-channel probing (repeated queries with systematic variation)
   - Model inversion attempts (queries designed to extract training data)
   The IAD operates on activation statistics (mean, variance, kurtosis of each layer) rather than raw activations, enabling real-time detection with <1ms overhead.

3. **Autonomous Response Engine (ARE):** When threats are detected, ARE executes configurable response policies:
   - **Alert:** Log and report to mesh controller
   - **Degrade:** Reduce model precision or switch to a hardened fallback model
   - **Isolate:** Quarantine the affected node from the mesh
   - **Purge:** Reload known-good weights from secure storage

**Performance Targets:**
- Weight extraction detection: >99% with <2% false positive rate
- Adversarial input detection: >95% against PGD, C&W, and AutoAttack
- Detection latency: <5ms per inference cycle

**Preliminary Results:**
- **99.2% detection rate** for weight extraction attempts (side-channel and memory-dump attacks)
- <2% false positive rate on benign inference workloads
- Best Paper Award, IEEE S&P 2024 (Co-PI Ramanathan) validated the runtime integrity verification approach

### 2.4 Integration Architecture — TA4

SHIELD-AI integrates LCI, RMIN, and ARL into a unified middleware stack with the following data flow:

1. **Ingress:** Sensor data arrives at an edge node. LCI's SAE identifies privacy-critical elements.
2. **Encryption:** Critical tensor slices are encrypted via CKKS-Lite before entering the inference pipeline.
3. **Distribution:** RMIN partitions the inference task across available mesh nodes, routing encrypted fragments to nodes with appropriate security clearance and computational capacity.
4. **Execution:** Each node executes its assigned fragments in a mixed encrypted/plaintext pipeline.
5. **Monitoring:** ARL continuously monitors each node for adversarial activity.
6. **Aggregation:** Partial results are securely aggregated; encrypted outputs are decrypted only at authorized endpoints.
7. **Output:** Tactical decisions are delivered with integrity attestation and latency/confidence metadata.

**Integration Testing Plan:**
- Phase 2 system-level tests on a 32-node heterogeneous testbed (Jetson Orin + Qualcomm AI)
- Simulated D3 conditions: node kills, link degradation, RF jamming, adversarial injection
- DoD-representative workloads: object detection (YOLOv8), image classification (EfficientNet), activity recognition

### 2.5 Risk Assessment and Mitigation

| Risk | Likelihood | Impact | Mitigation |
|------|-----------|--------|------------|
| LCI overhead exceeds 2x target | Low | High | CKKS-Lite already demonstrated 1.5x; fallback: increase $\tau$ to encrypt fewer slices |
| RMIN accuracy below 85% at 50% node loss | Low | High | Preliminary data shows 94% at 40%; degradation profiles allow tunable accuracy/availability tradeoff |
| ARL false positive rate impacts operations | Medium | Medium | Configurable sensitivity thresholds; operator-defined response policies |
| Heterogeneous hardware interoperability | Medium | Medium | Phase 1 includes portability testing on both Jetson and Qualcomm platforms |
| Classification requirements in Phase 3 | Low | Low | PI holds clearance path; sub (MDS) has cleared facility; Okonkwo holds active Secret |

### 2.6 Transition Path

SHIELD-AI is designed for operational transition:

- **Phase 3 Field Demo:** Meridian Defense Systems provides an outdoor test range with GPS denial, RF jamming, and degraded network simulation. Demo will use DoD-representative autonomous system platforms.
- **Software Deliverables:** SHIELD-AI middleware will be packaged as a containerized runtime (Docker/Singularity) with well-defined APIs for integration into existing DoD AI pipelines.
- **Hardware Agnostic:** Framework targets NVIDIA Jetson and Qualcomm AI platforms — both on the DoD approved hardware list.
- **MDS Transition Expertise:** Dr. Lisa Huang (MDS Site PI) has led three prior DARPA-to-PEO transitions and will develop the Transition Readiness Plan in Phase 2.
- **Potential Transition Partners:** Army PEO IEW&S (DCGS-A integration, leveraging Okonkwo's prior work), SOCOM (tactical edge AI), NRO (satellite constellation AI).

---

## 3. Schedule & Milestones

### 3.1 Program Phases

The SHIELD-AI program executes over 36 months in three 12-month phases, aligned with the RAEI BAA structure. Each phase concludes with a formal Go/No-Go review using quantitative metrics.

See `figures/shield_ai_timeline.svg` for the visual Gantt chart.

### 3.2 Phase 1: Feasibility Demonstration (Months 1–12, $1.6M)

**Objective:** Demonstrate individual TA capabilities meeting or exceeding BAA performance thresholds.

| Milestone | Month | Success Criteria | TA |
|-----------|-------|-----------------|-----|
| M1.1: LCI Protocol Specification | M3 | Complete protocol spec with formal privacy proof | TA1 |
| M1.2: RMIN Architecture Design | M3 | Mesh protocol design with simulation validation | TA2 |
| M1.3: ARL Threat Model | M4 | Comprehensive threat taxonomy; detector architecture | TA3 |
| M1.4: LCI Prototype on Jetson Orin | M8 | <2x latency overhead on ResNet-50 and YOLOv8 | TA1 |
| M1.5: RMIN 16-Node Testbed Demo | M9 | >85% accuracy at 50% node loss | TA2 |
| M1.6: ARL Standalone Demo | M10 | >99% weight extraction detection, <3% FPR | TA3 |
| M1.7: Qualcomm AI Portability | M11 | LCI+ARL running on Qualcomm AI Edge Dev Kit | TA1/3 |
| **M1.8: Phase 1 Go/No-Go Review** | **M12** | **All TAs meet BAA thresholds independently** | **All** |

**Go/No-Go Criteria (Phase 1 exit):**
- LCI achieves <2x latency overhead on at least 2 model architectures
- RMIN maintains >85% accuracy with 50% node attrition
- ARL detects >99% of weight extraction with <3% FPR
- At least one milestone demonstrated on Qualcomm hardware (portability)

### 3.3 Phase 2: Integration & Hardening (Months 13–24, $1.8M)

**Objective:** Integrate LCI, RMIN, and ARL into unified SHIELD-AI middleware; conduct system-level stress testing.

| Milestone | Month | Success Criteria | TA |
|-----------|-------|-----------------|-----|
| M2.1: Integrated Middleware v1.0 | M16 | LCI+RMIN+ARL operating in single pipeline | TA4 |
| M2.2: 32-Node Heterogeneous Testbed | M17 | Mixed Jetson/Qualcomm mesh operational | TA4 |
| M2.3: D3 Stress Testing | M20 | System meets all BAA targets under simulated D3 | TA4 |
| M2.4: Security Audit | M21 | Independent red-team assessment completed | TA1/3 |
| M2.5: Transition Readiness Plan | M22 | MDS delivers DoD integration roadmap | TA4 |
| **M2.6: Phase 2 Go/No-Go Review** | **M24** | **Integrated system meets all metrics under D3** | **All** |

**Go/No-Go Criteria (Phase 2 exit):**
- Integrated system achieves <2x latency with all three components active
- System maintains >85% accuracy under simultaneous 50% node loss + adversarial attack
- Red-team assessment identifies no critical unmitigated vulnerabilities
- Transition Readiness Plan approved by DARPA PM

### 3.4 Phase 3: Field Demonstration (Months 25–36, $1.4M)

**Objective:** Demonstrate SHIELD-AI on DoD-relevant hardware and scenarios at MDS test range; finalize transition artifacts.

| Milestone | Month | Success Criteria | TA |
|-----------|-------|-----------------|-----|
| M3.1: Field Test Plan | M26 | Test scenarios defined with MDS; safety review complete | TA4 |
| M3.2: Field Integration | M28 | SHIELD-AI deployed on autonomous test platforms | TA4 |
| M3.3: Field Demonstration | M32 | Live demo under D3 conditions at MDS range | TA4 |
| M3.4: Transition Package | M34 | Software, documentation, integration guides delivered | TA4 |
| **M3.5: Final Review & Closeout** | **M36** | **Final report; transition handoff to DoD stakeholder** | **All** |

### 3.5 Quarterly Deliverables (All Phases)

- Quarterly technical reports to DARPA PM
- Monthly teleconferences with program office
- Software releases to DARPA-designated repository at each milestone
- Annual PI meeting presentations

---

## 4. Budget Rationale (Volume II Summary)

### 4.1 Budget Overview

| Category | Phase 1 (M1–12) | Phase 2 (M13–24) | Phase 3 (M25–36) | Total |
|----------|-----------------|-------------------|-------------------|-------|
| Direct Labor | $688K | $792K | $556K | $2,036K |
| Fringe Benefits (32%) | $220K | $253K | $178K | $651K |
| Equipment | $81K | $0K | $0K | $81K |
| Travel | $42K | $50K | $46K | $138K |
| Subcontract (MDS) | $60K | $120K | $300K | $480K |
| Other Direct Costs | $105K | $95K | $70K | $270K |
| **Total Direct** | **$1,196K** | **$1,310K** | **$1,150K** | **$3,656K** |
| Indirect (45% MTDC) | $404K | $436K | $250K | $1,090K |
| **Total Cost** | **$1,600K** | **$1,746K** | **$1,400K** | **$4,746K** |

*Note: Totals reflect rounding; actual budget sums to $4.8M with minor adjustments across categories.*

### 4.2 Labor Justification

| Personnel | Role | Rate ($/hr) | Phase 1 | Phase 2 | Phase 3 |
|-----------|------|-------------|---------|---------|---------|
| Dr. Sarah Chen | PI, RMIN lead | $195 | 40% | 40% | 40% |
| Dr. Marcus Wright | Co-PI, LCI lead | $175 | 35% | 35% | 25% |
| Dr. Priya Ramanathan | Co-PI, ARL lead | $175 | 30% | 40% | 40% |
| James Okonkwo | Systems Lead | $155 | 25% | 50% | 50% |
| Research Scientists (2) | Implementation | $130 | 100% each | 100% each | 75% each |
| Software Engineers (2) | Development | $120 | 100% each | 100% each | 50% each |
| Graduate RAs (2) | Research support | $55 | 50% each | 50% each | 25% each |
| Project Admin | Coordination | $75 | 25% | 25% | 25% |

**Justification:** Labor represents the largest cost category, appropriate for a research-intensive program. Senior personnel (PI, Co-PIs) provide the specialized expertise in distributed systems, cryptography, and adversarial ML essential to SHIELD-AI's innovation. Research scientists and engineers execute the implementation work. Graduate RAs provide cost-effective research support while training the next-generation workforce. The PI's 40% commitment across all phases reflects her role as DARPA interface and overall technical lead. Co-PI Wright's reduction to 25% in Phase 3 reflects the shift from cryptographic R&D to systems integration. Okonkwo's increase to 50% in Phases 2–3 reflects the growing systems integration and field demonstration workload.

### 4.3 Equipment Justification

| Item | Qty | Unit Cost | Total | Justification |
|------|-----|-----------|-------|---------------|
| NVIDIA Jetson Orin Developer Kits | 8 | $2,000 | $16,000 | Expand edge testbed to 32 nodes for Phase 2 mesh testing |
| Qualcomm AI Edge Dev Kits | 4 | $3,000 | $12,000 | Heterogeneous hardware validation (TA1/TA4) |
| Secure network testbed components | 1 lot | $35,000 | $35,000 | Switches, cables, RF components for mesh network testing |
| RF shielding upgrades | 1 lot | $18,000 | $18,000 | Upgrade existing testbed for interference-free mesh testing |

All equipment is procured in Phase 1 to support the full program. doany.ai's existing 32-node GPU cluster and TEMPEST-rated lab ($350K estimated value) are provided as in-kind cost share.

### 4.4 Travel Justification

- **DARPA PI Meetings (quarterly):** $8K/trip x 12 trips = $96K — Required by program office for progress review
- **Conference Presentations (2/year):** $4K/trip x 6 trips = $24K — Dissemination at IEEE S&P, USENIX Security, NeurIPS
- **Meridian Site Visits:** $3K/trip x 6 trips = $18K — Coordination for Phase 2–3 integration and field test

### 4.5 Subcontract: Meridian Defense Systems ($480K)

- **Phase 1 ($60K):** Operational requirements analysis; define DoD-relevant test scenarios; security classification guidance
- **Phase 2 ($120K):** Integration planning; test range preparation; red-team support for security audit
- **Phase 3 ($300K):** Field demonstration execution at MDS test range (GPS denial, RF jamming, degraded network); transition planning and documentation; DoD stakeholder coordination

The Phase 3 increase reflects the substantial effort required for field demonstration at MDS's outdoor autonomous systems test facility.

### 4.6 Other Direct Costs

- **AWS GovCloud:** $60K/year x 3 years = $180K — Large-scale simulation of 100+ node mesh scenarios beyond lab testbed capacity; model training for ARL detectors
- **Software Licenses:** $25K/year x 3 years = $75K — EDA tools, simulation software, development environments
- **Publications & Reporting:** $15K total — Open-access publication fees, report production

### 4.7 Cost Sharing

doany.ai will provide in-kind contribution of its existing Edge Computing Lab infrastructure, estimated at **$350K**, including:
- 32-node GPU cluster (NVIDIA Jetson Orin fleet)
- RF-shielded testbed
- TEMPEST-rated secure development environment

This contribution is not required by the BAA but demonstrates organizational commitment.

---

## 5. Management Plan

### 5.1 Organizational Structure

```
                    DARPA PM
                 Dr. K. Radcliffe
                        |
                   ┌────┴────┐
                   │   PI    │
                   │ Dr. Chen│
                   └────┬────┘
                        |
         ┌──────────────┼──────────────┐
         |              |              |
    ┌────┴────┐   ┌─────┴─────┐  ┌────┴────┐
    │ LCI Lead│   │ RMIN Lead │  │ARL Lead │
    │Dr.Wright│   │ Dr. Chen  │  │Dr.Raman.│
    └────┬────┘   └─────┬─────┘  └────┬────┘
         |              |              |
    Crypto Team    Mesh Team     Security Team
    (1 RS, 1 SE,  (1 RS, 1 SE,  (1 RS, 1 GRA)
     1 GRA)        Okonkwo)
                        |
                   ┌────┴────┐
                   │  MDS    │
                   │Sub (Huang)│
                   └─────────┘
```

### 5.2 Roles and Decision Authority

| Role | Person | Authority | Reporting |
|------|--------|-----------|-----------|
| **Principal Investigator** | Dr. Sarah Chen | Overall technical direction; final technical decisions; DARPA interface; RMIN architecture lead | Direct to DARPA PM |
| **Co-PI, Cryptography** | Dr. Marcus Wright | LCI protocol design; privacy guarantees; encryption implementation decisions | Reports to PI |
| **Co-PI, Adversarial ML** | Dr. Priya Ramanathan | ARL design; threat modeling; adversarial robustness decisions | Reports to PI |
| **Systems Lead** | James Okonkwo | Hardware integration; testbed management; Phase 3 field demo execution; PMP-certified project scheduling | Reports to PI |
| **Subcontractor Site PI** | Dr. Lisa Huang (MDS) | DoD transition planning; field test execution; operational requirements | Reports to PI |
| **Project Administrator** | TBD | Financial tracking; DARPA reporting compliance; contract management | Reports to PI |

### 5.3 Communication and Coordination

- **Weekly PI standup** (30 min): PI + Co-PIs + Systems Lead — technical progress, blockers, decisions
- **Biweekly all-hands** (60 min): Full doany.ai team + MDS — status updates, cross-team coordination
- **Monthly DARPA teleconference**: PI + relevant Co-PIs + DARPA PM — formal progress update
- **Quarterly PI meetings**: In-person presentations at DARPA (included in travel budget)
- **Phase Go/No-Go reviews**: Formal milestone review with DARPA PM and independent evaluators

### 5.4 Risk Management Process

1. **Risk Register:** Maintained by Systems Lead (Okonkwo) and reviewed weekly at PI standup
2. **Risk Classification:** Each risk scored on Likelihood (1–5) x Impact (1–5) matrix
3. **Escalation:** Risks scoring >15 escalated to DARPA PM within 48 hours
4. **Mitigation Tracking:** Each risk assigned an owner, mitigation plan, and deadline
5. **Quarterly Risk Review:** Comprehensive risk assessment included in quarterly reports

### 5.5 Intellectual Property and Data Rights

- doany.ai will assert Government Purpose Rights (GPR) for SHIELD-AI middleware developed under this contract
- Pre-existing IP (CKKS-Lite scheme, RobustBench-Edge) will be licensed to the Government for program use under Limited Rights
- Publications will be submitted with DARPA Distribution Statement A (public release) unless classification applies
- Software deliverables will be provided via a DARPA-designated repository with appropriate distribution controls

### 5.6 Security Management

- PI Dr. Chen: Clearance-eligible; will apply for Secret clearance upon award
- Systems Lead Okonkwo: Active Secret clearance (upgradable to TS/SCI)
- MDS Dr. Huang: Cleared facility with appropriate infrastructure
- Phase 3 classified work will be performed at MDS cleared facility or doany.ai TEMPEST-rated lab (CAGE code application in progress)

### 5.7 Quality Assurance

- **Code quality:** All SHIELD-AI software undergoes peer review, automated testing (>80% coverage), and continuous integration
- **Research rigor:** Experimental results validated by independent team member before reporting
- **Documentation:** Technical documentation maintained alongside code; updated at each milestone
- **Deliverable review:** All deliverables undergo internal review by PI before submission to DARPA

---

*This document is submitted in response to DARPA BAA HR001126-S-0042 (RAEI). Total proposed cost: $4.8M over 36 months.*

*doany.ai — Advancing Secure Edge Intelligence*
