# Webinar Follow-Up Cold Outreach — April 13, 2026

**Sender:** Alex Chen, Head of Growth, alex@doany.ai
**Cadence:** Email 1 (today) → Follow-up 1 (Day 3) → Follow-up 2 (Day 7) → Follow-up 3 (Day 14)

---

## 1. Priya Sharma — CTO, NovaPay (180 engineers, Fintech)

**Signals:** Asked about incident automation ROI + PagerDuty integration at webinar; recently promoted from VP Eng; Series C ($45M) March 2026

### Email 1 — Day 0
**Subject:** incident triage
**To:** Priya Sharma

Priya — your question about incident automation ROI during the webinar was sharp. That's usually the first thing engineering leaders ask after a funding round scales the team faster than the processes.

Lattice had the same challenge at 120 engineers. After deploying doany.ai's triage agents with PagerDuty, their MTTR dropped from 47 to 18 minutes — without adding headcount.

Given where NovaPay is post-Series C, would it be worth a quick look?

Alex

### Follow-up 1 — Day 3
**Subject:** re: incident triage

Priya — one thing I didn't mention: Ramp (fintech, 200 engineers) runs their entire incident workflow through doany.ai. Beyond triage, they automated escalation routing and post-incident summaries — saved their on-call team roughly 12 hours a week.

Since NovaPay is in a similar space and scale, figured that might hit closer to home than a generic case study.

Happy to share the specifics if useful.

Alex

### Follow-up 2 — Day 7
**Subject:** re: incident triage

Priya — we just published a breakdown of how high-growth fintech teams are structuring incident automation as they scale past 150 engineers. Covers the ROI framework you were asking about at the webinar.

Want me to send it over?

Alex

### Follow-up 3 — Day 14
**Subject:** re: incident triage

Priya — don't want to crowd your inbox. If incident automation isn't a priority right now, totally get it.

If it comes back around, I'm at alex@doany.ai.

Alex

---

## 2. Marcus Johnson — CTO, Greenline Analytics (90 engineers, Climate Data SaaS)

**Signals:** Deploy pipeline takes 3+ hours; hiring 2 platform engineers; asked about monorepo setups

### Email 1 — Day 0
**Subject:** deploy pipeline

Marcus — you mentioned the 3+ hour deploy pipeline during the webinar. That's a painful number, especially when you're hiring platform engineers to fix something that tooling could handle.

Teams in a similar spot have used doany.ai's pipeline orchestration to cut deploy times by 60-70% or more — without building custom infrastructure. One team went from 3+ hours to under 50 minutes.

Before you staff up that platform team, worth seeing if automation covers the gap?

Alex

### Follow-up 1 — Day 3
**Subject:** re: deploy pipeline

Marcus — on the monorepo question you asked: doany.ai handles monorepo setups natively. The pipeline orchestration detects which services changed and only deploys what's affected — which is usually where the 3-hour number comes from.

Ramp runs a large monorepo with 200 engineers and cut their cycle time by 62% after setting this up.

Happy to show you how it'd map to Greenline's setup if that'd be useful.

Alex

### Follow-up 2 — Day 7
**Subject:** re: deploy pipeline

Marcus — quick thought: the two platform engineering hires would cost roughly $400-500K/year loaded. Most teams using doany.ai for deploy automation get to a stable state in a few weeks for a fraction of that.

Not saying you won't need the hires — but might be worth seeing what automation can take off their plate before they start.

Want me to walk through the math?

Alex
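
If he takes the offer, a minimal sketch of that comparison to have ready. All figures are illustrative assumptions (the email's $400-500K range, plus a hypothetical automation cost), not actual pricing:

```python
# Back-of-envelope: two platform hires vs. deploy automation.
# All numbers are illustrative assumptions, not real quotes.

LOADED_COST_PER_HIRE = 225_000     # assumed fully loaded $/yr per platform engineer
NUM_HIRES = 2                      # the two open Greenline roles
AUTOMATION_COST_PER_YEAR = 60_000  # hypothetical platform spend, not doany.ai pricing

hire_cost = NUM_HIRES * LOADED_COST_PER_HIRE  # $450K/yr, inside the email's $400-500K range
delta = hire_cost - AUTOMATION_COST_PER_YEAR

print(f"Hires: ${hire_cost:,}/yr vs automation: ${AUTOMATION_COST_PER_YEAR:,}/yr")
print(f"Difference: ${delta:,}/yr")
```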

### Follow-up 3 — Day 14
**Subject:** re: deploy pipeline

Marcus — I'll leave it here. If the deploy pipeline becomes a bigger bottleneck or you want a second opinion before finalizing those platform hires, happy to chat.

alex@doany.ai whenever.

Alex

---

## 3. Sofia Reyes — CTO, BrightPath Education (60 engineers, EdTech)

**Signals:** Posted about "developer toil" on LinkedIn last week; stayed for full webinar; no Q&A questions

### Email 1 — Day 0
**Subject:** developer toil

Sofia — saw your LinkedIn post about developer toil last week, and then noticed you stayed for the entire webinar today. Sounds like this is top of mind.

Vanta's team (80 engineers, similar size to yours) was in a comparable spot. They deployed doany.ai to automate PR reviews, dependency updates, and release notes — their engineers now ship 2.4x more PRs per week because the busywork isn't eating their days.

Worth a conversation?

Alex

### Follow-up 1 — Day 3
**Subject:** re: developer toil

Sofia — one angle that resonates with teams your size: you probably can't afford a dedicated platform/DevOps team, so the toil falls on your best engineers.

doany.ai's skill marketplace has pre-built automations that take minutes to set up — no dedicated infra team required. It's specifically designed so a 60-person team gets the same automation a 500-person team builds internally.

Curious if that matches what you're dealing with?

Alex

### Follow-up 2 — Day 7
**Subject:** re: developer toil

Sofia — we put together a short guide on the highest-ROI automations for engineering teams under 100 people. Covers the toil categories that eat the most time relative to team size.

Want me to send it your way?

Alex

### Follow-up 3 — Day 14
**Subject:** re: developer toil

Sofia — I'll stop here. If the toil conversation picks back up, you know where to find me.

alex@doany.ai

Alex

---

## 4. Daniel Kim — CTO, VaultStream (220 engineers, Cybersecurity)

**Signals:** Asked about SOC 2 and VPC deployment; follow-up in chat about data residency; very security-conscious

### Email 1 — Day 0
**Subject:** vpc deployment

Daniel — your questions about SOC 2 and data residency at the webinar were exactly right. Most automation platforms can't answer those questions well, which is why security teams end up building everything in-house.

doany.ai runs entirely inside your VPC. SOC 2 Type II certified, SSO/SAML, full audit logs, and your data never leaves your infrastructure. Vanta — a security company with 80 engineers — runs their workflows through us.

Worth a deeper technical conversation?

Alex

### Follow-up 1 — Day 3
**Subject:** re: vpc deployment

Daniel — to follow up on the data residency question from the webinar chat: doany.ai supports configurable data residency with region-specific deployments. Every agent action is audit-logged with full traceability — which matters when your customers are running security evaluations on your toolchain too.

Happy to connect you with our security team for a more detailed architecture walkthrough if that'd be useful.

Alex

### Follow-up 2 — Day 7
**Subject:** re: vpc deployment

Daniel — figured this might save you time: we published our security whitepaper covering the VPC deployment architecture, data flow diagrams, and how audit logging works end-to-end. It answers most of the questions security-minded CTOs ask during eval.

Want me to send it over?

Alex

### Follow-up 3 — Day 14
**Subject:** re: vpc deployment

Daniel — I'll leave it here. Security evaluations take time and I respect that process.

If VaultStream ever needs workflow automation that doesn't compromise your security posture, I'm at alex@doany.ai.

Alex

---

## 5. Rachel Okonkwo — CTO, LoopHealth (110 engineers, HealthTech)

**Signals:** Just acquired 30-person eng team; onboarding pain point; asked about Jira + GitHub integration

### Email 1 — Day 0
**Subject:** team onboarding

Rachel — your question about Jira + GitHub integration makes a lot more sense knowing you just brought on a 30-person engineering team. Integrating a new team's workflows is brutal when everyone's tools and norms are different.

Vanta faced something similar when they onboarded 15 engineers at once. Using doany.ai's context engine, new engineers were contributing meaningfully 3x faster — the AI basically gave them a guide to the codebase and team norms on day one.

Would that be relevant to what you're dealing with?

Alex

### Follow-up 1 — Day 3
**Subject:** re: team onboarding

Rachel — on the Jira + GitHub integration specifically: doany.ai connects both natively and orchestrates workflows across them. So when the acquired team is used to Jira and your existing team runs on GitHub Issues (or vice versa), the automation layer handles the translation.

Saves you from the "let's all migrate to one tool" conversation that nobody wants to have mid-integration.

Curious if that's part of the challenge?

Alex

### Follow-up 2 — Day 7
**Subject:** re: team onboarding

Rachel — we've seen a pattern with post-acquisition integrations: the first 90 days determine whether the new team ramps up or stays siloed. The biggest friction points are usually codebase onboarding, workflow alignment, and review norms.

We put together a playbook on how engineering leaders are using automation to accelerate post-acquisition integration. Want a copy?

Alex

### Follow-up 3 — Day 14
**Subject:** re: team onboarding

Rachel — I know acquisitions mean a hundred competing priorities. Dropping this here in case onboarding comes back to the top of the list.

alex@doany.ai whenever it makes sense.

Alex

---

## 6. James Thornton — CTO, Packwise Logistics (75 engineers, Supply Chain SaaS)

**Signals:** Registered for webinar but joined 15 min late; no engagement data

### Email 1 — Day 0
**Subject:** eng workflows

James — noticed you signed up for the doany.ai webinar today. In case you missed parts of it, the core idea: engineering teams are burning 30-40% of their time on operational work that AI agents can handle — PR reviews, deploy pipelines, incident triage, cross-team handoffs.

Teams your size (50-100 engineers) typically see the fastest ROI because the toil-to-headcount ratio is highest. Our customers average 2.4x more PRs shipped per engineer per week.

Worth a quick look to see if it's relevant to Packwise?

Alex

### Follow-up 1 — Day 3
**Subject:** re: eng workflows

James — one example that might click for a supply chain engineering team: Ramp (200 engineers) was spending most of their platform team's time on deploy pipeline management and release coordination. After automating those with doany.ai, they freed up 40 engineering hours per week — which went straight back to product work.

Happy to show you which workflows would have the biggest impact at Packwise.

Alex

### Follow-up 2 — Day 7
**Subject:** re: eng workflows

James — we recorded the webinar if you want to catch the parts you missed. It covers the ROI framework and a live demo of the workflow builder.

Want me to send the link?

Alex

### Follow-up 3 — Day 14
**Subject:** re: eng workflows

James — I'll leave it here. If engineering operations become a bigger focus for Packwise, happy to pick this up.

alex@doany.ai

Alex

---

## 7. Wei Zhang — CTO, Canopy Insurance (140 engineers, InsurTech)

**Signals:** Loses 2 days per sprint on code review; asked about PR review bottlenecks

### Email 1 — Day 0
**Subject:** pr review time

Wei — two days per sprint on code review, across a 140-person team running two-week sprints, is roughly 4,800 engineering hours per month. That number probably keeps you up at night.

Ramp had the same problem at 200 engineers. After deploying doany.ai's AI review agents, they cut PR review cycle time by 62% — without lowering review quality. The agents handle the mechanical checks so your engineers focus on the logic and architecture.

Worth seeing how it'd work for Canopy's codebase?

Alex

### Follow-up 1 — Day 3
**Subject:** re: pr review time

Wei — to put a finer point on it: at 140 engineers losing 2 days/sprint, roughly 20% of your engineering capacity is going to reviews: call it $4-5M annually in loaded cost. Even a 40% reduction would free up the equivalent of 10+ full-time engineers.

That's not a tooling decision — that's a headcount decision. doany.ai's review agents typically pay for themselves in the first month.

Happy to walk through the math with your specific numbers.

Alex
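
For call prep, a sketch of the arithmetic behind the two emails above. Sprint length, loaded rate, and hours per day are all assumptions to swap for Canopy's real figures:

```python
# Arithmetic behind the Wei sequence. Every input is an assumption to
# replace with the prospect's real numbers before any call.

ENGINEERS = 140
DAYS_LOST_PER_SPRINT = 2           # per engineer, per the webinar signal
HOURS_PER_DAY = 8
SPRINTS_PER_MONTH = 52 / 2 / 12    # assuming two-week sprints (~2.17/mo)
LOADED_RATE = 80                   # assumed fully loaded $/engineering hour
REDUCTION = 0.40                   # the email's hypothetical improvement

hours_per_month = ENGINEERS * DAYS_LOST_PER_SPRINT * HOURS_PER_DAY * SPRINTS_PER_MONTH
annual_cost = hours_per_month * 12 * LOADED_RATE
freed_ftes = hours_per_month * 12 * REDUCTION / 2080  # 2,080 work hours/yr

print(f"~{hours_per_month:,.0f} review hours/month")   # ~4,850
print(f"~${annual_cost / 1e6:.1f}M/year loaded")       # ~$4.7M
print(f"40% cut ~= {freed_ftes:.0f} FTEs freed")       # ~11
```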

### Follow-up 2 — Day 7
**Subject:** re: pr review time

Wei — we published a breakdown of what actually makes PR reviews slow (spoiler: 70% of review time goes to style, formatting, and mechanical checks that AI handles well). Includes the framework Ramp used to prioritize which review workflows to automate first.

Want me to send it?

Alex

### Follow-up 3 — Day 14
**Subject:** re: pr review time

Wei — last note from me. If code review bottlenecks are still eating your sprints, the offer stands.

alex@doany.ai

Alex

---

## 8. Aisha Patel — CTO, TrueNorth CRM (95 engineers, Sales SaaS)

**Signals:** Moving from Jenkins to GitHub Actions; asked about skill marketplace pricing

### Email 1 — Day 0
**Subject:** jenkins migration

Aisha — migrating from Jenkins to GitHub Actions is the right move, but it's also the perfect time to rethink what gets automated beyond CI/CD. Most teams do a 1:1 port of their Jenkins jobs and miss the opportunity.

doany.ai's skill marketplace plugs directly into GitHub Actions. Pre-built automations for PR reviews, release management, and incident routing — install in minutes, not sprints. And pricing scales with your team, not your pipeline count.

Worth exploring while you're mid-migration?

Alex

### Follow-up 1 — Day 3
**Subject:** re: jenkins migration

Aisha — on marketplace pricing since you asked: it's priced by team size, not usage. No per-pipeline or per-workflow charges. Teams at 95 engineers typically land in a range that's less than a single engineer's cost — for automation that replaces dozens of custom scripts.

Happy to send over the pricing details so you can factor it into the migration budget.

Alex

### Follow-up 2 — Day 7
**Subject:** re: jenkins migration

Aisha — we put together a guide on what high-performing teams automate during a CI/CD migration (beyond just porting existing jobs). Covers the workflows that have the highest ROI when you're already rethinking your toolchain.

Want a copy?

Alex

### Follow-up 3 — Day 14
**Subject:** re: jenkins migration

Aisha — I'll drop this here. If the GitHub Actions migration opens up questions about what else to automate, happy to chat.

alex@doany.ai

Alex

---

## 9. Tomás Rivera — CTO, Fidelio Finance (300 engineers, Banking Platform)

**Signals:** Publicly traded; compliance and auditability top priorities; asked about audit logging and compliance features

### Email 1 — Day 0
**Subject:** audit logging

Tomás — your questions about audit logging at the webinar cut straight to it: at a publicly traded bank, every automated workflow needs a paper trail that holds up to examination.

doany.ai logs every agent action with full traceability: who triggered it, what changed, when, and why. SOC 2 Type II certified, SSO/SAML, and everything runs inside your VPC. Ramp (fintech, 200 engineers) uses us for exactly this reason — full automation with a compliance-ready audit trail.

Worth a deeper conversation?

Alex

### Follow-up 1 — Day 3
**Subject:** re: audit logging

Tomás — one thing that matters at your scale: with 300 engineers, manual compliance workflows don't just slow things down — they become a risk. The more human steps in the chain, the more audit findings.

doany.ai's audit logs are structured for regulatory review. Every workflow change is versioned, every approval is captured, and the entire history is exportable for your compliance team.

Happy to connect you with our team that works with regulated financial institutions.

Alex

### Follow-up 2 — Day 7
**Subject:** re: audit logging

Tomás — we have a technical architecture doc that covers how our audit logging works for regulated industries — data flow, retention policies, export formats, and how it maps to common compliance frameworks.

Figured it might save time versus a call if your compliance team wants to do a preliminary review. Want me to send it?

Alex

### Follow-up 3 — Day 14
**Subject:** re: audit logging

Tomás — compliance evaluations move at their own pace, so I'll leave this here. When Fidelio is ready to look at workflow automation that meets your audit requirements, I'm at alex@doany.ai.

Alex

---

## 10. Elena Volkov — CTO, Mosaic Design (45 engineers, Design Collaboration)

**Signals:** Uses GitLab not GitHub; asked about GitLab compatibility

### Email 1 — Day 0
**Subject:** gitlab workflows

Elena — good question at the webinar about GitLab. Most automation tools treat GitLab as an afterthought, which probably means you've been burned before.

doany.ai has native GitLab integration — same depth as GitHub. Pipeline orchestration, MR automation, and CI/CD workflows all work out of the box. Vanta (80 engineers) started on a similar setup and was running automated workflows within a week.

Worth a look for Mosaic's GitLab setup?

Alex

### Follow-up 1 — Day 3
**Subject:** re: gitlab workflows

Elena — at 45 engineers, the automation ROI hits differently. You don't have the headcount to build internal tooling, so every hour your engineers spend on operational work is an hour they're not shipping product.

Teams your size typically start with 2-3 skills from the marketplace — MR review automation, release notes, dependency updates — and see results in the first week. No dedicated DevOps team required.

Curious if that matches what Mosaic needs?

Alex

### Follow-up 2 — Day 7
**Subject:** re: gitlab workflows

Elena — we just updated our GitLab integration docs with setup walkthroughs for teams running GitLab CI. Covers MR automation, pipeline triggers, and how the context engine indexes GitLab repos.

Want me to send the link?

Alex

### Follow-up 3 — Day 14
**Subject:** re: gitlab workflows

Elena — I'll leave it here. If Mosaic ever needs workflow automation that actually works with GitLab, I'm at alex@doany.ai.

Alex

---

## 11. Chris Nakamura — CTO, PulsePoint Media (130 engineers, AdTech)

**Signals:** Tweeted about "alert fatigue killing my team" 2 weeks ago; asked about on-call triage automation

### Email 1 — Day 0
**Subject:** alert fatigue

Chris — your tweet about alert fatigue was painfully relatable, and then you asked about on-call triage at the webinar today. Sounds like this is past the "we should fix this" stage.

Lattice had a similar problem at 120 engineers. After deploying doany.ai's triage agents, their MTTR went from 47 to 18 minutes — not because incidents decreased, but because the noise got filtered and routing got automatic.

Worth seeing how it'd work for PulsePoint's alert setup?

Alex

### Follow-up 1 — Day 3
**Subject:** re: alert fatigue

Chris — the math on alert fatigue is ugly. At 130 engineers, if your on-call rotation is handling even 30% false positives, that's hundreds of hours per month of your best engineers getting paged for nothing.

doany.ai's triage agents learn your alert patterns and auto-resolve the noise — only human-required incidents get escalated. Most teams see false positive pages drop by 60-70% in the first month.

Curious if that's the kind of relief your team needs?

Alex
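
Behind the "hundreds of hours" line, a rough model to have on hand. Alert volume and minutes per page are placeholder assumptions, not PulsePoint data:

```python
# Rough cost of false-positive pages. Swap in the prospect's real numbers.

ALERTS_PER_MONTH = 1_500      # assumed volume across a 130-engineer org
FALSE_POSITIVE_RATE = 0.30    # the email's "even 30%" figure
MINUTES_PER_PAGE = 40         # assumed triage + context-switch cost

wasted_hours = ALERTS_PER_MONTH * FALSE_POSITIVE_RATE * MINUTES_PER_PAGE / 60
print(f"~{wasted_hours:.0f} engineer-hours/month spent on noise")  # ~300
```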

### Follow-up 2 — Day 7
**Subject:** re: alert fatigue

Chris — we published a breakdown of how engineering teams are restructuring on-call to reduce burnout without sacrificing response times. Covers the triage automation framework and how teams like Lattice implemented it.

Want me to send it over?

Alex

### Follow-up 3 — Day 14
**Subject:** re: alert fatigue

Chris — last one from me. If alert fatigue is still killing your team, the offer stands.

alex@doany.ai

Alex

---

## 12. Lisa Brennan — CTO, Meadow Robotics (70 engineers, Robotics / Embedded)

**Signals:** No Q&A engagement at webinar; embedded systems background

### Email 1 — Day 0
**Subject:** eng automation

Lisa — thanks for joining the webinar today. Most workflow automation tools are built for web/SaaS teams, so an embedded shop could fairly tune this out. I wanted to reach out directly because the core value applies to any engineering team dealing with operational overhead.

At 70 engineers, the toil-to-headcount ratio means automation has outsized impact. Teams your size typically save 15-20 engineering hours per week on PR reviews, release coordination, and cross-team handoffs alone.

Curious if any of that resonates with how Meadow's team works?

Alex

### Follow-up 1 — Day 3
**Subject:** re: eng automation

Lisa — one area where embedded teams often get the most value: code review automation. When your engineers are context-switching between hardware and software, losing hours to review cycles hits harder than it does for a typical SaaS team.

doany.ai's review agents handle the mechanical checks — style, formatting, dependency issues — so your reviewers focus on the logic that actually matters.

Would that kind of workflow be useful for Meadow?

Alex

### Follow-up 2 — Day 7
**Subject:** re: eng automation

Lisa — we've been talking to more hardware-adjacent engineering teams lately about which automations translate best outside the typical SaaS workflow.

If you have 15 minutes, I'd love to get your perspective on what operational work eats the most time for your team. Happy to share what we're seeing from similar-sized teams in return.

Alex

### Follow-up 3 — Day 14
**Subject:** re: eng automation

Lisa — I'll leave it here. If engineering operations become a focus for Meadow, I'm at alex@doany.ai.

Alex

---

## Quick Reference: Sequence Timing

| Day | Action |
|-----|--------|
| Apr 13 (Mon) | Send all Email 1s |
| Apr 16 (Thu) | Send Follow-up 1 (non-openers first, then the rest) |
| Apr 20 (Mon) | Send Follow-up 2 |
| Apr 27 (Mon) | Send Follow-up 3 (breakup) |
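
If the cadence gets loaded into a sequencer or script, a minimal sketch for generating the send dates from the Day-0 kickoff (output matches the table above):

```python
# Compute cadence send dates from the Day-0 kickoff.
from datetime import date, timedelta

OFFSETS = {"Email 1": 0, "Follow-up 1": 3, "Follow-up 2": 7, "Follow-up 3": 14}
KICKOFF = date(2026, 4, 13)  # Mon Apr 13

for step, days in OFFSETS.items():
    print(f"{step}: {KICKOFF + timedelta(days=days):%a %b %d}")
# Email 1: Mon Apr 13 / Follow-up 1: Thu Apr 16
# Follow-up 2: Mon Apr 20 / Follow-up 3: Mon Apr 27
```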

## Priority Tiers for Sales

**Tier 1 — Strongest signals (load first):**
- Wei Zhang (Canopy) — quantified pain, high urgency
- Chris Nakamura (PulsePoint) — public pain signal + webinar engagement
- Priya Sharma (NovaPay) — post-funding, specific questions
- Rachel Okonkwo (LoopHealth) — acquisition trigger, specific questions

**Tier 2 — Strong signals:**
- Marcus Johnson (Greenline) — quantified pain + hiring signal
- Daniel Kim (VaultStream) — engaged, specific security questions
- Aisha Patel (TrueNorth) — active migration = buying window
- Tomás Rivera (Fidelio) — enterprise, compliance-focused

**Tier 3 — Nurture:**
- Sofia Reyes (BrightPath) — interested but no direct questions
- Elena Volkov (Mosaic) — compatibility question, smaller team
- James Thornton (Packwise) — low engagement
- Lisa Brennan (Meadow) — no engagement, different domain fit
