seo-audit
Verified by coreyhaines31 · Updated Apr 16, 2026
When the user wants to audit, review, or diagnose SEO issues on their site. Also use when the user mentions "SEO audit," "technical SEO," "why am I not ranking," "SEO issues," "on-page SEO," "meta tags review," "SEO health check," "my traffic dropped," "lost rankings," "not showing up in Google," "site isn't ranking," "Google update hit me," "page speed," "core web vitals," "crawl errors," or "indexing issues." Use this even if the user just says something vague like "my SEO is bad" or "help with SEO" — start with an audit. For building pages at scale to target keywords, see programmatic-seo. For adding structured data, see schema-markup. For AI search optimization, see ai-seo.
See It In Action
Let me start the audit. I'll fetch the key pages and technical SEO files in parallel.
WebFetch connections failed — let me try via curl instead.
SSL handshake is failing (exit code 35). Let me try different approaches to reach the site.
The site's DNS resolves to 198.18.x.x (reserved benchmark range) — connections are being blocked. Let me search for cached/indexed data about the site and try alternative access methods.
Major finding: doany.ai has zero Google indexed pages. Let me also check the local codebase — the user mentioned a CMS update, so the site code might be here.
Found project context files in the working directory. Let me read them all.
Now I have the full picture. Here's your audit.
SEO Audit: doany.ai — Post-CMS-Migration Emergency
Executive Summary
The Contentful-to-Sanity migration deployed at 15:30 UTC on April 11 has caused a severe indexation and ranking collapse. Your two most critical pages — /product and /pricing — have lost ~42% of organic sessions in under 48 hours, and positions are sliding fast. Campaigns hitting those pages go live tomorrow (April 13). Several issues flagged as pre-migration risks were never verified post-deploy.
Worst finding: site:doany.ai returns zero results in Google — the entire domain appears to be de-indexed or blocked from indexing right now.
Traffic Impact (from your Search Console data)
| Page | Sessions (Apr 10) | Sessions (Apr 12) | Drop | Avg Position (Apr 10 → 12) |
|---|---|---|---|---|
| Homepage / | 300 | 245 | -18% | 4.3 → 5.1 |
| Product /product | 185 | 108 | -42% | 6.2 → 9.4 |
| Pricing /pricing | 101 | 58 | -43% | 8.8 → 11.8 |
| Blog /blog | 56 | 51 | -9% | 12.5 → 12.8 |
Product dropped from page 1 (position 6.2) to nearly page 2 (9.4). Pricing fell off page 1 entirely (11.8). These are the exact pages your paid campaigns will send traffic to.
Critical Findings (Fix Before Campaigns Launch)
1. CRITICAL — Probable Full De-Indexation
- Issue: site:doany.ai returns zero indexed pages in Google.
- Impact: HIGH — No organic visibility at all. This alone explains the traffic cliff.
- Likely cause: The Sanity migration introduced one or more of:
  - A noindex meta tag or X-Robots-Tag: noindex header (common when CMS environments are misconfigured — Sanity preview/draft mode can inject noindex)
  - A robots.txt blocking Googlebot (the new build may have generated a restrictive default)
  - SSL/TLS errors that prevent Googlebot from rendering the page (I'm seeing SSL handshake failures from outside; Googlebot may be hitting the same wall)
- Fix (urgent):
  - Open each page in Chrome → View Source → search for noindex. Check both <meta name="robots" and any X-Robots-Tag HTTP header (DevTools > Network > response headers); a console sketch for this check follows this finding.
  - Check https://doany.ai/robots.txt in a browser. Confirm it does NOT contain Disallow: / for Googlebot.
  - Check Google Search Console > Settings > Crawl stats — look for a spike in "Not modified" or 5xx errors since April 11.
  - If noindex is found, it's almost certainly the Sanity environment config leaking draft/preview noindex into production. Remove it immediately.
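If it is faster for Mira than clicking through DevTools, this illustrative console snippet (pasted into the DevTools console on the live page) surfaces both signals at once:

// Run in the DevTools console on https://doany.ai/product (a same-origin fetch can read response headers).
// Both values should be absent or "index, follow"; any "noindex" here is the smoking gun.
const robotsMeta = document.querySelector('meta[name="robots"]')
console.log('meta robots:', robotsMeta ? robotsMeta.content : '(none)')
fetch(location.href, { cache: 'no-store' }).then((res) =>
  console.log('X-Robots-Tag:', res.headers.get('x-robots-tag') ?? '(none)')
)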
2. CRITICAL — Sitemap Likely Missing or Empty
- Issue: The pre-migration notes flag that "sitemap was generated statically from Contentful." The new build "may not regenerate it correctly," and nobody verified post-deploy.
- Impact: HIGH — Without a valid sitemap, Googlebot can't efficiently discover or re-crawl your pages after the migration. Combined with potential robots.txt issues, this is a double block.
- Fix:
  - Check https://doany.ai/sitemap.xml in a browser. If it 404s or is empty, that confirms the issue (a quick scripted check follows this finding).
  - Ensure your Next.js build generates a sitemap (e.g., the next-sitemap package or a custom getServerSideSitemap).
  - The sitemap must include at minimum: /, /product, /pricing, /blog, and all blog post URLs.
  - Re-submit to Google Search Console immediately after fixing.
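If you would rather script the sitemap check, a throwaway Node 18+ snippet like this (illustrative, not part of the codebase) lists exactly which URLs the live sitemap contains:

// Save as check-sitemap.mjs and run with Node 18+: node check-sitemap.mjs
const res = await fetch('https://doany.ai/sitemap.xml')
console.log('HTTP status:', res.status)
const xml = await res.text()
const urls = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1])
console.log(`${urls.length} URLs in sitemap:`, urls)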
3. HIGH — SSL/TLS Connection Failures
- Issue: All connection attempts from outside fail with SSL handshake errors (curl exit code 35). DNS resolves to 198.18.x.x (reserved IP range, likely Cloudflare or Vercel edge).
- Impact: HIGH — If Googlebot encounters the same TLS errors, it cannot crawl or index any page. This alone would explain full de-indexation.
- Fix:
  - Check Vercel deployment logs for SSL certificate status.
  - Verify the custom domain doany.ai SSL cert is properly provisioned in the Vercel dashboard (Vercel > Project > Domains).
  - Test with SSL Labs to check for certificate chain issues (a minimal handshake check is sketched after this finding).
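While waiting on SSL Labs, a minimal Node sketch (illustrative; run it from any machine outside your network) confirms whether the TLS handshake completes and which certificate is served:

// Node 18+ ES module: attempt a TLS handshake against doany.ai and report the certificate
import tls from 'node:tls'

const socket = tls.connect(443, 'doany.ai', { servername: 'doany.ai' }, () => {
  const cert = socket.getPeerCertificate()
  console.log('handshake OK, authorized:', socket.authorized)
  console.log('issued to:', cert.subject?.CN, 'valid until:', cert.valid_to)
  socket.end()
})
socket.on('error', (err) => console.error('TLS handshake failed:', err.message))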
4. HIGH — Heading Hierarchy Likely Broken
- Issue: Pre-migration notes flag that "Sanity's portable text renderer may output different heading hierarchy than Contentful's rich text." This was never verified.
- Impact: MEDIUM-HIGH — If H1 tags are missing or duplicated, or if the hierarchy skips levels (H1 → H3), Google's understanding of page structure and keyword relevance degrades. This contributes to the position drops.
- Fix:
  - View source on /product and /pricing. Confirm exactly one <h1> per page containing the primary keyword (see the console sketch after this finding).
  - Check that headings follow logical order (H1 → H2 → H3, no skipped levels).
  - Compare the rendered heading structure against the Contentful version (if you have a cached copy or a Wayback Machine snapshot).
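To eyeball the rendered outline quickly, this illustrative DevTools console snippet dumps every heading in document order along with the H1 count:

// DevTools console: print the heading outline and H1 count for the current page
document.querySelectorAll('h1, h2, h3, h4').forEach((h) =>
  console.log(h.tagName, '-', h.textContent.trim().slice(0, 80))
)
console.log('H1 count:', document.querySelectorAll('h1').length)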
5. HIGH — OG Images Broken (Pointing to Dead Contentful CDN)
- Issue: Pre-migration notes confirm "OG image URLs pointed to Contentful CDN — needed remapping to new asset pipeline." This was flagged but not confirmed fixed.
- Impact: MEDIUM for SEO, but HIGH for campaigns — when your paid campaigns drive shares and social clicks tomorrow, broken OG images mean no rich previews on social platforms. This tanks CTR on shared links.
- Fix:
  - Check og:image meta tags on /product and /pricing. If they still reference images.ctfassets.net, they're broken (a console check follows this finding).
  - Update them to Sanity's CDN URLs (cdn.sanity.io).
  - Validate with the Facebook Sharing Debugger and Twitter Card Validator.
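Before running the social debuggers, this illustrative console snippet lists every OG and Twitter image tag and flags any still pointing at the Contentful CDN:

// DevTools console: list social image tags and flag any that still point at Contentful
document
  .querySelectorAll('meta[property="og:image"], meta[name="twitter:image"]')
  .forEach((m) => {
    const src = m.getAttribute('content') || ''
    const flag = src.includes('ctfassets.net') ? '← BROKEN (dead Contentful CDN)' : 'OK'
    console.log(m.getAttribute('property') || m.getAttribute('name'), src, flag)
  })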
6. MEDIUM — Canonical URLs May Be Incorrect
- Issue: Migration notes say "canonical URLs should stay the same" but this was not verified.
- Impact: MEDIUM — If canonicals are pointing to old Contentful preview URLs, Sanity studio URLs, or are missing entirely, Google will either ignore the page or index the wrong URL.
- Fix: View source on each page and search for <link rel="canonical". Confirm it points to the correct production URL (e.g., https://doany.ai/product, not a preview/draft URL).
Pre-Campaign Checklist (Priority Order)
Do these today, before the April 13 launch:
| # | Action | Owner | Est. Time |
|---|---|---|---|
| 1 | Check for noindex on all pages (meta tag + HTTP header) — remove if found | Mira | 15 min |
| 2 | Verify SSL cert is properly provisioned in Vercel for doany.ai | Mira | 10 min |
| 3 | Check robots.txt — confirm Googlebot is not blocked | Mira | 5 min |
| 4 | Verify sitemap.xml exists, contains correct URLs, regenerates on build | Mira | 30 min |
| 5 | Confirm canonical tags are correct on /, /product, /pricing | Dex | 10 min |
| 6 | Verify OG image URLs point to Sanity CDN, not Contentful | Dex | 15 min |
| 7 | Check heading hierarchy (one H1, logical H2/H3 order) on product + pricing | Dex | 10 min |
| 8 | Re-submit sitemap in Google Search Console | Dex | 5 min |
| 9 | Request indexing for /product and /pricing via GSC URL Inspection tool | Dex | 5 min |
| 10 | Run Vercel vercel.json redirects check — confirm no broken preview paths | Mira | 15 min |
What I Could NOT Verify Remotely
The live site is unreachable from this environment (SSL handshake failures). This means the following checks need to be done in-browser by your team:
- Actual rendered meta tags (title, description, robots)
- JSON-LD structured data (WebFetch can't see JS-injected schema — use Rich Results Test)
- Core Web Vitals / page speed
- Actual robots.txt and sitemap.xml content
- Whether the Sanity draft/preview environment is leaking into production
Bottom Line
The most likely root cause is a noindex directive or robots.txt block introduced by the Sanity migration — this matches the GSC "Crawled - currently not indexed" spike at exactly 16:00 UTC on April 11. Combined with a possibly missing sitemap and potential SSL issues, Googlebot may be completely locked out.
The #1 thing to do right now: Have Mira open Chrome DevTools on https://doany.ai/product, check the <head> for any noindex tag, and check the Network tab response headers for X-Robots-Tag. If either contains noindex, that's your smoking gun — fix it, then request re-indexing in GSC.
Want me to help draft the specific Next.js config fixes (sitemap generation, meta tag checks, Sanity environment config) once your team confirms which of these issues are present on the live site?
Let me check if there's actual Next.js code in the project first.
No code in the repo — I'll write the fixes based on the Next.js + Sanity stack from the migration notes.
Top Risks for April 13 Launch — Specific Fixes
Five issues, ordered by blast radius. All are direct consequences of the Contentful → Sanity migration.
Risk 1 — noindex Leaking from Sanity Draft/Preview Mode
Probability: High. Impact: Total de-indexation.
Sanity preview mode sets a cookie that Next.js reads. If the production deployment accidentally has SANITY_API_READ_TOKEN without preview guard logic, the robots field from Sanity's draft document can leak into production <head>.
Check first (in browser on live site):
View source on https://doany.ai/product
Ctrl+F: "noindex"
Also check: Network tab → product request → Response Headers → X-Robots-Tag
Common offenders in Next.js App Router:
app/product/page.tsx or app/pricing/page.tsx:
// BROKEN: robots field comes from Sanity — empty/null on re-import = noindex
export async function generateMetadata(): Promise<Metadata> {
  const page = await sanityFetch({ query: productPageQuery })
  return {
    robots: page.seoRobots, // ← if this is null or "noindex" you're dead
  }
}

// FIXED: explicit production default, never trust a nullable Sanity field
export async function generateMetadata(): Promise<Metadata> {
  const page = await sanityFetch({ query: productPageQuery })
  return {
    robots: { index: true, follow: true }, // ← hardcode for critical pages
  }
}
Also check app/layout.tsx for a global robots default. A single robots: 'noindex' there nukes the whole site.
For Pages Router (older Next.js), check _app.tsx or individual pages for:
<meta name="robots" content="noindex" />
// or via next-seo:
<DefaultSeo noindex={true} />
Risk 2 — Sitemap Not Regenerating
Probability: High. Impact: Googlebot can't rediscover pages after migration.
The old Contentful-sourced sitemap is gone. The new build likely output nothing at /sitemap.xml or a sitemap with 0 URLs.
Check: Open https://doany.ai/sitemap.xml in browser. If 404 or empty, proceed.
Most likely setup (next-sitemap):
next-sitemap.config.js — Verify this exists and is correct post-migration:
/** @type {import('next-sitemap').IConfig} */
module.exports = {
  siteUrl: 'https://doany.ai',
  generateRobotsTxt: true, // also regenerates robots.txt
  exclude: ['/studio/*'], // exclude Sanity Studio routes
  robotsTxtOptions: {
    additionalSitemaps: [],
  },
}
package.json — The postbuild hook must exist:
{
  "scripts": {
    "build": "next build",
    "postbuild": "next-sitemap" // ← if missing, sitemap never generates
  }
}
If using Next.js App Router native sitemap (app/sitemap.ts):
// app/sitemap.ts
import { MetadataRoute } from 'next'
import { client } from '@/sanity/lib/client'

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const blogSlugs = await client.fetch<string[]>(
    `*[_type == "post" && defined(slug.current)][].slug.current`
  )
  const blogUrls = blogSlugs.map((slug) => ({
    url: `https://doany.ai/blog/${slug}`,
    lastModified: new Date(),
    changeFrequency: 'weekly' as const,
    priority: 0.7,
  }))
  return [
    { url: 'https://doany.ai', priority: 1.0, changeFrequency: 'daily' },
    { url: 'https://doany.ai/product', priority: 0.9, changeFrequency: 'weekly' },
    { url: 'https://doany.ai/pricing', priority: 0.9, changeFrequency: 'weekly' },
    { url: 'https://doany.ai/blog', priority: 0.8, changeFrequency: 'daily' },
    ...blogUrls,
  ]
}
After fixing: re-deploy, then submit the sitemap URL in GSC (Search Console → Sitemaps → Add).
Risk 3 — Canonical Tags Drifted
Probability: Medium-High. Impact: Google ignores /product and /pricing or indexes wrong URL.
After re-import into Sanity, any seoCanonicalUrl field that was populated from Contentful content may now be empty, null, or pointing to a Contentful preview URL.
Most dangerous patterns:
// BROKEN: empty canonical = Google picks its own canonical (often wrong)
alternates: {
  canonical: page.canonicalUrl, // null after Sanity re-import
}

// BROKEN: canonical pointing to Sanity preview
canonical: `https://doany.sanity.studio/preview/${page.slug}`

// FIXED: derive canonical from slug, never trust a Sanity field for this
alternates: {
  canonical: `https://doany.ai/${page.slug.current}`,
}

// For fixed pages:
alternates: {
  canonical: 'https://doany.ai/product',
}
Quick verification without code access (Next.js renders the canonical as an HTML <link> tag rather than an HTTP Link header, so fetch the body instead of just headers):
curl -s https://doany.ai/product | grep -io '<link rel="canonical"[^>]*>'
Should return: <link rel="canonical" href="https://doany.ai/product"/>
Risk 4 — OG Images Still Pointing at Contentful CDN
Probability: High. Impact: Broken previews when paid campaigns drive social shares.
Contentful assets use images.ctfassets.net. Those URLs are now dead (Contentful webhook disabled). This affects paid campaigns directly — broken OG image = no thumbnail on LinkedIn/Twitter/Facebook shares.
Sanity image URL pattern:
import imageUrlBuilder from '@sanity/image-url'
import { client } from '@/sanity/lib/client'

const builder = imageUrlBuilder(client)
const urlFor = (source: any) => builder.image(source)

// In generateMetadata:
openGraph: {
  images: [
    {
      // BROKEN (old Contentful):
      // url: page.ogImage.url, // "https://images.ctfassets.net/..."
      // FIXED (Sanity):
      url: urlFor(page.ogImage).width(1200).height(630).url(),
      width: 1200,
      height: 630,
    }
  ],
}
Verify immediately: View source on /product, search for og:image. If the content value contains ctfassets.net, this is broken.
Risk 5 — robots.txt Blocking Googlebot
Probability: Medium. Impact: Full de-indexation.
If next-sitemap regenerated public/robots.txt during the Sanity build but the Sanity Studio lives at /studio, an overly broad Disallow could block everything.
Check public/robots.txt or app/robots.ts:
# BAD — blocks everything
User-agent: *
Disallow: /

# BAD — sitemap reference pointing to non-existent file
Sitemap: https://doany.ai/sitemap.xml ← only valid if this file actually exists

# GOOD
User-agent: *
Disallow: /studio/
Disallow: /api/
Sitemap: https://doany.ai/sitemap.xml
If the file doesn't exist at all, the request for /robots.txt 404s; Google treats a missing robots.txt as allow-all, but confirm what the deployed site actually serves.
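If the project uses the App Router's native robots file instead of a static public/robots.txt, a minimal app/robots.ts sketch that mirrors the "GOOD" example above (illustrative, not taken from the repo) would look like this:

// app/robots.ts: App Router equivalent of the "GOOD" static robots.txt above
import { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: { userAgent: '*', disallow: ['/studio/', '/api/'] },
    sitemap: 'https://doany.ai/sitemap.xml',
  }
}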
Launch-Day Sequence
Do these in order. Each step unblocks the next.
1. Mira: check /product and /pricing source for noindex → 15 min
2. Mira: check public/robots.txt or app/robots.ts → 5 min
3. Mira: verify next-sitemap postbuild hook exists, redeploy → 30 min
4. Dex: confirm /sitemap.xml has all 4+ URLs after redeploy → 5 min
5. Dex: submit sitemap in GSC → request indexing for /product, /pricing → 10 min
6. Dex: view source on /product, /pricing — check og:image for ctfassets.net → 5 min
7. Mira: fix OG images to use urlFor() from @sanity/image-url → 20 min
8. Dex: validate OG images with Facebook Sharing Debugger → 5 min
GSC URL Inspection is your fastest feedback loop — after fixes are deployed, use it to test live URL and request indexing. You should see "URL is on Google" within hours if the noindex issue is cleared.
External Tools
| Tool | Type |
|---|---|
| curl | binary |
Permissions
| Scope | Description |
|---|---|
| filesystem:read | |
| network:outbound | |
| process:spawn | |
SKILL.md
SEO Audit
You are an expert in search engine optimization. Your goal is to identify SEO issues and provide actionable recommendations to improve organic search performance.
Initial Assessment
Check for product marketing context first:
If .agents/product-marketing-context.md exists (or .claude/product-marketing-context.md in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
Before auditing, understand:
- Site Context
  - What type of site? (SaaS, e-commerce, blog, etc.)
  - What's the primary business goal for SEO?
  - What keywords/topics are priorities?
- Current State
  - Any known issues or concerns?
  - Current organic traffic level?
  - Recent changes or migrations?
- Scope
  - Full site audit or specific pages?
  - Technical + on-page, or one focus area?
  - Access to Search Console / analytics?
Audit Framework
Schema Markup Detection Limitation
web_fetch and curl cannot reliably detect structured data / schema markup.
Many CMS plugins (AIOSEO, Yoast, RankMath) inject JSON-LD via client-side JavaScript — it won't appear in static HTML or web_fetch output (which strips <script> tags during conversion).
To accurately check for schema markup, use one of these methods:
- Browser tool — render the page and run: document.querySelectorAll('script[type="application/ld+json"]')
- Google Rich Results Test — https://search.google.com/test/rich-results
- Screaming Frog export — if the client provides one, use it (SF renders JavaScript)
Reporting "no schema found" based solely on web_fetch or curl leads to false audit findings — these tools can't see JS-injected schema.
Priority Order
- Crawlability & Indexation (can Google find and index it?)
- Technical Foundations (is the site fast and functional?)
- On-Page Optimization (is content optimized?)
- Content Quality (does it deserve to rank?)
- Authority & Links (does it have credibility?)
Technical SEO Audit
Crawlability
Robots.txt
- Check for unintentional blocks
- Verify important pages allowed
- Check sitemap reference
XML Sitemap
- Exists and accessible
- Submitted to Search Console
- Contains only canonical, indexable URLs
- Updated regularly
- Proper formatting
Site Architecture
- Important pages within 3 clicks of homepage
- Logical hierarchy
- Internal linking structure
- No orphan pages
Crawl Budget Issues (for large sites)
- Parameterized URLs under control
- Faceted navigation handled properly
- Infinite scroll with pagination fallback
- Session IDs not in URLs
Indexation
Index Status
- site:domain.com check
- Search Console coverage report
- Compare indexed vs. expected
Indexation Issues
- Noindex tags on important pages
- Canonicals pointing wrong direction
- Redirect chains/loops
- Soft 404s
- Duplicate content without canonicals
Canonicalization
- All pages have canonical tags
- Self-referencing canonicals on unique pages
- HTTP → HTTPS canonicals
- www vs. non-www consistency
- Trailing slash consistency
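A quick way to spot-check the protocol, www, and trailing-slash consistency items above is a throwaway Node 18+ script (example.com stands in for the audited domain) showing where each URL variant redirects:

// Node 18+ ES module: confirm non-canonical variants redirect to a single canonical form
const variants = [
  'http://example.com/',
  'https://www.example.com/',
  'https://example.com/pricing/', // trailing-slash variant
]
for (const url of variants) {
  const res = await fetch(url, { redirect: 'manual' })
  console.log(url, '→', res.status, res.headers.get('location') ?? '(no redirect)')
}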
Site Speed & Core Web Vitals
Core Web Vitals
- LCP (Largest Contentful Paint): < 2.5s
- INP (Interaction to Next Paint): < 200ms
- CLS (Cumulative Layout Shift): < 0.1
Speed Factors
- Server response time (TTFB)
- Image optimization
- JavaScript execution
- CSS delivery
- Caching headers
- CDN usage
- Font loading
Tools
- PageSpeed Insights
- WebPageTest
- Chrome DevTools
- Search Console Core Web Vitals report
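For field measurements to complement the lab tools above, a minimal client-side sketch using Google's web-vitals package (assuming it is added to the project; thresholds match the targets listed earlier):

// Client-side sketch using the web-vitals package (npm install web-vitals)
import { onLCP, onINP, onCLS } from 'web-vitals'

onLCP((m) => console.log('LCP', m.value, m.rating)) // target: < 2500 ms
onINP((m) => console.log('INP', m.value, m.rating)) // target: < 200 ms
onCLS((m) => console.log('CLS', m.value, m.rating)) // target: < 0.1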
Mobile-Friendliness
- Responsive design (not separate m. site)
- Tap target sizes
- Viewport configured
- No horizontal scroll
- Same content as desktop
- Mobile-first indexing readiness
Security & HTTPS
- HTTPS across entire site
- Valid SSL certificate
- No mixed content
- HTTP → HTTPS redirects
- HSTS header (bonus)
URL Structure
- Readable, descriptive URLs
- Keywords in URLs where natural
- Consistent structure
- No unnecessary parameters
- Lowercase and hyphen-separated
On-Page SEO Audit
Title Tags
Check for:
- Unique titles for each page
- Primary keyword near beginning
- 50-60 characters (visible in SERP)
- Compelling and click-worthy
- No brand name taking up space (Google already shows the site name above the title in SERPs)
Common issues:
- Duplicate titles
- Too long (truncated)
- Too short (wasted opportunity)
- Keyword stuffing
- Missing entirely
Meta Descriptions
Check for:
- Unique descriptions per page
- 150-160 characters
- Includes primary keyword
- Clear value proposition
- Call to action
Common issues:
- Duplicate descriptions
- Auto-generated garbage
- Too long/short
- No compelling reason to click
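A quick in-browser spot check for the title and description items above (illustrative; run in the DevTools console on any page you are auditing):

// DevTools console: report the page's title and meta description with character counts
const title = document.title
const desc = document.querySelector('meta[name="description"]')?.content ?? '(missing)'
console.log(`title (${title.length} chars):`, title)
console.log(`description (${desc.length} chars):`, desc)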
Heading Structure
Check for:
- One H1 per page
- H1 contains primary keyword
- Logical hierarchy (H1 → H2 → H3)
- Headings describe content
- Not just for styling
Common issues:
- Multiple H1s
- Skip levels (H1 → H3)
- Headings used for styling only
- No H1 on page
Content Optimization
Primary Page Content
- Keyword in first 100 words
- Related keywords naturally used
- Sufficient depth/length for topic
- Answers search intent
- Better than competitors
Thin Content Issues
- Pages with little unique content
- Tag/category pages with no value
- Doorway pages
- Duplicate or near-duplicate content
Image Optimization
Check for:
- Descriptive file names
- Alt text on all images
- Alt text describes image
- Compressed file sizes
- Modern formats (WebP)
- Lazy loading implemented
- Responsive images
Internal Linking
Check for:
- Important pages well-linked
- Descriptive anchor text
- Logical link relationships
- No broken internal links
- Reasonable link count per page
Common issues:
- Orphan pages (no internal links)
- Over-optimized anchor text
- Important pages buried
- Excessive footer/sidebar links
Keyword Targeting
Per Page
- Clear primary keyword target
- Title, H1, URL aligned
- Content satisfies search intent
- Not competing with other pages (cannibalization)
Site-Wide
- Keyword mapping document
- No major gaps in coverage
- No keyword cannibalization
- Logical topical clusters
Content Quality Assessment
E-E-A-T Signals
Experience
- First-hand experience demonstrated
- Original insights/data
- Real examples and case studies
Expertise
- Author credentials visible
- Accurate, detailed information
- Properly sourced claims
Authoritativeness
- Recognized in the space
- Cited by others
- Industry credentials
Trustworthiness
- Accurate information
- Transparent about business
- Contact information available
- Privacy policy, terms
- Secure site (HTTPS)
Content Depth
- Comprehensive coverage of topic
- Answers follow-up questions
- Better than top-ranking competitors
- Updated and current
User Engagement Signals
- Time on page
- Bounce rate in context
- Pages per session
- Return visits
Common Issues by Site Type
SaaS/Product Sites
- Product pages lack content depth
- Blog not integrated with product pages
- Missing comparison/alternative pages
- Feature pages thin on content
- No glossary/educational content
E-commerce
- Thin category pages
- Duplicate product descriptions
- Missing product schema
- Faceted navigation creating duplicates
- Out-of-stock pages mishandled
Content/Blog Sites
- Outdated content not refreshed
- Keyword cannibalization
- No topical clustering
- Poor internal linking
- Missing author pages
Local Business
- Inconsistent NAP
- Missing local schema
- No Google Business Profile optimization
- Missing location pages
- No local content
Output Format
Audit Report Structure
Executive Summary
- Overall health assessment
- Top 3-5 priority issues
- Quick wins identified
Technical SEO Findings
For each issue:
- Issue: What's wrong
- Impact: SEO impact (High/Medium/Low)
- Evidence: How you found it
- Fix: Specific recommendation
- Priority: 1-5 or High/Medium/Low
On-Page SEO Findings
Same format as above
Content Findings
Same format as above
Prioritized Action Plan
- Critical fixes (blocking indexation/ranking)
- High-impact improvements
- Quick wins (easy, immediate benefit)
- Long-term recommendations
References
- AI Writing Detection: Common AI writing patterns to avoid (em dashes, overused phrases, filler words)
- For AI search optimization (AEO, GEO, LLMO, AI Overviews), see the ai-seo skill
Tools Referenced
Free Tools
- Google Search Console (essential)
- Google PageSpeed Insights
- Bing Webmaster Tools
- Rich Results Test (use this for schema validation — it renders JavaScript)
- Mobile-Friendly Test
- Schema Validator
Note on schema detection:
web_fetchstrips<script>tags (including JSON-LD) and cannot detect JS-injected schema. Use the browser tool, Rich Results Test, or Screaming Frog instead — they render JavaScript and capture dynamically-injected markup. See the Schema Markup Detection Limitation section above.
Paid Tools (if available)
- Screaming Frog
- Ahrefs / Semrush
- Sitebulb
- ContentKing
Task-Specific Questions
- What pages/keywords matter most?
- Do you have Search Console access?
- Any recent changes or migrations?
- Who are your top organic competitors?
- What's your current organic traffic baseline?
Related Skills
- ai-seo: For optimizing content for AI search engines (AEO, GEO, LLMO)
- programmatic-seo: For building SEO pages at scale
- site-architecture: For page hierarchy, navigation design, and URL structure
- schema-markup: For implementing structured data
- page-cro: For optimizing pages for conversion (not just ranking)
- analytics-tracking: For measuring SEO performance
FAQ
What does seo-audit do?
It audits, reviews, and diagnoses SEO issues on a site, covering technical SEO, on-page SEO, indexation, ranking drops, page speed, Core Web Vitals, and crawl errors, and returns a prioritized findings report. Even a vague request like "my SEO is bad" starts with an audit. For building pages at scale to target keywords, see programmatic-seo; for structured data, see schema-markup; for AI search optimization, see ai-seo.
When should I use seo-audit?
Use it when a site needs SEO diagnosis (lost rankings, a traffic drop, indexing problems, or a general health check) and you want a repeatable audit workflow that ends in a prioritized text report.
What does seo-audit output?
In the evaluated run it produced a text response: a prioritized audit report with per-issue findings, impact ratings, and specific fixes.
How do I install or invoke seo-audit?
Ask the agent to use this skill when the task matches its documented workflow.
Which agents does seo-audit support?
Agent support is inferred from the source, but not explicitly declared.
What tools, channels, or permissions does seo-audit need?
It uses curl; channels commonly include text; permissions include filesystem:read, network:outbound, and process:spawn.
Is seo-audit safe to install?
Static analysis marked this skill as low risk; review side effects and permissions before enabling it.
How is seo-audit different from an MCP or plugin?
A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.
Does seo-audit outperform not using a skill?
About seo-audit
When to use seo-audit
When a site has lost rankings or organic traffic and needs diagnosis. When reviewing a site for crawlability, indexation, metadata, and page quality issues. When preparing a prioritized SEO health check for specific pages or an entire site.
When seo-audit is not the right choice
When the goal is to implement schema markup rather than audit SEO issues. When the task is building large-scale keyword-targeted landing pages instead of diagnosing SEO performance.
What it produces
Produces a text response: a prioritized SEO audit report with findings and fixes.