AEO Checklist

131 items across 9 categories.

Item
Evaluation
Impact
Cost
Priority

Brand basics captured (name, category, pricing tier, location)

LLMs need clear entity data to correctly categorize and describe your brand. Without basics, responses are generic or wrong.

Target markets identified (local, regional, global)

LLMs tailor responses by geography. Knowing your markets ensures optimization for the right regional queries.

Dream queries collected (5-10 with 'Why It Matters')

These are the searches that matter most to your business. Without them, you optimize for vanity queries instead of revenue.

7 dream query types covered (outcome, problem, objection, attribute, decision, fear, lifestyle)

Different query intents require different content strategies. Covering all types ensures full-funnel LLM visibility.

Primary positioning confirmed (pick one)

LLMs can only store one clear positioning. Vague or multiple positions lead to confused or missing recommendations.

Top 3 competitors listed with tier mapping

Knowing your competitive tier ensures you compare to the right brands. Comparing down damages premium positioning.

Key facts documented (3 core claims to triangulate)

LLMs verify claims across sources. Having 3 triangulatable facts increases citation confidence and accuracy.

Truth file created (YAML with pricing, features, company facts)

A single source of truth prevents inconsistencies across platforms that confuse LLMs and reduce trust signals.
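A truth file might be sketched like this. The filename `truth.yaml`, the field names, and every value below are hypothetical placeholders, not a prescribed schema; adapt the structure to whatever facts your brand needs to keep consistent.

```yaml
# truth.yaml — hypothetical example; all names and values are placeholders
company:
  name: "Acme Analytics"
  founded: 2019
  headquarters: "Austin, TX"
pricing:
  starter: "$29/month"
  pro: "$79/month"
  trial: "14-day free trial"
features:
  - "Real-time dashboards"
  - "CSV and API export"
claims:
  - "2,400+ businesses served"
```

Every platform profile, page, and press release should pull its wording from this one file.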

Baseline scorecard completed before optimization

Without a baseline, you can't measure improvement. Track mention rate, accuracy, and positioning per LLM.

Intake questionnaire completed before any audit work begins

Rushing to audit without intake wastes effort. Proper intake ensures audits test the queries that actually matter.


robots.txt allows AI crawlers (GPTBot, OAI-SearchBot, ChatGPT-User, ClaudeBot, PerplexityBot, Applebot, Bytespider)

Blocked crawlers mean you're invisible to those LLMs. Many sites accidentally block AI bots, mistaking them for spam.
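A minimal robots.txt that explicitly allows the crawlers named above looks like this (the user-agent tokens are the ones listed in the item; verify each vendor's current token before relying on it):

```
# robots.txt — explicitly allow the AI crawlers named above
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Applebot
Allow: /

User-agent: Bytespider
Allow: /
```

Also check that no blanket `User-agent: * / Disallow: /` rule earlier in the file overrides these entries.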

Critical pages server-side rendered (SSR)

AI crawlers often don't execute JavaScript. Client-rendered content may be invisible to them entirely.

Page load under 3 seconds

Slow pages get abandoned by crawlers before content loads. Speed affects both indexing and user experience.

Schema.org markup on key pages (Organization, Product, FAQPage, HowTo, plus sameAs links)

Structured data helps LLMs understand entity relationships and verify facts across sources.
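An Organization block with `sameAs` links is one common starting point. This JSON-LD sketch uses hypothetical names and URLs; it belongs inside a `<script type="application/ld+json">` tag in the page head:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Acme Analytics",
  "url": "https://example.com",
  "sameAs": [
    "https://www.linkedin.com/company/acme-analytics",
    "https://www.crunchbase.com/organization/acme-analytics"
  ]
}
```

The `sameAs` URLs should point at the same external profiles you align phrasing on in the entity checklist.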

Sitemap.xml submitted to Google Search Console and Bing Webmaster Tools

Sitemaps tell search engines (and the LLMs that use them) which pages exist and when they were updated.

HTML tables on key pages for Gemini grounding

Gemini specifically uses tables for grounding verification. Tables receive higher trust scores than prose.
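A grounding table is just plain HTML with a header row and data rows, no images or PDFs. A hypothetical pricing example:

```html
<!-- Hypothetical pricing table: plain HTML so crawlers can parse every cell -->
<table>
  <thead>
    <tr><th>Plan</th><th>Price</th><th>Seats</th></tr>
  </thead>
  <tbody>
    <tr><td>Starter</td><td>$29/month</td><td>3</td></tr>
    <tr><td>Pro</td><td>$79/month</td><td>10</td></tr>
  </tbody>
</table>
```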

HTTPS everywhere

HTTP sites are penalized by search engines and flagged as untrustworthy. LLMs inherit this trust signal.

Clean URL structure

Human-readable URLs help LLMs understand page purpose and improve citation formatting in responses.

Canonical tags configured on all pages

Prevents duplicate content confusion. LLMs may cite the wrong URL version without proper canonicals.
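The tag itself is one line in the page `<head>`; every duplicate or parameterized variant of a page should point at the single URL you want cited (URL here is a placeholder):

```html
<!-- In <head>: all variants of this page declare one canonical URL -->
<link rel="canonical" href="https://example.com/pricing" />
```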

llms.txt file created at site root

A dedicated file for LLMs to read, containing structured brand information in an easy-to-parse format.
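The llms.txt proposal uses Markdown: an H1 brand name, a blockquote summary, then linked sections. A hypothetical sketch (all names, prices, and URLs are placeholders):

```markdown
# Acme Analytics
> Real-time analytics for small businesses. Plans from $29/month. Based in Austin, TX.

## Key pages
- [Pricing](https://example.com/pricing): plan tiers and trial details
- [Features](https://example.com/features): full capability list

## Facts
- Founded 2019; serves 2,400+ businesses
```

Keep its claims identical to the truth file so it reinforces, rather than contradicts, your other sources.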

No critical content hidden behind JS tabs or accordions

Content inside collapsed elements may not be crawled. Key information must be visible in initial HTML.

Mobile rendering verified for AI crawlers

Some crawlers use mobile user-agents. Ensure mobile view contains the same critical information.

AI crawler log monitoring configured

Track when AI bots visit your site. Spikes after content updates indicate successful cache refreshes.
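A minimal log check can be scripted by scanning access-log lines for crawler user-agent tokens. The sketch below assumes plain-text logs where the user-agent appears somewhere in each line; the bot list and sample lines are illustrative:

```python
from collections import Counter

# AI crawler tokens to watch for in the user-agent field (extend as needed)
AI_BOTS = ("GPTBot", "OAI-SearchBot", "ClaudeBot", "PerplexityBot")

def count_ai_bot_hits(log_lines):
    """Count hits per AI crawler across raw access-log lines."""
    hits = Counter()
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1
                break  # one bot per line is enough
    return hits

sample = [
    '1.2.3.4 - - [01/Jan/2025] "GET /pricing HTTP/1.1" 200 "Mozilla/5.0 GPTBot/1.0"',
    '5.6.7.8 - - [01/Jan/2025] "GET /features HTTP/1.1" 200 "PerplexityBot/1.0"',
]
print(count_ai_bot_hits(sample))  # Counter({'GPTBot': 1, 'PerplexityBot': 1})
```

Run it weekly over fresh logs and compare counts against the previous window to spot spikes or drops.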

Cache-busting parameters ready for content updates

LLMs cache aggressively. URL parameters can force fresh fetches after important content changes.
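One way to generate a cache-busting URL is to append a timestamp query parameter. This is a sketch, not a guaranteed method; the parameter name `v` is arbitrary, and whether a given LLM actually refetches depends on that platform's cache behavior:

```python
import time
from urllib.parse import urlparse, urlunparse, urlencode, parse_qsl

def cache_busted(url, param="v"):
    """Append a timestamp query parameter so the URL looks new to a crawler."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query[param] = str(int(time.time()))  # e.g. v=1735689600
    return urlunparse(parts._replace(query=urlencode(query)))

print(cache_busted("https://example.com/pricing"))
# e.g. https://example.com/pricing?v=1735689600
```

Pair this with your canonical tags so the parameterized URL never becomes the cited version.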


First 50 words state WHO / WHAT / WHERE / PRICE

LLMs extract the first 50 words for summaries. If basics aren't there, responses will be vague or wrong.

No rhetorical questions as page openers

Rhetorical questions waste the critical first 50 words. LLMs interpret them as content, not marketing hooks.

No vague promises in opening copy

"Unlock potential" tells LLMs nothing. Specific claims become extractable facts for recommendations.

TL;DR or Key Takeaways at top of long-form pages

LLMs prefer scannable summaries. A TL;DR gives them the perfect extraction point for accurate responses.

Pricing hard-coded in HTML text (not behind Contact Us)

"Contact for pricing" means LLMs can't answer price questions. Visible pricing enables accurate recommendations.

Specific numbers replace vague claims

"Thousands of customers" can't be verified. "2,400+ businesses" is specific, citable, and triangulatable.

HTML tables for structured data (not images or PDFs)

LLMs can't read images or PDFs inline. Tables are the highest-fidelity data format for extraction.

FAQ section on every key page in Q&A format

FAQs match how users query LLMs. Pre-answered questions become direct response material.

Hybrid page structure: LLM-optimized first 150 words then sales copy

Give LLMs what they need upfront, then write persuasive copy for humans below the fold.

H2/H3 hierarchy covers features, pricing, use cases

Clear heading structure helps LLMs parse page sections. They can extract specific topics more accurately.

Alt text on images includes full data context

LLMs read alt text when they can't see images. Descriptive alt text makes visual data accessible.

Comparison content vs top 3 competitors

Users ask LLMs "X vs Y" constantly. Having comparison pages ensures your perspective is included.

Best X and category pages created

"Best CRM for startups" queries need content to match. Own your category with honest comparison pages.

Zero-volume keyword content created

People ask LLMs questions they'd never Google. Forum and support ticket language reveals these queries.

Content updated quarterly for freshness signals

LLMs deprioritize stale content. Regular updates signal that information is current and maintained.

Conversion phrases replaced with specifics

"Get started" is meaningless to LLMs. Specific CTAs like "14-day free trial" become extractable features.

Direct customer quotes included on key pages

Named testimonials with companies add social proof LLMs can cite as third-party validation.

Statistics cited with verifiable sources

Unsourced stats get discarded. Cited sources ("According to Forrester") increase citation confidence.

Grounding chunks optimized: tables > lists > paragraphs

Gemini assigns trust scores by format. Tables get 5 stars, lists get 3, paragraphs get 2.

Visual + table pairing for key data points (dual verification)

Gemini cross-references visuals with data. Pairing charts with tables enables dual verification.


Google Business Profile claimed and fully completed

GBP is Gemini's primary local entity source. Incomplete profiles lead to inaccurate or missing local recommendations.

Crunchbase profile created with accurate data

Crunchbase is a high-authority source LLMs trust for company data. Matches their verification patterns.

LinkedIn company page created with aligned phrasing

LinkedIn is indexed heavily by Bing (ChatGPT's source). Consistent phrasing reinforces entity understanding.

G2 profile created and optimized (B2B)

G2 is the primary B2B software review source LLMs cite. Category placement affects discovery queries.

Capterra profile created with accurate info

Capterra provides additional triangulation for B2B software. Multiple review sources increase trust.

Trustpilot profile claimed

Trustpilot reviews are cited in LLM responses about reputation. Active profiles show engagement.

Product Hunt listing created

Product Hunt provides launch context and early adopter validation LLMs can cite.

AlternativeTo listing created

AlternativeTo is specifically structured for comparison queries LLMs receive frequently.

Wikipedia or Wikidata presence (if brand is notable)

Wikipedia and Wikidata are highest-trust sources. Even Wikidata entries improve entity recognition.

Phrase alignment verified across all external platforms

LLMs look for consensus. Inconsistent descriptions across platforms reduce confidence in any single claim.

3+ external sources repeat core brand claim (Consensus Campaign)

LLMs verify claims by triangulation. Three matching sources create high-confidence consensus.

Press release distributed with brand positioning language

PR wire services are crawled by news aggregators. Creates authoritative external validation.

Review acquisition workflow active (7-14 day post-success trigger)

Fresh reviews signal active business. Review recency affects LLM confidence in recommendations.

Author credentials displayed on content pages

E-E-A-T signals matter. Named authors with credentials increase content trust for Gemini especially.

Backlinks from topically-relevant authoritative sites

Quality backlinks signal domain authority. LLMs inherit trust signals from the sources they rely on.

Phrase alignment matrix documented (which platform says what)

Tracking exact phrasing across platforms reveals inconsistencies that need fixing for consensus.

Entity establishment priority tiers assigned

Not all platforms matter equally. Focus effort on Tier 1 sources that LLMs actually use.


ChatGPT: Bing Webmaster Tools verified

ChatGPT's web search uses Bing. If Bing doesn't index you, ChatGPT's real-time search won't find you.

ChatGPT: OAI-SearchBot crawling confirmed in server logs

OAI-SearchBot is OpenAI's crawler. Log presence confirms they're indexing your content directly.

ChatGPT: Key pages force-fetched via prompt

ChatGPT can fetch URLs on demand. Prompting it to read specific pages primes the cache.

ChatGPT: Cache fingerprint checked (utm_source=openai = stale)

URLs with utm_source=openai indicate old Bing cache, not fresh fetches. Helps diagnose staleness.
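Checking a batch of cited URLs for the fingerprint is a one-liner per URL. The sketch below treats `utm_source=openai` as the stale-cache signal described above; that interpretation is the checklist's heuristic, not a documented OpenAI guarantee:

```python
from urllib.parse import urlparse, parse_qs

def looks_stale(cited_url):
    """True when a URL cited by ChatGPT carries the utm_source=openai
    fingerprint, which this checklist treats as an old-cache signal."""
    query = parse_qs(urlparse(cited_url).query)
    return query.get("utm_source") == ["openai"]

print(looks_stale("https://example.com/pricing?utm_source=openai"))  # True
print(looks_stale("https://example.com/pricing"))                    # False
```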

ChatGPT: Cache-busting used after content updates

ChatGPT caches aggressively. URL parameters can force fresh fetches when content changes.

ChatGPT: 3-layer cache architecture understood and documented

Understanding where ChatGPT pulls data from helps diagnose why information is stale or wrong.

Gemini: Google Search Console verified

Gemini grounds against Google Search results. GSC verification ensures proper indexing.

Gemini: E-E-A-T signals strong across site

Gemini heavily weights E-E-A-T (Experience, Expertise, Authority, Trust). These affect grounding confidence.

Gemini: Grounding Blocks on key pages (HTML tables with stats)

Gemini specifically looks for tables and structured data to verify claims. Tables get highest trust.

Gemini: Top 10 Google ranking for entity queries

Gemini grounds against top Google results. If you don't rank, you won't be cited.

Gemini: Statistics with verifiable sources on key pages

Gemini's grounding flow verifies stats against sources. Unsourced numbers get discarded.

Gemini: Grounding flow understood (Generate > Verify > Correct > Cite)

Understanding Gemini's 4-step process helps optimize for what it actually checks.

Perplexity: Top Google results for target queries

Perplexity pulls from top Google results. Google SEO directly impacts Perplexity visibility.

Perplexity: Citation-friendly content structure

Perplexity heavily cites sources. Clear factual statements with attributions get cited more.

Claude: Brave Search presence verified

Claude uses Brave Search for web results. Ensure key pages appear in Brave search results.

All LLMs: Force-fetch each updated page individually

LLMs don't automatically re-crawl. You must prompt them to read updated pages specifically.

All LLMs: Seeding prompts used after major launches

Asking LLMs about your brand primes their cache for future users asking similar questions.

All LLMs: Branded query returns accurate information

Baseline test: if "What is [Brand]?" is wrong, everything else will be wrong too.

All LLMs: Non-branded discovery queries tested

Discovery queries ("best X for Y") are where you gain new customers. Test if you appear.

All LLMs: Per-page force-fetch after ANY content update

Every content change requires manual cache refresh. Build this into your publishing workflow.


Weekly LLM citation check across all major LLMs

LLM responses change frequently. Weekly checks catch regressions before they cost you customers.

10-run consistency test per key query (monthly)

LLMs give different answers on each run, so a single test is unreliable. Ten runs reveal the true mention rate.
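Computing the mention rate over a batch of runs can be sketched as a simple case-insensitive substring check; real tracking would also handle brand aliases and misspellings, which this example ignores:

```python
def mention_rate(responses, brand):
    """Fraction of LLM responses that mention the brand (case-insensitive)."""
    if not responses:
        return 0.0
    hits = sum(brand.lower() in r.lower() for r in responses)
    return hits / len(responses)

# Hypothetical transcripts from 3 of the 10 runs
runs = [
    "Top picks include Acme and two rivals.",
    "I'd recommend RivalCo for this use case.",
    "Acme is a solid choice for startups.",
]
print(mention_rate(runs, "Acme"))  # 2 of 3 runs mention the brand
```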

6-dimension tracking per 10-run test

Mention rate alone isn't enough. Track position, accuracy, citations, and sentiment for the full picture.

Consistency scoring scale applied

Standardized scoring enables comparison over time and across clients. Makes progress measurable.

Crawler log monitoring (OAI-SearchBot, ClaudeBot, PerplexityBot, GPTBot)

Crawler visits indicate active indexing. Sudden drops may signal blocking or technical issues.

Cache freshness check using utm_source=openai fingerprint

ChatGPT's utm_source parameter reveals cache age. Helps diagnose why updates aren't appearing.

GA4 AI referral tracking configured

Track traffic from LLM platforms. Proves ROI and shows which platforms drive actual visitors.
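Classifying referrers in exported analytics data can be sketched as a hostname lookup. The referrer hostnames below are the commonly observed ones for each platform, but treat the mapping as an assumption to verify against your own traffic:

```python
from urllib.parse import urlparse

# Assumed referrer hostnames per LLM platform; extend as new ones appear
AI_REFERRERS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
}

def classify_ai_referrer(referrer_url):
    """Return the LLM platform name for a referrer URL, or None."""
    host = urlparse(referrer_url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    return AI_REFERRERS.get(host)

print(classify_ai_referrer("https://chatgpt.com/c/abc123"))  # ChatGPT
print(classify_ai_referrer("https://www.google.com/"))       # None
```

In GA4 itself, the equivalent is a custom channel group or an exploration filtered on these referrer hostnames.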

Competitor LLM presence monitored monthly

Know who you're competing against in AI responses. Competitors may appear while you don't.

Hallucination baseline documented

Before fixing hallucinations, document what's wrong. Baseline enables measuring improvement.

Accuracy scoring per query tracked

Mentions aren't valuable if inaccurate. Track factual correctness alongside visibility.

Citation source analysis completed

Knowing which sources LLMs cite reveals where to focus optimization efforts.

Sentinel system documented (Monitor > Alert > Refetch > Report)

Automated monitoring catches issues faster than manual checks. Essential for ongoing AEO.

Stale page force-fetch workflow documented

Team needs clear steps for cache refresh. Documented runbooks ensure consistent execution.

Quarterly strategy review scheduled

AEO landscape changes rapidly. Quarterly reviews catch strategy drift and new opportunities.

Sentinel alerts configured for misinformation or competitor dominance

Automated alerts when LLMs start giving wrong info or competitors overtake you in responses.


Dream queries identified (5-10 per client)

Dream queries are the searches that matter most to your business. Content strategy flows from these.

Pillar pages created for main topics

Comprehensive pillar pages establish topical authority. LLMs prefer citing authoritative sources.

Comparison pages for top 5 competitors

"X vs Y" is one of the most common LLM query patterns. Own the comparison or competitors will.

Feature and use case pages created

LLMs answer feature-specific questions. Dedicated pages ensure accurate, citable responses.

/pricing page with clear tiers in HTML text

Pricing questions are extremely common. Without a clear pricing page, LLMs can't help buyers.

/features overview page with structured sections

A structured features page helps LLMs answer "what can X do?" questions accurately.

Glossary terms for industry keywords

Definition pages establish expertise. LLMs often cite glossary content for "what is X?" queries.

Zero-volume keywords researched (forums, support tickets, Reddit)

People ask LLMs questions they'd never Google. Forums reveal these hidden query patterns.

Content lifecycle phases defined with actions per phase

Content has a lifecycle. Knowing what to create, grow, refresh, or retire prevents wasted effort.

Content decay review scheduled quarterly

Outdated content hurts more than no content. LLMs may cite old info as current fact.

Data study planned with original research (N=X study)

Original research is extremely citable. LLMs love citing specific studies with sample sizes.

Integration partner pages created

"Does X integrate with Y?" is common. Integration pages answer these and capture partner traffic.

Dictionary definition pages for key industry terms

"What is X?" queries need authoritative definitions. Own the definition and you own the category.

N=X study specs documented (sample size, URL, refresh cycle)

Studies need credible specs. Sample size, methodology, and refresh dates affect citation trust.


Top 5 competitors identified via LLM queries

Your assumed competitors may differ from who LLMs recommend. Let AI show you who you're really competing with.

Competitor LLM visibility audited

Know where you stand. If competitors have 80% mention rate and you have 10%, you have work to do.

Keyword gap analysis completed (DataForSEO or similar)

Keywords competitors rank for reveal content opportunities. Fill gaps to capture their traffic.

vs comparison pages created for each competitor

Own the comparison narrative. If you don't create vs pages, competitors or third parties will.

Competitor weaknesses documented from LLM responses

LLMs sometimes mention competitor limitations. These are opportunities to differentiate.

Content gaps identified and prioritized

Topics competitors cover that you don't are missed opportunities. Prioritize by business impact.

Share of voice tracked across LLMs

Share of voice measures competitive standing. Track monthly to see if you're gaining or losing ground.
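Given per-brand mention counts from your monthly test runs, share of voice is each brand's fraction of total mentions. A minimal sketch with hypothetical counts:

```python
def share_of_voice(mention_counts):
    """Each brand's share of total mentions across a set of LLM responses."""
    total = sum(mention_counts.values())
    if total == 0:
        return {brand: 0.0 for brand in mention_counts}
    return {brand: n / total for brand, n in mention_counts.items()}

# Hypothetical mention counts from one month of 10-run tests
monthly = {"YourBrand": 4, "RivalA": 10, "RivalB": 6}
print(share_of_voice(monthly))  # {'YourBrand': 0.2, 'RivalA': 0.5, 'RivalB': 0.3}
```

Track the same query set month over month so changes reflect real movement, not a shifting denominator.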

Competitor new pages and features monitored

Competitors adapt too. Monitor their content changes to stay ahead and respond to positioning shifts.

Sources competitors get cited from identified

If LLMs cite G2 for competitors, you need a strong G2 presence. Follow the citation trail.

Counter-content created for competitor-dominated queries

Don't concede queries to competitors. Create superior content to take back share of voice.

Forum/Reddit research completed for real competitors

Forums reveal who users actually compare you to, not who you think you compete with.

Competitor tier mapping from forum language

User language reveals competitive tiers. "Premium like X" vs "budget like Y" shows positioning.

Premium positioning rules applied (never compare down)

Comparing to budget alternatives signals you're budget. Only compare to equals or aspirational brands.


Satellite domain strategy planned

Satellite domains can rank for category queries and link to you. Controls more search real estate.

Listicle seeding targets identified (top-3 ranking listicles)

Top-ranking listicles are cited by LLMs. Getting included means getting recommended.

Per-page force-fetch runbook created

Without a documented process, cache refreshes are ad-hoc and inconsistent. Runbooks ensure execution.

Seeding prompts documented per LLM

Each LLM responds to different prompt patterns. Document what works for consistent seeding.

Cache-busting workflow tested after content updates

Not all cache-busting works. Test your workflow to verify LLMs actually fetch fresh content.

LinkedIn article published with brand positioning

Bing heavily indexes LinkedIn. Long-form articles with brand positioning feed ChatGPT directly.

YouTube video transcript includes brand entity data

Gemini indexes YouTube transcripts. Video content becomes another triangulation source.

Reddit/forum presence established for brand queries

Forums are cited by LLMs for authentic user opinions. Organic presence builds credibility.