
AI TOOLS

AI Competitive Intelligence: How to Track What AI Says About You and Your Competitors

2026-04-06


Most competitive intelligence stops at Google. But Google rank has near-zero correlation with AI citations (rho = -0.02 to 0.11 across platforms), and only 1.4% of cited URLs overlap between ChatGPT, Perplexity, Claude, and Gemini for the same query (Lee, 2026a). If you are not tracking what AI says about your brand and your competitors on every platform separately, you are missing almost the entire picture.

AI platforms are reshaping how buyers discover and choose vendors. A brand dominating Google's top three could be invisible on Perplexity. A competitor on page two of Google could be the top recommendation in ChatGPT. The only way to know who wins where is to check each platform yourself.

This guide covers the full workflow, from free manual checks you can run in about 70 minutes to a complete competitive audit framework that tracks citation frequency, sentiment, and the structural page features that explain why one brand gets cited over another. Every claim is grounded in published research spanning 19,556 queries and 10,293 crawled pages across the major AI platforms.

🖥️ HOW TO CHECK IF AI CITES YOUR WEBSITE (MANUAL METHOD)

The simplest starting point is to ask each platform directly. No special tools are needed. Just a browser and a spreadsheet.

ChatGPT (with Search Enabled)

  1. Open ChatGPT in a browser (web UI, not API)
  2. Make sure web search is enabled (the globe icon should be active)
  3. Type queries your target audience would ask
  4. Look for inline citations or "Sources" links at the bottom
  5. Click through to verify if any link to your domain

Important caveat: the API and the web UI produce different citation behavior. Depending on the platform, Reddit appears in 17% to 44% of web UI responses but in 0% of API responses (Lee, 2026a). If you monitor through the API, your data will not match what real users see.

Perplexity

  1. Go to perplexity.ai
  2. Run the same queries
  3. Perplexity shows numbered source citations inline, making it the easiest platform to audit
  4. Check the sidebar for the full source list

Perplexity has a strong freshness bias, pulling from its pre-built index. Recently published or updated content may get picked up faster here than on other platforms.

Claude

  1. Open claude.ai in a browser
  2. Run queries (Claude performs live fetches using the Claude-User bot)
  3. Look for cited URLs in the response
  4. Claude respects robots.txt strictly. If you block the Claude-User bot, you will never appear.

Google AI Mode

  1. Open Google Search
  2. Look for AI-generated summaries at the top of results
  3. Check the cited sources within the AI overview
  4. Google AI Mode inherits traditional Google ranking signals, so your existing SEO foundation matters more here than on other platforms

What a Manual Check Session Looks Like

| Step | Action | Time |
|---|---|---|
| 1 | List 10 to 15 target queries | 10 min |
| 2 | Run each query on ChatGPT | 15 min |
| 3 | Run each query on Perplexity | 10 min |
| 4 | Run each query on Claude | 15 min |
| 5 | Run each query on Google AI Mode | 10 min |
| 6 | Log results in a spreadsheet | 10 min |
| Total | | ~70 min |

For a free initial assessment of where you stand, use our AI Visibility Quick Check.

The Bottom Line: Manual checking works for a baseline audit. But running this weekly across 15 queries and 4 platforms means nearly 5 hours per month. For ongoing monitoring, automation is necessary.

⚠️ THE 1.4% OVERLAP PROBLEM (WHY YOU MUST CHECK ALL PLATFORMS)

This is the single most important number in AI citation monitoring. Lee (2026a) tested 19,556 queries across ChatGPT, Claude, Perplexity, and Gemini. The Jaccard similarity of cited URLs across platforms was 0.014. That means 98.6% of the URLs cited by one platform were not cited by any other platform for the same query.
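
To see what that number means in practice, you can compute the same overlap on your own audit data with basic set math. The sketch below is a minimal Python illustration; the platform names and URLs are made-up placeholders, not data from the study.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: URLs cited by both platforms / URLs cited by either."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Placeholder data: cited URLs per platform for one query
citations = {
    "chatgpt":    {"https://example.com/guide", "https://vendor-a.com/pricing"},
    "perplexity": {"https://review-site.com/best-tools", "https://vendor-b.com/"},
    "claude":     {"https://example.com/guide"},
    "gemini":     {"https://news-site.com/roundup"},
}

platforms = list(citations)
for i, p1 in enumerate(platforms):
    for p2 in platforms[i + 1:]:
        print(f"{p1} vs {p2}: {jaccard(citations[p1], citations[p2]):.3f}")
```

Averaging those pairwise scores across all of your audit queries gives you a local version of the 0.014 figure, so you can see whether your niche is more or less fragmented than the published average.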

Each platform uses a fundamentally different retrieval pipeline. ChatGPT fetches pages live through Bing. Perplexity pre-crawls with its own bot and serves from an index. Claude fetches on demand. Google AI Mode inherits from Google Search.

| Monitoring Approach | Coverage | Risk |
|---|---|---|
| Check ChatGPT only | ~25% of AI search | Missing 75% of citations |
| Check ChatGPT + Perplexity | ~50% | Missing half the picture |
| Check all 4 major platforms | ~95%+ | Time-intensive but comprehensive |
| Automated multi-platform | ~95%+ | Requires tooling investment |

The overlap problem also affects competitive intelligence. A competitor could dominate Perplexity for your target queries and be completely absent from ChatGPT. A "competitive audit" that only checks one platform tells you almost nothing about the other three, because fewer than 2% of cited URLs carry over. And an audit that only checks Google rankings captures almost none of the AI picture, given the near-zero correlation between rank and citation.

For a deeper look at how platforms differ, see our AI platform comparison.

The Bottom Line: Either monitor all platforms or accept that your data has massive blind spots. There is no shortcut around the 1.4% overlap.

🤖 AUTOMATED MONITORING APPROACHES

Manual checks do not scale. Here are the automated approaches available in 2026, from free to enterprise.

DIY Script Approach (Free)

Build a basic monitoring script using platform APIs (a minimal sketch in Python follows the list):

  1. Maintain a list of target queries in a spreadsheet or JSON file
  2. Send each query to each platform's API on a weekly schedule
  3. Parse responses for your domain in cited URLs
  4. Log results to a database or spreadsheet
  5. Track changes over time
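
Here is a minimal sketch of that loop in Python. The query_platform stub is a placeholder you would implement against each vendor's API (request formats differ per platform), and the domain list, query list, and CSV layout are illustrative assumptions rather than a prescribed schema.

```python
import csv
import re
from datetime import date

TARGET_DOMAINS = ["yourbrand.com", "competitor-a.com"]                   # placeholder domains
QUERIES = ["best [category] for [use case]", "is yourbrand worth it"]    # placeholder queries

def query_platform(platform: str, query: str) -> str:
    """Stub: call the platform's API here and return the full response text,
    including any cited URLs. The implementation differs per vendor."""
    raise NotImplementedError

def extract_domains(text: str) -> set:
    """Pull bare domains out of any URLs found in the response text."""
    return {m.lower() for m in re.findall(r"https?://(?:www\.)?([A-Za-z0-9.-]+)", text)}

with open("citation_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for platform in ["openai", "anthropic", "perplexity"]:
        for query in QUERIES:
            try:
                response = query_platform(platform, query)
            except NotImplementedError:
                continue  # skip platforms you have not wired up yet
            cited = extract_domains(response)
            for domain in TARGET_DOMAINS:
                writer.writerow([date.today(), platform, query, domain, domain in cited])
```

Run it from a weekly cron job and the CSV becomes your trend line; just keep the API-vs-web-UI caveat below in mind when interpreting the numbers.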

Limitation: API responses do not match what real users see. Reddit citations drop to 0% through the APIs of every platform that cites Reddit in the web UI (Claude is the exception only because it already shows 0% in both channels). A script-based approach will undercount citations for any platform with web-only search features.

Dedicated Monitoring Tools ($50 to $500/month)

Purpose-built monitoring platforms are emerging. Look for tools that offer multi-platform query testing, scheduled automated checks (weekly minimum), historical trend tracking, competitor citation comparison, and alert systems for citation gains and losses.

Browser Automation (CitationScraper Approach)

For the most accurate data, browser automation on web UIs captures exactly what real users see. This involves spawning isolated browser sessions, extracting cited URLs and brand mentions, tagging recommendation sentiment, and outputting a structured citation dataset per query, platform, and session.

| Platform | Web UI Reddit Citation Rate | API Reddit Citation Rate | Divergence |
|---|---|---|---|
| ChatGPT | 17% | 0% | Significant |
| Perplexity | 20% | 0% | Significant |
| Google AI Mode | 44% | N/A (no public API) | N/A |
| Claude | 0% | 0% | Minimal |
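
As a rough illustration of the browser-automation pattern, the Playwright sketch below runs one query against Perplexity (chosen because results load without a login) and harvests every outbound link on the answer page. The URL pattern, the fixed wait, and the link-harvesting shortcut are all assumptions you would need to adapt; page structures change often, so treat it as a starting point rather than a finished scraper.

```python
from urllib.parse import quote, urlparse
from playwright.sync_api import sync_playwright

QUERY = "best crm for small business"   # placeholder query

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context()      # isolated session: fresh cookies, no history
    page = context.new_page()
    # Assumed URL pattern for a direct Perplexity search; verify before relying on it
    page.goto(f"https://www.perplexity.ai/search?q={quote(QUERY)}")
    page.wait_for_timeout(15000)          # crude wait for the answer to finish streaming
    hrefs = page.eval_on_selector_all("a[href^='http']", "els => els.map(e => e.href)")
    browser.close()

# Collapse harvested links to unique external domains for logging
domains = sorted({urlparse(h).netloc for h in hrefs if "perplexity.ai" not in h})
print(domains)
```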

Bot Crawl Monitoring (Leading Indicator)

Before any platform can cite your page, it must access your content. Monitoring AI bot crawl activity works as a leading indicator of future citations:

| Bot | Platform | What It Signals |
|---|---|---|
| OAI-SearchBot | ChatGPT | Someone asked about your topic |
| PerplexityBot | Perplexity | Your content is being indexed |
| Claude-User | Claude | Active citation consideration |
| Googlebot | Google AI Mode | Foundation for AI Mode inclusion |

Rising bot visits to a specific page suggest it will appear in results soon. Zero bot visits means your content is invisible to AI platforms, regardless of Google rank.
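
If you have raw server access logs, a few lines of Python are enough to turn them into a bot-visit count per page. The log path and the combined-log-format assumption below are placeholders; adjust the parsing to whatever your server actually writes.

```python
import re
from collections import Counter

AI_BOTS = ["OAI-SearchBot", "PerplexityBot", "Claude-User", "Googlebot"]
LOG_PATH = "/var/log/nginx/access.log"   # placeholder path, assumes combined log format

# combined log format: ... "GET /path HTTP/1.1" status size "referer" "user-agent"
line_re = re.compile(r'"(?:GET|HEAD) (\S+) [^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
    for line in f:
        m = line_re.search(line)
        if not m:
            continue
        path, user_agent = m.groups()
        for bot in AI_BOTS:
            if bot in user_agent:
                hits[(bot, path)] += 1

for (bot, path), count in hits.most_common(20):
    print(f"{count:5d}  {bot:15s}  {path}")
```

Watch week-over-week deltas per page: a page whose AI bot visits are climbing is showing exactly the leading indicator described above.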

Method Comparison

| Method | Cost | Coverage | Effort | Best For |
|---|---|---|---|---|
| Manual queries (browser) | Free | All platforms, web UI behavior | High (5+ hrs/month) | Baseline audits, small sites |
| API scripts (DIY) | Low (API costs) | API behavior only | Medium (setup + maintenance) | Technical teams |
| Crawl log monitoring | Free to low | Leading indicator only | Low (once configured) | Early warning, accessibility |
| Dedicated monitoring tools | $50 to $500/month | Multi-platform, historical | Low (automated) | Growing businesses, agencies |
| Managed AI visibility service | $1,000+/month | Full coverage + optimization | Minimal | Enterprise, revenue-critical |

For ongoing monitoring support, see our AI SEO services.

The Bottom Line: Use browser automation on web UIs (not API calls) to capture what real users actually see. Supplement with crawl log monitoring as an early warning system.

📋 COMPETITIVE AUDIT METHODOLOGY (STEP BY STEP)

A full AI competitive audit goes beyond checking your own citations. It maps the entire competitive landscape across all platforms. Here is the step-by-step framework.

Step 1: Identify Your Competitor Set

AI search surfaces a different competitive set than Google. Start with three tiers:

| Tier | Definition | How to Identify |
|---|---|---|
| Known competitors | Brands you already compete against | Internal knowledge, sales team input |
| SEO competitors | Domains ranking for your target keywords | Traditional keyword tracking tools |
| AI-emergent competitors | Brands appearing in AI responses | Run 10 to 15 discovery queries across platforms |

The third tier is the one most teams miss. Run broad queries like "best [your category] for [use case]" across all four platforms and record every brand mentioned.

Step 2: Map Target Queries by Intent

Query intent is the strongest aggregate predictor of which sources get cited (Lee, 2026a). Your audit queries need to span every intent type.

| Intent Type | Share of Queries | What Gets Cited | Example |
|---|---|---|---|
| Informational | 61.3% of autocomplete queries | Wikipedia, .gov/.edu, tutorials | "how does [technology] work" |
| Discovery | 31.2% | Review aggregators, YouTube, listicles | "best [category] for [use case]" |
| Validation | 3.2% | Brand sites, Reddit (web UI) | "is [brand] worth it" |
| Comparison | 2.3% | Publisher/media, review sites | "[brand A] vs [brand B]" |
| Review-seeking | 2.0% | YouTube, tech review sites, Reddit | "[brand] reviews 2026" |

For each competitor, create 8 to 12 queries distributed across intent types. Weight discovery and comparison queries more heavily since these are where buying decisions happen. For a detailed look at how query intent drives citations, see what gets you cited by AI explained.
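
One low-effort way to build that matrix is to expand a handful of intent templates against your category and the brand names from Step 1. The templates and category below are illustrative assumptions, not a canonical list.

```python
from itertools import product

CATEGORY = "email marketing platform"            # placeholder category
BRANDS = ["YourBrand", "CompetitorA", "CompetitorB"]

# Intent templates, weighted toward discovery and comparison
TEMPLATES = {
    "informational": ["how does an {category} work"],
    "discovery":     ["best {category} for small business", "best {category} 2026"],
    "validation":    ["is {brand} worth it"],
    "comparison":    ["{a} vs {b}"],
    "review":        ["{brand} reviews 2026"],
}

queries = []
for intent, templates in TEMPLATES.items():
    for t in templates:
        if "{a}" in t:
            queries += [(intent, t.format(a=a, b=b)) for a, b in product(BRANDS, repeat=2) if a != b]
        elif "{brand}" in t:
            queries += [(intent, t.format(brand=b)) for b in BRANDS]
        else:
            queries.append((intent, t.format(category=CATEGORY)))

for intent, q in queries:
    print(f"{intent:15s} {q}")
```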

Step 3: Run Queries Across 4 Platforms (40 Sessions Per Query)

AI responses are non-deterministic. Within-platform citation consistency for ChatGPT is 61.9% (Lee, 2026a). Roughly 38% of cited sources change between sessions for the same query. A single session gives a noisy snapshot.

Run 10 sessions per platform per query. This produces a citation frequency distribution that shows how often each competitor appears, how stable their position is, and which pages get cited. Across 4 platforms, that is 40 sessions per query. For 10 queries, that is 400 total sessions.

Step 4: Build a Citation Frequency Table

For each competitor, across each platform, calculate citation rates:

| Competitor | ChatGPT Rate | Perplexity Rate | Claude Rate | Google AI Mode Rate |
|---|---|---|---|---|
| Competitor A | 70% (7/10) | 40% (4/10) | 20% (2/10) | 80% (8/10) |
| Competitor B | 30% (3/10) | 60% (6/10) | 50% (5/10) | 20% (2/10) |
| Your Brand | 10% (1/10) | 30% (3/10) | 10% (1/10) | 40% (4/10) |

Citation frequency alone is not enough. Also track recommendation sentiment (positive, neutral, or negative framing) and positioning (mention order, framing role, feature associations). A competitor cited 80% of the time with negative sentiment is in a worse position than one cited 40% of the time with consistently positive framing.
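
With the per-session logs from Step 3 in hand, the frequency table and a sentiment-adjusted view fall out of a short aggregation script. The record layout below (platform, brand, cited, sentiment scored -1/0/+1) is an assumed schema for your own logs, not a format defined by the research.

```python
from collections import defaultdict

# Assumed log schema: one record per brand per session
sessions = [
    {"platform": "chatgpt",    "brand": "Competitor A", "cited": True,  "sentiment": +1},
    {"platform": "chatgpt",    "brand": "Your Brand",   "cited": False, "sentiment": 0},
    {"platform": "perplexity", "brand": "Competitor A", "cited": True,  "sentiment": -1},
    # ... one entry per brand per session, 10 sessions per platform per query
]

stats = defaultdict(lambda: {"sessions": 0, "cited": 0, "sentiment": 0})
for s in sessions:
    key = (s["brand"], s["platform"])
    stats[key]["sessions"] += 1
    stats[key]["cited"] += int(s["cited"])
    stats[key]["sentiment"] += s["sentiment"] if s["cited"] else 0

for (brand, platform), c in sorted(stats.items()):
    rate = c["cited"] / c["sessions"]
    avg_sent = c["sentiment"] / c["cited"] if c["cited"] else 0.0
    print(f"{brand:15s} {platform:12s} citation rate {rate:.0%}  avg sentiment {avg_sent:+.2f}")
```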

Step 5: Diagnose Gaps and Prioritize Actions

Organize findings into four quadrants:

| Quadrant | Your Brand | Competitor | Priority |
|---|---|---|---|
| Citation gap | Low citation rate | High citation rate | High |
| Sentiment gap | Neutral/negative | Positive recommendation | High |
| Platform gap | Strong on one platform | Strong on a different platform | Medium |
| Reddit gap | Low Reddit presence | Strong Reddit consensus | Medium-high |

The audit will reveal three opportunity types: (1) unclaimed queries where no competitor is consistently cited, (2) platform-specific weaknesses where a competitor dominates one platform but is absent from another, and (3) structural vulnerabilities where competitor pages score poorly on the 6 page-level predictors covered in the next section despite currently high citation rates.

For help building your monitoring dashboard, see our AI visibility monitoring guide.

The Bottom Line: A full competitive audit is not a one-time report. AI responses shift as models update, content changes, and Reddit sentiment evolves. Build it as a repeatable process and run it quarterly at minimum.

🔍 ANALYZING WHY COMPETITORS GET CITED (THE 6 PAGE FEATURES)

When a competitor gets cited and you do not, the answer is usually in the page structure, not the content quality. Lee (2026c) tested 10,293 pages across 250 queries on 3 AI platforms, controlling for Google rank position, and identified 6 page-level features that predict AI citation across all four Google position bands.

| # | Feature | Effect | What to Check |
|---|---|---|---|
| 1 | Word count | Cited median = 1,799 (39% more than uncited) | Measure word count gap between competitor pages and yours |
| 2 | Content-to-HTML ratio | 0.086 cited vs. 0.065 uncited | Run a ratio check on both sides |
| 3 | Self-referencing canonical | OR = 1.92 | Verify clean canonical tags on your pages |
| 4 | Schema markup type | Product (OR = 3.09), Review (OR = 2.24), FAQPage (OR = 1.39) | Compare schema types and completeness |
| 5 | Internal link architecture | r = 0.127 (fewer nav links = cited) | Count navigation links on competitor cited pages vs. yours |
| 6 | Link ratio | OR = 0.47 when external-heavy | Pages with 70%+ internal links hit 59.7% citation rate vs. 21.4% for external-dominant |

For each competitor page that gets cited, fetch the HTML, extract these 6 values, compare against your equivalent page, and prioritize fixes by odds ratio. Product schema (OR = 3.09) delivers the biggest single lift. Link ratio and word count are the easiest to fix at scale.
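
A first-pass diagnostic can be scripted with requests and BeautifulSoup. This is a rough approximation of the 6 checks: word count, content-to-HTML ratio, and link split follow the definitions in the table above, while the navigation-link count is simplified to links inside nav elements and the canonical check compares paths only, both of which are our own shortcuts rather than the study's exact method. The URL at the bottom is a hypothetical example.

```python
import json
from urllib.parse import urlparse
import requests
from bs4 import BeautifulSoup

def page_features(url: str) -> dict:
    html = requests.get(url, timeout=30, headers={"User-Agent": "citation-audit/0.1"}).text
    soup = BeautifulSoup(html, "html.parser")
    text = soup.get_text(" ", strip=True)
    host = urlparse(url).netloc

    canonical = soup.find("link", rel="canonical")
    canonical_href = canonical.get("href", "") if canonical else ""

    schema_types = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except ValueError:
            continue
        items = data if isinstance(data, list) else [data]
        schema_types += [i.get("@type") for i in items if isinstance(i, dict)]

    links = [a["href"] for a in soup.find_all("a", href=True)]
    internal = [l for l in links if urlparse(l).netloc in ("", host)]
    nav_links = sum(len(nav.find_all("a", href=True)) for nav in soup.find_all("nav"))

    return {
        "word_count": len(text.split()),
        "content_to_html_ratio": round(len(text) / max(len(html), 1), 3),
        "self_canonical": urlparse(canonical_href).path == urlparse(url).path,  # simplified check
        "schema_types": schema_types,
        "nav_links": nav_links,
        "internal_link_share": round(len(internal) / max(len(links), 1), 2),
    }

print(page_features("https://competitor-a.com/pricing"))   # hypothetical URL
```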

For a complete diagnostic checklist, see what gets you cited by AI explained.

The Bottom Line: When a competitor outranks you in AI citations, these 6 features tell you exactly where you fall short. The fix is structural, not creative.

👻 THE REDDIT SHADOW CORPUS EFFECT ON BRAND RECOMMENDATIONS

There is a layer of competitive intelligence that does not show up in citation data at all. Lee (2026a) found that Reddit brand consensus correlates with AI recommendations at rho = 0.554 across 12 consumer categories. Eight of 12 categories survived Bonferroni correction for multiple testing.

This happens because Reddit content is absorbed into LLM training data during pre-training. The brand preferences embedded in upvoted subreddit threads shape recommendations even when Reddit is never cited as a source. A competitor with strong Reddit sentiment gets an invisible boost across all AI platforms.

| Reddit Signal | AI Impact | Audit Action |
|---|---|---|
| Consistent positive mentions in subreddit threads | Higher recommendation probability | Monitor competitor Reddit sentiment in relevant subreddits |
| Upvote-weighted brand consensus | rho = 0.554 correlation with AI brand ranking | Compare Reddit presence using upvote-weighted scoring |
| Category-specific subreddit dominance | Strongest in Office/Workspace (rho = 0.746) and Outdoor/Camping (rho = 0.674) | Identify which subreddits matter for your industry |

How to Audit Reddit Competitive Position

  1. Identify the 5 to 10 subreddits where your category gets discussed
  2. Search each subreddit for your brand name and each competitor's brand name
  3. Record mention frequency, average upvote count, and sentiment (positive/neutral/negative)
  4. Calculate an upvote-weighted brand score for each competitor (see the sketch after this list)
  5. Compare scores against AI citation frequency from your audit
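
A minimal scoring sketch for step 4 is below, assuming you have already collected mentions with upvote counts and a hand-labeled sentiment of -1, 0, or +1 each. The log-scaled weighting is our own assumption, not a formula from the cited research.

```python
import math

# Assumed input: one record per brand mention found in relevant subreddits
mentions = [
    {"brand": "Competitor A", "upvotes": 240, "sentiment": +1},
    {"brand": "Competitor A", "upvotes": 15,  "sentiment": -1},
    {"brand": "Your Brand",   "upvotes": 32,  "sentiment": +1},
    {"brand": "Your Brand",   "upvotes": 4,   "sentiment": 0},
]

scores = {}
for m in mentions:
    # log-scale upvotes so one viral thread does not dominate the score
    weight = math.log1p(m["upvotes"])
    scores[m["brand"]] = scores.get(m["brand"], 0.0) + weight * m["sentiment"]

for brand, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{brand:15s} upvote-weighted sentiment score: {score:+.1f}")
```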

If a competitor with mediocre page structure still dominates AI recommendations, Reddit sentiment is likely the explanation. No amount of schema optimization will overcome a strong negative Reddit consensus.

The Bottom Line: Reddit is the invisible hand shaping AI brand recommendations. Include subreddit sentiment analysis in every competitive audit, especially in consumer categories.

📝 BUILDING A COMPETITIVE INTELLIGENCE REPORT

For teams or agencies delivering AI competitive audits, here is a standardized report structure that synthesizes all the data from the steps above.

| Section | Contents |
|---|---|
| 1. Executive Summary | Your AI citation share vs. top 3 competitors (one table, four platforms); biggest gap; top 3 actions |
| 2. Platform Citation Analysis | Citation frequency, sentiment, and positioning per competitor per platform |
| 3. Competitor Page Diagnostics | 6-feature comparison for competitor pages vs. your equivalents; schema type comparison |
| 4. Reddit Shadow Corpus Assessment | Competitor vs. your Reddit mention frequency; upvote-weighted scoring; correlation with AI recommendations |
| 5. Query Intent Mapping | Full query matrix with intent classification; queries where you have zero presence |
| 6. Prioritized Action Plan | Structural fixes ranked by odds ratio; content gaps ranked by volume; platform-specific recommendations |
| 7. Monitoring Cadence | Re-audit schedule (quarterly minimum); leading indicators (bot crawl activity, content index status) |

Recommended Monitoring Cadence

| Business Type | Minimum Cadence | Ideal Cadence |
|---|---|---|
| E-commerce | Weekly | 2x/week |
| B2B SaaS | Bi-weekly | Weekly |
| Local services | Monthly | Bi-weekly |
| Content publishers | Weekly | 2x/week |

For a single client with 5 competitors and 10 queries, expect 400 total sessions, 15 competitor page audits, and 5 to 10 subreddits to monitor. The ultimate deliverable is an AI share-of-voice metric: what percentage of AI recommendations in your category mention your brand vs. competitors.

The Bottom Line: The competitive intelligence report is a living document. Set quarterly re-audit cycles and track whether structural fixes translate into citation gains at the next measurement point.

❓ FREQUENTLY ASKED QUESTIONS

How do I check if ChatGPT is citing my website right now?

Open ChatGPT in a browser with web search enabled (globe icon active). Type 5 to 10 queries your customers would ask. Look for your domain in the inline citations or "Sources" section. Then repeat on Perplexity, Claude, and Google AI Mode. Only 1.4% of cited URLs overlap across platforms (Lee, 2026a), so checking one platform tells you almost nothing about the others.

Why does my page show up on Perplexity but not ChatGPT?

Each platform uses a different retrieval pipeline. Perplexity pre-crawls the web with its own bot and serves from a built index. ChatGPT fetches pages live through Bing during conversations. A page well-indexed by PerplexityBot may not appear in Bing's index or match ChatGPT's retrieval logic. The 1.4% overlap confirms this is normal behavior, not a bug.

How often should I run a competitive audit?

Quarterly at minimum. The initial audit establishes a baseline; subsequent audits track movement. Between full audits, monitor leading indicators like AI bot crawl frequency. The 61.9% within-platform consistency rate means month-to-month variation is normal, so do not overreact to single-session changes. For commercial pages in fast-moving categories, weekly spot checks supplement the quarterly cadence.

Can I use Google Search Console data to predict AI citations?

No. The Spearman correlation between Google rank and AI citation ranged from rho = -0.02 to 0.11 across 19,556 queries (Lee, 2026a). All values were statistically non-significant. Google Search Console tracks traditional search performance, not AI search performance. The two require completely separate monitoring systems.

How does the Reddit shadow corpus affect B2B competitive audits?

The rho = 0.554 correlation was measured across consumer categories. B2B categories may show a weaker effect, but many B2B verticals (SaaS, marketing tools, dev tools) have active subreddit communities. Check whether relevant subreddits exist before assuming Reddit does not apply to your market.

What is the fastest way to tell if AI bots can even access my site?

Check your server logs or crawl monitoring tools for visits from OAI-SearchBot (ChatGPT), PerplexityBot (Perplexity), Claude-User (Claude), and Googlebot (Google AI Mode). If none appear, AI platforms cannot access your content and will never cite it. Also verify your robots.txt does not block these user agents. A page with zero AI bot visits has an accessibility problem that must be fixed before any optimization matters.
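
To check the robots.txt side programmatically, the Python standard library's robotparser is enough. The domain and page path below are placeholders; the user-agent strings match the bot names in the table earlier in this article.

```python
from urllib.robotparser import RobotFileParser

SITE = "https://yourbrand.com"             # placeholder domain
PAGE = f"{SITE}/pricing"                   # placeholder page to test
AI_AGENTS = ["OAI-SearchBot", "PerplexityBot", "Claude-User", "Googlebot"]

rp = RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()

for agent in AI_AGENTS:
    allowed = rp.can_fetch(agent, PAGE)
    print(f"{agent:15s} {'allowed' if allowed else 'BLOCKED'} for {PAGE}")
```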

📚 REFERENCES

  • Aggarwal, P., Murahari, V., Rajpurohit, T., Kalyan, A., Narasimhan, K., & Deshpande, A. (2024). "GEO: Generative Engine Optimization." KDD 2024. DOI
  • Lee, A. (2026a). "Query Intent, Not Google Rank: What Best Predicts AI Citation Behavior." Preprint v5. DOI
  • Lee, A. (2026c). "I Rank on Page 1: What Gets Me Cited by AI? Position-Controlled Analysis of Page-Level and Domain-Level Predictors of AI Search Citation." Preprint. Paper | Dataset