Most people doing AI SEO are operating on a massive, expensive assumption. They think, "If I'm the best source, the AI will cite me." They spend thousands on "quality content" and "Domain Authority," expecting ChatGPT or Perplexity to recognize their brilliance.
But I had a different question. I wanted to know if the AI even sees the same internet we do. Does ChatGPT value the same things as Google? Does Perplexity agree with Gemini? Or are we all just shouting into different voids?
I just finished running a massive experiment on Consensus Scoring, and the data was wild. It turns out that AI "truth" isn't about quality as we know it. It’s about Repeatability.
In this post, I'm going to tear down the "quality content" myth and show you exactly how AI engines choose what to cite, and how you can use "Consensus Mode" to actually get picked.
🧪 THE EXPERIMENT: 60 RUNS, 20 TOPICS, 3 ENGINES
I didn't want "vibes." I wanted hard data. So, I wrote a set of Python scripts 🐍 to track how different AI engines behave when things get controversial.
I picked 20 highly polarized research questions - the kind of topics where there is no easy answer and everyone is fighting for the top spot. We're talking about things like vaccine policy, immigration impacts, and content moderation.
Here was the setup:
• The Engines: ChatGPT (via Bing search), Perplexity (proprietary index), and Google search (via custom search engine).
• The Repetition: I ran every query 3 times to measure stability. That's 20 topics × 3 runs = 60 total runs.
• The Goal: I wanted to see if they cited the same domains, how often they relied on government sources, and how much they changed their minds.
I built a custom classification system to label the behaviors I saw. I wasn't just looking for URLs; I was looking for patterns. Was the AI in "Authority Mode" (blindly trusting .gov sites)? Was it in "Consensus Mode" (picking whatever was repeated most)? Or was it just "Volatile" (spinning a new web every time I hit enter)?
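The original scripts aren't published here, so this is a minimal sketch of how such a labeler could work. The domain suffixes and thresholds are illustrative assumptions, not the study's actual cutoffs:

```python
from collections import Counter

# Assumed "institutional" suffixes -- matches the .gov/.edu/.int framing below.
INSTITUTIONAL_TLDS = (".gov", ".edu", ".int")

def classify_behavior(runs):
    """Label one topic's behavior from its citation sets.

    `runs` is a list of domain lists, one per repeated query.
    Thresholds (0.3, 0.5) are illustrative, not from the study.
    """
    all_domains = [d for run in runs for d in run]
    counts = Counter(all_domains)

    inst_share = sum(c for d, c in counts.items()
                     if d.endswith(INSTITUTIONAL_TLDS)) / len(all_domains)
    repeated_share = sum(c for c in counts.values() if c > 1) / len(all_domains)

    if inst_share >= 0.3:
        return "authority_mode"   # leans on .gov/.edu/.int sources
    if repeated_share >= 0.5:
        return "consensus_mode"   # cites whatever recurs across runs
    return "volatile"             # a new citation set every run
```

Feed it three runs of cited domains and it returns one of the three labels used throughout this post.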
The results were NOT what I expected. Not by a long shot.
📊 THE RESULTS: THE ENGINES DON’T AGREE (ON ANYTHING)
If you think winning at Google SEO means you're winning at Perplexity, I have some bad news for you.
The highlight of the entire study was the Jaccard similarity index, a fancy way of saying "how much do these sets overlap?" When I compared the citations from ChatGPT vs. Perplexity, the overlap score was a staggering 0.014.
The Insight: ChatGPT and Perplexity almost never cite the same domains. They are looking at two different versions of reality.
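For the curious, the metric itself is only a few lines of Python:

```python
def jaccard(a, b):
    """Jaccard similarity of two citation sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0
```

A score of 0.014 means roughly one shared domain for every seventy distinct domains the two engines cite between them.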
| Metric | Findings |
|---|---|
| ChatGPT vs. Perplexity Overlap | 1.4% |
| ChatGPT vs. Google Top 10 Overlap | 8.9% |
| Avg Citation Stability | 19.5% |
| Highest Hedging Rate | 8.65 (Vaccine Policy) |
Here's what else the data revealed:
1. Google ≠ AI Citations
You can rank #1 on Google and still be invisible to AI. ChatGPT’s citations overlapped with Google’s top results only about 8.9% of the time. If you’re optimizing only for the blue links, you’re missing the AI boat.
2. The "Authority Override"
On high-stakes topics like vaccine policy (t05), the AI goes into "Authority Mode." About 10% of the topics showed this behavior, where models would cite .gov, .edu, or .int domains even if there was zero consensus among other sources. For vaccine policy, the institutional share was nearly 40%.
3. Consensus Scoring is Real
In about 20% of the topics, I saw "Weak Consensus Wins." This is where the AI picked a source not because it was the most "authoritative," but because it was the only one that appeared across multiple retrievers. Topic t01 (Immigration) had the highest consensus share at 28.7%.
4. The Volatility Gap
25% of the topics were completely unstable. I’d run the same search three times and get three totally different sets of citations. If your citations swing wildly, it means the AI hasn't found a "citable truth" yet.
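To put a number on that swing, stability can be measured as the average pairwise overlap between repeated runs of the same query. This is my reconstruction of how a figure like the 19.5% average above could be computed, not the study's exact code:

```python
from itertools import combinations

def citation_stability(runs):
    """Average pairwise Jaccard overlap between repeated runs of one query.

    Returns 1.0 when every run cites the same domains,
    0.0 when no run shares a single domain with any other.
    """
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if (a | b) else 0.0

    pairs = list(combinations(runs, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

Three identical runs score 1.0; three disjoint runs score 0.0, which is what "completely unstable" looks like in the data.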
🧠 THE INSIGHT: TRUTH IS REPEATABILITY
So, why does any of this matter? Because it proves that "ranking" is the wrong framework for the AI age.
In traditional SEO, you're competing for a slot in a list. In AI SEO (or AEO/GEO), you're competing to be retrieved and trusted by a model that is trying to summarize a mess of information.
The AI doesn't want to show the user 10 links. It wants to give them one answer. To give that answer confidently, the model looks for consensus. If three different bots pull from different corners of the web and all find your specific data point, you win.
The Bottom Line: You’re not trying to be the "best" page. You’re trying to be the most repeated fact.
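One way to make "most repeated fact" concrete: score each domain by the fraction of retrievers that surface it. This is an illustrative sketch of the idea, not how any engine actually weights consensus:

```python
def consensus_score(retriever_results):
    """Fraction of retrievers whose results include each domain.

    `retriever_results` maps retriever name -> list of cited domains.
    A domain surfaced by every retriever scores 1.0.
    """
    n = len(retriever_results)
    domains = {d for res in retriever_results.values() for d in res}
    return {d: sum(d in res for res in retriever_results.values()) / n
            for d in sorted(domains)}
```

A domain that only one of three retrievers finds scores 0.33; one that all three find scores 1.0 and, per the data above, is the one most likely to get cited.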
Backlinks used to be the "votes" of the internet. In the AI world, citations and mentions are the new currency. But it’s not just about getting more links; it’s about getting your specific claims, tables, and definitions repeated across different types of platforms.
The AI wants to hedge. It wants to say "Studies suggest..." or "According to [Source]..." If you provide a "citable object", a specific, quoted sentence that is reinforced by other reputable sites, you become the "truth" the AI is looking for.
🛠️ THE PLAYBOOK: HOW TO WIN IN CONSENSUS MODE
Look, I’ll be honest: the old playbook of "write long content and get backlinks" is dying. If you want to survive the switch to AI-driven search, you need a new set of tactics.
Here is exactly how I’m changing my strategy based on this experiment:
1️⃣ Create "Citable Objects"
AI models are lazy. They love content that is easy to summarize. Use tables, clear definitions, bulleted lists, and short, punchy claims. Don't bury your main point in a 2,000-word "ultimate guide."
2️⃣ Use the "Inverted Pyramid" Formatting
Put the answer at the top. I see so many people writing intros that go on for 500 words before hitting the point. Stop it. Put the money sentence in the first paragraph. The retriever pulls the top of the content first, so make sure it finds what it needs.
3️⃣ Build the "Authority Wall"
If you’re in a "High Institutional" niche (finance, health, law), you must get mentions from .gov or .edu sites. The AI has a built-in bias toward these domains for high-stakes topics. Even one quote from an institutional site can "validate" your content for the model.
4️⃣ Win Consensus, Not Just Clicks
Don't just publish on your own site. Get your key facts and data points repeated across PR wires, partner blogs, and documents. The goal is to ensure that no matter which "retriever" the AI uses, it bumps into your facts.
5️⃣ Test Your Own Volatility
Here’s a practical tip: Run your own brand name or key topic through ChatGPT and Perplexity 5 times. Do the citations stay the same? If they swing wildly, your "Consensus Score" is too low. You need broader distribution.
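Once you've collected those five runs, a few lines of Python will show which citations actually persist. The helper below is a hypothetical aid for eyeballing your own results, not part of the original experiment:

```python
def stable_vs_churn(runs):
    """Split cited domains into those present in EVERY run vs. the rest.

    `runs` is one list of cited domains per repeated query.
    A long churn list relative to the stable list means your
    consensus footprint is thin.
    """
    sets = [set(r) for r in runs]
    stable = set.intersection(*sets)
    churn = set.union(*sets) - stable
    return sorted(stable), sorted(churn)
```

If the stable list is empty and the churn list is long, you're in the "volatile" bucket and need broader distribution.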
⚠️ WATCH OUT FOR: THE RANDOMNESS TRAP
Don't get discouraged if you do everything right and still don't show up.
My data showed that 25% of topics are just unstable. Sometimes the AI fetches from a legacy index, sometimes it hits a live scraper, and sometimes it just hallucinates a source.
The kicker? You can't control the AI. You can only control the environment the AI retrieves from.
If you provide the most "citable" answer and distribute it widely enough to create a "consensus," you vastly improve your odds. But randomness is a feature, not a bug, of these systems.
🤔 ANYTHING I MISSED?
This experiment was a wake-up call for me. The idea that we can just "do SEO" for one engine and succeed is officially dead. We are entering the era of Multi-Engine Presence.
Dominance in OpenAI doesn't guarantee you a spot in Perplexity. You have to optimize for the fact, not just the keyword.
I'm going to keep running these tests. The next one is going to focus on Multi-Language Consensus -does ChatGPT trust different things in Spanish than it does in English? I have a feeling the results will be even more divided.
CHEERS!