TL;DR: Buyers increasingly start their purchase journey in AI chat, not Google. We built a free tool that tests whether ChatGPT, Gemini, and Google AI Overviews mention your brand when users ask about your category. This guide walks through how to run the test, how to read the result, and concrete actions to strengthen your presence.
The starting point of the buyer’s journey has moved
Through 2025–2026 we’ve seen a clear shift in how Nordic consumers and B2B buyers find products and vendors. The classic flow — google → click top-3 → evaluate — has for many categories been supplemented, and in some segments replaced, by ask ChatGPT/Gemini/Perplexity → get a curated answer naming 2–4 alternatives → maybe click through.
Two things follow for anyone running e-commerce or B2B sales:
- Top Google rankings are no longer enough. If the AI model doesn’t mention you when answering a question about your category, you’re invisible to the roughly one third of buying journeys that now start in AI chat.
- Google ranking and ChatGPT mention are two different games. We’ve observed brands with top-3 Google rankings on their main keywords that get mentioned 0% of the time in contextual AI questions in the same category. The optimization is different.
The second point isn’t theoretical. In an ongoing measurement for a Nordic e-commerce client — anonymized — we’ve logged the following gap:
- Short keyword queries (“best X 2026”): brand mentioned in 70% of AI responses.
- Long conversational prompts (“I have situation X, with kids and pets, what would you recommend?”): 0% mentions — same models, same period, same category.
It’s not that ChatGPT doesn’t know who you are. It’s that the shape of the question controls which layer of the model’s knowledge gets activated — and most SEO strategies optimize for the wrong layer.
What is GEO?
GEO (Generative Engine Optimization) is the umbrella term for optimizing content and technical signals so that AI models — ChatGPT, Gemini, Claude, Perplexity, Google AI Overviews — choose to mention your brand in their generated answers. It’s adjacent to SEO but with important differences:
| | SEO | GEO |
|---|---|---|
| Goal | Rank in SERP | Get cited in AI answers |
| Signal | Keywords + backlinks | Structured data + pillar content + AI-bot policy |
| Measurement | Position, clicks, CTR | Mention rate, share of voice, AI-attributed conversions |
| Time horizon | 3–6 months for change | Weeks to months |
GEO doesn’t replace SEO. It’s a complementary layer. For most e-commerce brands in 2026, 60–80% of a good GEO strategy is also good for SEO (schema, structure, content), and the rest is AI-specific (llms.txt, robots.txt policy for AI bots, scenario-based content).
Test your domain — free, takes 10 seconds
We built check.adminor.net as a free, sharable test. Enter your domain and we analyze:
- AI bot policy — which of the 14 major AI bots (GPTBot, ClaudeBot, Google-Extended, Perplexity-User, etc.) you allow to crawl your site
- Structured data — whether you have Product, FAQPage, Review, Organization, AggregateRating, HowTo schemas, and whether your products have additionalProperty fields
- llms.txt — whether you’ve published this 2026 LLM-discovery standard
- Technical health — title, meta description, mobile, H1, sitemap.xml
- Brand mention in AI (activates within 24h of first check) — we probe Gemini with 5 test queries in your category and measure whether your brand is named
The result is a combined score of 0–100 plus four sub-scores. The fourth — Brand mention — fills in within a day with a list of actual competitors AI mentions instead of you, plus the domains AI cites most in your category.
The result is cached for 24 hours and shared via a public URL — useful as a discussion document for your CMO or board.
How to read the result
Score 80–100: solid foundation, fine-tuning ahead
The technical building blocks are in place. Focus on pillar content (long, situation-based articles) and scenario marketing. You’re already better positioned than 80% of Nordic e-commerce sites.
Score 50–79: room for improvement
Likely one or two big gaps — most often missing FAQPage schema, blocked AI bots in robots.txt, or thin additionalProperty fields on product pages. 1–2 days of developer time typically closes half the gap.
Score 0–49: urgent action
The site is effectively invisible to AI models. This is common on older Magento or custom WordPress builds where JSON-LD was never prioritized, or when a developer blocked AI bots in robots.txt without understanding the consequence.
Brand mention sub-score 0%
This means that in the five test questions we ran against Gemini, your brand never appeared in an answer. This is not an SEO issue — it’s an association issue: the AI model hasn’t built a connection between your category and your brand name. The fix is pillar content that establishes that connection in terms AI models can pick up and associate with your brand (see below).
Concrete actions to improve visibility
The seven most common wins, ordered by ROI from observed customer outcomes:
1. FAQPage schema on key pages
Add structured Q&A blocks (5–8 questions each) to your category and guide pages. AI models extract the answers verbatim and cite them. This is the single strongest lever for answer engine optimization (AEO) in 2026.
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [{
"@type": "Question",
"name": "What's the difference between X and Y?",
"acceptedAnswer": { "@type": "Answer", "text": "X is ... while Y is ..." }
}]
}
</script>
2. Product schema with additionalProperty
For e-commerce products — add technical specs as structured fields. Not just price and image, but type, dimensions, materials, dosing, compatibility, certifications. AI extracts additionalProperty values directly and cites them in comparison questions.
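As an illustration — the product name, price, and property values below are invented for the example — a Product block with additionalProperty entries could look like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Eco Deck Soap 5L",
  "image": "https://your-domain.com/img/deck-soap.jpg",
  "offers": {
    "@type": "Offer",
    "price": "249",
    "priceCurrency": "NOK",
    "availability": "https://schema.org/InStock"
  },
  "additionalProperty": [
    { "@type": "PropertyValue", "name": "Volume", "value": "5 L" },
    { "@type": "PropertyValue", "name": "Dosage", "value": "1:10 dilution" },
    { "@type": "PropertyValue", "name": "Certification", "value": "EU Ecolabel" }
  ]
}
</script>
```

Each PropertyValue pair is exactly the kind of spec an AI model can lift into a comparison answer.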
3. AI bot policy in robots.txt
Verify that you’re not blocking AI bots you actually want indexing you. Common pitfall: a developer added a User-agent: * block with a restrictive Disallow without considering AI bots. Add explicit rules for the bots you want:
User-agent: GPTBot
Allow: /
User-agent: Google-Extended
Allow: /
User-agent: ClaudeBot
Allow: /
This affects how AI models train on your site over time and how live-citation bots (Claude-User, ChatGPT-User, Perplexity-User) can fetch your content on demand.
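If you want to verify a policy programmatically rather than by eyeballing the file, Python’s standard urllib.robotparser can evaluate a robots.txt against a given bot name. The file contents and domain below are made up for the sketch:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: GPTBot and Google-Extended get explicit groups,
# everyone else falls back to the wildcard group.
ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Disallow: /admin/
"""

def bot_allowed(robots_txt: str, bot: str, path: str = "/") -> bool:
    """Return True if `bot` may fetch `path` under this robots.txt."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(bot, f"https://example.com{path}")

# GPTBot matches its own group (Allow: /), so the homepage is fetchable.
print(bot_allowed(ROBOTS_TXT, "GPTBot"))               # True
# ClaudeBot has no group of its own, so the wildcard rules apply.
print(bot_allowed(ROBOTS_TXT, "ClaudeBot", "/admin/")) # False
```

Running a check like this against your live robots.txt after every deploy catches accidental AI-bot blocks early.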
4. llms.txt at the domain root
A short markdown file at https://your-domain.com/llms.txt pointing AI models to your most important content — products, guides, FAQ, contact. Established in 2026 as the de facto standard. Trivial to create, often worth 5–10 score points on our audit.
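A minimal llms.txt — shop name and URLs here are placeholders — follows the convention of an H1 title, a one-line blockquote summary, and H2 sections of annotated links:

```markdown
# Example Shop

> Nordic e-commerce store for outdoor cleaning and care products.

## Products
- [Eco Deck Soap](https://your-domain.com/products/deck-soap): eco-friendly deck cleaner
- [Roof Wash](https://your-domain.com/products/roof-wash): moss and algae remover

## Guides
- [Deck care guide](https://your-domain.com/guides/deck-care): seasonal maintenance walkthrough
- [FAQ](https://your-domain.com/faq): common questions on dosing and safety

## Company
- [Contact](https://your-domain.com/contact): support and ordering
```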
5. Pillar content for scenario queries
This is where most brands have their biggest gap. SEO content is keyword-optimized (“best deck soap”) but AI conversations begin with scenarios (“I have a mossy deck, kids around, want something eco-friendly — what would you recommend?”).
Write 5–8 long guides (1500–3000 words) answering coherent situational questions. Include your products naturally — not as bullet lists but as part of an argument. AI models pick up these texts when building associative answers in conversational contexts.
6. AggregateRating + Review schema
Stars in SERP boost CTR significantly. Individual Review objects under Product let ChatGPT/Perplexity cite review text directly. Many e-commerce sites have a review app installed (Judge.me, Yotpo, Trustpilot) but haven’t verified review data is actually emitted as JSON-LD on the product page — check via Google Rich Results Test.
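A sketch of what the emitted JSON-LD should contain — rating numbers, reviewer, and review text are fabricated for the example:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Eco Deck Soap 5L",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "132"
  },
  "review": [{
    "@type": "Review",
    "author": { "@type": "Person", "name": "Anna" },
    "reviewRating": { "@type": "Rating", "ratingValue": "5" },
    "reviewBody": "Removed the moss in one wash, and the dog could walk on the deck the same day."
  }]
}
</script>
```

If your review app only renders stars client-side without this markup, neither Google nor AI models see the review data.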
7. hasMerchantReturnPolicy + shippingDetails
Since July 2024 Google requires these fields to keep showing rich snippets on Product pages. Without them you lose stars and price in SERP, which also affects how AI models weight the source when answering.
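A sketch of the two fields on an Offer — country, rates, and delivery windows below are placeholders to adapt to your own policy:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Eco Deck Soap 5L",
  "offers": {
    "@type": "Offer",
    "price": "249",
    "priceCurrency": "NOK",
    "hasMerchantReturnPolicy": {
      "@type": "MerchantReturnPolicy",
      "applicableCountry": "NO",
      "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
      "merchantReturnDays": 30,
      "returnFees": "https://schema.org/FreeReturn"
    },
    "shippingDetails": {
      "@type": "OfferShippingDetails",
      "shippingRate": { "@type": "MonetaryAmount", "value": "59", "currency": "NOK" },
      "shippingDestination": { "@type": "DefinedRegion", "addressCountry": "NO" },
      "deliveryTime": {
        "@type": "ShippingDeliveryTime",
        "handlingTime": { "@type": "QuantitativeValue", "minValue": 0, "maxValue": 1, "unitCode": "DAY" },
        "transitTime": { "@type": "QuantitativeValue", "minValue": 2, "maxValue": 4, "unitCode": "DAY" }
      }
    }
  }
}
</script>
```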
FAQ
Are AI models killing SEO?
No. Google search isn’t going away, and it will continue to drive most e-commerce traffic for the foreseeable future. AI-driven search is a complementary layer that’s growing fast. Optimize for both.
How often should I test my domain?
Once a quarter is enough to catch major changes. After bigger technical updates (redesigns, schema changes, robots.txt edits) it’s worth running a quick test immediately to verify the effect.
The measurement gives different results each time — why?
AI models are probabilistic. The same question can yield different answers on different days. That’s why we measure mention rate across multiple probes rather than individual responses. The metric is robust over time but varies on individual data points.
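The aggregation is simple to sketch; the probe results below are invented to illustrate why pooling stabilizes the metric:

```python
from statistics import mean

# Hypothetical probe outcomes: 1 = brand named in the answer, 0 = not named.
# Each inner list is one day's run of the same five test questions.
daily_probes = [
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 1, 1],
]

# Per-day mention rates fluctuate with the model's sampling...
daily_rates = [mean(day) for day in daily_probes]
# ...while the rate pooled over all probes is the more stable metric.
pooled_rate = mean(x for day in daily_probes for x in day)

print(daily_rates)            # [0.6, 0.6, 0.8]
print(round(pooled_rate, 2))  # 0.67
```

A single probe answering yes or no tells you little; the pooled rate over many probes is what you track quarter over quarter.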
What does it cost to improve GEO visibility?
Most actions are pure development cost — schema implementation, content writing, robots.txt adjustment. No ongoing license fee. For customers who want continuous measurement + action prioritization, Adminor offers a monthly service, but the actual improvements are made by the customer’s own team or web agency.
Is it legal to use data from ChatGPT/Gemini in commercial decisions?
Yes. We probe public AI models via their official APIs, collecting what the models actually answer to public questions — the same data a consumer would see if they’d asked. We don’t store personal data and don’t probe with information about individuals.
Next steps
Test your domain free — check.adminor.net
Or book a 30-minute strategy call and we’ll walk through the result together, identify the 3–5 highest-ROI actions for your specific category, and lay out a plan with time estimates.
No commitments, no sales pressure — we charge only when we can concretely help.