AI Visibility (GEO & AEO) — the complete guide for 2026

How you appear when ChatGPT, Perplexity, Claude, Gemini and Copilot answer — schema markup, citable structure, KPIs and concrete actions. With Gartner forecasting a 25% drop in search traffic by 2026, this is no longer experimental — it is shelf space in the new distribution.

6 chapters · 22 min read · Schema template included · 30+ years' analysis experience

When a buyer in 2026 has a question about your category, they go to ChatGPT or Perplexity instead of Google. The answer mentions three suppliers. Either you are one of them — or you are not in the conversation at all. This guide shows you how to qualify.

You will learn:

  • What GEO and AEO actually are (Chapter 1)
  • The 5 AI engines and their distinct characteristics (Chapter 2)
  • KPIs that are genuinely measurable (Chapter 3)
  • Concrete actions: schema, citability, E-E-A-T (Chapter 4)
  • Six common pitfalls (Chapter 5)
  • The tracking stack (Chapter 6)


01 · What GEO and AEO actually are

Three acronyms are circulating, and they are mixed up daily. Here is the difference:

Discipline | What it optimises for | Goal
SEO (Search Engine Optimization) | Classic search list (10 blue links) | Clicks to the site
AEO (Answer Engine Optimization) | Featured snippets, "people also ask", voice answers | Being the cited answer
GEO (Generative Engine Optimization) | ChatGPT, Perplexity, Claude, Gemini, Copilot | Being mentioned in AI-generated answers

In practice we work with all three at once — but the priorities have shifted. For B2B categories where the buyer researches deeply before making contact, GEO is now the most important channel for the "upper funnel". Classic SEO is still foundational — without it you are not indexed at all. But SEO alone is no longer enough.

Why this is happening now

Three things are converging:

  • Buyers are putting their category questions directly to ChatGPT and Perplexity instead of Google.
  • Gartner forecasts a 25% drop in classic search traffic by 2026.
  • AI answers name only a handful of suppliers per question, so a mention functions as shelf space.

Practical consequence

Brands that do not appear in AI answers do not just lose traffic — they lose preference. The AI acts as a de facto adviser. If your competitor is mentioned and you are not, a mental shortlist forms that you are not even on.


02 · The 5 AI engines and their distinct characteristics

Saying "we optimise for AI" is like saying "we optimise for search engines". Which ones? The five that matter today:

ChatGPT (OpenAI)

Market leader on pure LLM answers. Relies primarily on its trained knowledge, plus web search via Bing when "Browse" mode is on. Cites sources sparingly — which means brand mention matters more than the link. Optimise so your name appears on citable external sites (Wikipedia, industry reports, trade press).

Perplexity

The most source-transparent. Shows clear citations for every claim. This makes Perplexity the most predictable — if your site has deep, structured content that directly answers a question, you get cited. Optimising for Perplexity = optimising for traditional SEO with an extra focus on question-formulated headings.

Claude (Anthropic)

More cautious about specific brand mentions — prefers to describe categories. Searches the web via its own integrated search. Influenced primarily by authoritative third-party content — industry analyses, academic sources, established trade press.

Gemini (Google)

Uses Google's index and favours sites that already rank well classically. Schema.org markup carries significant weight here because Google reads structured data natively. If you rank well on Google you have a head start in Gemini.

Copilot (Microsoft)

Uses the Bing index plus GPT-4. More inclined to include multiple sources and list formats. Strong on B2B questions (Microsoft's core segment). LinkedIn profiles and company data via Bing Places matter here.

What this means for your strategy

You do not need to optimise five times over — but you do need to know which 1–2 engines drive the most impact in your category. We test this through an AI baseline (the same question is put to all five engines, and we measure mention frequency, citation rank and source authority). Without a baseline you are working blind.
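As a sketch of what one baseline observation can look like in a results sheet (the field names below are our own illustration, not a standard):

from dataclasses import dataclass

@dataclass
class BaselineRow:
    engine: str          # "chatgpt", "perplexity", "claude", "gemini" or "copilot"
    prompt: str          # the exact question, verbatim from the prompt bank
    mentioned: bool      # is the brand named explicitly in the answer?
    rank: int | None     # position 1-5 when mentioned, else None
    sources: list[str]   # cited domains, where the engine exposes them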


03 · KPIs that are genuinely measurable

Most of those selling "AI SEO" today have no measurement model. They say "you appear more often" without being able to quantify it. Here are the three KPIs that work in practice — and how to measure them.

KPI 1: AI-Mention Rate

Definition: The share of questions (from a fixed prompt list, usually 50–100 prompts) where your brand is mentioned explicitly in the answer.

Measurement: A fixed prompt bank is run monthly across all 5 AI engines. We count binarily — mentioned/not mentioned. The baseline for B2B manufacturers in the Nordics typically sits at 8–22%. Good level: 35%+. Market leaders: 60%+.
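A minimal sketch of the binary check, assuming the answers are collected as plain text (a real run also needs brand aliases and word boundaries):

def is_mentioned(answer: str, brand_aliases: list[str]) -> bool:
    # Naive substring match; a production check should also handle
    # word boundaries, spelling variants and product names.
    text = answer.lower()
    return any(alias.lower() in text for alias in brand_aliases)

# Mention Rate is then simply mentions / prompts,
# e.g. 18 mentions across 60 prompts = 30%.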

Why it works: It is observable, reproducible and changes measurably over time. Every other method is internal estimation.

KPI 2: AI-Citation Rank

Definition: When you are mentioned — do you come first, second or fifth in the order of the answer?

Measurement: Position 1–5 is recorded per question. The mean position is calculated. If you sit at position 3.8 and competitor A at 2.1, you have a clear gap to close.

Why it works: Buyers rarely read the entire answer — the top 2 dominate the mental shortlist. Position matters, not just presence.

KPI 3: Source Authority Score

Definition: The number of unique, authoritative third-party sources the AI relies on when answering about you.

Measurement: In Perplexity and Copilot the source list is clearly visible. We catalogue and score it (Wikipedia=5, trade press=3, own site=2, social=1). Brands with a score < 10 are often invisible; > 30 dominates.
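A sketch of that scoring, using the weights above; the domain classification is deliberately crude and the lists are placeholders:

SOURCE_WEIGHTS = {"wikipedia": 5, "trade_press": 3, "own_site": 2, "social": 1}

def classify(domain: str) -> str:
    # Crude illustrative classifier; replace the domain lists with your own.
    if "wikipedia.org" in domain:
        return "wikipedia"
    if domain in {"linkedin.com", "x.com", "facebook.com"}:
        return "social"
    if domain == "www.example.com":  # your own domain
        return "own_site"
    return "trade_press"  # everything else, verify manually

def source_authority_score(cited_domains: set[str]) -> int:
    # Unique domains only, so repeated citations do not inflate the score.
    return sum(SOURCE_WEIGHTS[classify(d)] for d in cited_domains)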

Why it works: It shows where the work needs to be done. Missing Wikipedia? Missing trade-press mentions? Is your own site insufficiently structured? Source Authority points to the weakness.

What you should not measure

"AI traffic" from ChatGPT to your site is meaningless as a sole KPI — users rarely click from AI answers. It is like measuring TV-ad success by counting calls to the studio. Mention Rate and Position are pre-awareness KPIs — they capture when you win at the shortlist stage, which then drives direct traffic and RFQs later.

04 · Concrete actions — schema, citability, E-E-A-T

Three workstreams in parallel. None of them is magic — they are systematic hygiene.

Stream 1: Schema markup that AI actually reads

Structured data (Schema.org) is no longer Google-only — every major LLM parses JSON-LD when it reads web pages. Three schema types deliver the greatest return:

  • Organization — who you are, what you do, where you are. Foundational identity.
  • FAQPage — questions & answers in a structured format. AI loves this — it is directly citable.
  • Article + Author — provides E-E-A-T signals (who wrote it, when, expertise).

Minimal Organization schema that every site should have:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Your Company Ltd",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "description": "What you do — in 1 sentence, citable.",
  "address": { "@type": "PostalAddress",
    "addressLocality": "City", "addressCountry": "SE" },
  "sameAs": [
    "https://www.linkedin.com/company/...",
    "https://en.wikipedia.org/wiki/..."
  ]
}
</script>
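FAQPage markup follows the same pattern. A minimal sketch, generated with Python here so the JSON-LD stays valid; the question and answer are placeholders:

import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What does an X installation typically cost?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "An X installation typically costs €5,000–20,000 depending on Z.",
        },
    }],
}
print('<script type="application/ld+json">')
print(json.dumps(faq, indent=2, ensure_ascii=False))
print("</script>")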

Stream 2: Citable structure (write for LLM parsing)

LLMs extract facts in the form of statements. You want to write the site so that an LLM can pick out short, citable sentences without context.

  • Question-formulated headings — "What does X cost?", "How does Y work?" — match exactly how users put questions to AI.
  • Direct answer in the first sentence — not "That is a complicated question that depends on...". Instead: "X typically costs €5,000–20,000 depending on Z."
  • Numerical statements — figures, percentages, comparisons — get cited more often than subjective opinion.
  • Lists and tables — structured content beats long prose paragraphs.
  • Source references — when you cite Gartner, McKinsey, industry statistics — link to the original. AIs treat the source chain as a credibility signal.

Stream 3: E-E-A-T — Experience, Expertise, Authoritativeness, Trustworthiness

Google's framework has become the LLM default. AIs value:

  • Experience — have you actually done this? Show cases, clients, years in the industry.
  • Expertise — who is the author? Photo, biography, LinkedIn — visible on every longer article.
  • Authoritativeness — how many other (authoritative) sites link to or reference you? Wikipedia, industry analyses, academic press.
  • Trustworthiness — company details, GDPR, named contacts, physical address — visible and verifiable.

Quick E-E-A-T check

Pick a blog post. Does it state who wrote it, when, and what experience the author has? Is at least one external authoritative source cited? Is Author schema present in the HTML? If the answer is no to any of those — that is where your E-E-A-T work lies.
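The Article + Author markup from Stream 1 closes the last of those gaps. A minimal sketch; names, dates and URLs are placeholders:

import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How does X work?",
    "datePublished": "2026-01-15",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "Head of Production",
        "sameAs": "https://www.linkedin.com/in/...",
    },
}
print('<script type="application/ld+json">')
print(json.dumps(article, indent=2))
print("</script>")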

05 · Six common pitfalls

Pitfall 1: "We just need an AI page"

Some agencies sell a single "AI-optimised page" as a silver bullet. AI Visibility is built on whole-site structure plus external authority. One page is not enough — it is like polishing a single window when the whole house is dusty.

Pitfall 2: Forgetting Wikipedia

Wikipedia is the most cited source in ChatGPT and Claude. If you have no article — or have a stub without sources — it dramatically weakens your Source Authority. This is not a quick fix (Wikipedia requires independent notability), but it is often the biggest lever.

Pitfall 3: "Just write more content"

Volume without structure produces no AI Visibility. 200 poorly structured blog posts will not beat 20 well-structured pillar pages with clear schema. Quality × structure > volume.

Pitfall 4: AI-generated content without editing

Generated text carries a statistical fingerprint (low perplexity, generic phrasing) that ranking and filtering systems increasingly pick up on. Generic, unedited ChatGPT text rarely gets cited. Use AI for drafts, but a human expert must edit, add real cases and put their signature on it.

Pitfall 5: Forgetting English

For B2B with international clients, AI often answers in English — even when the question is in Swedish. If you only have Swedish content, you lose the entire outside-Sweden segment. Mirror your pillar pages in English from day one.

Pitfall 6: Not measuring over time

Asking the questions once gives a snapshot. AI answers vary — the same question can yield different answers on different days. Monthly tracking against a fixed prompt bank is the only way to see trends. Anything less is statistical noise.

06 · The tracking stack — how to measure without losing your mind

There are no "perfect" tools yet — the field is two years old. But the following stack works for 80% of B2B cases:

Step 1: Define the prompt bank

Build 50–100 questions your audience actually asks. A mix of:

  • Category questions ("Who are the best suppliers of X in the Nordics?")
  • Problem questions ("How do I solve Y in my production?")
  • Comparison questions ("What is the difference between A and B?")
  • Price/spec questions ("What does an X installation typically cost?")

This is the foundation. The whole tracking effort rests on a stable prompt bank.
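A sketch of a prompt bank kept as a flat file, with the four question types tagged; the questions are the examples above:

import csv

PROMPT_BANK = [
    ("category",   "Who are the best suppliers of X in the Nordics?"),
    ("problem",    "How do I solve Y in my production?"),
    ("comparison", "What is the difference between A and B?"),
    ("price",      "What does an X installation typically cost?"),
    # ...build out to 50-100 questions, then keep them frozen between runs
]

with open("prompt_bank.csv", "w", newline="") as f:
    csv.writer(f).writerows([("type", "question"), *PROMPT_BANK])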

Step 2: Monthly manual or semi-automatic runs

Two options:

  • Manual: One team member puts every question to all 5 engines once a month. About 4 hours of work. Results into a sheet.
  • Semi-automatic: Use a tool such as AthenaHQ, Profound or Otterly, or build your own with API connections (OpenAI, Anthropic, Perplexity APIs); a minimal sketch follows below.
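If you build your own, the OpenAI and Anthropic Python SDKs cover ChatGPT and Claude, and Perplexity exposes an OpenAI-compatible endpoint. A minimal sketch for three of the five engines; the model names are assumptions and will change over time:

import os

import anthropic
from openai import OpenAI

question = "Who are the best suppliers of X in the Nordics?"
answers = {}

# ChatGPT via the OpenAI SDK (reads OPENAI_API_KEY from the environment)
r = OpenAI().chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": question}],
)
answers["chatgpt"] = r.choices[0].message.content

# Perplexity via its OpenAI-compatible API
pplx = OpenAI(api_key=os.environ["PPLX_API_KEY"],
              base_url="https://api.perplexity.ai")
r = pplx.chat.completions.create(
    model="sonar",  # assumed model name
    messages=[{"role": "user", "content": question}],
)
answers["perplexity"] = r.choices[0].message.content

# Claude via the Anthropic SDK (reads ANTHROPIC_API_KEY from the environment)
m = anthropic.Anthropic().messages.create(
    model="claude-sonnet-4-20250514",  # assumed model name
    max_tokens=1024,
    messages=[{"role": "user", "content": question}],
)
answers["claude"] = m.content[0].text

Note that API answers are a proxy: the consumer apps add search grounding on top of the raw model, so treat scripted results as directional and spot-check them against the real interfaces.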

Step 3: Dashboard with 3 KPIs

For each month:

  • Mention Rate (% of questions where you are mentioned)
  • Citation Rank (mean position when mentioned)
  • Source Authority (score based on the source list)

Three numbers. Not 30. Leadership needs to grasp the trend in 30 seconds.
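A sketch of the monthly roll-up, assuming one row per engine-question pair with a mention flag, a rank and a pre-computed source score:

def monthly_kpis(rows: list[dict]) -> dict:
    # rows example: {"mentioned": True, "rank": 2, "source_score": 12}
    mentioned = [r for r in rows if r["mentioned"]]
    return {
        "mention_rate": len(mentioned) / len(rows),
        "citation_rank": sum(r["rank"] for r in mentioned) / len(mentioned)
                         if mentioned else None,
        # Summing source scores across questions is one convention;
        # a mean per question works too. Pick one and keep it fixed.
        "source_authority": sum(r["source_score"] for r in rows),
    }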

Step 4: The action loop

Every quarter: which 5 questions in the prompt bank have the lowest Mention Rate? What is needed to change that?

  • A new pillar page that directly answers the question?
  • A Wikipedia edit (with independent sources)?
  • An industry interview or PR activity to create a trade-press mention?
  • A schema fix on an existing page?

That becomes your backlog — and next quarter's progress is measured against precisely those 5 questions.

"AI Visibility is not a project — it is a discipline. Companies that treat it as a quarterly campaign lose to those who treat it as monthly hygiene. The difference over 18 months is dramatic." — Johan Asklund, CEO Alba Business Group

What you do next Monday

  1. Write down 30 questions your audience asks.
  2. Put them to ChatGPT, Perplexity and Copilot. Note mention/position.
  3. That is your baseline. You have just done what 90% of your competitors never have.
  4. Book a 15-minute call below if you would like help taking it further into structured measurement over time.
Johan Asklund

CEO & founder, Alba Business Group

30+ years of experience in brand research and digital insight analysis. Runs Alba's AI Visibility measurement — Sweden's first structured GEO/AEO baseline for B2B.

Book 15 minutes — free of charge

We walk through what an AI Visibility measurement could look like for your specific brand — which questions are worth tracking, what the baseline looks like today, and what the first quarter's actions could be.
