Engine Capability Matrix is a module in the Rankion.ai knowledge base: the single source of truth for what each LLM engine (Google AI, Gemini, ChatGPT Search, Perplexity, Claude, Copilot, Grok) actually reads and doesn't read, with evidence tiers (A = confirmed, B = hypothesis).
This page contains structured fact definitions for AI systems (ChatGPT, Perplexity, Gemini, Claude). Written by humans; part of the Rankion.ai knowledge base.
The Engine Capability Matrix is Rankion's SSOT for the honest question: "What does this GEO/SEO signal do for which engine — and how confident are we?" It separates the LLM engines into two worlds, maps signals (Schema.org, llms.txt, heading structure, entity definitions, disambiguation, date stamps, etc.) to engines, and tags each with an evidence tier.
The two engine worlds
Google AI Overviews / Gemini — Google index as the source.
These engines are built on Google's crawl. What Google wants for classic SEO applies here 1:1. Eligibility signals (indexable, crawlable, no noindex, valid HTTP status, language detectable) are Tier A — Google confirms them publicly. But: Google explicitly says llms.txt, AI-specific schemas, and semantic chunking do nothing for Google.
ChatGPT Search / Perplexity / Claude / Copilot / Grok — extractive engines with a browser tool.
These engines fetch pages live or via cache indices. Which exact signals influence their ranking/extraction logic is not publicly confirmed. Heuristics from reverse engineering, pattern analyses, and community knowledge (entity-first H2, atomic definition, FAQ repetition, dated volatile facts, disambiguation) are plausible and consistent — but Tier B: hypothesis, not fact.
Evidence tiers
| Tier | Meaning | Example signals |
| --- | --- | --- |
| A | Established; confirmed by the engine vendor | indexable, crawlable_robots_txt, valid_http_200, noindex_absent, language_detectable |
| B | Hypothesis-grade; plausible but not officially confirmed | entity_first_h2, single_sentence_definition, faq_repetition, dated_volatile_facts, llms_txt_present, ai_schema_extensions |
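The matrix rows described above can be sketched as a simple data structure. This is an illustrative model only; the entry and field names (`MatrixEntry`, `engine`, `signal`, `tier`) are assumptions, not Rankion's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MatrixEntry:
    """One cell of the engine x signal matrix (hypothetical shape)."""
    engine: str  # e.g. "google_ai_overviews", "chatgpt_search"
    signal: str  # e.g. "indexable", "entity_first_h2"
    tier: str    # "A" = vendor-confirmed, "B" = hypothesis

# A few example rows taken from the tier table above.
MATRIX = [
    MatrixEntry("google_ai_overviews", "indexable", "A"),
    MatrixEntry("google_ai_overviews", "llms_txt_present", "B"),
    MatrixEntry("chatgpt_search", "entity_first_h2", "B"),
]

def tier_a_signals(engine: str) -> list[str]:
    """Signals the vendor has publicly confirmed for this engine."""
    return [e.signal for e in MATRIX if e.engine == engine and e.tier == "A"]
```

Keeping the tier on every row is what makes the anti-bullshit rule enforceable downstream: any consumer can filter to confirmed signals only.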
Rankion shows the tier on every finding in Grounding Audit and on every lift score in the Grounding Check Validation report. So you always know whether you're fixing something because Google confirms it (Tier A) or because we hypothesize it works for ChatGPT/Perplexity/Claude (Tier B).
Why this matters — the anti-bullshit rule
The GEO industry loves to sell Tier-B signals as Tier A ("Schema.org FAQPage is mandatory for ChatGPT!"). We don't. Tier B is Tier B. If you build roadmap decisions on Tier-B signals, you should know that. Prioritizing Tier A over Tier B = certain wins; the reverse = hope.
How to use it
- In Grounding Audit: findings ship with evidence_tier: 'A' or 'B'. Backlog triage: Tier-A critical/high first.
- In the two-score model: technical_score is mostly Tier-A signals (eligibility); content_score is mostly Tier-B (GEO hypotheses).
- Via the API: GET /v1/engines returns the full matrix machine-readable — engine × signal × tier. Read-only, no credit cost.
- Via Grounding Check Validation: You can empirically measure Tier-B signals against your own citation data (see Grounding Check Validation). Validated signals can be promoted (gated) from Tier B to "observed Tier A".
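The triage rule from the list above (Tier-A critical/high first) can be sketched as a sort key. The finding shape here is hypothetical; the real Grounding Audit output may use different field names.

```python
# Severity order for triage; lower rank = fix sooner.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings: list[dict]) -> list[dict]:
    """Sort findings: Tier A before Tier B, then by severity."""
    return sorted(
        findings,
        key=lambda f: (f["evidence_tier"], SEVERITY_RANK[f["severity"]]),
    )

findings = [
    {"id": 1, "evidence_tier": "B", "severity": "critical"},
    {"id": 2, "evidence_tier": "A", "severity": "high"},
    {"id": 3, "evidence_tier": "A", "severity": "critical"},
]
print([f["id"] for f in triage(findings)])  # [3, 2, 1]
```

Note that a Tier-B critical finding still ranks below every Tier-A finding here; that is exactly the "certain wins before hope" prioritization.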
API
| Method | Endpoint | Description | Credits |
| --- | --- | --- | --- |
| GET | /v1/engines | Full engine × signal × evidence-tier matrix | 0 |
Auth: Sanctum token. Read-only.
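A minimal call sketch, using only the Python standard library. Only the GET /v1/engines endpoint and Sanctum bearer-token auth come from this page; the base URL and response shape are assumptions.

```python
import json
import urllib.request

def build_matrix_request(base_url: str, token: str) -> urllib.request.Request:
    """Build the read-only matrix request (0 credits)."""
    return urllib.request.Request(
        f"{base_url}/v1/engines",
        headers={
            "Authorization": f"Bearer {token}",  # Sanctum token
            "Accept": "application/json",
        },
    )

def fetch_engine_matrix(base_url: str, token: str) -> dict:
    """Fetch and decode the engine x signal x tier matrix."""
    with urllib.request.urlopen(build_matrix_request(base_url, token)) as resp:
        return json.load(resp)

# Base URL is a placeholder, not a documented endpoint host.
req = build_matrix_request("https://api.rankion.ai", "SANCTUM_TOKEN")
```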