rankion.ai

Grounding Audit

Audit single URLs, lists, or whole sitemaps for how citation-ready they are for ChatGPT, Perplexity, Gemini and Claude.

Grounding Audit answers the question: "Would an LLM even pull this page as a source — and if not, why?" You drop in a URL, a list of URLs, or a sitemap.xml. Rankion fetches the pages, runs them through a rule-based check pipeline, and returns a score (0–100), a tier (not-grounded/weak/partial/grounded), and structured findings with a concrete fix per issue. Everything async via REST — no more 90-second requests.

What it can do

  • Single-URL audit — submit one URL, get a 202 + audit_id back, poll for status.
  • Bulk audit — dispatch up to several hundred URLs in one batch with a shared batch_id and a shared webhook.
  • Sitemap audit — drop in a sitemap.xml, Rankion parses it (including sitemap-index recursion), filters by include_patterns/exclude_patterns, and dispatches the batch.
  • NDJSON stream — stream batch results line by line instead of loading one JSON blob.
  • HMAC-signed webhooks — batch finishes → POST to your callback_url with X-Rankion-Signature header (SHA-256, 4 retries).
  • Structured findings — per issue: id, severity (critical/high/medium/low/info), framework (v1.5/eeat/people-first), title, description, fix.type, fix.action, and a spec URL.
  • Tier logic — thresholds: 0–25 not-grounded, 26–50 weak, 51–75 partial, 76–100 grounded.
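The HMAC-signed webhook above can be verified on the receiving end with the standard library. A minimal sketch, assuming the X-Rankion-Signature header carries the hex-encoded HMAC-SHA-256 of the raw request body keyed with the one-time callback_secret (the header name and SHA-256 are from the docs; the exact encoding of the digest is an assumption):

```python
import hashlib
import hmac

def verify_rankion_signature(raw_body: bytes, signature: str, secret: str) -> bool:
    """Check an X-Rankion-Signature header against the raw webhook body.

    ASSUMPTION: signature = hex HMAC-SHA-256 of the raw body, keyed with
    the callback_secret returned when the batch was created. Verify the
    exact encoding against the API reference before relying on this.
    """
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)
```

Verify against the raw bytes of the request body, not a re-serialized JSON object, since any whitespace difference changes the digest.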

When to use

  • You shipped a pillar article and want to know whether it's actually citation-ready in LLMs.
  • You want to scan the entire site for citation readiness instead of clicking through URL by URL.
  • You run your own pipelines (CI, CMS post-publish hook) and need the data via webhook.
  • AI Visibility Tracking flagged you as "not cited" and you want to know which pages need work.

Workflow

  1. Start a single audit — POST /v1/grounding/analyze with {url, frameworks?:["v1.5"]}. Returns 202 + audit_id + poll_endpoint.
  2. Poll — GET /v1/grounding/audits/{id} until status=completed. Returns score, tier, findings[], raw_text.
  3. Bulk — POST /v1/grounding/batch with urls[] and an optional callback_url. Returns batch_id + callback_secret (one-time — store it!).
  4. Sitemap — POST /v1/grounding/sitemap-audit with sitemap_url, optional include_patterns/exclude_patterns. Credits are charged AFTER parse.
  5. Fetch results — GET /v1/grounding/batches/{id} (summary) or …/results.ndjson (line-by-line stream).
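The poll step (step 2) is a simple loop. A transport-agnostic sketch — http_get is an injected callable rather than a hypothetical HTTP client, so the example stays runnable; the endpoint path and the status/score/tier/findings fields come from the workflow above, while the "processing"/"failed" status values are assumptions:

```python
import time

def poll_audit(audit_id: str, http_get, interval: float = 2.0, max_attempts: int = 30) -> dict:
    """Poll GET /v1/grounding/audits/{id} until status=completed.

    `http_get` is any callable taking a path and returning the parsed
    JSON body as a dict (injected so this sketch needs no HTTP library).
    Stay under the documented polling limit of 120 requests/min.
    """
    path = f"/v1/grounding/audits/{audit_id}"
    for _ in range(max_attempts):
        audit = http_get(path)
        status = audit.get("status")
        if status == "completed":
            return audit  # contains score, tier, findings[], raw_text
        if status == "failed":  # ASSUMED failure status value
            raise RuntimeError(f"audit {audit_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"audit {audit_id} still pending after {max_attempts} polls")
```

With a webhook-based flow (steps 3–4) you would skip polling entirely and wait for the signed callback instead.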

Tier thresholds

Score     Tier
0–25      not-grounded
26–50     weak
51–75     partial
76–100    grounded
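The thresholds above map directly to a small lookup function, useful when post-processing results in your own pipeline:

```python
def tier_for_score(score: int) -> str:
    """Map a 0-100 grounding score to its tier, per the thresholds table."""
    if not 0 <= score <= 100:
        raise ValueError(f"score must be 0-100, got {score}")
    if score <= 25:
        return "not-grounded"
    if score <= 50:
        return "weak"
    if score <= 75:
        return "partial"
    return "grounded"
```

The API already returns the tier alongside the score, so this is only needed client-side, e.g. for re-bucketing historical results you persisted yourself.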

API

Method  Endpoint                                     Credits
POST    /v1/grounding/analyze                        1
GET     /v1/grounding/audits/{id}                    0
GET     /v1/grounding/audits                         0
POST    /v1/grounding/batch                          url_count
GET     /v1/grounding/batches/{id}                   0
GET     /v1/grounding/batches/{id}/results.ndjson    0
POST    /v1/grounding/sitemap-audit                  url_count_after_parse
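The results.ndjson endpoint returns one JSON object per line, so a reader can process a large batch without loading one blob. A minimal sketch (the per-line shape shown in the test is an assumption; only score and tier come from the docs):

```python
import json
from typing import Iterable, Iterator

def iter_ndjson(lines: Iterable[str]) -> Iterator[dict]:
    """Yield one parsed result dict per non-empty NDJSON line.

    Accepts any iterable of strings, e.g. a streamed HTTP response body
    iterated line by line, so memory use stays constant per batch.
    """
    for line in lines:
        line = line.strip()
        if line:  # skip blank lines and the trailing newline
            yield json.loads(line)
```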

Throttling: analyze 30/min, batch + sitemap 5/min, polling 120/min. Auth: Sanctum token.
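To stay under those limits when dispatching many single audits, a client can pace itself. A sliding-window limiter sketch (the 30/min figure is from the docs; the pacing strategy itself is ours, and the server still enforces its own throttling regardless):

```python
import time

class RateLimiter:
    """Client-side pacing for a documented limit, e.g. RateLimiter(30) for analyze.

    Tracks call timestamps in a sliding window; `clock` is injectable
    so the behavior is testable without real waiting.
    """
    def __init__(self, max_calls: int, per_seconds: float = 60.0, clock=time.monotonic):
        self.max_calls = max_calls
        self.per_seconds = per_seconds
        self.clock = clock
        self.calls: list[float] = []

    def wait_time(self) -> float:
        """Seconds until the next call is allowed (0.0 if allowed now)."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        self.calls = [t for t in self.calls if now - t < self.per_seconds]
        if len(self.calls) < self.max_calls:
            return 0.0
        return self.per_seconds - (now - self.calls[0])

    def record(self) -> None:
        """Call after each successful dispatch."""
        self.calls.append(self.clock())
```

Before each POST, sleep for wait_time() and then record(); a 429 from the server should still be handled with backoff, since this only smooths the client side.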

Known limits

  • eeat + people-first frameworks are stubs in the REST pipeline today — v1.5 is the framework that runs end-to-end.
  • Per-batch concurrency is not guaranteed. Effective concurrency = 5 workers server-wide. The concurrency param is a hint, not a contract.
  • No auto-refund on audit failures. Sitemap audits debit credits AFTER parse — you don't know the exact URL count until filtering completes.
  • TTL — audit results soft-expire after 30 days, hard-delete after 90 days. Persist them yourself if you need long-term history.

Related modules

  • AI Visibility Tracking — measures whether your pages actually land in LLM answers. Grounding Audit explains why (or why not).
  • Content Audit — classic SEO issues. Grounding Audit zooms in on LLM citation readiness.
  • Page Deep Audit — deeper SEO/performance check per URL.
  • Agentic Chat — the master agent can dispatch grounding audits directly from chat.
Last updated: May 10, 2026
