Track AI visibility (ChatGPT, Perplexity, Gemini)
How to track whether your brand is cited in AI search engines, and where the gaps are.
Classic SEO measures whether you're in Google's top 10. AI visibility measures whether you appear in answers from ChatGPT, Perplexity, Gemini and Claude at all — and if so, how. This guide walks you through the full tracking workflow.
What is AI visibility?
LLMs generate answers from their own training, from web searches or from retrieval-augmented sources. When a user asks "What is the best vegan protein source?", the model lists vendors, brands and URLs. Whoever gets named there wins traffic, trust and market share. Whoever doesn't is invisible — no matter how good the Google position is.
Rankion automatically and periodically puts your brand-relevant questions to several models and logs the results: Are you mentioned? In what context? Which sources does the model cite?
Create a tracking project
In the menu: AI Visibility → New tracking. The wizard asks you:
- Name — e.g. "Vegan protein DACH"
- Mode — domain tracking (one brand/domain) or keyword tracking (topic, all relevant players)
- Domain or keywords — what is being tracked
Via the API:
curl -X POST https://rankion.ai/api/v1/tracking-projects \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "Vegan protein DACH", "mode": "domain", "domain": "mydomain.com"}'
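A successful request returns the created project. The exact response shape lives in the API reference; this is only a plausible sketch (field names such as id and status are assumptions):

```json
{
  "id": "tp_7f3a",
  "name": "Vegan protein DACH",
  "mode": "domain",
  "domain": "mydomain.com",
  "status": "active"
}
```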
Define brand aliases
Brands have spelling variants. "MyDomain", "My Domain Inc.", "MD24" — all the same company. Maintain aliases on the project so mentions are attributed correctly; without them, mentions under variant spellings are missed.
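Aliases are maintained in the project settings. Via the API, a request along these lines should work (the aliases field name is an assumption; check the API reference):

```shell
curl -X PATCH https://rankion.ai/api/v1/tracking-projects/{id} \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"aliases": ["MyDomain", "My Domain Inc.", "MD24"]}'
```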
Pick platforms
Pick which models are queried. Typical selection:
{
"llm_platforms": ["chatgpt", "perplexity", "gemini", "claude"]
}
Each platform costs credits per run. You can also pick just one if your market is clear (e.g. only Perplexity for tech B2B).
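The platform list can also be set when creating the project via the API, assuming llm_platforms is accepted in the create payload (check the API reference):

```shell
curl -X POST https://rankion.ai/api/v1/tracking-projects \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "Vegan protein DACH", "mode": "domain", "domain": "mydomain.com", "llm_platforms": ["chatgpt", "perplexity"]}'
```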
Set the frequency
tracking_frequency controls how often the tracking repeats: daily, weekly or monthly. Weekly is enough for most brands; high-frequency markets (news, crypto) benefit from daily.
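In the project payload this is a single field, e.g.:

```json
{
  "tracking_frequency": "weekly"
}
```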
Start a run + wait
The first run is automatically scheduled when you create the project. To trigger one manually:
curl -X POST https://rankion.ai/api/v1/tracking-projects/{id}/run \
  -H "Authorization: Bearer YOUR_TOKEN"
Response: HTTP 202 with a job ID. Duration depends on number of questions × platforms — usually 2–10 minutes. The UI polls automatically and shows live progress.
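If you work headless instead of via the UI, you can poll the job yourself. The endpoint path and the exact status values below are assumptions; check the API reference:

```shell
curl https://rankion.ai/api/v1/jobs/{job_id} \
  -H "Authorization: Bearer YOUR_TOKEN"
# repeat until the response reports a terminal state, e.g. "completed" or "failed"
```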
Interpret the score
After the run you see:
- Visibility score (0–100) — how often you were mentioned, in what position of the answer, with what sentiment.
- Reality check / AVI — compares the model's knowledge with the real top-10 SERPs. A big gap means the LLM knows you but Google doesn't (or vice versa).
- Platform breakdown — where you're strong, where you're weak.
- Trends — historical development per run.
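The same numbers are available programmatically; a plausible call (the results path is an assumption, see the API reference):

```shell
curl https://rankion.ai/api/v1/tracking-projects/{id}/results \
  -H "Authorization: Bearer YOUR_TOKEN"
```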
Analyze citations
The exciting part is the cited sources. Models often link to specific URLs in their answers. Rankion collects them, aggregates per domain and shows you:
- Which URLs are cited most often?
- Which competitors are at the top?
- Which of your own pieces of content aren't picked up at all?
That's your content roadmap: topics where you're missing are your next articles — see Your first article with the AI Content Editor.
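The per-domain aggregation itself is straightforward. As a toy illustration with made-up URLs (not real Rankion output), counting cited domains from a flat URL list looks like this:

```shell
# sample citation URLs (made up for illustration)
cat > citations.txt <<'EOF'
https://a.com/guide
https://a.com/comparison
https://b.com/review
EOF
# strip scheme and path, then count citations per domain, most-cited first
sed -E 's#^https?://##; s#/.*##' citations.txt | sort | uniq -c | sort -rn
```

The most-cited domains at the top of that list are exactly the sources the models lean on.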
Read on
In-depth knowledge of setup, limits, the data model and every endpoint lives in AI Visibility Tracking and the API reference. Questions on credits & quotas are answered in Credits & Rate Limits.