
AI Visibility Tracking API

Create tracking projects, start runs, fetch scores plus cited sources.

The tracking API is the API layer around the AI Visibility Tracking module — it measures how your brand shows up in ChatGPT, Perplexity, Claude, Gemini, and others. All endpoints live under /v1/... and are team-scoped via auth:sanctum.

UI context: Track AI Visibility · module documentation: AI Visibility Tracking.

Projects (wizard + runtime)

Tracking projects are created through a 5-step wizard: dispatch analysis → poll status → activate → start run → read scores. Activation is atomic: the activate action creates keywords, prompts, and competitors together and switches the project on.

| Method | Endpoint | Description | Credits |
| --- | --- | --- | --- |
| GET | `/v1/tracking-projects` | All team projects | – |
| POST | `/v1/tracking-projects/analyze` | Wizard steps 1+2: analyze domain/keyword. Body: `{mode: "domain" \| "keyword", domain?, keyword?, name?, language?, country?}` → 202 + `project_id` | – |
| GET | `/v1/tracking-projects/{id}` | Detail with KPIs | – |
| PUT | `/v1/tracking-projects/{id}` | Settings: `{name?, brand_name?, brand_aliases[], tracking_frequency?, llm_platforms[], track_aio?, reality_check_enabled?}` | – |
| GET | `/v1/tracking-projects/{id}/analysis-status` | Wizard polling: `{analysis_status, analysis_phase, suggested_keywords, suggested_competitors, suggested_prompts}` | – |
| POST | `/v1/tracking-projects/{id}/activate` | Wizard step 5 (atomic). Body: `{keywords[], competitors[], prompts[], platforms[], frequency, with_search?, reality_check_enabled?, personas[]}` | – |
| POST | `/v1/tracking-projects/{id}/run` | Start a tracking run (async) | – |
| GET | `/v1/tracking-projects/{id}/runs` | Run history (`?limit=&offset=`) | – |
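As a sketch, updating the runtime settings of an existing project could look like this (the project id is hypothetical; the field names come from the table above, and the `weekly`/`chatgpt`/`perplexity` values are taken from elsewhere in this document):

```shell
TOKEN="$RANKION_API_TOKEN"
BASE="https://rankion.ai/api/v1"
PID=42   # hypothetical project id

curl -s -X PUT "$BASE/tracking-projects/$PID" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"tracking_frequency":"weekly","llm_platforms":["chatgpt","perplexity"]}'
```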

Keywords + prompts

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | `/v1/tracking-projects/{id}/keywords` | `?active_only=true` |
| POST | `/v1/tracking-projects/{id}/keywords` | Body: `{keyword, language?, country?, is_active?}` |
| PUT / DELETE | `/v1/tracking-keywords/{id}` | Update / delete |
| GET | `/v1/tracking-projects/{id}/prompts` | List |
| POST | `/v1/tracking-projects/{id}/prompts` | Body: `{prompt_text, prompt_category?, persona_label?, commercial_intent?}` |
| PUT / DELETE | `/v1/tracking-prompts/{id}` | Update / delete |
| POST | `/v1/tracking-projects/{id}/prompts/import` | CSV/Excel bulk import: multipart `file`, optional `mapping[col]=index` |
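A bulk import could look like this sketch. The file name, the column names used in `mapping`, and the 0-based indices are illustrative assumptions, not documented values:

```shell
TOKEN="$RANKION_API_TOKEN"
BASE="https://rankion.ai/api/v1"
PID=42   # hypothetical project id

# prompts.csv and the mapping keys/indices below are illustrative
curl -s -X POST "$BASE/tracking-projects/$PID/prompts/import" \
  -H "Authorization: Bearer $TOKEN" \
  -F "file=@prompts.csv" \
  -F "mapping[prompt_text]=0" \
  -F "mapping[prompt_category]=1"
```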

Cited sources (outreach pipeline)

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | `/v1/tracking-projects/{id}/cited-sources` | `?domain=&outreach_status=&is_own_domain=` |
| GET | `/v1/tracking-projects/{id}/cited-sources/export` | CSV stream: `?from=&to=&status=&platform=&search=&sort=` |
| POST | `/v1/tracking-projects/{id}/cited-sources/analyze` | Dispatch citation analysis. Body optional: `{source_ids: []}` → 202 |
| PATCH | `/v1/tracking-projects/{id}/cited-sources/{source_id}` | Body: `{outreach_status?, notes?}`. Status values (literal): `neu` (new), `anfrage_gestellt` (request sent), `verlinkt` (linked), `abgelehnt` (declined), `ignoriert` (ignored) |
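Marking a cited source as contacted, as a sketch (the ids are hypothetical; the status value is one of the literal values listed above):

```shell
TOKEN="$RANKION_API_TOKEN"
BASE="https://rankion.ai/api/v1"
PID=42          # hypothetical project id
SOURCE_ID=123   # hypothetical cited-source id

curl -s -X PATCH "$BASE/tracking-projects/$PID/cited-sources/$SOURCE_ID" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"outreach_status":"anfrage_gestellt","notes":"Asked for a link via contact form"}'
```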

Insights & scores

| Method | Endpoint | Description | Credits |
| --- | --- | --- | --- |
| GET | `/v1/tracking-projects/{id}/scores` | Score history (`?from=&to=`) | – |
| GET | `/v1/tracking-projects/{id}/platform-breakdown` | Score per LLM platform | – |
| GET | `/v1/tracking-projects/{id}/insights/low-hanging-fruit` | `?days=30&min_possibility=60&limit=5` | – |
| GET | `/v1/tracking-projects/{id}/insights/platform-gap` | `?days=30` | – |
| POST | `/v1/tracking-projects/{id}/insights/platform-gap/{platform}/diagnose` | Async root-cause analysis | 1 |
| GET | `/v1/tracking-projects/{id}/insights/potential-hero` | Prompts with hero potential | – |
| POST | `/v1/tracking-projects/{id}/prompts/{prompt}/generate-brief` | Content brief via Claude | 3 |
| POST | `/v1/tracking-projects/{id}/brand-aliases` | Body: `{alias}` | – |
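Fetching quick-win prompts with the query parameters documented above (the response shape is not shown in this document, so the sketch just pretty-prints the JSON; the project id is hypothetical):

```shell
TOKEN="$RANKION_API_TOKEN"
BASE="https://rankion.ai/api/v1"
PID=42   # hypothetical project id

curl -s "$BASE/tracking-projects/$PID/insights/low-hanging-fruit?days=30&min_possibility=60&limit=5" \
  -H "Authorization: Bearer $TOKEN" | jq .
```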

Reality Check (AVI)

The AI Visibility Index aggregates six dimensions (Awareness, Recommendation, Trust, Coverage, Sentiment, Competitive Edge) into a single 0–100 score.
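The exact weighting of the six dimensions is not documented here; under the (unverified) assumption of equal weights, the composite would reduce to a simple mean, as in this sketch:

```shell
# Equal-weight aggregation sketch for the six AVI dimensions:
# Awareness, Recommendation, Trust, Coverage, Sentiment, Competitive Edge.
avi_equal_weight() {
  # args: six 0-100 dimension scores; prints the rounded mean
  echo "$@" | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.0f\n", s / NF }'
}

avi_equal_weight 72 65 80 58 90 61   # → 71
```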

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | `/v1/tracking-projects/{id}/avi` | Latest AVI with tier (red/yellow/green) + recommendations |
| GET | `/v1/tracking-projects/{id}/dimensions` | Six-dimension scores per platform |
| POST | `/v1/tracking-projects/{id}/reality-check-report` | PDF generation (async) |
| GET | `/v1/tracking-projects/{id}/reality-check-report/status` | `{status, progress?, download_url?}` |
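Generating and downloading the report could look like this sketch (the project id is hypothetical, and it assumes `download_url` is an absolute URL once the report is ready):

```shell
TOKEN="$RANKION_API_TOKEN"
BASE="https://rankion.ai/api/v1"
PID=42   # hypothetical project id

# Dispatch generation (requires reality_check_enabled=true on the project)
curl -s -X POST "$BASE/tracking-projects/$PID/reality-check-report" \
  -H "Authorization: Bearer $TOKEN"

# Poll the status endpoint (bounded to ~1 minute here) until download_url appears
for attempt in 1 2 3 4 5 6 7 8 9 10 11 12; do
  URL=$(curl -s "$BASE/tracking-projects/$PID/reality-check-report/status" \
    -H "Authorization: Bearer $TOKEN" | jq -re '.download_url // empty') && break
  sleep 5
done
[ -n "$URL" ] && curl -sL "$URL" -o reality-check.pdf
```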

Complete example: wizard → run → scores

```shell
TOKEN="$RANKION_API_TOKEN"
BASE="https://rankion.ai/api/v1"

# 1) Start analysis
PID=$(curl -s -X POST "$BASE/tracking-projects/analyze" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"mode":"domain","domain":"example.com","language":"en","country":"US"}' \
  | jq -r .project_id)

# 2) Poll until completed
while [ "$(curl -s "$BASE/tracking-projects/$PID/analysis-status" \
  -H "Authorization: Bearer $TOKEN" | jq -r .analysis_status)" != "completed" ]; do
  sleep 3
done

# 3) Atomic activate
curl -s -X POST "$BASE/tracking-projects/$PID/activate" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{
    "keywords":["best crm","crm software"],
    "competitors":[{"domain":"hubspot.com"}],
    "prompts":[{"prompt":"What is the best CRM?","category":"recommendation"}],
    "platforms":["chatgpt","perplexity"],
    "frequency":"weekly",
    "with_search":true,
    "reality_check_enabled":true
  }'

# 4) Dispatch the first run
curl -s -X POST "$BASE/tracking-projects/$PID/run" \
  -H "Authorization: Bearer $TOKEN"

# 5) Read scores (once the run is completed)
curl -s "$BASE/tracking-projects/$PID/scores?from=2026-01-01" \
  -H "Authorization: Bearer $TOKEN"

# 6) AVI + recommendations
curl -s "$BASE/tracking-projects/$PID/avi" \
  -H "Authorization: Bearer $TOKEN"
```

Notes & pitfalls

  • Activate is atomic. Only after activate does the project reach status=active and runs can start; a run started before that returns 409.
  • AVI needs a run. GET /avi returns 404 until at least one run has completed.
  • Cited-source analysis is batched by default. Without source_ids, all previously unanalyzed sources are processed.
  • The Reality Check report is optional. Only dispatch it if reality_check_enabled=true on the project; otherwise the request fails validation.
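The 404 from the first note can be handled defensively, as in this sketch (the project id is hypothetical; the success branch just pretty-prints whatever the endpoint returns):

```shell
TOKEN="$RANKION_API_TOKEN"
BASE="https://rankion.ai/api/v1"
PID=42   # hypothetical project id

CODE=$(curl -s -o /tmp/avi.json -w '%{http_code}' \
  -H "Authorization: Bearer $TOKEN" "$BASE/tracking-projects/$PID/avi")
if [ "$CODE" = "200" ]; then
  jq . /tmp/avi.json
else
  echo "AVI not available yet (HTTP $CODE)"   # 404 until a tracking run has completed
fi
```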

Related: Projects API · Credits · AI Visibility Tracking · Track AI Visibility.

Last updated: May 1, 2026
