
Content Audits API

Audit an entire site — pages, recommendations, JSON export.

The content audits API runs a full site audit: it crawls a domain, scores every page on content quality, keyword coverage, and technical SEO signals, and returns prioritized recommendations per URL. Unlike the Page Deep Audit (single-URL, vision-based), the content audit covers the entire site in breadth. All endpoints live under /v1/... and are team-scoped.

Module context: Content Audit.

Endpoints

| Method | Endpoint | Description | Credits |
|---|---|---|---|
| GET | /v1/content-audits | All audits in the team (paginated) | |
| POST | /v1/content-audits | Start a new audit — body: {project_id, max_pages?, exclude_paths?} → 202 | 10 |
| GET | /v1/content-audits/{id} | Audit header + aggregate scores + page list (paginated) | |
| GET | /v1/content-audits/{id}/pages | All pages of the audit (?score_lt=&category=&sort=) | |
| GET | /v1/content-audits/{id}/export | JSON export of the full audit including all pages and recommendations | |
| GET | /v1/content-audit-pages/{id} | Single page with recommendations, score breakdown, issue list | |
| PATCH | /v1/content-audit-pages/{id}/solved | Mark page as solved — body: {solved: bool, note?} | |

Start an audit

TOKEN="$RANKION_API_TOKEN"
BASE="https://rankion.ai/api/v1"

curl -X POST "$BASE/content-audits" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{
    "project_id":12,
    "max_pages":200,
    "exclude_paths":["/tag/","/author/","/wp-admin/"]
  }'

Response 202 Accepted:

{
  "id": 5,
  "status": "pending",
  "message": "Content audit dispatched"
}

The crawl runs asynchronously; as a rule of thumb, 200 pages take about 5–8 minutes. max_pages is a hard limit: the crawler stops as soon as it is reached.
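Because the crawl is asynchronous, scripts should poll with an upper bound rather than loop forever. A sketch of a small helper (the status values are taken from the responses on this page; the 20-poll cap and 30-second interval are arbitrary choices, and `BASE`/`TOKEN` are assumed to be set as in the examples above):

```shell
# Sketch: poll the audit header until the crawl completes, with an upper
# bound so a stuck crawl cannot hang the calling script indefinitely.
# Usage: wait_for_audit <audit-id> [max-polls]
wait_for_audit() {
  local id=$1 max=${2:-20} n=0
  until [ "$(curl -s "$BASE/content-audits/$id" \
      -H "Authorization: Bearer $TOKEN" | jq -r .status)" = "completed" ]; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      echo "audit $id not completed after $max polls" >&2
      return 1
    fi
    sleep 30
  done
}
```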

Audit header

curl "$BASE/content-audits/$ID" \
  -H "Authorization: Bearer $TOKEN"

Response:

{
  "id": 5,
  "project_id": 12,
  "status": "completed",
  "started_at": "2026-04-30T10:12:00Z",
  "completed_at": "2026-04-30T10:18:42Z",
  "pages_crawled": 187,
  "pages_failed": 3,
  "avg_score": 64,
  "score_distribution": {
    "0-40": 12,
    "40-60": 38,
    "60-80": 91,
    "80-100": 46
  },
  "top_issues": [
    { "category": "missing_meta_description", "count": 47 },
    { "category": "thin_content", "count": 23 }
  ],
  "credits_used": 10
}
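The aggregate fields are easy to post-process with jq. A sketch that computes the share of critical pages from a saved header (the sample JSON mirrors the response above; `header.json` is a hypothetical local file):

```shell
# Miniature copy of the audit-header response above, saved locally.
cat > header.json <<'EOF'
{ "pages_crawled": 187,
  "score_distribution": { "0-40": 12, "40-60": 38, "60-80": 91, "80-100": 46 } }
EOF

# Share of critical pages (score < 40), rounded to one decimal place.
jq -r '.score_distribution["0-40"] as $crit
  | "\($crit) of \(.pages_crawled) pages are critical (\(($crit / .pages_crawled * 1000 | round) / 10)%)"' \
  header.json
# → 12 of 187 pages are critical (6.4%)
```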

Pages with filters

# All pages with score < 40 (critical)
curl "$BASE/content-audits/$ID/pages?score_lt=40&sort=score_asc" \
  -H "Authorization: Bearer $TOKEN"

# All pages with issue category "thin_content"
curl "$BASE/content-audits/$ID/pages?category=thin_content" \
  -H "Authorization: Bearer $TOKEN"

Page detail

curl "$BASE/content-audit-pages/$PAGE_ID" \
  -H "Authorization: Bearer $TOKEN"

Response:

{
  "id": 412,
  "audit_id": 5,
  "url": "https://example.com/blog/old-post",
  "score": 38,
  "score_breakdown": {
    "content_quality": 42,
    "keyword_coverage": 28,
    "technical_seo": 51,
    "freshness": 22
  },
  "issues": [
    {
      "category": "thin_content",
      "severity": "high",
      "message": "Only 320 words — top rankings for this keyword have on average 1,800",
      "fix_suggestion": "Expand to 1,500+ words, add an FAQ section"
    }
  ],
  "solved": false
}
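For triage, the page's issue list can be filtered by severity. A sketch on a saved page detail (the second, low-severity issue is invented for contrast):

```shell
# Miniature page detail mirroring the response above; the second issue
# ("outdated_example") is a hypothetical addition for contrast.
cat > page.json <<'EOF'
{ "url": "https://example.com/blog/old-post",
  "issues": [
    { "category": "thin_content", "severity": "high",
      "fix_suggestion": "Expand to 1,500+ words, add an FAQ section" },
    { "category": "outdated_example", "severity": "low",
      "fix_suggestion": "Refresh the code samples" }
  ] }
EOF

# Keep only high-severity issues and print one compact to-do line each.
jq -r '.issues[] | select(.severity == "high")
  | "\(.category): \(.fix_suggestion)"' page.json
# → thin_content: Expand to 1,500+ words, add an FAQ section
```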

Mark pages as solved

After fixing a page (e.g. via the Articles API's optimize endpoint), mark it as solved: it disappears from open issue lists but remains in the audit history.

curl -X PATCH "$BASE/content-audit-pages/$PAGE_ID/solved" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"solved":true,"note":"Content expanded to 1850 words, FAQ added."}'

JSON export

For external reports or backups:

curl "$BASE/content-audits/$ID/export" \
  -H "Authorization: Bearer $TOKEN" -o audit-5.json

The export contains all pages with full recommendations and score breakdowns — usable as input for an Excel pivot or as briefing material.
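The export can be flattened into a CSV for the Excel pivot mentioned above. A sketch, assuming the export carries a `pages` array with per-page fields (the exact export shape is not documented here; the miniature sample is invented):

```shell
# Invented miniature export standing in for the real audit-5.json.
cat > audit-5.json <<'EOF'
{ "id": 5,
  "pages": [
    { "url": "https://example.com/a", "score": 38, "solved": false },
    { "url": "https://example.com/b", "score": 81, "solved": true }
  ] }
EOF

# Header row plus one CSV row per page, ready for a spreadsheet import.
jq -r '(["url", "score", "solved"] | @csv),
       (.pages[] | [.url, .score, (.solved | tostring)] | @csv)' \
  audit-5.json > audit-5.csv
cat audit-5.csv
```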

Complete example: audit → critical pages → optimize

PID=12

# 1) Start audit
ID=$(curl -s -X POST "$BASE/content-audits" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"project_id":'$PID',"max_pages":200}' | jq -r .id)

# 2) Poll until completed
while [ "$(curl -s "$BASE/content-audits/$ID" \
  -H "Authorization: Bearer $TOKEN" | jq -r .status)" != "completed" ]; do
  sleep 30
done

# 3) Top-10 critical pages
curl -s "$BASE/content-audits/$ID/pages?score_lt=40&sort=score_asc&per_page=10" \
  -H "Authorization: Bearer $TOKEN" \
  | jq -r '.data[] | "\(.score)\t\(.url)"'

# 4) Run one page through the content optimizer
URL="https://example.com/blog/old-post"
curl -s -X POST "$BASE/content-optimizer/analyze" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d "{\"url\":\"$URL\",\"keyword\":\"…\",\"project_id\":$PID}"

Notes & pitfalls

  • max_pages protects against credit bursts. The default is conservative — for large sites bump it up explicitly, otherwise only top-level pages are crawled.
  • exclude_paths is substring-based. /tag/ matches any URL with /tag/ in the path.
  • Re-audit does not overwrite. Each audit is its own snapshot. Comparison = run two audits, export both IDs, diff them.
  • Marking solved is optional. Helps with re-audit comparisons (solved pages get visually marked as "previously critical").
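The re-audit comparison from the notes can be scripted. A sketch that diffs per-URL scores between two exports, again assuming each export carries a `pages` array with `url` and `score` (the two miniature exports are invented):

```shell
# Invented miniature exports of two audits of the same project.
cat > audit-5.json <<'EOF'
{ "pages": [
  { "url": "https://example.com/a", "score": 38 },
  { "url": "https://example.com/b", "score": 70 }
] }
EOF
cat > audit-6.json <<'EOF'
{ "pages": [
  { "url": "https://example.com/a", "score": 74 },
  { "url": "https://example.com/b", "score": 70 }
] }
EOF

# Index the old audit by URL, then print "old -> new" per page of the new one.
jq -rn --slurpfile old audit-5.json --slurpfile new audit-6.json '
  ($old[0].pages | map({ (.url): .score }) | add) as $before
  | $new[0].pages[]
  | "\(.url)\t\($before[.url] // "new") -> \(.score)"'
```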

Related: Projects API · Content Optimizer API · Content Audit · Page Deep Audit.

Last updated: May 1, 2026