Page Deep Audit (Vision + AI Render)
Deep analysis of a single URL with screenshot, AI render comparison, and a 6–9 min render phase.
Page Deep Audit is the deepest analysis Rankion runs on a single URL. Where Content Audit (Site Crawl) works from content signals, Page Deep Audit looks at the landing page as a visual product: 3 screenshots (Desktop, Tablet, Mobile with real UA emulation), Lighthouse metrics, an Opus 4.7 multimodal analysis of layout, trust, CTA, persona fit, and critical issues — plus up to 3 gpt-image-2 AI renders that show what the optimal variant could look like visually. This is the tool for CRO and landing-page iteration, not for SEO bulk work.
What it can do
- 3 source screenshots — Desktop, Tablet, Mobile with real user-agent emulation.
- Lighthouse metrics — Performance, SEO, Accessibility, Best Practices as numbers.
- Opus 4.7 vision analysis — `user_intent`, `persona_fit`, `trust_score`, `layout_score`, `cta_score`, `problem_solution_clarity`, `above_the_fold_quality`, `mobile_friendliness_visual`.
- Personas + pain points — derived from the visual, not from assumptions.
- Critical issues — prioritized by severity (high / medium / low) with evidence snippet.
- Improvement suggestions — per area (`headline`, `cta`, `trust`, `layout`, `copy`, `visuals`, `forms`, `navigation`, `seo`, `accessibility`) with a before/after example.
- 3 AI renders — gpt-image-2 generates the ideal Desktop, Tablet, Mobile version as a visual reference.
- Headline rewrite — a ready-to-paste H1 drop-in suggestion.
When to use
- You want to CRO-audit a landing page before pouring money into ads.
- You want to give a designer an objective visual reference ("this is how it should look").
- You're iterating a variant: Audit → Adjust → Re-audit → compare score diff.
- You need persona-driven copy suggestions based on the real page, not on briefing theory.
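The "re-audit and compare the score diff" loop can be sketched as a small shell helper. The `analysis.*_score` field paths are assumed from the report description in this doc, not a guaranteed schema, and the two sample responses below are made-up stand-ins for a before/after pair of audits:

```bash
# Hypothetical before/after responses (shape assumed from this doc's field list)
before='{"analysis":{"trust_score":58,"layout_score":64,"cta_score":49}}'
after='{"analysis":{"trust_score":71,"layout_score":66,"cta_score":63}}'

# Print "<field>: <before> -> <after>" for one score field
diff_score() {
  local key=$1
  local b a
  b=$(echo "$before" | jq ".analysis.$key")
  a=$(echo "$after"  | jq ".analysis.$key")
  echo "$key: $b -> $a"
}

diff_score trust_score
diff_score layout_score
diff_score cta_score
```

In a real loop, `before` and `after` would come from two `GET /page-audit/{id}` calls for the original and re-audited page.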
Workflow
- Start audit — `POST /page-audit` with `{url, tracking_project_id?, persona?}`. Returns `202` + `{id, status: "pending", url}`.
- Poll until completed — main flow ~1–2 minutes (`scraping` → `screenshotting` → `analyzing` → `completed`). AI renders run +6–9 minutes in a separate background job.
- Read reports — `GET /page-audit/{id}` returns 6 image URLs (3 source + 3 AI render), an `analysis` block with scores, personas, issues, suggestions, headline rewrite, and Lighthouse metrics.
- Iterate — apply suggestions in priority order, deploy the page, run a re-audit, compare the score diff.
Polling example:

```bash
# Kick off the audit and capture the returned id
ID=$(curl -s -X POST "$BASE/page-audit" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"url":"https://example.com/landing"}' | jq -r '.id')

# Poll until the main analysis is done and the last AI render (Mobile) has landed
while true; do
  R=$(curl -s -H "Authorization: Bearer $TOKEN" "$BASE/page-audit/$ID")
  STATUS=$(echo "$R" | jq -r '.status')
  IDEAL_M=$(echo "$R" | jq -r '.ideal_mobile_url // "null"')
  [ "$STATUS" = "completed" ] && [ "$IDEAL_M" != "null" ] && break
  sleep 30
done
```
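Once the audit is `completed`, the useful pieces can be pulled out of the detail response with jq. The JSON below is a hypothetical, heavily truncated response; the exact field names (`analysis.headline_rewrite`, `analysis.critical_issues[]`) are assumed from the report description above, not a published schema:

```bash
# Stand-in for: curl -s -H "Authorization: Bearer $TOKEN" "$BASE/page-audit/$ID"
response='{
  "id": "pa_123",
  "status": "completed",
  "analysis": {
    "headline_rewrite": "Ship faster with X",
    "critical_issues": [
      {"severity": "high",   "issue": "No visible CTA above the fold"},
      {"severity": "medium", "issue": "Trust badges missing"}
    ]
  }
}'

# Ready-to-paste H1 suggestion
echo "$response" | jq -r '.analysis.headline_rewrite'

# High-severity issues first
echo "$response" | jq -r '.analysis.critical_issues[] | select(.severity == "high") | .issue'
```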
API
| Method | Endpoint | Notes | Credits |
|---|---|---|---|
| POST | `/v1/page-audit` | Body `{url, tracking_project_id?, persona?}`, async `202` | 30 + up to 3×15 |
| GET | `/v1/page-audit/{id}` | Detail with 6 image URLs, analysis, Lighthouse | — |
| GET | `/v1/page-audits` | List, filter `?per_page=25&tracking_project_id=` | — |
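The list endpoint is the quickest way to scan past runs for a project. The doc only specifies the filter parameters, so the paginated payload below (a top-level `data` array) is an assumed shape for illustration:

```bash
# Stand-in for: curl -s -H "Authorization: Bearer $TOKEN" \
#   "$BASE/page-audits?per_page=25&tracking_project_id=42"
list='{"data":[{"id":"pa_1","url":"https://example.com/a","status":"completed"},{"id":"pa_2","url":"https://example.com/b","status":"analyzing"}]}'

# One line per audit: id, status, url
echo "$list" | jq -r '.data[] | "\(.id)  \(.status)  \(.url)"'
```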
Pipeline status: `pending` → `scraping` → `screenshotting` → `analyzing` → `completed`. AI render fields (`ideal_screenshot_url`, `ideal_tablet_url`, `ideal_mobile_url`) populate one by one after `completed` — Desktop first, then Tablet, then Mobile.
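Because the render URLs land one at a time, a response can be `completed` while some renders are still null. A quick readiness check (the sample response is hypothetical; the three field names are from the paragraph above):

```bash
# Stand-in for the detail response mid-render: Desktop done, Tablet/Mobile pending
r='{"status":"completed","ideal_screenshot_url":"https://cdn.example/d.png","ideal_tablet_url":null,"ideal_mobile_url":null}'

echo "$r" | jq -c '{desktop: (.ideal_screenshot_url != null), tablet: (.ideal_tablet_url != null), mobile: (.ideal_mobile_url != null)}'
# → {"desktop":true,"tablet":false,"mobile":false}
```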
Credits & Limits
- Main audit: 30 credits.
- AI renders: up to 3×15 credits (Desktop, Tablet, Mobile).
- Full audit with all renders: up to 75 credits per run.
- Async — main flow ~1–2 min, renders +6–9 min after that.
- `url` is required and capped at 500 characters; `422` on validation fail.
- Cross-team and cross-project access returns `403`.
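The worst-case cost per run follows directly from the numbers above: 30 for the main audit plus 3 × 15 for the renders. A quick budget check (the 1000-credit budget is an arbitrary example, not a plan limit):

```bash
MAIN=30       # main audit
RENDER=15     # per AI render
N_RENDERS=3   # Desktop, Tablet, Mobile

TOTAL=$((MAIN + N_RENDERS * RENDER))
echo "$TOTAL"   # 75 credits per full run

BUDGET=1000
RUNS=$((BUDGET / TOTAL))
echo "$RUNS"    # 13 full runs on a 1000-credit budget
```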
Related modules
- Content Audit — site-wide inventory instead of single deep audit.
- Content Optimizer — optimize the content layer; Page Deep Audit targets visual + UX.
- AI Content Editor — apply the headline rewrite suggestion directly in the editor.