AI Detector
Detect whether text was written by AI — for content audits and quality gates.
The AI Detector inspects a text and reports the probability that it is AI-generated — sentence by sentence, with a heatmap and an aggregate score. Useful when you buy text from freelancers or external sources and need quality control, or when you want to test your own AI drafts against detector tools before publishing. Outputs chain directly into the Humanizer, so "red" doesn't have to stay red.
What it can do
- Aggregate score — a 0–100 number for the "AI-generated" probability. Above 70 = likely AI, below 30 = likely human.
- Sentence heatmap — a color per sentence; red marks sentences with typical AI phrasing.
- Multi-model detection — separate probabilities for the GPT, Claude, and Gemini model families.
- Scan modes — `quick` (fast, lower confidence) and `deep` (slower, higher recall).
- Free text or article — submit raw text or reference an article ID.
- One-click humanize — when the score is too high, there's a "Humanize" button that runs the Humanizer directly.
- Audit trail — every scan is stored; you can show history and change over time.
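The score bands described above (above 70 = likely AI, below 30 = likely human) can be expressed as a small classifier. This is an illustrative sketch, not part of the API; the band labels are assumptions based on the thresholds stated here, and the API's own `verdict` field may use different names.

```python
def verdict_band(score: int) -> str:
    """Map a 0-100 AI score to the bands described above.

    Above 70 = likely AI, below 30 = likely human,
    anything in between is treated as inconclusive.
    """
    if score > 70:
        return "likely_ai"
    if score < 30:
        return "likely_human"
    return "inconclusive"
```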
When to use
- You buy content externally and want to ensure it isn't just unedited ChatGPT output.
- You run a blog on a platform with a "no pure AI" policy (Google Helpful Content, some affiliate programs).
- You want to "humanize" your own AI drafts before publishing so external detector tools don't flag them.
- You're building a quality gate into your automation pipeline: only publish if AI score < 40.
Workflow
- Submit text — `POST /v1/ai-scanner/detect` with `{text}` or `{article_id}` and the desired `scan_type`.
- Read score — the response contains the aggregate score, the sentence heatmap, and the model probabilities.
- Decide — at a score > 70, call the Humanizer or revise manually.
- Re-scan — after editing, detect again until the score is in your target band.
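The workflow steps above can be combined into a simple detect/humanize loop. Everything here is a sketch: `detect` and `humanize` are stand-ins for calls to the detector endpoint and the Humanizer, and the target band (< 40) and retry cap are assumptions, not product defaults.

```python
from typing import Callable

def scan_until_green(
    text: str,
    detect: Callable[[str], int],    # returns the aggregate 0-100 AI score
    humanize: Callable[[str], str],  # returns a rewritten version of the text
    target: int = 40,
    max_rounds: int = 3,
) -> tuple[str, int]:
    """Alternate detect and humanize until the score drops below `target`
    or `max_rounds` is exhausted. Returns the final text and score."""
    score = detect(text)
    for _ in range(max_rounds):
        if score < target:
            break
        text = humanize(text)  # score too high: rewrite
        score = detect(text)   # re-scan after editing
    return text, score
```

In a real pipeline, `detect` would wrap the `POST /v1/ai-scanner/detect` call and `humanize` the Humanizer module.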
API
| Method | Endpoint | Credits |
|---|---|---|
| POST | `/v1/ai-scanner/detect` | 2 |
Body:

```json
{
  "text": "Here goes the text to be checked...",
  "scan_type": "deep"
}
```
Or with an article reference:

```json
{
  "article_id": 4711,
  "scan_type": "quick"
}
```
Response (truncated):

```json
{
  "score": 78,
  "verdict": "likely_ai",
  "models": {
    "gpt": 0.82,
    "claude": 0.41,
    "gemini": 0.33
  },
  "sentences": [
    {"text": "...", "ai_probability": 0.91},
    {"text": "...", "ai_probability": 0.12}
  ]
}
```
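Given a response shaped like the one above, the "red" sentences can be pulled out by filtering the heatmap on `ai_probability`. The 0.7 cutoff is an assumption for illustration; the docs do not specify which probability maps to which heatmap color.

```python
def red_sentences(response: dict, cutoff: float = 0.7) -> list[str]:
    """Return the text of sentences whose ai_probability exceeds the cutoff."""
    return [
        s["text"]
        for s in response.get("sentences", [])
        if s["ai_probability"] > cutoff
    ]
```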
Credits & Limits
- Per scan: 2 credits, regardless of scan type, for text up to 25,000 characters.
- For texts > 25,000 characters: automatically split into chunks, each chunk = 2 credits.
- Quick mode: ~3 seconds, sync response.
- Deep mode: up to ~30 seconds, also sync (mind the PHP-FPM hard limit).
- Article mode: team-scoped — you can only scan articles in your current team.
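The chunked pricing above implies a simple cost formula: 2 credits per started 25,000-character chunk. A minimal sketch (the exact chunk boundaries used by the splitter are not documented, so this estimates credits only, not chunk contents):

```python
import math

CHUNK_SIZE = 25_000    # characters per chunk, per the limits above
CREDITS_PER_CHUNK = 2

def scan_cost(text: str) -> int:
    """Credits for one scan: 2 per (started) 25,000-character chunk."""
    chunks = max(1, math.ceil(len(text) / CHUNK_SIZE))
    return chunks * CREDITS_PER_CHUNK
```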
Related modules
- Humanizer — the natural follow-up when the detector lights up red.
- AI Content Editor — AI drafts come in here and go out through the detector.
- Content Audit — can configure AI score as an audit criterion.
- Automation — detector integrable as a quality gate in pipelines.
Last updated: May 1, 2026