rankion.ai

Humanizer

Make AI text sound more natural — as a 1-click action on articles or in batch mode.

The Humanizer rewrites AI-generated text so it reads less like AI: varied sentence rhythm, fewer clichés, a more personal voice, more variation. You have two ways in: single article (1-click on an existing article, sync) or batch mode (free-text packages with multiple documents at once, async). Output typically scores significantly lower in downstream detector tools, including our own AI Detector. Essential for anyone publishing AI drafts that shouldn't smell like AI.

What it can do

  • Single-article humanize — sync, one click in the editor: POST /ai-scanner/humanize with {article_id, level}. The result replaces the article body directly.
  • Bulk humanize — async, batch API: POST /humanize with a free-text list. Job runs in background, status pollable.
  • Levels — light (cosmetic, ~10% change), medium (default, ~25%), heavy (~50%, may shift meaning slightly).
  • Style preservation — if your project has a Style Profile, the humanize output is adjusted to your tone of voice.
  • Diff view — before / after in the editor, sentence by sentence.
  • Roundtrip with AI Detector — a fresh detect run is possible right after humanize to validate the score.
  • Document tracking — batch runs have individual document IDs, each with its own status.

When to use

  • You have an AI Content Editor draft ready and want to defuse it before publishing.
  • You're importing texts from other AI tools and want them on your tone + lower AI score.
  • You're running a bulk migration: 200 old AI articles should all run through the humanizer once.
  • You're building a pipeline "Generate → Humanize → Detect → Publish" and need the humanize stage.

Workflow

Single article (sync)

  1. Click "Humanize" in the editor or call the API: POST /ai-scanner/humanize with {article_id, level: "medium"}.
  2. Sync response returns the rewritten body — replaced directly in the article.
  3. Optional: re-scan with the AI Detector.
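The sync flow above can be sketched in Python. This is a minimal sketch, not the official client: the base URL, the bearer-token auth header, and the helper names are assumptions; only the endpoint path, the `{article_id, level}` body, and the level names come from this page.

```python
import json
import urllib.request

BASE_URL = "https://api.rankion.ai/v1"  # assumed base URL
API_KEY = "YOUR_API_KEY"                # assumed bearer-token auth

def build_humanize_request(article_id: int, level: str = "medium") -> dict:
    """Build the body for POST /v1/ai-scanner/humanize (sync, single article)."""
    if level not in ("light", "medium", "heavy"):
        raise ValueError(f"unknown level: {level}")
    return {"article_id": article_id, "level": level}

def humanize_article(article_id: int, level: str = "medium") -> dict:
    """Sync call: the response contains the rewritten article body."""
    payload = json.dumps(build_humanize_request(article_id, level)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/ai-scanner/humanize",
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the call is synchronous, the rewritten body comes back in the response itself; there is nothing to poll.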

Batch (async)

  1. Collect texts and send them to POST /humanize (HTTP 202 + batch_id).
  2. Poll status with GET /humanize/{batch_id} — values: pending, processing, completed, failed.
  3. Fetch individual documents via GET /humanize/{batch_id}/documents/{document_id}.
  4. Move results back into your workflow / importer.

API

Method  Endpoint                                             Credits
POST    /v1/ai-scanner/humanize                              5 per call
POST    /v1/humanize                                         8 per document
GET     /v1/humanize/{batch_id}                              free
GET     /v1/humanize/{batch_id}/documents/{document_id}      free

Body of POST /ai-scanner/humanize (sync, single article):

{
  "article_id": 4711,
  "level": "medium"
}

Body of POST /humanize (async, batch):

{
  "project_id": 12,
  "documents": [
    {"id": "doc-a", "text": "..."},
    {"id": "doc-b", "text": "..."}
  ],
  "level": "medium"
}

Response: HTTP 202 with { "batch_id": "hum-9182", "status": "pending" }.

Credits & Limits

  • Single article (/ai-scanner/humanize): 5 credits per call.
  • Batch (/humanize): 8 credits per document in the batch.
  • Status polls and document fetches: free.
  • Sync limit: single-article calls run against the 600 s PHP-FPM execution limit — very long articles (>20k characters) are routed to batch mode.
  • Batch size: up to 50 documents per batch, each at most 25k characters.
  • Idempotency: re-running on the same article/text is allowed and costs credits each time.
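The credit math above can be captured in a small helper. A sketch; the constants mirror the prices listed on this page, and the function name is made up for illustration.

```python
SINGLE_CREDITS = 5           # POST /ai-scanner/humanize, per call
BATCH_CREDITS_PER_DOC = 8    # POST /humanize, per document

def humanize_cost(n_documents: int, use_batch: bool = True) -> int:
    """Credits for humanizing n documents; polls and fetches are free."""
    per_doc = BATCH_CREDITS_PER_DOC if use_batch else SINGLE_CREDITS
    return per_doc * n_documents

# The 200-article migration from "When to use", run as 4 batches of 50:
# humanize_cost(200) -> 1600 credits
```

Note the pricing asymmetry: the batch endpoint costs more per document (8 vs. 5 credits), so for text that already exists as articles, the single-article endpoint is cheaper per piece.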

Related modules

Last updated: May 1, 2026
