Tutorials / n8n Integration
How to Humanize AI Text in Your n8n Workflow
A step-by-step guide to wiring ToHuman into n8n — from a single HTTP Request node to a full automated blog pipeline with async batch processing.
An n8n community member recently posted a question that summarizes the problem exactly: they'd built a fully automated blog publishing workflow — AI generates the content, n8n handles the pipeline, a CMS receives the final post — but the output was getting flagged as AI-generated. They needed a way to humanize the text without breaking the automation. The post got responses suggesting various web tools, but none of them offered a real solution for an automated workflow. Pasting text into a web form defeats the whole point.
This tutorial answers that question properly. You'll wire the ToHuman API into n8n so that humanization becomes a node in your workflow — automated, repeatable, and invisible to the rest of your pipeline.
Three workflows are covered: a basic HTTP Request setup to prove the concept, a blog publishing pipeline that goes from AI generation to CMS, and an advanced async pattern for longer content using webhooks.
Why n8n + ToHuman Is a Natural Fit
n8n has become one of the most popular workflow automation platforms for technical teams — over 230,000 active users, 200,000+ community members, and a library of 400+ native integrations. Its strength is connecting arbitrary HTTP APIs through the HTTP Request node, which means anything with a REST endpoint can become a workflow step.
ToHuman's API is exactly that: a single REST endpoint that accepts AI-generated text and returns a humanized version. It runs a fine-tuned Mistral 7B model on dedicated compute, processes the request synchronously for shorter content, and supports async callbacks for longer pieces. No web form required, no manual steps.
The combination lets you build a content pipeline where humanization happens automatically between AI generation and publishing — without you touching anything.
Prerequisites
- An n8n instance — cloud, self-hosted, or desktop. All three work identically for this tutorial.
- A ToHuman API key — sign up free at tohuman.io. Your key is in the dashboard under API settings.
- Basic familiarity with n8n's canvas and node configuration. You don't need to know JavaScript.
Workflow 1: Basic HTTP Request (Proof of Concept)
Before building anything complex, confirm that n8n can call the ToHuman API and parse the response. This is a two-node workflow: a Manual Trigger and an HTTP Request node.
Step 1 — Add credentials
Go to Settings → Credentials → New Credential. Choose Header Auth as the type. Set the name to ToHuman API, the header name to Authorization, and the value to Bearer YOUR_API_KEY. Save it. This keeps the key out of your workflow JSON and lets you rotate it in one place.
Step 2 — Configure the HTTP Request node
Add an HTTP Request node to your canvas. Configure it as follows:
- Method: POST
- URL: https://tohuman.io/api/v1/humanizations/sync
- Authentication: Predefined Credential Type → Header Auth → select "ToHuman API"
- Body Content Type: JSON
- Body: Specify body parameters (use the JSON/RAW option)
Paste this into the JSON body field:
HTTP Request node — JSON body
{
  "content": "Artificial intelligence has demonstrated remarkable capabilities across numerous domains, enabling automation of complex tasks that previously required significant human expertise and intervention.",
  "intensity": "medium"
}
Step 3 — Run and inspect the output
Click Execute Node. The node should return a 200 with a JSON object. The humanized text lives at humanized_text in the response. In downstream nodes, reference it as {{ $json.humanized_text }}.
If you see a 401, the credential header value isn't set correctly — check for a missing space between "Bearer" and the key. A 422 means the body is malformed or the intensity value is misspelled.
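If you want to sanity-check the request shape outside n8n before debugging credentials, the sketch below builds the same request the HTTP Request node sends. The endpoint URL and body fields match this tutorial; the helper function itself is illustrative, not part of any SDK.

```javascript
// Sketch: assemble the request the HTTP Request node sends, so you can
// inspect the header and body shape directly. The classic cause of a 401
// is a missing space after "Bearer" in the Authorization header.
function buildToHumanRequest(apiKey, content, intensity = "medium") {
  return {
    url: "https://tohuman.io/api/v1/humanizations/sync",
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ content, intensity }),
  };
}

const req = buildToHumanRequest("YOUR_API_KEY", "Some AI-generated draft.");
console.log(req.headers.Authorization); // "Bearer YOUR_API_KEY"
```

If the header printed here matches what your credential produces, a 401 points at the key itself rather than the request shape.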
Workflow 2: Automated Blog Pipeline (AI Generate → Humanize → Publish)
This is the workflow the n8n community post was asking for. The full pipeline: a schedule trigger fires, an OpenAI node generates a draft, ToHuman humanizes it, and a WordPress or Ghost node publishes the result.
Node sequence
Build the following chain on your canvas:
- Schedule Trigger — fires on whatever cadence you want (daily, hourly, etc.)
- Set — defines the topic or blog brief for this run
- OpenAI (or HTTP Request calling any AI API) — generates the draft
- HTTP Request — calls ToHuman to humanize the draft
- WordPress or Ghost node — publishes the post
Step 1 — Set node: define your brief
Add a Set node after the Schedule Trigger. Create a string field called topic with whatever you want the AI to write about. This makes it easy to swap topics later or feed them from a Google Sheet or Airtable node upstream.
Step 2 — OpenAI node: generate the draft
Add an OpenAI node. Set the operation to Message a Model and use the gpt-4o model (or whatever you prefer). In the prompt field, reference the Set node output:
OpenAI node — prompt expression
Write a 600-word blog post about {{ $('Set').item.json.topic }}.
Use a direct, informative tone. No headers — just flowing paragraphs.
Do not use filler phrases like "In today's world" or "In conclusion".
Step 3 — HTTP Request node: humanize the draft
Add another HTTP Request node using the same ToHuman credential from Workflow 1. This time, the body uses an expression to pull the generated text from the OpenAI node output:
HTTP Request node — dynamic body (use JSON/RAW mode)
{
  "content": "{{ $('OpenAI').item.json.message.content }}",
  "intensity": "medium"
}
Note: the exact expression path for the OpenAI node output depends on which version of the node you're using. In n8n's expression editor, open the schema panel to confirm the field path — it'll be something like $json.message.content or $json.choices[0].message.content depending on your node version. Use the expression editor's autocomplete rather than guessing.
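If you would rather not hard-code a version-specific path, a small Code node between the OpenAI node and the ToHuman request can normalize the output. This is a sketch covering only the two field shapes mentioned above; confirm against the schema panel for your node version.

```javascript
// Code node sketch: pull the generated text regardless of which OpenAI
// node version produced it. Treat this as a fallback chain, not a
// guaranteed schema.
function extractDraft(json) {
  if (json.message && typeof json.message.content === "string") {
    return json.message.content;            // newer node output shape
  }
  if (json.choices && json.choices[0] && json.choices[0].message) {
    return json.choices[0].message.content; // raw Chat Completions shape
  }
  throw new Error("Unrecognized OpenAI node output: check the schema panel");
}

// In an n8n Code node you would call: extractDraft($input.item.json)
```

Throwing on an unrecognized shape is deliberate: a loud failure here is easier to debug than an empty "content" field reaching the ToHuman endpoint.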
Step 4 — WordPress or Ghost node: publish
Add a WordPress node (operation: Create Post) or a Ghost node (operation: Create Post). Set the post content to the humanized output:
WordPress / Ghost node — content field expression
{{ $('HTTP Request').item.json.humanized_text }}
Set the post status to draft for now unless you've validated the output quality enough to go straight to publish. Add an IF node before the publish step if you want to gate on word count or run a basic content check before the post goes live.
Optional: Add an IF node for quality gating
Between the ToHuman HTTP Request and the CMS node, insert an IF node. Configure it to check that the humanized text is long enough — for example, that {{ $json.humanized_text.length }} is greater than 500. Route the false branch to a Send Email or Slack node to alert you when something comes back unexpectedly short. This catches edge cases without breaking the whole pipeline.
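If you prefer branching in code over configuring an IF node, the same gate can live in a Code node. The 500-character threshold mirrors the IF node example above and is an arbitrary starting point, not a recommendation from the API.

```javascript
// Code node sketch of the length gate: returns a flag the next node can
// branch on, plus the measured length for logging or alerts.
function gateHumanizedText(json, minLength = 500) {
  const text = json.humanized_text || ""; // missing field counts as empty
  return {
    ok: text.length > minLength,
    length: text.length,
  };
}
```

Returning the length alongside the flag means your Slack or email alert can say how short the text actually was, which makes intermittent failures much easier to spot.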
Workflow 3: Async Batch Processing with Webhook Callbacks
The sync endpoint caps at 2,000 words. For longer content — deep-dive articles, whitepapers, repurposed long-form video transcripts — use the async endpoint instead. It accepts a webhook_url parameter and posts the result back to you when processing completes. No polling required.
How the async endpoint works
You POST to https://tohuman.io/api/v1/humanizations with the same body plus a webhook_url field. The API returns a job ID immediately. When the humanization finishes (typically a few seconds to a minute depending on length), ToHuman sends a POST to your webhook URL with the completed result.
The webhook payload looks like this:
ToHuman webhook callback payload
{
  "event": "humanization.completed",
  "humanization": {
    "id": 43,
    "status": "completed",
    "output_content": "The humanized text...",
    "processing_time": 3.87
  }
}
Step 1 — Create the receiving webhook workflow
First, build the workflow that receives ToHuman's callback. Create a new workflow and add a Webhook node as the trigger. Set the HTTP method to POST and note the webhook URL it generates — you'll need it in the next step. Keep in mind that n8n shows both a test URL and a production URL: the production URL only responds once the workflow is activated, so activate this workflow before submitting async jobs. It will then handle humanization results whenever they come in.
After the Webhook node, add whatever you want to do with the result: write to a database, trigger a CMS publish, send a Slack notification. Reference the humanized text as:
Webhook node — access humanized output in downstream nodes
{{ $json.body.humanization.output_content }}
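If the downstream logic needs several fields, a Code node right after the Webhook node can flatten the callback into one item. The field names here follow the example payload shown above; anything beyond those fields would be an assumption.

```javascript
// Code node sketch for the receiving workflow: pull the useful fields out
// of ToHuman's callback payload in one place, so downstream nodes use
// simple top-level keys instead of deep expression paths.
function parseCallback(body) {
  const h = body.humanization || {};
  return {
    id: h.id,
    status: h.status,
    text: h.output_content,
    seconds: h.processing_time,
  };
}

// In n8n: return { json: parseCallback($input.item.json.body) };
```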
Step 2 — Submit jobs in the triggering workflow
In your main workflow (the one that generates content), configure the HTTP Request node to call the async endpoint instead of the sync one. Use a Code node or a Set node to construct the request body with your webhook URL baked in:
HTTP Request node — async endpoint body
{
  "content": "{{ $('OpenAI').item.json.message.content }}",
  "intensity": "heavy",
  "webhook_url": "https://your-n8n-instance.com/webhook/your-webhook-id"
}
The async endpoint returns a job ID immediately. Your main workflow continues or ends — it doesn't wait. When ToHuman finishes, it calls your Webhook node, which picks up the result and continues from there.
Step 3 — Batch multiple articles
If you're processing a list of articles (from a Google Sheet, Airtable base, or database query), add a SplitInBatches node before the HTTP Request to iterate over each item. Each item submits one async job, and each job triggers the receiving webhook independently when it completes. This lets you kick off dozens of humanization jobs without waiting for each to finish before starting the next.
The Code node is useful here if you need to attach metadata to each request — for example, storing the article ID alongside the content so the receiving webhook knows which CMS record to update:
Code node — building the async request body with metadata
// Runs once per item in a SplitInBatches loop
const articleId = $input.item.json.id;
const content = $input.item.json.generated_draft;
return {
  json: {
    content: content,
    intensity: "medium",
    // Encode article ID in the webhook URL as a query param
    // so the receiving workflow knows which record to update
    webhook_url: `https://your-n8n.com/webhook/humanize-done?article_id=${articleId}`
  }
};
In the receiving webhook workflow, parse the article_id query param with {{ $json.query.article_id }} and use it in a database Update node or CMS node to write the humanized result back to the right record.
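On the receiving side, a Code node can combine the query param and the callback body into a single object shaped for your Update node. The article_id param and output_content field come from the setup above; the returned key names are placeholders for whatever your database or CMS node expects.

```javascript
// Code node sketch for the receiving workflow: merge the routing metadata
// (query param) with the result (callback body) into one update record.
function buildUpdate(webhookJson) {
  return {
    articleId: webhookJson.query.article_id,
    humanizedContent: webhookJson.body.humanization.output_content,
  };
}

// In n8n: return { json: buildUpdate($input.item.json) };
```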
Tips: Intensity, Error Handling, and Rate Limits
Choosing the right intensity level
The intensity parameter controls how aggressively the model rewrites. For automated pipelines, a few rules of thumb:
- minimal — Good for text a human has already edited. Smooths residual AI patterns without touching the structure.
- subtle — Sentence-level rewrites. Use when the draft is mostly solid but rhythm feels mechanical.
- medium — The default for most pipelines. Works well on raw model output for blog posts and articles.
- heavy — Use for content from models that have a strong recognizable voice (GPT-4, Claude, Gemini at default settings). More aggressive rewrites — always review output before publishing if the piece contains precise facts or data.
You can make intensity dynamic by adding it as a column in the spreadsheet or database driving your pipeline, then referencing it with an expression in the HTTP Request body.
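A small Code node can make that dynamic lookup robust by validating the value and falling back to a default. The four levels are the ones listed above; the "intensity" column name is an assumption about your spreadsheet.

```javascript
// Sketch: resolve intensity from a spreadsheet or database row, falling
// back to "medium" when the column is blank or holds an unknown value.
const VALID_INTENSITIES = ["minimal", "subtle", "medium", "heavy"];

function resolveIntensity(row, fallback = "medium") {
  const value = (row.intensity || "").trim().toLowerCase();
  return VALID_INTENSITIES.includes(value) ? value : fallback;
}
```

Validating here means a typo in one spreadsheet cell degrades gracefully to the default instead of causing a 422 on that item.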
Error handling in n8n
Add an Error Trigger workflow to catch failures. On the HTTP Request node, open Settings and enable Continue on Error if you want the pipeline to keep going even when one item fails — useful in batch workflows where a single failed humanization shouldn't stop the rest. Route failures to a Slack or email notification so you know to investigate.
The ToHuman API returns standard HTTP status codes. A 401 means the credential is wrong. A 422 means the request body is malformed — check your expression is producing valid JSON and that intensity is spelled correctly. On 5xx errors, n8n's built-in retry options (under the HTTP Request node's Settings tab) will handle transient failures automatically.
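When "Continue on Error" is enabled and you route failures yourself, it helps to map status codes to actions in one place. This sketch just encodes the guidance above; the action labels are placeholders for whichever branches your workflow actually has.

```javascript
// Sketch: map ToHuman status codes to pipeline actions, matching the
// guidance above. Fix credentials on 401, fix the request body on 422,
// retry on 5xx, and flag anything else for manual inspection.
function classifyError(statusCode) {
  if (statusCode === 401) return "fix-credentials"; // bad or malformed Bearer token
  if (statusCode === 422) return "fix-body";        // invalid JSON or intensity value
  if (statusCode >= 500) return "retry";            // transient, safe to retry
  return "investigate";                             // anything else: inspect manually
}
```

The key distinction is that 4xx errors are deterministic (retrying the same request fails the same way), while 5xx errors are worth retrying, which is why only the last case is safe to hand to n8n's automatic retry.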
Keeping credentials secure
Store the API key in n8n's credential system, not as a hardcoded value in a Set or Code node. Credentials stored in n8n are encrypted at rest and excluded from workflow export by default. If you're self-hosting n8n, make sure N8N_ENCRYPTION_KEY is set in your environment — without it, credentials are stored unencrypted.
What You've Built
After following these workflows, you have three things: a validated API connection you can reuse, a fully automated content pipeline from AI generation to CMS publishing, and an async batch processor that handles long-form content at scale. Humanization is just another node — it runs automatically, adds no manual steps, and outputs text that reads naturally.
The n8n community member who asked how to humanize AI content in their blog workflow had the right instinct. The answer isn't a web form — it's an API node that fits into what they'd already built. This is it.
To go deeper on the ToHuman API — including all endpoint parameters, async polling as an alternative to webhooks, and how the model handles different content types — see the full API guide. Or if you're coming to this from a concern about AI detection tools flagging legitimate content, the AI detection false positives post covers the data behind why those tools are unreliable.
Frequently Asked Questions
Which n8n node do I use to call the ToHuman API?
The HTTP Request node. Set method to POST, URL to https://tohuman.io/api/v1/humanizations/sync, authentication to Header Auth (with your Bearer token), body type to JSON, and include content and intensity fields in the body.
What intensity level should I use in an automated workflow?
medium is a reliable default for raw AI-generated content. Use heavy for content from ChatGPT or similar at default settings. Use subtle if a human has already reviewed and edited the draft.
How do I handle content longer than 2,000 words?
Use the async endpoint at POST https://tohuman.io/api/v1/humanizations with a webhook_url in the body. ToHuman will POST the completed result back to your n8n Webhook node. See Workflow 3 above for the full setup.
Can I use ToHuman with n8n Cloud?
Yes. The API is a standard HTTPS endpoint with no IP restrictions. It works on n8n Cloud, self-hosted instances, and n8n Desktop identically.
Published March 30, 2026 by the ToHuman team.