
Build an AI Content Humanization Pipeline in Make.com

A step-by-step guide to wiring the ToHuman API into Make.com — from a single HTTP module proof of concept to a complete pipeline triggered by Google Sheets, RSS feeds, or scheduled runs.

15 min read

Make.com has grown into one of the most popular no-code automation platforms for marketing teams and content agencies — more than 500,000 active users building scenarios without writing a single line of code. But one gap keeps coming up in the Make community: how do you stop AI-generated content from getting flagged as AI-written after you've already automated the generation step?

The answer is the same one that works in every automation stack: an API call that slots into your existing scenario. This tutorial shows you how to use Make.com's HTTP module to call the ToHuman API and make humanization an automatic step in any content pipeline — no manual web form, no breaking your workflow.

Three scenarios are covered: a basic single-module proof of concept, a full AI-to-CMS pipeline with a Google Sheets trigger, and an RSS-to-blog automation for content repurposing. There's also a section on async processing for longer content and the error handling patterns that keep production scenarios stable.

Why Make.com + ToHuman Works

Make.com's HTTP module is a universal API connector. If a service has a REST endpoint, you can call it from any scenario as one node in the chain. ToHuman exposes exactly that: a single POST endpoint that accepts text and returns a humanized version processed by a fine-tuned Mistral 7B model.

The combination means humanization becomes invisible in your pipeline. Content comes in from whatever source you're already using — a Google Sheet full of AI drafts, an RSS feed you're repurposing, a scheduled ChatGPT call — and the humanized version flows out the other side to your CMS, email platform, or wherever you publish.

Prerequisites

  • A Make.com account — the free tier supports up to 1,000 operations per month, which is enough to follow this tutorial and run a low-volume pipeline.
  • A ToHuman API key: sign up free at tohuman.io. You'll find your key in the dashboard under API settings.
  • Basic familiarity with Make.com's scenario canvas. You don't need to know JSON or HTTP — this tutorial explains both as it goes.

Scenario 1: HTTP Module Proof of Concept

Before building anything multi-step, confirm that Make can call the ToHuman API and parse the response. This scenario has two modules: a manual trigger and an HTTP request.

Step 1 — Store your API key securely

Make.com lets you store credentials as Connections so they're encrypted and reusable across scenarios. Go to Connections in the left sidebar and click Create a new connection. Search for HTTP and choose the API Key connection type.

Set the connection name to ToHuman API. In the API Key field, enter your ToHuman key. Set the header name to Authorization and the value to Bearer YOUR_API_KEY — note the space between "Bearer" and the key. Save the connection.

Storing it here means you can rotate the key in one place and it updates across every scenario that uses it.

Step 2 — Add and configure the HTTP module

Create a new scenario. Add a Manual trigger as the first module, then add HTTP > Make a Request as the second.

Configure the HTTP module as follows:

  • URL: https://tohuman.io/api/v1/humanizations/sync
  • Method: POST
  • Headers: Add one header — Name: Authorization, Value: Bearer YOUR_API_KEY (or select your saved connection)
  • Body Type: Raw
  • Content Type: application/json (JSON)
  • Parse Response: Yes (this is important — it makes the response fields mappable in downstream modules)

In the Request Content field, paste the following JSON body:

HTTP module — Request Content (Raw JSON)

{
  "content": "Artificial intelligence has demonstrated remarkable capabilities across numerous domains, enabling the automation of complex tasks that previously required significant human expertise.",
  "intensity": "medium"
}

Step 3 — Run and inspect the output

Click Run once. Make will execute the scenario and show you the output of each module. Click the HTTP module bubble to inspect the response. You should see a 200 status and a data object. The humanized text is in the humanized_text field.

With Parse Response enabled, Make maps this automatically — you'll see humanized_text listed as a mappable field when you configure downstream modules. If you get a 401, check that the Authorization header value starts with Bearer (with the space). A 422 typically means the JSON body is malformed — check for trailing commas, which JSON does not allow.
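If you want to sanity-check the same request and response handling outside Make (for example in a quick local script before wiring up the scenario), the logic the HTTP module performs can be sketched in Python. The data.humanized_text path and the 401/422 meanings follow the description above; the function names here are illustrative, not part of any SDK.

```python
import json

API_URL = "https://tohuman.io/api/v1/humanizations/sync"

def build_request(api_key, content, intensity="medium"):
    """Mirror the HTTP module config above: Bearer auth header
    plus a raw JSON body with content and intensity."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # note the space after "Bearer"
        "Content-Type": "application/json",
    }
    body = json.dumps({"content": content, "intensity": intensity})
    return headers, body

def extract_humanized(status, payload):
    """Mirror what Parse Response plus the data.humanized_text mapping do,
    including the two failure modes called out above."""
    if status == 401:
        raise PermissionError("Authorization header must be 'Bearer <key>' with the space")
    if status == 422:
        raise ValueError("Malformed JSON body -- check for trailing commas")
    return payload["data"]["humanized_text"]
```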

Scenario 2: Google Sheets Trigger → Humanize → CMS Publish

This is the most common production pattern for content teams: a Google Sheet holds a queue of AI-generated drafts, Make processes each new row, humanizes the content, and publishes to a CMS. Here's the full setup.

Module sequence

  1. Google Sheets > Watch New Rows — triggers when a new row is added to your draft queue sheet
  2. HTTP > Make a Request — calls the ToHuman API to humanize the content
  3. WordPress > Create a Post (or Webflow > Create a Collection Item, or any CMS module) — publishes the humanized result
  4. Google Sheets > Update a Row — marks the row as processed

Step 1 — Set up your Google Sheet

Create a sheet with at least four columns: title, ai_draft, status, and published_url. Add a few rows with AI-generated content in the ai_draft column and "pending" in status. Make will watch for new rows and process each one.

Step 2 — Configure the Watch New Rows trigger

Add Google Sheets > Watch New Rows as the first module. Connect your Google account, select your spreadsheet and sheet name, and set Table contains headers to Yes. Set the limit to however many rows you want to process per scenario run — start with 5 to avoid hitting API rate limits during testing.

Make will ask you to choose where to start processing from. Select From now on for a live pipeline or All to process existing rows on the first run.

Step 3 — Configure the HTTP module with dynamic content

Add the HTTP module configured exactly as in Scenario 1, but this time use Make's mapping panel to pull the draft content from the Google Sheets trigger. In the Request Content field, build the JSON body dynamically:

HTTP module — dynamic Request Content mapping Google Sheets row

{
  "content": "{{1.ai_draft}}",
  "intensity": "medium"
}

The {{1.ai_draft}} expression maps the ai_draft column from the Google Sheets module (module 1). Make's visual mapping panel will show you all available fields — click into the Request Content field and use the panel to insert the field rather than typing the expression manually. This avoids typos in field names.

If your Google Sheet has an intensity column (useful when different content types need different rewrite levels), you can map that too: "intensity": "{{1.intensity}}". Default to "medium" if the column is empty using Make's ifempty function: "intensity": "{{ifempty(1.intensity; \"medium\")}}".
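The fallback logic that ifempty performs, plus validation against the four intensity values the API accepts, looks like this in Python terms. This is an illustrative sketch of the mapping behavior, not code that runs inside Make:

```python
def resolve_intensity(row_value, default="medium"):
    """Python equivalent of the ifempty(1.intensity; "medium") fallback,
    with validation against the four intensity levels the API accepts."""
    allowed = {"minimal", "subtle", "medium", "heavy"}
    value = (row_value or "").strip().lower()
    if not value:
        return default  # blank or missing cell -> fall back, like ifempty
    if value not in allowed:
        raise ValueError(f"intensity must be one of {sorted(allowed)}, got {value!r}")
    return value
```

Validating before the request avoids a 422 when someone types an unsupported value into the sheet.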

Step 4 — Publish to your CMS

Add a WordPress, Webflow, Ghost, or Contentful module after the HTTP module. Map the humanized content from the HTTP response to the post body field. With Parse Response enabled, the field is available as:

CMS module — mapping the humanized output from the HTTP response

Post body / Content: {{2.data.humanized_text}}
Post title:          {{1.title}}

The 2 prefix refers to the HTTP module (module 2 in the scenario). The .data.humanized_text path reflects Make's parsed response structure — Make wraps the parsed JSON body under a data key. If you don't see data.humanized_text in the mapping panel, confirm that Parse Response is enabled in the HTTP module settings.

Step 5 — Mark the row as processed

Add a Google Sheets > Update a Row module at the end. Map the row number from the Watch New Rows trigger ({{1.__ROW_NUMBER__}}) so Make knows which row to update. Set the status column to "published" and map the CMS post URL from the previous module into published_url. This gives you a clean audit trail in the sheet without any manual work.

Setting the schedule

Click the clock icon on the Watch New Rows module to set the run schedule. For a daily content pipeline, every 24 hours is fine. For higher-volume operations, every 15 minutes is Make's default polling interval for Google Sheets. You can also trigger the scenario manually from the dashboard any time you add a batch of new rows.

Scenario 3: RSS Feed → Humanize → Publish

This scenario is useful for content repurposing — watch an RSS feed from your own site, a competitor, or an industry publication, run the content through a summarization prompt, humanize the result, and publish it as a new post on your own CMS.

Module sequence

  1. RSS > Watch RSS Feed Items — polls a feed URL for new items
  2. OpenAI > Create a Completion (or any AI module) — summarizes or rewrites the article to your angle
  3. HTTP > Make a Request — humanizes the AI-generated output
  4. CMS module — publishes the result

Step 1 — Configure the RSS trigger

Add RSS > Watch RSS Feed Items as the first module. Paste the feed URL and set the maximum number of items per run to 3-5 to stay within API operation budgets. Make will fire once per scheduled interval for each new item it hasn't seen before.

Step 2 — Generate a rewritten draft with OpenAI

Add an OpenAI > Create a Completion module. In the prompt, reference the RSS item fields:

OpenAI module — prompt using RSS feed fields

Rewrite the following article in 400-500 words from the perspective of a content marketer.
Focus on practical takeaways. Do not copy sentences directly — write original prose.
Do not use headers, just flowing paragraphs.

Title: {{1.title}}
Content: {{1.description}}

The {{1.title}} and {{1.description}} expressions map from the RSS module (module 1). The description field carries the article body in most RSS feeds — if you're seeing truncated content, check whether the feed also has a content:encoded field.
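The fallback between the two fields can be expressed as a one-liner. A minimal sketch, assuming the RSS item arrives as a dict keyed by field name:

```python
def pick_article_body(item):
    """Prefer the feed's full-length content:encoded field when present;
    fall back to description, which many feeds truncate."""
    return item.get("content:encoded") or item.get("description", "")
```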

Step 3 — Humanize the AI output

Add the HTTP module configured as in Scenario 1. Map the OpenAI response into the content field:

HTTP module — mapping OpenAI output as humanizer input

{
  "content": "{{2.choices[].message.content}}",
  "intensity": "heavy"
}

Use heavy intensity here. Content that comes directly from an AI model at default settings tends to have the most recognizable patterns, and heavy applies more aggressive rewrites. Always review the output before publishing if your piece includes precise factual claims — heavier rewrites can occasionally rephrase a statistic in a way that changes its meaning.

Scenario 4: Async Processing for Long Content

The sync endpoint (/api/v1/humanizations/sync) has a 2,000-word cap and a 30-second timeout. For longer content — white papers, detailed guides, full-length articles — use the async endpoint instead. It accepts a webhook_url and posts the result back when processing completes. No polling, no timeout issues.

How it works

You send a POST to https://tohuman.io/api/v1/humanizations with your content and a callback URL. The API returns a job ID immediately. When the humanization finishes (usually a few seconds to a minute for longer pieces), ToHuman sends a POST to your callback URL with the result.

In Make, the callback URL is a Webhooks > Custom Webhook module in a separate receiving scenario. Here's the callback payload structure:

ToHuman webhook callback payload

{
  "event": "humanization.completed",
  "humanization": {
    "id": 91,
    "status": "completed",
    "output_content": "The humanized text...",
    "processing_time": 5.12
  }
}
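What the receiving scenario does with that body, in Python terms: check the event type, confirm the job completed, and extract the humanized text. The field names come from the sample payload above; the function itself is an illustrative sketch, not Make code.

```python
def handle_callback(payload):
    """Act only on completed humanization events and pull out the
    humanized text; return None for anything else."""
    if payload.get("event") != "humanization.completed":
        return None
    job = payload.get("humanization", {})
    if job.get("status") != "completed":
        return None
    return job.get("output_content")
```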

Set up the receiving scenario first

Create a new Make scenario with Webhooks > Custom Webhook as the trigger. Click Add to generate a webhook URL — it will look like https://hook.eu2.make.com/abc123.... Copy that URL. Add your downstream modules (CMS publish, Slack notification, Google Sheets update) after the webhook trigger, mapping the humanized content from:

Receiving scenario — mapping humanized output from webhook payload

{{1.humanization.output_content}}

Submit jobs from the triggering scenario

In your main scenario, configure the HTTP module to call the async endpoint with the webhook URL baked into the request body:

HTTP module — async endpoint request body

{
  "content": "{{1.ai_draft}}",
  "intensity": "medium",
  "webhook_url": "https://hook.eu2.make.com/abc123yourhookid"
}

Your main scenario ends after submitting the job. When ToHuman finishes processing, it calls the webhook URL and the receiving scenario picks up from there. This pattern works well for batch operations — kick off 10 humanization jobs and each one independently triggers the receiving scenario when it completes.
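One design point for batch runs: because each callback arrives independently, you need a way to match it back to the row that produced it. In Make, a practical approach is to write the returned job id into the Google Sheets row right after submission, then look it up in the receiving scenario. The same bookkeeping, sketched with a hypothetical in-memory registry (the humanization.id field comes from the callback payload shown above):

```python
# Hypothetical in-memory registry; in a real Make pipeline you would store
# the job id in the sheet via Update a Row instead.
pending = {}  # job id -> sheet row number

def record_submission(job_id, row_number):
    """Remember which sheet row produced this async job."""
    pending[job_id] = row_number

def match_callback(payload):
    """Return the sheet row a completed callback belongs to, or None
    if the job id is unknown or already handled."""
    job_id = payload["humanization"]["id"]
    return pending.pop(job_id, None)
```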

Error Handling and Production Stability

Adding an error handler to the HTTP module

Right-click on the HTTP module in the scenario canvas and choose Add error handler. For a content pipeline, the Resume directive is usually right — it lets Make continue processing other bundles (other rows, other RSS items) even if one request fails. Connect the error handler to a Slack or Email module to notify you when a failure happens, so you can requeue that item manually.

The ToHuman API returns standard HTTP codes. A 401 means the Authorization header is wrong — check the Bearer prefix. A 422 means the request body is malformed — confirm the JSON is valid and the intensity value is one of: minimal, subtle, medium, or heavy. On 5xx errors, the API is experiencing a transient issue — those are candidates for retry.

Retrying on failure

For transient 5xx errors, use Make's Break directive as the error handler instead of Resume: Break stores the failed run as an incomplete execution and can retry it automatically. In the Break settings, enable Automatically complete execution and set the number of attempts to 3 with a 10-second interval between them (you may first need to enable Allow storing of incomplete executions in the scenario settings). If every attempt fails, the run stays under Incomplete executions in the scenario history, and you can place a Slack or Email module on the error route ahead of the Break directive so you hear about each failure.
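The retry pattern above (3 attempts, 10 seconds apart, give up after the last) can be sketched in Python. ConnectionError stands in for a transient 5xx response; the function name is illustrative:

```python
import time

def call_with_retry(request_fn, attempts=3, delay=10.0):
    """Retry a flaky call on transient errors, mirroring the
    3-attempt, 10-second retry pattern described above."""
    last_error = None
    for attempt in range(attempts):
        try:
            return request_fn()
        except ConnectionError as err:  # stand-in for a transient 5xx response
            last_error = err
            if attempt < attempts - 1:
                time.sleep(delay)
    raise last_error
```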

Choosing the right intensity level

The intensity parameter controls how aggressively the model rewrites. For automated Make.com pipelines:

  • minimal — For content a human has already reviewed and edited. Smooths residual AI patterns without restructuring anything.
  • subtle — Sentence-level rewrites. Good when the draft is mostly clean but rhythm still feels mechanical.
  • medium — The right default for most pipelines. Works well on raw ChatGPT or Claude output for blog posts.
  • heavy — Use when AI-detection risk is highest: content from default-settings GPT-4, Gemini, or Claude. More structural changes — review before publishing if the piece contains exact statistics or quotes.

If different content types flow through the same scenario, add an intensity column to your Google Sheet and reference it in the request body. Use Make's ifempty function to fall back to "medium" if the column is blank.

What You've Built

By the end of these scenarios, you have three working pipelines: a validated API connection, a Google Sheets-triggered humanization workflow that goes from draft to published post, and an RSS repurposing pipeline that generates, humanizes, and publishes content automatically.

Humanization becomes one module in a chain. It adds no manual steps, it runs on whatever cadence your scenario is scheduled for, and the output is text that reads as if a person wrote it — because the model is tuned specifically for that result.

For the full list of API parameters, async polling as an alternative to webhooks, and how the model handles different content types, see the ToHuman API guide. If you're building an n8n pipeline instead, the n8n integration tutorial covers the same patterns for that platform. Both are free during launch — check the pricing page for usage limit details.

Frequently Asked Questions

Which Make.com module do I use to call the ToHuman API?

The HTTP > Make a Request module. Set Method to POST, URL to https://tohuman.io/api/v1/humanizations/sync, add an Authorization header with Bearer YOUR_API_KEY, set Body Type to Raw with Content Type application/json, and enable Parse Response. Include content and intensity fields in the request body.

How do I trigger a Make.com scenario from Google Sheets?

Use Google Sheets > Watch New Rows as your trigger module. It polls your sheet on a schedule (every 15 minutes by default) and fires whenever a new row is added. Each row becomes a separate bundle that flows through the rest of your scenario independently.

Can I humanize content longer than 2,000 words in Make.com?

Yes. Use the async endpoint at POST https://tohuman.io/api/v1/humanizations with a webhook_url pointing to a Webhooks > Custom Webhook module in a separate receiving scenario. ToHuman posts the completed result to that URL when processing finishes. See Scenario 4 above for the full setup.

How do I handle errors in the HTTP module?

Right-click the HTTP module and select Add error handler. Use the Resume directive to continue processing other bundles when one fails. For retries on transient errors, use the Break directive instead and enable automatic retries, for example 3 attempts with a delay between them.

Published April 3, 2026 by the ToHuman team.
