Blog / Developer Guide
How to Humanize AI Text Programmatically
A complete guide to the ToHuman API — from your first curl request to building a production content pipeline.
If you're building a content pipeline, a CMS integration, or an internal tool that produces AI-generated text, a web form isn't going to cut it. You need an API you can call from code — something that fits into the rest of your stack without manual steps in the middle.
The ToHuman API does exactly one thing: takes AI-generated text, rewrites it to sound naturally human, and returns the result. This guide walks through everything you need to start using it — from getting your API key to batch processing hundreds of documents.
Why an API Instead of a Web Tool
Most AI humanizers are web apps. You paste text into a box, click a button, copy the output. That workflow works for occasional use, but it breaks down quickly when you need to:
- Humanize content as part of an automated publishing pipeline
- Process batches of documents without manual intervention
- Integrate humanization into a CMS, WordPress plugin, or CI/CD workflow
- Build humanization into your own product as a feature
Several competitors have announced API access but haven't shipped it, or charge significant premiums for it — StealthGPT's API access starts at $99/month, positioned as an enterprise add-on. ToHuman was built API-first from day one, and the API is currently free during the launch period.
Getting Started
1. Get your API key
Sign up for a free account at tohuman.io. Once you're logged in, your API key is available in the dashboard. It's a standard Bearer token — store it in an environment variable, not in your code.
2. Your first request
The API has a single endpoint for synchronous humanization:
POST https://tohuman.io/api/v1/humanizations/sync
Here's the simplest possible request with curl:
curl -X POST https://tohuman.io/api/v1/humanizations/sync \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"content": "Artificial intelligence has revolutionized numerous industries by enabling automation of complex tasks that previously required human intervention.",
"intensity": "medium"
}'
Response
{
"humanized_text": "AI has quietly reshaped how entire industries operate — taking over complex work that used to demand a human in the loop.",
"intensity": "medium",
"word_count": 28
}
Request Parameters
The request body accepts two fields:
- content (required) — The text to humanize. Plain text, any length.
- intensity (required) — How aggressively to rewrite. One of: minimal, subtle, medium, heavy.
Authentication uses a standard Authorization: Bearer <token> header.
Intensity Levels Explained
The intensity parameter controls how much the model changes the input. Here's what each level does in practice.
minimal — Light polish, structure mostly preserved
Adjusts word choice and smooths phrasing without restructuring sentences. Good for text that's already fairly natural but needs the AI signature softened.
Input: "The implementation of machine learning algorithms has enabled significant advancements in predictive analytics capabilities."
Output: "Machine learning has opened up a lot of ground in predictive analytics."
subtle — Sentence-level rewrites, meaning unchanged
Restructures individual sentences and varies rhythm. Still conservative — won't change your argument or reorder your content.
Input: "There are several key factors to consider when evaluating cloud infrastructure options for enterprise deployments."
Output: "Picking the right cloud infrastructure for an enterprise deployment comes down to a handful of things that matter more than the rest."
medium — The default for most use cases
Full sentence rewrites with natural variation in structure, length, and tone. Preserves meaning and information while addressing the patterns detectors flag. Works well for blog posts, articles, and marketing copy.
heavy — Thorough rewrite, aggressive humanization
The model takes more liberties — merging short sentences, breaking up long ones, adding light conversational texture. Use this for content that scores high on AI detectors or for output from models that write in a particularly recognizable style.
A note on heavy: because it changes more, verify that the output preserves the specific claims and technical details in your original. It works well for general content; for anything with precise figures or specifications, review the result before publishing.
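To see the levels side by side, it can help to run the same input at every intensity and compare the outputs. A minimal sketch, where the humanize callable is assumed to be an API client like the requests-based Python example later in this guide:

```python
# The four accepted intensity values, in increasing order of rewriting.
INTENSITIES = ["minimal", "subtle", "medium", "heavy"]

def compare_intensities(text, humanize):
    """Run the same input at every intensity level.

    `humanize(text, intensity) -> str` is assumed to be an API client
    such as the requests-based example in this guide.
    """
    return {level: humanize(text, level) for level in INTENSITIES}
```

Printing the four results next to each other is a quick way to pick the lightest level that sounds right for your content.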
Python Example
Python — requests
import os
import requests
TOHUMAN_API_KEY = os.environ["TOHUMAN_API_KEY"]
API_URL = "https://tohuman.io/api/v1/humanizations/sync"
def humanize(text: str, intensity: str = "medium") -> str:
response = requests.post(
API_URL,
headers={
"Authorization": f"Bearer {TOHUMAN_API_KEY}",
"Content-Type": "application/json",
},
json={"content": text, "intensity": intensity},
timeout=30,
)
response.raise_for_status()
return response.json()["humanized_text"]
# Basic usage
original = """
Large language models have demonstrated remarkable capabilities across
a wide range of natural language processing tasks, including text
generation, summarization, and question answering.
"""
result = humanize(original, intensity="medium")
print(result)
JavaScript / Node.js Example
Node.js — fetch (native, Node 18+)
const API_KEY = process.env.TOHUMAN_API_KEY;
const API_URL = "https://tohuman.io/api/v1/humanizations/sync";
async function humanize(text, intensity = "medium") {
const response = await fetch(API_URL, {
method: "POST",
headers: {
"Authorization": `Bearer ${API_KEY}`,
"Content-Type": "application/json",
},
body: JSON.stringify({ content: text, intensity }),
});
if (!response.ok) {
const error = await response.json().catch(() => ({}));
throw new Error(`API error ${response.status}: ${error.message ?? "unknown"}`);
}
const data = await response.json();
return data.humanized_text;
}
// Usage
const result = await humanize(
"The utilization of advanced algorithms facilitates enhanced operational efficiency.",
"heavy"
);
console.log(result);
Building a Batch Processing Pipeline
If you're processing a content library — say, a backlog of AI-drafted articles — you'll want to run requests concurrently rather than sequentially. Here's a Python example using asyncio and aiohttp that processes a list of documents in parallel:
Python — async batch processing with aiohttp
import os
import asyncio
import aiohttp
TOHUMAN_API_KEY = os.environ["TOHUMAN_API_KEY"]
API_URL = "https://tohuman.io/api/v1/humanizations/sync"
async def humanize_one(session, text: str, intensity: str = "medium") -> str:
async with session.post(
API_URL,
headers={"Authorization": f"Bearer {TOHUMAN_API_KEY}"},
json={"content": text, "intensity": intensity},
) as response:
response.raise_for_status()
data = await response.json()
return data["humanized_text"]
async def humanize_batch(documents: list[str], intensity: str = "medium") -> list[str]:
async with aiohttp.ClientSession() as session:
tasks = [humanize_one(session, doc, intensity) for doc in documents]
return await asyncio.gather(*tasks)
# Usage
documents = [
"Artificial intelligence enables organizations to automate repetitive tasks...",
"The implementation of cloud computing solutions has transformed data storage...",
"Machine learning models require substantial training data to achieve accuracy...",
]
results = asyncio.run(humanize_batch(documents, intensity="medium"))
for original, humanized in zip(documents, results):
print(f"Original: {original[:60]}...")
print(f"Humanized: {humanized[:60]}...")
print()
Integration Patterns
WordPress: Humanize on publish
Use WordPress's wp_insert_post_data filter to intercept content before it's saved. Call the ToHuman API on the post content, then save the humanized version. A basic PHP snippet:
PHP — WordPress hook
add_filter('wp_insert_post_data', function ($data, $postarr) {
if ($data['post_status'] !== 'publish') return $data;
if (empty($data['post_content'])) return $data;
$response = wp_remote_post('https://tohuman.io/api/v1/humanizations/sync', [
'headers' => [
'Authorization' => 'Bearer ' . TOHUMAN_API_KEY,
'Content-Type' => 'application/json',
],
'body' => wp_json_encode([
'content' => wp_strip_all_tags($data['post_content']), // note: strips HTML tags, so the humanized plain text replaces the original markup
'intensity' => 'medium',
]),
'timeout' => 30,
]);
if (!is_wp_error($response)) {
$body = json_decode(wp_remote_retrieve_body($response), true);
if (!empty($body['humanized_text'])) {
$data['post_content'] = $body['humanized_text'];
}
}
return $data;
}, 10, 2);
CMS webhook: Humanize on content creation events
If your CMS (Contentful, Sanity, Strapi, etc.) supports webhooks, set up a listener that fires on content creation. The listener calls the ToHuman API and writes the humanized version back to the CMS via its management API. This keeps humanization out of your application code and runs it as an infrastructure concern.
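A framework-agnostic sketch of that listener's core logic. The payload shape here (an id plus a fields.body) and the write-back call are assumptions, since every CMS names its webhook fields differently:

```python
def handle_content_created(payload: dict, humanize, update_entry) -> bool:
    """Handle a CMS 'content created' webhook event.

    `humanize(text) -> str` calls the ToHuman sync endpoint;
    `update_entry(entry_id, text)` writes back via your CMS's
    management API. Both are injected so this stays framework-agnostic.
    The payload shape (id + fields.body) is an assumption -- adjust it
    to your CMS's webhook format.
    """
    content = payload.get("fields", {}).get("body")
    if not content:
        return False  # nothing to humanize; acknowledge and skip
    update_entry(payload["id"], humanize(content))
    return True
```

Wire this into whatever HTTP framework hosts your webhook endpoint; the handler itself stays testable without a network.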
CI/CD: Gate on AI detection score
For teams running AI detection checks as part of a content review pipeline, you can add a humanization step to the pipeline: if a document's AI score exceeds a threshold, the pipeline calls the ToHuman API at an appropriate intensity level and re-checks. This automates the feedback loop entirely.
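The gating logic itself is a short loop. A sketch with the detector and the humanizer injected as callables; ai_score is a stand-in for whatever detection tool your pipeline uses (ToHuman doesn't provide one), and the intensity escalation is one reasonable policy, not a documented rule:

```python
def gate(text, ai_score, humanize, threshold=0.5, max_passes=3):
    """Humanize until the detector score drops below `threshold`.

    `ai_score(text) -> float` is your detection check (hypothetical --
    plug in whatever tool your pipeline uses). `humanize(text,
    intensity) -> str` calls the ToHuman API. The threshold and the
    medium-then-heavy escalation are assumptions to tune.
    """
    intensities = ["medium", "heavy"]
    for attempt in range(max_passes):
        score = ai_score(text)
        if score < threshold:
            return text, score
        # Escalate intensity on repeated failures.
        level = intensities[min(attempt, len(intensities) - 1)]
        text = humanize(text, level)
    return text, ai_score(text)
```

Returning the final score alongside the text lets the pipeline decide whether to publish, flag for review, or fail the build.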
Error Handling
The API returns standard HTTP status codes:
- 200 — Success. The humanized_text field contains the result.
- 401 — Invalid or missing API key. Check your Authorization header.
- 422 — Request validation failed. Usually means content is empty or intensity isn't one of the accepted values.
- 500 — Server error. Retry with exponential backoff.
For production integrations, implement retry logic on 5xx errors. A simple strategy: retry up to three times with 1s, 2s, and 4s delays. Don't retry 4xx errors — those are problems with the request itself.
Python — retry wrapper
import os
import time
import requests
def humanize_with_retry(text: str, intensity: str = "medium", max_retries: int = 3) -> str:
for attempt in range(max_retries):
try:
response = requests.post(
"https://tohuman.io/api/v1/humanizations/sync",
headers={
"Authorization": f"Bearer {os.environ['TOHUMAN_API_KEY']}",
"Content-Type": "application/json",
},
json={"content": text, "intensity": intensity},
timeout=30,
)
if response.status_code < 500:
response.raise_for_status()
return response.json()["humanized_text"]
# 5xx: retry
if attempt < max_retries - 1:
time.sleep(2 ** attempt)
except requests.exceptions.Timeout:
if attempt < max_retries - 1:
time.sleep(2 ** attempt)
raise RuntimeError("ToHuman API failed after max retries")
Performance Tips
Batch concurrently, not sequentially. Each API call is independent. If you have 50 documents, run them concurrently — don't wait for each one to finish before starting the next. The async batch example above shows the pattern in Python. In Node.js, use Promise.all.
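One caveat: fully unbounded concurrency can trip rate limits on large batches. A sketch of the same pattern with a concurrency cap via asyncio.Semaphore; the limit of 5 is an arbitrary starting point, not a documented API limit:

```python
import asyncio

async def humanize_batch_bounded(documents, humanize_one, limit=5):
    """Run humanize_one over documents with at most `limit` in flight.

    `humanize_one(text) -> str` is an async client call (for example,
    the aiohttp helper earlier in this guide with the session bound
    in). The cap of 5 is an assumption; tune it against whatever rate
    limits you observe.
    """
    semaphore = asyncio.Semaphore(limit)

    async def bounded(doc):
        async with semaphore:
            return await humanize_one(doc)

    # gather preserves input order, so results line up with documents.
    return await asyncio.gather(*(bounded(d) for d in documents))
```

Results come back in input order, so you can zip them against the originals exactly as in the batch example above.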
Chunk long documents. Very long pieces of content can be split into sections (by heading, paragraph, or a fixed character count), processed in parallel, then reassembled. This also gives you finer control — you can use different intensity levels for the introduction versus the body, for instance.
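A sketch of paragraph-based chunking: split on blank lines, greedily pack paragraphs up to a size cap, humanize each chunk, then reassemble. The 4,000-character cap is an arbitrary choice, not a documented API limit:

```python
def chunk_by_paragraphs(text: str, max_chars: int = 4000) -> list[str]:
    """Greedily pack paragraphs into chunks of at most max_chars.

    The 4,000-character cap is an assumption to tune. A single
    paragraph longer than the cap becomes its own chunk.
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        candidate = f"{current}\n\n{para}" if current else para
        if current and len(candidate) > max_chars:
            chunks.append(current)
            current = para
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

def humanize_long(text: str, humanize, max_chars: int = 4000) -> str:
    """Humanize each chunk independently, then reassemble.

    `humanize(chunk) -> str` is assumed to be an API client such as
    the requests example earlier; chunks can also be run concurrently.
    """
    return "\n\n".join(humanize(c) for c in chunk_by_paragraphs(text, max_chars))
```

Splitting on headings instead of blank lines works the same way and is where per-section intensity control comes in.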
Cache results where appropriate. If you're processing the same AI-generated template repeatedly, cache the humanized version. The model is deterministic at the same intensity level, but not perfectly so — repeated calls on identical input will produce similar but not always identical output. Cache by input hash if consistency matters.
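A sketch of hash-keyed caching. The key covers both the input text and the intensity, since changing either changes the output; the in-memory dict here is a stand-in for Redis or disk in anything persistent:

```python
import hashlib

def cache_key(text: str, intensity: str) -> str:
    # NUL byte separates the two fields so the key is unambiguous.
    raw = f"{intensity}\x00{text}".encode("utf-8")
    return hashlib.sha256(raw).hexdigest()

_cache: dict[str, str] = {}

def humanize_cached(text: str, humanize, intensity: str = "medium") -> str:
    """Return a cached result when this exact (text, intensity) pair
    has been seen before.

    `humanize(text, intensity) -> str` is assumed to be an API client
    such as the requests example earlier in this guide.
    """
    key = cache_key(text, intensity)
    if key not in _cache:
        _cache[key] = humanize(text, intensity)
    return _cache[key]
```

Hashing means you never have to store or compare the full input text, which matters when the inputs are whole articles.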
Set appropriate timeouts. The API processes synchronously on dedicated RunPod compute. Longer documents take longer. Set a timeout of at least 30 seconds for documents over 1,000 words, and consider 60 seconds for very long content.
API vs. Web Tool: The Honest Comparison
If you occasionally humanize a blog post, the web app at tohuman.io is the right tool. The API is for use cases where manual steps are a bottleneck.
Most competitors don't offer an API at all — Humanize AI Pro, QuillBot, and Phrasly are all web-only. StealthGPT offers API access but reserves it for its highest-tier plan. Undetectable.ai offers limited API access that isn't well documented and has drawn reliability complaints on forums.
ToHuman built the API first. The web app is built on top of the same endpoint you're calling here. That means the API is a first-class product, not an enterprise upsell.
The full API reference is at tohuman.io/docs — it covers authentication, all endpoints, error codes, and rate limits.
Frequently Asked Questions
Does ToHuman have a public API?
Yes. The API is live, documented, and currently free. You can find the full reference at tohuman.io/docs.
Is the API free to use?
Yes, during ToHuman's launch period. Sign up to get a key — no credit card required.
Which intensity level should I use?
It depends on how AI-patterned the input is. For lightly assisted drafts, minimal or subtle is usually enough. For outputs from ChatGPT or similar models, medium or heavy will produce more consistently human-sounding results. When in doubt, start at medium.
Can I process multiple documents in batch?
Yes. The API is a standard HTTP endpoint, so you can issue requests concurrently using asyncio in Python, Promise.all in Node.js, or a parallel job queue in any language. See the batch processing example above.
Does ToHuman store the text I send through the API?
No. Text is processed by ToHuman's own fine-tuned Mistral 7B model running on dedicated cloud compute. Nothing is stored after the response is returned, and no external AI APIs are used in processing.
Published March 28, 2026 by the ToHuman team.