
How to Humanize AI Text in CrewAI Multi-Agent Workflows

Build a CrewAI custom tool that calls the ToHuman API, assign it to a dedicated Humanizer agent, and run a full crew that generates and humanizes content automatically.

· 18 min read

CrewAI has become one of the fastest-growing frameworks for building multi-agent AI systems — 48,000+ GitHub stars and over 12 million daily agent executions in production. Its core abstraction is simple: define agents with roles and goals, assign them tasks, and let the crew collaborate to produce a result. If your crew generates written content — blog posts, reports, marketing copy, documentation — that output carries the statistical fingerprint of whatever LLM produced it. AI detectors like GPTZero and Turnitin will flag it.

The fix is to humanize CrewAI agent output as part of the workflow itself. This tutorial shows you how to create a custom CrewAI tool that calls the ToHuman API to rewrite AI-generated text so it reads naturally and passes detection. You'll build the tool, assign it to a dedicated Content Polisher agent, and wire up a full crew that writes and humanizes content in a single run.

Why CrewAI Agents Need Humanization

Multi-agent crews are built for automation. A Writer agent drafts content, a Researcher agent gathers sources, an Editor agent polishes prose. The output looks good — but it reads like a machine wrote it, because one did. Every agent in the crew is powered by an LLM, and every LLM leaves recognizable patterns: uniform sentence length, predictable transitions, passive constructions.

This matters when the output is published, submitted, or shared with an audience that cares about authenticity. AI detection tools score text based on these patterns, and a crew that generates content without a humanization step will produce flagged output every time. Adding a humanization tool to the crew solves this at the workflow level — no manual post-processing, no copy-pasting into a separate tool after the fact.

For background on what detection tools actually look for and why AI detection produces false positives, see our deep dive on the subject.

Prerequisites

  • Python 3.9+ with crewai and httpx installed.
  • An OpenAI API key stored as OPENAI_API_KEY (or any LLM provider CrewAI supports).
  • A ToHuman API key (sign up free at tohuman.io). Store it as TOHUMAN_API_KEY.

Install the dependencies:

Terminal

pip install crewai crewai-tools httpx

Step 1: Create the Humanization Tool

CrewAI supports two ways to define custom tools: the @tool decorator for simple functions, and subclassing BaseTool for tools that need structured input validation. The decorator approach is faster to write and sufficient for API integrations like this one.

Create a file called tools.py:

tools.py

import os
import httpx
from crewai.tools import tool

TOHUMAN_API_KEY = os.environ["TOHUMAN_API_KEY"]
TOHUMAN_API_URL = "https://tohuman.io/api/v1/humanize"


@tool("Humanize Text")
def humanize_text(text: str) -> str:
    """
    Rewrites AI-generated text so it reads like a human wrote it.

    Use this tool on any written content — blog posts, reports,
    marketing copy — before it is published or returned as a final
    result. The tool sends the text to the ToHuman API and returns
    a rewritten version that preserves the original meaning while
    removing detectable AI writing patterns.

    Args:
        text: The AI-generated text to humanize. Works best on
              complete paragraphs or full sections.

    Returns:
        The humanized version of the input text.
    """
    response = httpx.post(
        TOHUMAN_API_URL,
        headers={
            "Authorization": f"Bearer {TOHUMAN_API_KEY}",
            "Content-Type": "application/json",
        },
        json={"text": text},
        timeout=60,
    )
    response.raise_for_status()
    data = response.json()
    return data["humanized_text"]

The docstring matters. CrewAI agents read the tool's description to decide when and how to call it. Write it like you're explaining the tool to a colleague who doesn't know your codebase. The timeout=60 is important — humanization of longer text can take several seconds, and the default httpx timeout will cut it off.
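Humanization time also scales with input length. If your crew produces long documents, one option is to split on paragraph boundaries and humanize each chunk separately, then stitch the results back together. Here's a minimal, framework-agnostic sketch; the chunking logic and the `humanize_in_chunks` name are our own convention, not part of CrewAI or the ToHuman API, and it takes any humanizer callable so you can pass in the tool's underlying function:

chunking.py — sketch

```python
from typing import Callable


def humanize_in_chunks(text: str, humanize: Callable[[str], str],
                       max_chars: int = 4000) -> str:
    """Split text on blank lines and humanize each group of paragraphs.

    Paragraphs are packed greedily into chunks of at most max_chars so
    each API call stays well under typical request-size limits.
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        candidate = f"{current}\n\n{para}" if current else para
        if current and len(candidate) > max_chars:
            chunks.append(current)
            current = para
        else:
            current = candidate
    if current:
        chunks.append(current)
    # Humanize each chunk independently, then rejoin with blank lines
    return "\n\n".join(humanize(chunk) for chunk in chunks)
```

Splitting on paragraph boundaries matters because the tool's docstring says it works best on complete paragraphs; cutting mid-sentence would degrade the rewrite.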

Test the tool standalone before wiring it into any agent:

Python REPL — verify the tool works

from tools import humanize_text

result = humanize_text.run(
    "Artificial intelligence has demonstrated remarkable capabilities "
    "across numerous domains, enabling the automation of complex tasks "
    "that previously required significant human expertise."
)

print(result)

You should get back a naturally rewritten version. If you see a 401 error, check that TOHUMAN_API_KEY is set in your environment. A 422 means the request body is malformed — verify the JSON payload matches the API documentation.
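Since a missing key only surfaces as a 401 at request time, a quick preflight check at the top of your script catches it before the crew burns any LLM calls. A small helper along these lines (the `require_env` name is our own, not a CrewAI utility):

preflight.py — sketch

```python
import os


def require_env(*names: str) -> None:
    """Fail fast if any required environment variable is missing or empty."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(
            "Missing environment variables: " + ", ".join(missing)
        )


# Call at the top of crew.py, before any agent runs:
# require_env("TOHUMAN_API_KEY", "OPENAI_API_KEY")
```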

Step 2: Define the Content Polisher Agent

In CrewAI, an Agent is defined by a role, a goal, a backstory, and a set of tools. The Humanizer agent's job is narrow: take text from another agent and humanize it. Give it a clear role and goal so the crew knows exactly what this agent does.

agents.py

from crewai import Agent
from tools import humanize_text

writer_agent = Agent(
    role="Content Writer",
    goal="Write clear, engaging blog content on the given topic.",
    backstory=(
        "You are an experienced content writer who produces "
        "well-structured blog posts with concrete examples "
        "and a conversational tone."
    ),
    verbose=True,
)

humanizer_agent = Agent(
    role="Content Polisher",
    goal=(
        "Rewrite AI-generated text so it reads naturally and "
        "passes AI detection tools. Always use the Humanize Text "
        "tool on the full text you receive."
    ),
    backstory=(
        "You specialize in making AI-generated content sound "
        "human-written. You always run text through the Humanize "
        "Text tool before returning your result."
    ),
    tools=[humanize_text],
    verbose=True,
)

Notice that only the humanizer_agent gets the tool. The Writer agent generates raw content using its LLM — no tools needed. The Humanizer agent receives that content and runs it through the ToHuman API via the tool. This separation keeps each agent focused on one job, which is the core design principle behind CrewAI's role-based architecture.

Step 3: Build the Crew Workflow

A Crew connects agents with tasks and defines the order of execution. For a content pipeline, a sequential process works well: the Writer goes first, the Humanizer goes second. The output of the Writer's task becomes the input context for the Humanizer's task.

crew.py

from crewai import Crew, Task, Process
from agents import writer_agent, humanizer_agent

# Task 1: Generate a blog post draft
write_task = Task(
    description=(
        "Write a 300-word blog post about the rise of AI coding "
        "assistants in 2026. Use a conversational tone, include "
        "at least one concrete example, and avoid bullet points."
    ),
    expected_output="A 300-word blog post in flowing paragraphs.",
    agent=writer_agent,
)

# Task 2: Humanize the draft
humanize_task = Task(
    description=(
        "Take the blog post from the previous task and humanize "
        "it using the Humanize Text tool. Pass the entire text "
        "to the tool — do not summarize or truncate it. Return "
        "the humanized version as your final output."
    ),
    expected_output="The full blog post, humanized to read naturally.",
    agent=humanizer_agent,
    context=[write_task],
)

# Assemble the crew
crew = Crew(
    agents=[writer_agent, humanizer_agent],
    tasks=[write_task, humanize_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print("\n--- Final Output ---")
print(result.raw)

The context=[write_task] parameter on the humanize task is what connects the two steps. CrewAI automatically passes the Writer's output as context to the Humanizer. The Humanizer reads that context, calls the humanize_text tool with the full text, and returns the result.

Step 4: Run the Crew and Verify Output

Run the script and watch the verbose output in your terminal:

Terminal

python crew.py

With verbose=True, you'll see each agent's reasoning steps, including the moment the Humanizer agent calls the tool. The output will show the tool invocation, the text sent to the ToHuman API, and the humanized result returned. If the tool call doesn't appear in the trace, check the Humanizer agent's goal — make the instruction to use the tool more explicit.

To verify the output passes AI detection, run it through GPTZero's API or paste it into the web interface. The ToHuman API response also includes a confidence_score field (0-1) indicating how natural the rewritten text reads — you can log this in your pipeline for quality monitoring.
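If you log the confidence_score, a simple quality gate can route low-confidence rewrites to manual review instead of publishing them. A minimal sketch, assuming the API response is the JSON dict described above; the function name and the 0.8 threshold are our own choices:

quality_gate.py — sketch

```python
def passes_quality_gate(api_response: dict, threshold: float = 0.8) -> bool:
    """Return True when the humanized text meets the confidence threshold.

    A missing confidence_score counts as a failure, so unexpected
    responses get flagged rather than silently accepted.
    """
    score = api_response.get("confidence_score")
    return score is not None and score >= threshold
```

In a batch pipeline you might collect the failures into a re-run queue rather than raising, so one weak rewrite doesn't stop the run.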

Advanced: Batch Processing Multiple Topics

For production workflows that generate content at scale, you can run the crew in a loop over multiple topics. Each iteration produces a separate humanized article:

batch.py — process multiple topics

from crewai import Crew, Task, Process
from agents import writer_agent, humanizer_agent

topics = [
    "How AI pair programming changes solo developer workflows",
    "The case for keeping humans in the AI review loop",
    "Why code review is getting harder as AI output grows",
]

for topic in topics:
    print(f"\n{'='*60}")
    print(f"Processing: {topic}")
    print('='*60)

    write_task = Task(
        description=f"Write a 300-word blog post about: {topic}",
        expected_output="A 300-word blog post in flowing paragraphs.",
        agent=writer_agent,
    )

    humanize_task = Task(
        description=(
            "Humanize the blog post from the previous task using "
            "the Humanize Text tool. Return the full humanized text."
        ),
        expected_output="The full blog post, humanized.",
        agent=humanizer_agent,
        context=[write_task],
    )

    crew = Crew(
        agents=[writer_agent, humanizer_agent],
        tasks=[write_task, humanize_task],
        process=Process.sequential,
    )

    result = crew.kickoff()
    print(result.raw)

Error Handling for Production Crews

In a batch pipeline, a single failed API call shouldn't kill the entire run. Wrap the tool's API call with proper error handling so the crew can continue processing remaining items:

tools.py — with error handling

import os
import logging
import httpx
from crewai.tools import tool

logger = logging.getLogger(__name__)

TOHUMAN_API_KEY = os.environ["TOHUMAN_API_KEY"]
TOHUMAN_API_URL = "https://tohuman.io/api/v1/humanize"


@tool("Humanize Text")
def humanize_text(text: str) -> str:
    """
    Rewrites AI-generated text so it reads like a human wrote it.
    Use this on any written content before publishing.
    """
    try:
        response = httpx.post(
            TOHUMAN_API_URL,
            headers={
                "Authorization": f"Bearer {TOHUMAN_API_KEY}",
                "Content-Type": "application/json",
            },
            json={"text": text},
            timeout=60,
        )
        response.raise_for_status()
        data = response.json()
        return data["humanized_text"]
    except httpx.HTTPStatusError as e:
        logger.error(
            "ToHuman API error %s: %s",
            e.response.status_code,
            e.response.text[:200],
        )
        return f"[Humanization failed — returning original] {text}"
    except httpx.TimeoutException:
        logger.error("ToHuman API timeout")
        return f"[Humanization timed out — returning original] {text}"

The fallback returns the original text with a prefix tag so you can identify which items need re-processing. This keeps the crew running through all topics even if the API is temporarily unavailable.
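Because failures are tagged with a bracketed prefix, a post-run sweep can separate clean output from items that need a retry. A small sketch of that sweep (the prefix strings must match the ones in the tool above exactly; the helper itself is our own addition):

retry_sweep.py — sketch

```python
FAILURE_PREFIXES = (
    "[Humanization failed — returning original]",
    "[Humanization timed out — returning original]",
)


def split_results(results: list[str]) -> tuple[list[str], list[str]]:
    """Separate clean humanized texts from tagged failures.

    Failed items come back with the prefix stripped, so they can be
    fed straight into a retry run.
    """
    clean, retry = [], []
    for text in results:
        prefix = next(
            (p for p in FAILURE_PREFIXES if text.startswith(p)), None
        )
        if prefix:
            retry.append(text[len(prefix):].lstrip())
        else:
            clean.append(text)
    return clean, retry
```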

Alternative: Using BaseTool for Structured Input

If you need stricter input validation — for example, enforcing a strength parameter with specific allowed values — subclass BaseTool instead of using the decorator:

tools_advanced.py — BaseTool with Pydantic schema

import os
from typing import Literal, Type
import httpx
from crewai.tools import BaseTool
from pydantic import BaseModel, Field


class HumanizeTextInput(BaseModel):
    text: str = Field(description="The AI-generated text to humanize.")
    strength: Literal["light", "medium", "strong"] = Field(
        default="medium",
        description="How aggressively to rewrite the text.",
    )


class HumanizeTextTool(BaseTool):
    name: str = "Humanize Text"
    description: str = (
        "Rewrites AI-generated text so it reads naturally and passes "
        "AI detection tools. Use on any written content before publishing."
    )
    args_schema: Type[BaseModel] = HumanizeTextInput

    def _run(self, text: str, strength: str = "medium") -> str:
        response = httpx.post(
            "https://tohuman.io/api/v1/humanize",
            headers={
                "Authorization": f"Bearer {os.environ['TOHUMAN_API_KEY']}",
                "Content-Type": "application/json",
            },
            # Forward the strength option the schema collects; without
            # this, the parameter would be validated but never used
            json={"text": text, "strength": strength},
            timeout=60,
        )
        response.raise_for_status()
        return response.json()["humanized_text"]


# Instantiate for use in agents
humanize_text = HumanizeTextTool()

The BaseTool approach gives you Pydantic validation on inputs, typed fields, and the ability to add custom initialization logic. For most integrations the @tool decorator is sufficient, but BaseTool is useful when you want the agent to have structured options — like choosing between humanization strength levels.

CrewAI vs LangChain: Which Should You Use?

If you're already using LangChain, the LangChain humanization tutorial covers the same integration using LangChain's @tool decorator and create_react_agent. The ToHuman API call is identical in both frameworks — only the agent orchestration layer differs.

CrewAI's advantage is its role-based multi-agent design. When you need multiple agents collaborating on a task — a Writer, a Researcher, a Humanizer, an Editor — CrewAI's Crew/Task/Agent abstractions make the workflow easier to reason about and modify. LangChain is more flexible for single-agent and custom graph workflows. Choose based on your use case, not brand loyalty.

What You've Built

You have a CrewAI custom tool that calls the ToHuman API to humanize AI-generated text, a dedicated Content Polisher agent that uses it, and a full sequential crew that generates and humanizes content in a single pipeline. The same pattern works for any content type — marketing copy, email campaigns, documentation, reports.

From here, you can extend the crew with additional agents (a Researcher that gathers sources, an SEO Optimizer that checks keyword density), add the BaseTool variant for strength control, or integrate the crew into a larger system using CrewAI Flows. The ToHuman API docs cover all available parameters and response fields, and signing up gets you a free API key in 30 seconds.

Frequently Asked Questions

How do I create a custom tool in CrewAI?

Two approaches. The @tool decorator from crewai.tools is the simplest — decorate a Python function and write a clear docstring. For structured input validation, subclass BaseTool with a Pydantic args_schema and implement a _run method. The agent reads the tool's description to decide when to call it, so write the description like you're explaining it to a smart colleague.

Can CrewAI agents automatically humanize their own output?

Yes. Assign the humanize_text tool to an agent and instruct it in the goal or task description to always use the tool on written content. For guaranteed humanization, use a dedicated Humanizer agent in a sequential crew — the output always passes through the ToHuman API regardless of individual agent decisions.
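For an even harder guarantee, you can post-process the crew's final output outside the agent loop entirely, re-running the rewrite while a detection check still flags it. A minimal sketch; both callables are stand-ins you would supply yourself (for example, the API call from Step 1 and a wrapped GPTZero check), and the function name is our own:

guardrail.py — sketch

```python
from typing import Callable


def guaranteed_humanize(text: str,
                        humanize: Callable[[str], str],
                        looks_ai: Callable[[str], bool],
                        max_passes: int = 2) -> str:
    """Humanize text, re-running the rewrite while it still reads as AI.

    looks_ai: any detection check that returns True when the text is
    flagged. Capped at max_passes so a stubborn input can't loop forever.
    """
    for _ in range(max_passes):
        text = humanize(text)
        if not looks_ai(text):
            break
    return text
```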

What's the difference between CrewAI and LangChain for this?

CrewAI is purpose-built for multi-agent teams with role-based collaboration. LangChain is a general-purpose LLM framework with agent support via LangGraph. The ToHuman API integration is nearly identical — both use a tool that makes a POST request. The difference is how you orchestrate agents around that tool. CrewAI is faster to set up for multi-agent workflows; LangChain offers more flexibility for single-agent and custom graph patterns.

Does humanizing agent output change the meaning of the content?

The ToHuman API preserves meaning while rewriting sentence structure, word choice, and rhythm. For content with precise statistics or direct quotes, review the humanized version before publishing. The response includes a confidence_score (0-1) that indicates how natural the rewrite reads — useful for automated quality gates in production pipelines.

Published April 10, 2026 by the ToHuman team.
