Tutorials / MCP Server Integration
Build an MCP Server to Humanize AI Text
Create a Model Context Protocol server that exposes a humanize_text tool backed by the ToHuman API. Connect it to Claude Desktop, Cursor, or any MCP-compatible client.
The Model Context Protocol (MCP) is an open standard that lets AI assistants call external tools through a unified interface. Originally developed by Anthropic and now adopted by OpenAI, Cursor, Windsurf, and dozens of other platforms, MCP is becoming the default way to extend what AI can do. If you want to build an MCP server that can humanize AI text — rewriting LLM output so it reads naturally and passes detection tools — this tutorial walks you through it from scratch.
By the end, you'll have a working MCP server that exposes a humanize_text tool. Any MCP-compatible client — Claude Desktop, Cursor, Claude Code — can call this tool to send text to the ToHuman API and get back a human-sounding rewrite. The whole thing takes about 50 lines of Python.
What MCP Is (and Why It Matters for Humanization)
MCP works like a USB port for AI. An MCP server exposes tools — functions with typed parameters and descriptions. An MCP client (your AI assistant) discovers those tools, reads their descriptions, and calls them when relevant. The protocol handles serialization, transport, and error handling so you don't have to build custom integrations for each client.
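Under the hood, every tool invocation is a JSON-RPC 2.0 message. A call to a humanize_text tool looks roughly like this on the wire (illustrative; the id and argument values are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "humanize_text",
    "arguments": { "text": "Artificial intelligence has demonstrated..." }
  }
}
```

You never construct these messages yourself; the SDK on both sides handles them.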
This matters for text humanization because the workflow is repetitive: generate text with an LLM, then pass it through a humanizer before using it. Without MCP, you'd copy-paste into a web tool or write a bespoke script. With MCP, you tell Claude "humanize this paragraph" and it calls your tool automatically. The humanization step becomes part of the conversation, not a separate workflow.
For context on why humanization matters and what detection tools actually flag, see our post on AI detection false positives.
Prerequisites
- Python 3.10+ — required by the MCP SDK.
- uv — the Python package manager recommended by MCP's official docs. Install it with curl -LsSf https://astral.sh/uv/install.sh | sh.
- A ToHuman API key — sign up free at tohuman.io. Store it as an environment variable: export TOHUMAN_API_KEY="your-key-here".
- Claude Desktop (or any MCP-compatible client) for testing.
Step 1: Set Up the Project
Create a new directory for your MCP server and install the dependencies:
Terminal
# Create and enter the project directory
uv init tohuman-mcp
cd tohuman-mcp
# Create virtual environment and activate it
uv venv
source .venv/bin/activate
# Install MCP SDK and HTTP client
uv add "mcp[cli]" httpx
# Create the server file
touch server.py
The mcp[cli] package includes the FastMCP framework and the mcp CLI for testing. httpx handles async HTTP requests to the ToHuman API. If you prefer pip over uv, run pip install "mcp[cli]" httpx instead — the server code is identical either way.
Step 2: Build the MCP Server
Open server.py and add the following. This is the complete server — it registers a single tool called humanize_text that sends text to the ToHuman API and returns the rewritten version:
server.py
import os
import httpx
from mcp.server.fastmcp import FastMCP

# Initialize the MCP server
mcp = FastMCP("tohuman")

TOHUMAN_API_KEY = os.environ["TOHUMAN_API_KEY"]
TOHUMAN_API_URL = "https://tohuman.io/api/v1/humanize"

@mcp.tool()
async def humanize_text(text: str) -> str:
    """Rewrite AI-generated text so it reads like a human wrote it.

    Sends the text to the ToHuman API, which rewrites sentence structure,
    word choice, and rhythm to remove detectable AI writing patterns.
    The meaning is preserved. Use this on any AI-generated content —
    blog posts, emails, reports — before publishing or submitting.

    Args:
        text: The AI-generated text to humanize. Works best on
            complete paragraphs (50+ words).
    """
    async with httpx.AsyncClient() as client:
        response = await client.post(
            TOHUMAN_API_URL,
            headers={
                "Authorization": f"Bearer {TOHUMAN_API_KEY}",
                "Content-Type": "application/json",
            },
            json={"text": text},
            timeout=60,
        )
        response.raise_for_status()
        data = response.json()
        return data["humanized_text"]

if __name__ == "__main__":
    mcp.run()
A few things to notice. FastMCP reads your function's type hints and docstring to automatically generate the tool's JSON schema — the schema that MCP clients use to understand what the tool does and what arguments it expects. Write the docstring like you're explaining the tool to a colleague. The AI reads it to decide when and how to call the tool.
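For a sense of what that looks like, the tool definition FastMCP derives from the function above is roughly this (illustrative; exact field names and formatting depend on the SDK version):

```json
{
  "name": "humanize_text",
  "description": "Rewrite AI-generated text so it reads like a human wrote it. ...",
  "inputSchema": {
    "type": "object",
    "properties": {
      "text": {
        "type": "string",
        "description": "The AI-generated text to humanize. Works best on complete paragraphs (50+ words)."
      }
    },
    "required": ["text"]
  }
}
```

This is what an MCP client sees when it lists your server's tools.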
The function is async because MCP servers run on an async event loop. The timeout=60 on the HTTP request is important — humanization of longer text can take several seconds, and the default httpx timeout will cut it off prematurely.
For full details on the ToHuman API's request format and response fields, see the API reference or the API guide tutorial.
Step 3: Test with MCP Inspector
Before connecting to Claude Desktop, test the server standalone using MCP Inspector — a browser-based debugging tool included with the MCP SDK:
Terminal
mcp dev server.py
This opens MCP Inspector in your browser. You'll see the humanize_text tool listed with its description and parameter schema. Click the tool, paste in some AI-generated text, and hit "Call Tool." You should get back a naturally rewritten version.
If you see a connection error, make sure your TOHUMAN_API_KEY environment variable is set. A 401 response means the key is invalid — double-check it on your ToHuman dashboard. A 422 means the request body is malformed.
Step 4: Connect to Claude Desktop
Claude Desktop discovers MCP servers through a JSON configuration file. Open it in your editor:
Terminal — open the config file
# macOS
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
# Windows
code %APPDATA%\Claude\claude_desktop_config.json
Add your server to the mcpServers object. Replace the path with the absolute path to your project directory:
claude_desktop_config.json
{
  "mcpServers": {
    "tohuman": {
      "command": "uv",
      "args": [
        "--directory", "/absolute/path/to/tohuman-mcp",
        "run", "server.py"
      ],
      "env": {
        "TOHUMAN_API_KEY": "your-api-key-here"
      }
    }
  }
}
The env block passes your API key to the server process. This is cleaner than relying on shell environment variables — Claude Desktop launches MCP servers as subprocesses, so your shell's export statements won't be available.
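One small hardening step: the server above reads os.environ["TOHUMAN_API_KEY"] at import time, so a missing env block surfaces as a bare KeyError. A sketch of a friendlier startup check (load_api_key is a hypothetical helper, not part of the MCP SDK):

```python
import os

def load_api_key(env) -> str:
    """Return the ToHuman API key or fail with an actionable message."""
    key = env.get("TOHUMAN_API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            "TOHUMAN_API_KEY is not set. Add it to the 'env' block of your "
            "MCP client config (e.g. claude_desktop_config.json)."
        )
    return key

# At server startup:
# TOHUMAN_API_KEY = load_api_key(os.environ)
```

The error text then shows up in the client's server logs instead of a raw traceback.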
Quit and reopen Claude Desktop. You should see a hammer icon in the input toolbar — that's the tool indicator. Click it to confirm humanize_text appears in the list.
Step 5: Use It
With the server running, you can now ask Claude to humanize text naturally in conversation:
Claude Desktop — example prompts
"Humanize this paragraph: Artificial intelligence has demonstrated
remarkable capabilities across numerous domains, enabling the
automation of complex tasks that previously required significant
human expertise."
"Write a 200-word blog intro about remote work trends,
then humanize it before showing me the result."
"Take this email draft and humanize it so it sounds natural:
[paste your draft]"
Claude will call the humanize_text tool, send the text to the ToHuman API, and return the rewritten version inline. You'll see a tool-use indicator in the conversation showing exactly what was sent and received.
Advanced: Adding Parameters
The basic tool sends raw text and gets back humanized text. You can extend it with optional parameters to give the AI assistant more control over how humanization works. Here's an extended version with a confidence threshold and content type hint:
server.py — extended with parameters
import os
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("tohuman")

TOHUMAN_API_KEY = os.environ["TOHUMAN_API_KEY"]
TOHUMAN_API_URL = "https://tohuman.io/api/v1/humanize"

@mcp.tool()
async def humanize_text(
    text: str,
    content_type: str = "general",
    min_confidence: float = 0.8,
) -> str:
    """Rewrite AI-generated text so it reads like a human wrote it.

    Sends the text to the ToHuman API, which rewrites sentence structure,
    word choice, and rhythm to remove detectable AI writing patterns.
    The meaning is preserved.

    Args:
        text: The AI-generated text to humanize. Works best on
            complete paragraphs (50+ words).
        content_type: A hint for what kind of content this is.
            Options: "general", "email", "blog", "academic".
            Default is "general".
        min_confidence: Minimum confidence score (0.0-1.0) to accept
            the result. If the API returns a confidence
            below this threshold, a warning is included.
            Default is 0.8.
    """
    async with httpx.AsyncClient() as client:
        response = await client.post(
            TOHUMAN_API_URL,
            headers={
                "Authorization": f"Bearer {TOHUMAN_API_KEY}",
                "Content-Type": "application/json",
            },
            # Forward the content-type hint to the API alongside the text
            json={"text": text, "content_type": content_type},
            timeout=60,
        )
        response.raise_for_status()
        data = response.json()

    humanized = data["humanized_text"]
    confidence = data.get("confidence_score", 1.0)
    if confidence < min_confidence:
        return (
            f"[Warning: confidence {confidence:.2f} is below "
            f"threshold {min_confidence:.2f}. Review before using.]\n\n"
            f"{humanized}"
        )
    return humanized

@mcp.tool()
async def check_humanization_status(text: str) -> str:
    """Check whether a piece of text reads as human-written or AI-generated.

    Useful for verifying content before publishing. Returns a confidence
    score indicating how natural the text reads.

    Args:
        text: The text to analyze (at least 50 words for accurate results).
    """
    async with httpx.AsyncClient() as client:
        response = await client.post(
            TOHUMAN_API_URL,
            headers={
                "Authorization": f"Bearer {TOHUMAN_API_KEY}",
                "Content-Type": "application/json",
            },
            json={"text": text},
            timeout=60,
        )
        response.raise_for_status()
        data = response.json()

    confidence = data.get("confidence_score", None)
    if confidence is not None:
        label = "human-like" if confidence >= 0.8 else "may flag as AI"
        return (
            f"Confidence score: {confidence:.2f} ({label}). "
            f"Scores above 0.8 typically pass AI detection tools."
        )
    return "Analysis complete. No confidence score available."

if __name__ == "__main__":
    mcp.run()
The extended server exposes two tools. The humanize_text tool now accepts a min_confidence parameter — if the API's confidence score falls below the threshold, the AI assistant gets a warning and can decide whether to retry or flag it for human review. The second tool, check_humanization_status, lets the assistant analyze text without modifying it — useful for checking whether existing content would pass detection.
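The threshold logic is easier to unit-test if you pull it out of the request path. A sketch (format_result is a hypothetical helper mirroring the warning logic above, not part of any SDK):

```python
def format_result(humanized: str, confidence: float, min_confidence: float = 0.8) -> str:
    """Prepend a warning when the API's confidence falls below the threshold."""
    if confidence < min_confidence:
        return (
            f"[Warning: confidence {confidence:.2f} is below "
            f"threshold {min_confidence:.2f}. Review before using.]\n\n"
            f"{humanized}"
        )
    return humanized
```

The tool body then shrinks to one HTTP call plus a call to this function, which you can test without network access.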
Error Handling for Production
The basic server will crash on API errors. For a server you rely on daily, add proper error handling so failures produce useful messages instead of stack traces:
server.py — production error handling
@mcp.tool()
async def humanize_text(text: str) -> str:
    """Rewrite AI-generated text so it reads like a human wrote it.

    Sends the text to the ToHuman API, which rewrites sentence structure,
    word choice, and rhythm to remove detectable AI writing patterns.

    Args:
        text: The AI-generated text to humanize.
    """
    try:
        async with httpx.AsyncClient() as client:
            response = await client.post(
                TOHUMAN_API_URL,
                headers={
                    "Authorization": f"Bearer {TOHUMAN_API_KEY}",
                    "Content-Type": "application/json",
                },
                json={"text": text},
                timeout=60,
            )
            response.raise_for_status()
            data = response.json()
            return data["humanized_text"]
    except httpx.HTTPStatusError as e:
        return (
            f"ToHuman API returned {e.response.status_code}. "
            f"Check your API key and request format. "
            f"Details: {e.response.text[:200]}"
        )
    except httpx.TimeoutException:
        return (
            "Request to ToHuman API timed out (60s). "
            "Try again with shorter text, or check API status."
        )
    except Exception as e:
        return f"Unexpected error calling ToHuman API: {str(e)}"
Returning error messages as strings (instead of raising exceptions) is intentional. MCP tools communicate results back to the AI assistant as text. If the tool returns an error message, the assistant can explain the problem to you and suggest fixes. If the tool raises an unhandled exception, the connection may drop with no useful feedback.
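If you want richer messages than a generic status-code dump, you can map the common failure codes to actionable hints before returning them. A sketch (describe_api_error is a hypothetical helper; the 429 case assumes the API rate-limits, which the API reference should confirm):

```python
def describe_api_error(status_code: int, body: str = "") -> str:
    """Turn an HTTP status code into a message the assistant can act on."""
    hints = {
        401: "Invalid API key: double-check TOHUMAN_API_KEY on your dashboard.",
        422: "Malformed request body: check that 'text' is a JSON string.",
        429: "Rate limit reached: wait a moment before retrying.",
    }
    hint = hints.get(status_code, "Check your API key and request format.")
    detail = f" Details: {body[:200]}" if body else ""
    return f"ToHuman API returned {status_code}. {hint}{detail}"

# In the except block:
# except httpx.HTTPStatusError as e:
#     return describe_api_error(e.response.status_code, e.response.text)
```

The assistant can relay these hints verbatim, so the fix is visible right in the conversation.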
Using with Other MCP Clients
Claude Desktop is one client, but the same server works with any MCP-compatible tool. Cursor, Windsurf, Cline, and the OpenAI Agents SDK all support MCP. The server code is identical — you only change the client-side configuration file.
For Cursor, add the server to your .cursor/mcp.json file in your project root:
.cursor/mcp.json
{
  "mcpServers": {
    "tohuman": {
      "command": "uv",
      "args": [
        "--directory", "/absolute/path/to/tohuman-mcp",
        "run", "server.py"
      ],
      "env": {
        "TOHUMAN_API_KEY": "your-api-key-here"
      }
    }
  }
}
The format is nearly identical to Claude Desktop's config. Each client has its own config file location, but the server definition structure is standardized across the MCP ecosystem.
How This Compares to Other Integrations
MCP is the most natural fit if you're already working inside an AI assistant and want humanization as part of the conversation flow. But it's not the only way to integrate ToHuman into your stack:
- Direct API calls — the API guide covers Python, Node.js, and curl with batch processing patterns.
- LangChain / CrewAI — if you're building autonomous agent pipelines, the LangChain tutorial and CrewAI tutorial show how to wrap ToHuman as a tool in those frameworks.
- No-code platforms — the n8n, Make.com, and Zapier tutorials cover visual workflow builders.
MCP's advantage is zero friction. You don't write a script, build a workflow, or switch tools. You type "humanize this" in your AI assistant and it happens.
What You've Built
You have a working MCP server that exposes a humanize_text tool backed by the ToHuman API. Any MCP-compatible client can discover and call it. The server handles async HTTP, error reporting, and optional confidence thresholds — all in about 50 lines of Python.
From here, you can add more tools (batch humanization, text analysis, style-specific rewrites), package the server for distribution via mcp install, or integrate it into team workflows where multiple people use the same server. The ToHuman API docs cover all available parameters and response fields, and signing up gets you a free API key in 30 seconds.
Frequently Asked Questions
What is the Model Context Protocol (MCP)?
MCP is an open standard originally developed by Anthropic that lets AI applications connect to external tools and data sources through a unified protocol. Think of it like a USB port for AI — any MCP-compatible client (Claude Desktop, Cursor, Windsurf, OpenAI Agents SDK) can discover and call tools exposed by any MCP server, without custom integration code for each client.
Which AI clients support MCP servers?
As of 2026, MCP is supported by Claude Desktop, Claude Code, Cursor, Windsurf, Cline, Continue, the OpenAI Agents SDK, and many other developer tools. Any client that implements the MCP specification can connect to your humanization server without code changes.
Do I need uv to build an MCP server?
No, but it's recommended. uv is a fast Python package manager that the official MCP documentation uses. You can also use pip or poetry — just install mcp[cli] and httpx. The server code works the same regardless of package manager.
Can I use this MCP server with Cursor or other editors?
Yes. Cursor, Windsurf, and other MCP-compatible editors have their own configuration files for registering MCP servers. The server code is identical — you only change the client-side config. Check your editor's MCP documentation for the specific config file location and format.
Published April 14, 2026 by the ToHuman team.