Three agents on our platform — blog manager, social manager, product-update writer — were all publishing fine-looking work that nobody read past paragraph four. Different prompts, different models, different tones. Same flavor of forgettable. We added one step at the end of every pipeline. Engagement on the blog manager's posts roughly doubled within two weeks.
That step is the humanizer pass — a shared skill that catches AI tells and forces concrete anchors before any agent-written content ships. This post covers what's in it, why each piece exists, and what happens on the days we forget to run it.
## Why one shared skill instead of better prompts
The first thing we tried was making each agent's prompt more careful. More banned phrases. More tone instructions. More "write like a human" framing. None of it worked very well.
The problem is that an upstream prompt is competing with everything else the model is trying to do at the same time. The blog manager is also trying to hit a keyword, weave in three internal links, structure a 1,200-word piece, match an FAQ to PAA wording, and include a table. By the time it gets to "tone", tone is the thing that loses.
A separate humanizer step has one job. The draft is already done. The structural decisions are settled. All it has to do is read the prose like a first-time reader and rewrite the parts that sound machine-built.
One skill, used by every content agent. Same vocabulary across the blog, the LinkedIn feed, the changelog, the cold-email reply handler. It also gives us one place to update when the cues shift.
## The patterns the skill targets
Around two dozen patterns, grouped into six buckets. Most of them are not new. They get rediscovered every few months on Hacker News when someone publishes a Wikipedia-style list. Our version is a synthesis from public sources plus the bans we built up internally over the last year.
| Bucket | Examples | Why it reads as AI |
|---|---|---|
| Inflated significance | "stands as a testament", "pivotal moment", "reflects broader shifts" | Treats every fact like it belongs in a wrap-up paragraph at a keynote. |
| Vague attribution | "experts say", "observers note", "research suggests" | Source laundering. The model could not name a specific source so it invented authority. |
| Promotional vocabulary | "vibrant", "seamless", "robust", "unlock", "leverage" | Reads like a press release written for a mid-tier SaaS launch. |
| Cadence tics | Em-dash on every paragraph, three-part lists by reflex, "not just X, but Y" | Same punctuation move repeated until it reads as a fingerprint. |
| Sycophantic openers | "Great question!", "What a fascinating topic!" | Chatbot artifact. No human writes this on a blog post. |
| Generic conclusions | "Overall the outlook is positive", "the future looks promising" | Says nothing. A real ending names a next thing or admits an open question. |
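The buckets map naturally onto a mechanical first pass that runs before any model-driven rewrite. Here is a minimal sketch in Python with an abbreviated, illustrative ban list — the names and regexes are assumptions drawn from the table above, not our actual skill file, which tracks around two dozen patterns:

```python
import re

# Illustrative subset of the ban list; phrases come from the table above.
# The real skill tracks ~two dozen patterns across six buckets.
BAN_LIST = {
    "inflated significance": [r"stands as a testament", r"pivotal moment"],
    "vague attribution": [r"experts say", r"observers note"],
    "promotional vocabulary": [r"\bseamless\b", r"\bleverage\b", r"\bunlock\b"],
    "sycophantic openers": [r"^\s*great question!", r"^\s*what a fascinating"],
}

def scan(draft: str) -> list[tuple[str, str]]:
    """Return (bucket, offending text) pairs for a rewrite pass to target."""
    hits = []
    for bucket, patterns in BAN_LIST.items():
        for pat in patterns:
            for m in re.finditer(pat, draft, re.IGNORECASE | re.MULTILINE):
                hits.append((bucket, m.group(0)))
    return hits
```

The scan only flags; the rewrite itself still goes through a model, because "replace banned word with synonym" produces its own tell.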
The em-dash item has its own story. We banned it from outbound email back in March (covered in the SDR prompt post) and reply rates moved. In long-form Blog and Knowledge posts the rule is softer — em-dashes are fine when they earn their keep. The signal is repetition, not presence. Three em-dashes per paragraph for ten paragraphs in a row is the tell.
## The half people skip: putting voice back in
If the humanizer only stripped tells, it would produce text that is technically clean and emotionally dead. Sterile writing reads as obviously machine-edited even when the AI cues are gone.
So the second half of the skill is about adding voice back. For our Blog posts (which have a byline) this means letting first person in, letting opinions appear, letting some mess survive the edit.
Three concrete moves the skill calls out:
- Have an opinion — do not just report the change, react to it. "The numbers look great on paper. But the engagement is up because users have to click around more, not because they want to."
- Vary the rhythm — short punchy sentences. Then longer ones that take their time and let an idea breathe. The metronome cadence of identical sentence length is its own AI cue.
- Acknowledge complexity — "It works, but it also feels like a workaround more than a real solution" reads as a person thinking. "Overall the result was successful" reads as a model.
Knowledge posts are different. They are anonymous reference content. The skill keeps them neutral but still strips the cadence tics — neutrality is not the same as voicelessness.
## Where it lives in our pipelines
Every content agent on our platform calls the same humanizer skill as a final step. The exact integration depends on the agent:
- Blog manager — runs after the draft is written, after internal links are placed, before the post is pushed to Sanity. This catches AI-isms before they hit the build.
- Social manager — runs on every LinkedIn and X draft before the human-review queue. Reduces the rate at which I have to send things back for rewrite from "most of them" to "maybe one in five".
- Product-update writer — runs on changelog entries and release notes. The before/after on these is the most dramatic. Changelogs are where AI prose is most noticeable because the format invites bullet-list thinking.
- Outbound reply handler — runs on the second-touch and reply emails (the cold first touch has its own bans built into the prompt — see the SDR post). Outbound replies are higher stakes and the humanizer catches stuff the prompt-level bans miss.
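One way to picture the wiring: each agent's pipeline is an ordered list of steps, and the shared humanizer is appended last. This is a hypothetical sketch of that shape, not our production code; `humanize` here is a stand-in for the real skill call:

```python
from typing import Callable

# A pipeline step takes a draft string and returns a revised draft string.
Step = Callable[[str], str]

def run_pipeline(draft: str, steps: list[Step]) -> str:
    """Run an agent's steps in order; the humanizer is always appended last."""
    for step in steps:
        draft = step(draft)
    return draft

def humanize(draft: str) -> str:
    # Stand-in for the shared skill call; the real step sends the draft
    # plus the preservation list to the model.
    return draft.replace("leverage", "use")

# Each agent composes its own steps but shares the same final one.
blog_pipeline: list[Step] = [str.strip, humanize]
```

The point of the shape is that no agent can ship without passing through the same final step, which is what keeps the vocabulary consistent across channels.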
## What we preserve through the pass
The humanizer is a writing edit, not a structural rewrite. There are things it must not touch, and we tell it that explicitly each time:
- FAQ question wording — for SEO posts these are matched to People-Also-Ask exactly, and rewording them costs the AEO win.
- Internal links and their slugs — the hrefs are correct and we do not want them paraphrased away.
- Table cells, especially highlighted "winner" cells. Tables are visual; the humanizer is for prose.
- Specific numbers, citations, product names, and dates. Anything verifiable has to survive intact.
- Code blocks and command examples in technical posts. Copy-paste reliability beats prose elegance.
When the humanizer is sloppy with these, the post breaks in builds or loses ranking. Both have happened to us. The preservation list is now part of every invocation.
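A cheap way to enforce a preservation list is to mask protected spans with placeholders before the rewrite and restore them afterward, so the model never sees — and cannot paraphrase — the protected text. A rough sketch, assuming markdown source; the regexes are illustrative and would need hardening for production:

```python
import re

# Illustrative span patterns, tried in order: code blocks, then markdown
# links and their slugs, then numbers/percentages.
PROTECTED = re.compile(
    r"```.*?```"               # fenced code blocks
    r"|\[[^\]]*\]\([^)]*\)"    # markdown links and their slugs
    r"|\d[\d,]*(?:\.\d+)?%?",  # numbers, dates-as-digits, percentages
    re.DOTALL,
)

def mask(draft):
    """Swap protected spans for placeholders the rewrite pass cannot touch."""
    spans = []
    def stash(m):
        spans.append(m.group(0))
        return f"\x00{len(spans) - 1}\x00"
    return PROTECTED.sub(stash, draft), spans

def unmask(draft, spans):
    """Restore the original spans after the rewrite pass."""
    return re.sub(r"\x00(\d+)\x00", lambda m: spans[int(m.group(1))], draft)
```

Table cells and exact FAQ wording need structure-aware handling rather than regexes, but the mask-rewrite-unmask shape is the same.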
## What we have seen break
Three failure modes worth flagging:
- Over-correction toward fake-human moves — the model occasionally adds typos, slang, or contrived asides to "sound human". This is worse than the original AI cadence. We have an explicit rule against it now: do not invent typos, do not break grammar on purpose, do not inject staged messiness.
- Voice creep on Knowledge posts — the skill knows Blog posts can have voice and Knowledge posts cannot, but it sometimes leaks first-person stance into a Knowledge post anyway. We catch this in the final read-through. If you see "I" in a Knowledge post, that is a bug.
- Losing the lede — when the skill rewrites the opening paragraph for cadence, it occasionally buries the answer under fresh setup. We re-check the lede manually on every post. The first paragraph of a piece is too important to delegate.
## How to build your own
The skill is roughly 500 lines of markdown wrapped around one loop: scan the draft for these patterns, rewrite the offenders, then do a final pass that asks "what makes the below so obviously AI-generated?" and revise once more. That structure is portable.
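That scan → rewrite → final-read loop fits in a few lines. In this sketch the model call is injected as a function so the control flow is visible; `rewrite_fn`, `scan_for_tells`, and the prompt strings are all placeholders, not the skill's actual wording:

```python
def humanizer_pass(draft, rewrite_fn, scan_for_tells):
    """Step 1: find banned patterns. Step 2: rewrite the offenders.
    Step 3: one more read as a skeptical first-time reader."""
    offenders = scan_for_tells(draft)
    if offenders:
        draft = rewrite_fn(draft, f"Rewrite these AI tells: {offenders}")
    return rewrite_fn(
        draft,
        "What makes the below so obviously AI-generated? Revise once more.",
    )
```

Injecting the model call also makes the loop testable with a stub, which matters when the skill sits at the end of every pipeline.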
Two starting moves if you want to ship something similar:
- Take 20 of your last published posts and read them as a stranger would. Mark every sentence that reads as obviously machine-generated. Cluster the marks. Whatever shows up three or more times is a candidate for your bans list.
- After your humanizer runs, do a control test: publish a humanized version and an un-humanized version of similar posts to similar audiences. Watch the second-week engagement difference. If there is no gap, your humanizer is not doing enough. If the gap is big, the skill is earning its place in the pipeline.
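For the control test, a crude two-proportion z-score is enough to separate noise from a real gap at typical blog traffic volumes. A sketch, assuming click-through as the engagement metric; the threshold of 2 is a rule of thumb, not a substitute for a properly designed experiment:

```python
from math import sqrt

def engagement_gap(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-score: humanized variant (a) vs control (b).
    |z| above ~2 suggests the gap is more than noise."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se
```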
If you want help wiring this kind of pass into your own content stack — agents, prompts, eval, queue — that is the kind of build we do. Book an automation audit or read the AI SDR product page for the closest worked example.
