Playbook·April 28, 2026·8 min read

The humanizer pass we run on everything our agents write

Every blog post, social update, and outbound email our agents produce passes through one shared humanizer skill before publish. Here is what is in it, why we built it, and what happens when we skip it.

Quang Bui
Founder, digicore101
Editorial illustration: a sheet of typewritten text, with several lines crossed out in orange-coral and a few rewritten in clean charcoal below.
#prompting #content #writing #agents #playbook
▶ TL;DR
  • We run three content agents in production: a blog manager, a social manager, and a product-update writer. They all share one humanizer skill that runs as the last step before anything ships.
  • It targets two dozen specific tells: em-dash overuse, "stands as a testament", three-part lists by reflex, sycophantic openers, generic conclusions, vague attributions like "experts say".
  • Removing tells is only half the job. Sterile, voiceless writing is just as obvious as slop. The skill also forces concrete anchors per paragraph and lets first-person stance back into blog posts where it belongs.
  • When we forget the pass and ship anyway, the engagement drop is obvious within a day. Comments dry up, replies stop, the post sinks. We do not have clean A/B numbers, but the qualitative gap is the kind you stop second-guessing.
  • We do not ship anything without it. Skipping the pass is a checklist item we treat the same as skipping the build.
▶ Q&A

Frequently asked.

Q.01

How do you humanize AI-generated text?

Run the draft through a separate editing step (not part of the original prompt) that targets specific patterns: em-dash overuse, vague attributions like "experts say", inflated phrases like "stands as a testament", reflexive three-part lists, and sycophantic openers. Then add voice back: opinions, varied rhythm, acknowledged complexity. Stripping tells without adding voice produces sterile text that still reads as machine-edited.
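That editing step can be sketched as a simple lint pass. A minimal illustration (the phrase list and function name here are made up for this example, not our production skill, which carries far more patterns):

```python
import re

# Illustrative tell list; a real skill would carry many more patterns.
TELLS = [
    r"stands as a testament",
    r"experts say",
    r"pivotal moment",
    r"it's important to note",
]

def flag_tells(draft: str) -> list[str]:
    """Return the tell patterns found in a draft, for rewriting."""
    found = []
    for pattern in TELLS:
        if re.search(pattern, draft, flags=re.IGNORECASE):
            found.append(pattern)
    return found

flags = flag_tells(
    "This launch stands as a testament to what experts say is possible."
)
# Two tells match, so the draft goes back for a rewrite pass.
```

Flagging is the easy half; the rewrite that follows (adding opinions, rhythm, concrete anchors) is judgment work, which is why the pass is a separate step rather than a find-and-replace.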

Q.02

What are the most common signs of AI-written content?

Inflated significance ("pivotal moment", "stands as a testament"), promotional vocabulary ("vibrant", "seamless", "unlock"), vague source-laundering ("experts say", "research suggests"), and cadence tics like every paragraph ending with the same punctuation move. None of these are decisive on their own. The signal is regularity — the same move repeated until it reads as a fingerprint.
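The regularity point can be made mechanical. A crude sketch (not a detector, just a way to surface a repeated cadence; the helper name is ours for this example):

```python
from collections import Counter

def ending_regularity(paragraphs: list[str]) -> float:
    """Share of paragraphs ending in the most common final character.

    A high value suggests a repeated cadence. It proves nothing on
    its own; the signal is regularity, not any single ending.
    """
    endings = Counter(p.rstrip()[-1] for p in paragraphs if p.strip())
    top_count = endings.most_common(1)[0][1]
    return top_count / sum(endings.values())

paras = ["Short punch.", "Another punch.", "And again.", "One question?"]
ending_regularity(paras)  # 0.75: three of four paragraphs end the same way
```

The same shape of check works for other cadence tics: opening words, sentence lengths, bold-face runs. What you are measuring is always the repeat rate of a move, not the move itself.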

Q.03

Can AI detectors detect humanized text?

Sometimes, but unreliably. Detectors are probabilistic classifiers with real false-positive rates, especially on short text and formal prose. We do not optimize for them. We optimize for the human reader who has seen too much LLM output and clicks away the second something feels off — that is a harder bar and the gains transfer.

Q.04

Should you ban em-dashes in AI-written content?

Depends on the context. In short outbound email, yes — em-dashes were the strongest "this looks AI-written" signal in our human-review tests. In long-form blog or knowledge posts, em-dashes are fine when they earn their keep. The problem is repetition, not presence. Three em-dashes per paragraph for ten paragraphs in a row is the tell.
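The "repetition, not presence" rule is easy to check per paragraph. A tiny sketch (the helper is hypothetical, for illustration; em-dashes are written as `\u2014` escapes):

```python
def em_dash_density(paragraphs: list[str]) -> list[int]:
    """Em-dash count per paragraph; repetition across paragraphs is the tell."""
    return [p.count("\u2014") for p in paragraphs]

counts = em_dash_density([
    "A claim \u2014 with an aside \u2014 and another \u2014 tacked on.",
    "A second paragraph \u2014 same move \u2014 again \u2014 and again.",
])
# counts == [3, 3]: the same count in every paragraph is the fingerprint
# the pass rewrites; one dash in one paragraph would be left alone.
```

In practice we look at the distribution, not a hard threshold: a flat run of identical counts reads as a fingerprint, a single spike does not.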

Q.05

How often should you update a humanizer skill?

We touch ours about every six to eight weeks. New AI cues emerge as readers learn to spot the previous batch (bold-face overuse and curly quotes are recent additions). Old cues get less reliable as models adjust. Treat the skill like a Playbook entry: dated by design, useful for the structure, assume the specifics drift.