Humanizer for German: Why AI Text Detection Is Language-Specific
AI-generated text has telltale patterns. But those patterns aren't universal.
When you read a text that feels wrong—too smooth, perfectly structured, every paragraph the same length—you can often sense that an LLM wrote it. But what you're sensing is mostly English. German AI text sounds different. It has its own tells.
The problem: most AI detection tools and guides are designed for English. They catch English patterns beautifully. They miss German ones entirely.
I built the Humanizer (Deutsch) to fix that. It's a free, open-source Claude Code skill that detects 34 German-specific AI writing patterns, ranks them by severity, and walks you through a structured 2-pass cleanup.
Try it
github.com/marmbiz/humanizer-de — MIT licensed, free, works as a Claude Code skill.
git clone https://github.com/marmbiz/humanizer-de ~/.claude/skills/humanizer-de
Then in Claude Code: /humanizer — done.
What the Humanizer Does
You call it with /humanizer or just say "Humanize this text for me." It gives you:
- A draft rewrite with the obvious AI patterns removed
- A quick anti-AI audit flagging remaining tells
- A final version after the second pass
Three modes adjust the correction to your context:
| Mode | When | What happens |
|---|---|---|
| Casual | Blog posts, social media, newsletters | Adds personality and rhythm |
| Neutral | Business reports, product docs, emails | Removes AI tells, keeps tone neutral |
| Formal | Academic papers, legal texts, technical docs | Only removes tells, preserves structure |
Default is Neutral when the context is unclear.
Severity ranking (HIGH / MEDIUM / LOW) for each pattern lets you focus on what matters most. HIGH patterns are almost always AI. LOW patterns only stand out when they cluster.
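The severity logic can be sketched in a few lines. This is a hypothetical illustration, not the skill's actual implementation: the skill itself is prompt-based, and the function name and the cluster threshold of three are assumptions.

```python
def flag_findings(findings, low_cluster_threshold=3):
    """Classify a text from a list of (pattern_name, severity) findings.

    Mirrors the severity logic described above: a single HIGH finding is
    enough to flag the text, while LOW findings only matter in clusters.
    """
    severities = [sev for _, sev in findings]
    if "HIGH" in severities:
        return "likely AI"   # HIGH patterns are almost always AI
    if severities.count("LOW") >= low_cluster_threshold:
        return "review"      # LOW patterns stand out only when they cluster
    return "ok"
```

A lone LOW finding (say, one stray em-dash) comes back "ok"; three LOW findings together come back "review".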
Why German AI Text Is Different
English and German LLM output has different tells. The same model that produces flawless English can betray itself immediately in German, through patterns that native English speakers don't notice.
Take these examples:
- Participle-I constructions like "gewährleistend" or "hervorhebend" (ensuring, highlighting). In English, "-ing" forms are natural everywhere. In German, this construction screams LLM.
- Overused transition phrases like "Darüber hinaus" (furthermore) appearing three times per paragraph. Native German writers vary their transitions. LLMs repeat the same mechanical connectors.
- Em-dashes everywhere — a punctuation habit from English that German doesn't share natively.
- Vague authorities like "Experten sagen" (experts say) with no sources attached.
- Symbolic overload like "steht als Zeugnis für" (stands as testimony to) — nobody writes like this.
- Promotional tone with "atemberaubend" (breathtaking) in contexts where it doesn't belong.
- Chatbot artifacts like "Stand Januar 2024" (as of January 2024) appearing in articles written months later.
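A few of these tells are literal enough to catch with plain pattern matching. The sketch below is purely illustrative: the skill itself relies on Claude's language understanding rather than regexes, and the phrase list is a tiny assumed subset of the 34 patterns.

```python
import re
from collections import Counter

# Illustrative subset only; the real skill is prompt-based, not regex-based.
TELLS = {
    "transition overuse": r"\bDarüber hinaus\b",
    "vague authority": r"\bExperten sagen\b",
    "symbolic overload": r"\bsteht als Zeugnis für\b",
    "chatbot artifact": r"\bStand (?:Januar|Februar|März|April|Mai|Juni|Juli"
                        r"|August|September|Oktober|November|Dezember) \d{4}\b",
}

def scan(text):
    """Count occurrences of each known tell and return the non-zero hits."""
    hits = Counter()
    for name, pattern in TELLS.items():
        hits[name] += len(re.findall(pattern, text))
    return {name: count for name, count in hits.items() if count}
```

Running this over the "before" example that follows would flag its "steht als Zeugnis für" as symbolic overload.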
Before (LLM):
Die atemberaubende Stadt mit ihrem reichen kulturellen Erbe steht als Zeugnis für die künstlerische Brillanz vergangener Generationen.
"The breathtaking city with its rich cultural heritage stands as testimony to the artistic brilliance of past generations."
After (human):
Die Stadt hat eine lange Geschichte. Ihre Denkmäler zeigen die Handwerkskunst des Mittelalters.
"The city has a long history. Its monuments show medieval craftsmanship."
Less decoration, more substance.
34 Patterns in 6 Categories
The Humanizer detects patterns across six categories:
1. Language & Tone (12 patterns, mostly HIGH)
Symbolic overload, promotional language, editorial comments, mechanical conjunctions, section summaries, participle-I constructions, vague authorities, forced conclusions, negative parallelisms, tricolon overuse, false extensions, misplaced "Fazit" sections.
2. Style (4 patterns, MEDIUM/LOW)
Excessive bold text, false lists, emojis before headings, em-dash overuse (anglicism).
3. Communication (6 patterns, all HIGH)
Letter-style writing, collaborative chatbot phrases ("I hope this helps!"), knowledge cutoff references, prompt refusals, placeholder text, links to search queries.
4. Markup (6 patterns, all MEDIUM)
Markdown instead of wikitext, broken wikitext, dead links, fabricated DOIs/ISBNs, incorrect reference formats, wrong categories.
5. Miscellaneous (3 patterns, LOW/MEDIUM)
Abrupt cutoffs, style shifts mid-text, first-person edit summaries.
6. Rhetoric & Structure (3 patterns, NEW)
| Pattern | Severity | Example |
|---|---|---|
| Persuasive authority phrases | MEDIUM | "Im Kern" (at its core), "In Wirklichkeit" (in reality) |
| Signposting | MEDIUM | "Schauen wir uns an" (let's look at), "Here's what you need to know" |
| Fragmented headings | LOW | Generic one-liner immediately after a heading |
These three patterns were adapted from upstream PR #39. They're common in both English and German AI text, but the German versions have distinct phrasing.
Why I Created the German Humanizer
I discovered Siqi Chen's original Humanizer and immediately saw the gap: it worked brilliantly for English, but German AI text has different patterns. Testing it on German was like running an English spell-checker on German: not wrong, just missing the point.
The German Wikipedia maintains its own guide to AI-generated content indicators; the English Wikipedia has a comparable resource. Siqi's original draws from the English guide, while the German guide documents a distinct set of patterns. I used both as the foundation.
The philosophy is the same as Siqi's tool — analysis, not auto-rewriting. But the patterns are German-specific.
Working with English content? Use Siqi Chen's original Humanizer. It's excellent for English text.
Working with German content? That's what the German adaptation is for.
If your goal is to disguise AI use, this is the wrong tool. The point is better writing, not camouflage.
Who Needs This
- German content creators using AI who want their writing to sound authentic
- Marketing teams reviewing copy for AI artifacts
- Wikipedia editors evaluating German submissions
- Bilingual teams where English editors need to catch German AI patterns
- Anyone learning how to recognize German AI-generated text
Credits and Open Source
The tool is MIT licensed and open source. It builds on:
- Pattern research: German Wikipedia's AI detection guide + English Wikipedia's AI writing signs
- Original concept & English Humanizer: Siqi Chen (blader)
- German adaptation: github.com/marmbiz/humanizer-de
I built the German version. Siqi built the original. Both Wikipedias documented the patterns.
Changelog
v2.3.0-de.1 (March 2026)
- 3 new patterns (32–34): Persuasive authority phrases, Signposting, Fragmented headings (adapted from upstream PR #39)
- Severity ranking (HIGH / MEDIUM / LOW) for all 34 patterns (inspired by upstream PR #51)
- Mode system: Casual / Neutral / Formal
- Quick reference table for fast scanning
- "Don't touch" rules and guardrails
v2.2.0-de.2 (February 2026)
- 2-pass workflow instead of one-shot cleanup: Draft -> Quick audit -> Final
- More emphasis on voice: rhythm, perspective, natural variation
- Cleaner review format: three separated output blocks
This post was written with AI assistance. But reviewed with language-specific awareness. Because the patterns that reveal AI aren't just in what you write — they're in which language you write it in.