A Claude Code plugin that detects AI-generated writing patterns and builds voice profiles through adaptive interviews and computational stylistics.
Experimental. This project is a research prototype. The scoring pipeline, dimension mapping, and NLP analysis have not been validated against external benchmarks or peer-reviewed psychometric standards. The voice profiles it produces are plausible but unproven. Treat the output as a starting point for editorial guidance, not as a validated instrument. The question bank, scoring weights, and dimension definitions will change as the system matures. Use it, break it, report what does not work.
- Multi-tier pattern detection: Character, language, structural, and voice analysis
- Automated character fixes: Auto-fix em dashes, smart quotes, emojis
- Proactive review: Agent triggers after content creation/editing
- Interactive setup: Configuration wizard for project-specific settings
- Configurable: Customize file types, directories, and detection tiers
```bash
claude plugin install zircote/human-voice
```

Clone and add to Claude Code:

```bash
git clone https://github.com/zircote/human-voice.git
claude --plugin-dir /path/to/human-voice
```

Or copy to your project's .claude-plugin/ directory.
- Claude Code CLI
- Node.js 18+ (for validation scripts)
| Component | Name | Purpose |
|---|---|---|
| Skill | human-voice | Core detection patterns and writing guidelines |
| Command | /human-voice:voice-setup | Interactive configuration wizard |
| Command | /human-voice:voice-review [path] | Analyze content for AI patterns |
| Command | /human-voice:voice-fix [path] | Auto-fix character-level issues |
| Agent | voice-reviewer | Proactive content review after edits |
```bash
# Set up configuration for your project
/human-voice:voice-setup

# Review content for AI patterns
/human-voice:voice-review docs

# Auto-fix character issues
/human-voice:voice-fix docs --dry-run
```

The skill loads automatically when you say:
- "review for AI patterns"
- "make this sound human"
- "check for AI writing"
- "ai slop detection"
- "fix AI voice"
- "improve writing voice"
Set up configuration:

```bash
/human-voice:voice-setup
```

Detects your project structure and content directories, then creates config.json with your preferences.
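For illustration, a generated config might look like the snippet below. The keys shown are hypothetical, based on the options this README says are configurable (file types, directories, detection tiers), not the plugin's documented schema:

```json
{
  "contentDirs": ["docs", "content/blog"],
  "fileTypes": [".md", ".mdx"],
  "tiers": {
    "character": true,
    "language": true,
    "structural": true,
    "voice": true
  }
}
```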
Review content for AI patterns:

```bash
/human-voice:voice-review docs         # review a specific directory
/human-voice:voice-review content/blog # review a specific path
/human-voice:voice-review              # auto-detects content directories
```
Auto-fix character issues:

```bash
/human-voice:voice-fix docs           # apply fixes to a directory
/human-voice:voice-fix --dry-run docs # preview changes first
/human-voice:voice-fix                # auto-detect and fix
```
The voice-reviewer agent triggers:
- Proactively: After Write/Edit operations on .md/.mdx files
- On request: When you ask to review content voice
| Character | Unicode | Replacement |
|---|---|---|
| Em dash (—) | U+2014 | Period, comma, colon |
| En dash (–) | U+2013 | Hyphen |
| Smart quotes (“ ” ‘ ’) | U+201C/D, U+2018/9 | Straight quotes |
| Ellipsis (…) | U+2026 | Three periods |
| Emojis | Various | Remove |
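The table above can be applied with a small script. This Python version is an illustrative sketch only (the plugin actually ships fix-character-restrictions.js): the em dash case is simplified to a comma, since choosing among period, comma, and colon needs context, and the emoji ranges are a rough approximation.

```python
import re

# Character-level substitutions from the table above.
# Illustrative sketch; not the plugin's actual implementation.
REPLACEMENTS = [
    ("\u2014", ", "),   # em dash -> comma (simplified; the real fix picks punctuation by context)
    ("\u2013", "-"),    # en dash -> hyphen
    ("\u201c", '"'),    # left smart double quote -> straight quote
    ("\u201d", '"'),    # right smart double quote -> straight quote
    ("\u2018", "'"),    # left smart single quote -> straight quote
    ("\u2019", "'"),    # right smart single quote -> straight quote
    ("\u2026", "..."),  # ellipsis -> three periods
]

# Rough emoji coverage: common emoji blocks plus misc symbols/dingbats.
EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def fix_characters(text: str) -> str:
    """Apply all character-level replacements, then strip emojis."""
    for old, new in REPLACEMENTS:
        text = text.replace(old, new)
    return EMOJI.sub("", text)
```

A dry-run mode, as in the voice-fix command, would diff the input against this output instead of writing it back.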
- Buzzwords: delve, realm, pivotal, harness, revolutionize, seamlessly
- Hedging: "it's worth noting", "generally speaking", "arguably"
- Filler: "in order to", "due to the fact", "at this point in time"
- List addiction (everything as bullets)
- Rule of three overuse
- "From X to Y" constructions
- Monotonous sentence structure
- Passive voice overuse
- Generic analogies
- Meta-commentary ("In this article...")
- Perfect grammar with shallow insights
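A crude flagger for the word-level tiers above fits in a few lines. The phrase lists here are copied from this README; the plugin's real detection covers far more than this toy (structural and voice patterns are not word matching at all):

```python
import re

# Toy detector for the buzzword and filler patterns listed above.
# Phrase lists come from this README; real detection is much broader.
BUZZWORDS = {"delve", "realm", "pivotal", "harness", "revolutionize", "seamlessly"}
FILLER = ["in order to", "due to the fact", "at this point in time"]

def flag_patterns(text: str) -> dict:
    """Return buzzword tokens and filler phrases found in the text."""
    lowered = text.lower()
    words = re.findall(r"[a-z']+", lowered)
    return {
        "buzzwords": [w for w in words if w in BUZZWORDS],
        "filler": [p for p in FILLER if p in lowered],
    }
```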
Run /human-voice:voice-setup for interactive configuration, or edit config.json directly.
Configuration is stored at $CLAUDE_PLUGIN_DATA/config.json (defaults to ~/.human-voice/config.json in standalone mode). Use python -m lib.config show to view the effective config, or python -m lib.config reset to write defaults.
When Subcog MCP server is available, the plugin can leverage persistent memory:
- Recall project-specific voice decisions before analysis
- Capture findings and patterns for future sessions
- Track configuration preferences across sessions
All features work without Subcog. Memory integration is additive and never blocks core functionality.
```
human-voice/
├── .claude-plugin/
│   └── plugin.json
├── agents/
│   └── voice-reviewer.md
├── commands/
│   ├── voice-fix.md
│   ├── voice-review.md
│   └── voice-setup.md
├── skills/
│   └── human-voice/
│       ├── SKILL.md
│       ├── scripts/
│       │   ├── fix-character-restrictions.js
│       │   └── validate-character-restrictions.js
│       ├── references/
│       │   ├── character-patterns.md
│       │   ├── language-patterns.md
│       │   ├── structural-patterns.md
│       │   └── voice-patterns.md
│       └── examples/
│           └── before-after.md
├── templates/
│   └── observer-protocol.md
├── LICENSE
├── CHANGELOG.md
└── README.md
```
Voice is an experimental voice elicitation system that captures a writer's voice through a 67-question adaptive interview, computational NLP analysis of writing samples, and automated profile synthesis. It produces two independent profiles per writer: a self-reported profile (what the writer believes about their voice) and a computationally observed profile (what their writing exhibits). A calibration layer identifies where these profiles agree and where they diverge.
Status: The scoring pipeline produces numeric dimension scores, but these scores have not been validated against external psychometric instruments. The NLP analysis uses standard stylometric measures (type-token ratio, Flesch-Kincaid, hedge density, etc.), but the mapping from NLP metrics to voice dimensions is hand-authored and unvalidated. The question bank is based on published findings in voice elicitation research, but the specific item-to-dimension mappings are untested for reliability (Cronbach's alpha) across a population. This is a functional prototype, not a finished measurement tool.
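Two of the stylometric measures named above are simple enough to sketch directly. The hedge word list and the tokenizer here are illustrative assumptions, not the pipeline's actual choices:

```python
import re

# Illustrative hedge lexicon; the real pipeline's list is not documented here.
HEDGES = {"arguably", "generally", "somewhat", "perhaps", "possibly"}

def _tokens(text: str) -> list:
    """Naive word tokenizer (assumption; real pipelines use proper NLP tokenization)."""
    return re.findall(r"[A-Za-z']+", text.lower())

def type_token_ratio(text: str) -> float:
    """Distinct words divided by total words; a basic lexical-diversity measure."""
    tokens = _tokens(text)
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def hedge_density(text: str) -> float:
    """Fraction of tokens that are hedge words."""
    tokens = _tokens(text)
    return sum(1 for t in tokens if t in HEDGES) / len(tokens) if tokens else 0.0
```

Note that raw type-token ratio is length-sensitive, which is one reason a hand-authored mapping from such metrics to voice dimensions needs validation.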
| Tool | Purpose |
|---|---|
| voice-session | Session lifecycle: create, load, list, pause, resume |
| voice-scoring | Score a completed session and produce dimension profiles |
| voice-nlp | Run the stylometric NLP analysis pipeline on writing samples |
| voice-branching | Evaluate interview routing and module sequencing |
| voice-sequencer | Determine the next question based on session state |
| voice-quality | Detect satisficing and response quality issues |
See the Getting Started tutorial for a complete walkthrough of running your first voice elicitation session.
See the CLI Reference for detailed documentation of all commands, options, and output formats.
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
MIT
