From 48232392a550f9cfd80fa5d150350d7c9a693e1d Mon Sep 17 00:00:00 2001 From: Original Gary <276612211+OpenGaryBot@users.noreply.github.com> Date: Sat, 25 Apr 2026 18:17:21 +1000 Subject: [PATCH] chore(rules): sync to canonical rule set + OP-fork desloppify install --- .claude/rules/accessibility.md | 42 -- .claude/rules/context-repo.md | 111 ++++ .claude/rules/cost-optimization.md | 8 +- .claude/rules/desloppify.md | 51 +- .claude/rules/emotional-safety.md | 52 -- .claude/rules/external-contribution-safety.md | 80 +++ .claude/rules/geo-seo.md | 616 ------------------ .claude/rules/git-identity.md | 56 ++ .claude/rules/parallelization.md | 106 +++ .claude/rules/pipeline-nevers.md | 43 ++ .claude/rules/privacy.md | 42 -- .claude/rules/research-methodology.md | 63 ++ .claude/rules/security.md | 51 -- .claude/rules/seven-concerns.md | 39 ++ .claude/rules/testing.md | 46 -- .claude/rules/tooling-reference.md | 100 +++ .claude/rules/user-profile.md | 17 + .claude/rules/voice.md | 8 + .claude/skills/advocacy-code-review/SKILL.md | 56 -- .../skills/advocacy-testing-strategy/SKILL.md | 60 -- .claude/skills/geo-seo-audit/SKILL.md | 339 ---------- .claude/skills/git-workflow/SKILL.md | 45 -- .../skills/plan-first-development/SKILL.md | 52 -- .../skills/requirements-interview/SKILL.md | 66 -- .claude/skills/security-audit/SKILL.md | 63 -- .gitignore | 4 + CLAUDE.md | 14 +- 27 files changed, 688 insertions(+), 1542 deletions(-) delete mode 100644 .claude/rules/accessibility.md create mode 100644 .claude/rules/context-repo.md delete mode 100644 .claude/rules/emotional-safety.md create mode 100644 .claude/rules/external-contribution-safety.md delete mode 100644 .claude/rules/geo-seo.md create mode 100644 .claude/rules/git-identity.md create mode 100644 .claude/rules/parallelization.md create mode 100644 .claude/rules/pipeline-nevers.md delete mode 100644 .claude/rules/privacy.md create mode 100644 .claude/rules/research-methodology.md delete mode 100644 .claude/rules/security.md 
create mode 100644 .claude/rules/seven-concerns.md delete mode 100644 .claude/rules/testing.md create mode 100644 .claude/rules/tooling-reference.md create mode 100644 .claude/rules/user-profile.md create mode 100644 .claude/rules/voice.md delete mode 100644 .claude/skills/advocacy-code-review/SKILL.md delete mode 100644 .claude/skills/advocacy-testing-strategy/SKILL.md delete mode 100644 .claude/skills/geo-seo-audit/SKILL.md delete mode 100644 .claude/skills/git-workflow/SKILL.md delete mode 100644 .claude/skills/plan-first-development/SKILL.md delete mode 100644 .claude/skills/requirements-interview/SKILL.md delete mode 100644 .claude/skills/security-audit/SKILL.md diff --git a/.claude/rules/accessibility.md b/.claude/rules/accessibility.md deleted file mode 100644 index 0c8fdf978..000000000 --- a/.claude/rules/accessibility.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -paths: - - "**/ui/**" - - "**/frontend/**" - - "**/i18n/**" - - "**/l10n/**" ---- -# Accessibility Rules for Animal Advocacy Projects - -Advocacy networks span borders, languages, economic conditions, and infrastructure environments. An activist coordinating a rescue in a rural area with intermittent connectivity has fundamentally different needs than a campaign organizer at a well-resourced urban nonprofit. Accessibility in advocacy software is not about compliance with standards — it is about ensuring the movement's tools work for everyone the movement serves. - -## Internationalization from Day One - -Design every user-facing component with internationalization from the start — never retrofit it. Advocacy networks operate across linguistic boundaries: a coalition might include Spanish-speaking organizers in the Americas, Mandarin-speaking activists in Asia, and English-speaking legal teams in Europe. Externalize all user-facing strings from the beginning. Support right-to-left text layouts. Handle pluralization rules that differ across languages. 
Date, time, currency, and number formatting must respect locale. Adding i18n after the fact requires touching every component — the cost grows exponentially with codebase size. - -## Low-Bandwidth Optimization - -Many activists operate on mobile data in regions with expensive or throttled connections. Optimize aggressively: compress all assets, lazy-load non-critical content, minimize payload sizes, implement efficient data synchronization that transfers only deltas. Set performance budgets and test against them on throttled connections. A tool that requires broadband to function excludes the activists who need it most. - -## Offline-First Architecture - -Design for disconnected operation as the default, not as an exception. Activists in areas with unreliable connectivity — rural investigation sites, countries with internet shutdowns, disaster response scenarios — need tools that work without a network connection. Local-first data storage with background sync when connectivity is available. Conflict resolution for data modified offline by multiple users. Queue operations during disconnection and replay them on reconnect. The application must be fully functional for core workflows without any network access. - -## Low-Literacy Design Patterns - -Not all advocacy participants are fluent readers. Rescue coordinators, sanctuary workers, and community organizers come from diverse educational backgrounds. Design for comprehension: use icons alongside text labels, provide visual workflows instead of text-heavy instructions, support voice input and audio output where possible, use progressive disclosure to avoid overwhelming users with information density. Test interfaces with users who have limited formal literacy. - -## Mesh Networking Compatibility - -In environments where centralized internet infrastructure is unavailable, compromised, or surveilled, mesh networking enables direct device-to-device communication. 
Design data synchronization protocols that can operate over mesh networks with high latency, low bandwidth, and intermittent peer availability. This is not a theoretical concern — activists operating in regions with government internet shutdowns depend on mesh-capable tools. - -## Graceful Degradation - -Every feature must have a degraded mode that functions under constrained conditions. If the encryption library fails to load, the application must refuse to transmit sensitive data rather than transmitting it in plaintext. If the media processing pipeline is unavailable, investigation footage must be stored safely for later processing rather than discarded. If the network connection is lost, the user must see clear status indicators, not silent failures. Degrade capability, never safety. - -## Device Seizure Preparation — Application State - -When connectivity is lost suddenly — device confiscated, signal jammed, power cut — the application must not leave sensitive data exposed. No temporary files with decrypted investigation content. No in-memory caches that persist to swap files. No crash dumps containing witness identities. No recovery modes that display previously viewed sensitive content without re-authentication. Design the application so that power loss at any moment leaves zero recoverable sensitive state on disk. - -## Multi-Language Activist Networks Across Borders - -Coalition tools must support simultaneous use in multiple languages within the same deployment. A shared coordination platform where each user sees the interface in their language, but shared content (campaign plans, action alerts, investigation summaries) can be viewed in translated or original form. Support both machine translation for real-time use and human-reviewed translation for legally sensitive content. 
diff --git a/.claude/rules/context-repo.md b/.claude/rules/context-repo.md new file mode 100644 index 000000000..386dae0b6 --- /dev/null +++ b/.claude/rules/context-repo.md @@ -0,0 +1,111 @@ +# Context Repo — Org-Wide Read Safety + +Activate this rule when working on `github.com/Open-Paws/context` or proposing any change that may be merged into it. The context repo is the org's single source of truth, readable by every staff member and every AI agent across the Open Paws ecosystem. That reach is what makes it valuable — and what makes misclassification costly. Material that leaks into the context repo propagates to every future agent session across the org, with no clean retraction. + +## The Org-Wide Read Test + +Before proposing, writing, reviewing, or merging any content in the context repo, ask: + +> Imagine this change merged. Every staff member including brand-new contractors, every intern in the May cohort, every AI agent in every repo, every persona QA agent, every scheduled Claude Code task — all of them can now read this. Is that okay? + +If not clearly okay, the content does not belong in the context repo. Redirect to a private location. There is no "technically private but in the context repo" category. + +## What Must Not Go In + +Non-exhaustive. Reject at plan review and redirect: + +- Individual personal information — salaries, compensation, health, neurodivergence disclosures, recovery status, family, relationships +- HR / performance matters — performance feedback, interpersonal conflicts, hiring/firing discussions, contract negotiations with specific people +- Active sensitive funder dynamics — criticism from specific funders, donor red flags, active grant negotiation positions, internal funder assessments +- Legal matters in progress — contract disputes, IP issues, regulatory inquiries, anything a lawyer is or should be involved in +- Unannounced partnerships or programs — anything whose premature leak would cause damage.
Once announced, it can move in +- Security-sensitive operational details — threat models, unpatched vulnerabilities, defense-in-depth specifics that would help an attacker +- Credentials, secrets, API keys — never, regardless of perceived convenience +- Personal context about individuals beyond what they've publicly chosen to share — history, politics, family situation, country-of-origin dynamics +- Active campaign intelligence that could tip off opposition — specific corporate targets mid-campaign, undercover operation plans, timing-sensitive material +- Sam's personal notes, journal, or private decision-making — belongs in the personal workspace repo +- Anything treated as confidential in its originating context — Slack DMs, CryptPad docs, private channels — even if innocuous out of context + +Rule of thumb: gossip, personnel, legal, or "don't forward this email" — not context-repo material. + +## What Belongs In + +- Org identity, mission, public frame +- Settled decisions, after they're settled and shareable +- Current priorities every staff member should be aligned on +- Program structures, playbooks, frameworks, methodology +- Technical conventions and architecture principles that apply across repos +- Published or ready-to-publish work +- Glossaries, onboarding material, routing tables +- Links out to where more detail lives +- Decision *outcomes* and *rationale*, without embedding the sensitive discussion that produced them + +Test: a fresh contractor or new agent session reading cold should get (a) valuable context and (b) see nothing that would make a staff member uncomfortable. Both must be yes. 
+ +## Where Sensitive Material Actually Lives + +| Material | Correct location | +|---|---| +| Personal notes, journal, draft strategic thinking | Sam's personal workspace repo | +| Individual grant tracker, funder contact details, internal funder assessments | `private/grants/` in personal repo, or locked CryptPad | +| HR / people matters | Outside version control — direct comms, locked docs, legal counsel where appropriate | +| Active campaign intelligence | Per-campaign private repos or CryptPad with explicit access list | +| Credentials | Password manager / secrets manager | +| Early-stage partnership discussions | DM or locked doc until announced; summary can move in post-announcement | +| Staff-specific feedback | 1:1 docs outside the context repo | + +If no private home exists for the rejected material, file a separate issue to establish one — in the personal workspace or ops repo, not the context repo. + +## Pipeline Additions For Context-Repo Changes + +**STAGE 2 (Triage) — classify every issue with a `sensitivity:` label:** + +- `sensitivity:public-ok` — already public or trivially shareable +- `sensitivity:staff-ok` — fine for all staff + all agents (the bar for this repo) +- `sensitivity:private` — belongs elsewhere; redirect, do not advance + +Issues labeled `sensitivity:private` never enter the plan wave. + +**STAGE 4 (Plan Review) — run the org-wide read test explicitly.** If not clearly okay, reject with guidance on what to strip or redirect. + +**STAGE 13 (Adversarial) — add a 7th check:** + +7. **Confidentiality leak** — does this merge expose any individual's personal information, any sensitive relationship dynamic, any active negotiation, any unannounced plan, or any material that originated in a confidential context? If yes, `major+` severity, back to fix loop. 
+ +Adversarial patterns to flag (content that "seems fine" but leaks in context): + +- Closed decision citing a specific funder's objection as the reason (strip identity, keep principle) +- Priority justified by "we lost trust with X partner" (strip dynamic, keep strategic implication) +- Program doc listing specific individuals as "struggling" or "not meeting expectations" +- Playbook referencing active campaign targets by name before launch + +## Default Direction Is Out, Not In + +When uncertain, keep it out. Context flows from private to public, not back. Once merged, downstream agents have already consumed it. + +If material genuinely belongs but currently leaks something, **rewrite at a higher level of abstraction** — keep the principle, strip the specifics. + +- Good: "Decision: prefer multi-year unrestricted funding over single-year restricted" +- Bad: "Decision: avoid Funder X because they demanded reporting we found unreasonable" + +Same principle, different safety profile. + +## Decision Tree + +``` +Is every fact in this change something I'd say out loud in an all-staff meeting +with interns, contractors, and partner org reps present? +│ +├── YES → proceed through normal pipeline +│ +└── NO → which category? + │ + ├── Personal / HR / relationship → personal repo or external tool + ├── Active sensitive operation → locked CryptPad with access list + ├── Legal / contractual → outside version control, with counsel + ├── Credentials → secrets manager + ├── Sensitive but abstractable → rewrite at higher level, retry pipeline + └── Unclear → ask Sam before filing the issue +``` + +Misclassification is costly in both directions. Too restrictive and the repo becomes useless. Too loose and it leaks. When genuinely unsure, ping rather than guess. 
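The "Where Sensitive Material Actually Lives" table and the decision tree reduce to a small lookup. A minimal sketch, assuming a POSIX shell — the category keys and the `route_private` name are illustrative; the destinations come from this rule:

```shell
# Route rejected material to its private home (sketch; destinations from this rule).
route_private() {
  case "$1" in
    personal|hr|relationship) echo "personal workspace repo or external tool" ;;
    campaign)                 echo "locked CryptPad with explicit access list" ;;
    legal)                    echo "outside version control, with counsel" ;;
    credentials)              echo "secrets manager" ;;
    abstractable)             echo "rewrite at a higher level, retry pipeline" ;;
    # Unclear is its own terminal state — ping, don't guess.
    *)                        echo "ask Sam before filing the issue" ;;
  esac
}

route_private credentials   # → secrets manager
```

The catch-all arm mirrors the tree's last branch: misclassification is costly in both directions, so anything unmatched routes to a human.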
diff --git a/.claude/rules/cost-optimization.md b/.claude/rules/cost-optimization.md index fd0af82d6..d0a798ef3 100644 --- a/.claude/rules/cost-optimization.md +++ b/.claude/rules/cost-optimization.md @@ -4,7 +4,13 @@ Advocacy organizations operate on nonprofit budgets. Every dollar spent on AI co ## Model Routing — Right Model for Each Task -Route tasks to the cheapest model capable of handling them well. Use cheaper, faster models for: test generation, boilerplate code, formatting assistance, simple refactoring, and documentation. Use mid-tier models for: debugging, multi-file changes, code review, and integration work. Reserve frontier models for: hard architectural problems, complex debugging, novel design challenges, and security-critical code review. Aider achieves comparable benchmark scores at 3x fewer tokens than some alternatives — consider token-efficient tools for routine workflows. +Route tasks to the cheapest model capable of handling them well. + +- **Cheap tier — Claude Haiku 4.5 (`claude-haiku-4-5-20251001`)**: test generation, boilerplate code, formatting assistance, simple refactoring, mechanical edits, documentation, glue code, log parsing, summarization of structured output. Default for desloppify-driven mechanical work. Default for first-pass scout / triage when the observation is well-structured. +- **Mid tier — Claude Sonnet 4.6 (`claude-sonnet-4-6`)**: debugging, multi-file changes, code review, integration work, plan authoring against a clear spec, test review, persona-QA narrative writing. +- **Frontier — Claude Opus 4.7 (`claude-opus-4-7`, default on this stack)**: hard architectural problems, complex debugging, novel design challenges, security-critical code review, adversarial audit, the strategic / Chat-Gary surface. + +Aider achieves comparable benchmark scores at 3× fewer tokens than some alternatives — consider token-efficient tools for routine workflows. 
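The tiers above amount to a routing table. A minimal sketch, assuming a POSIX shell — the task-type keys and the `route_model` name are illustrative; the model IDs are the ones listed in the tiers:

```shell
# Pick the cheapest capable model tier for a task type (sketch; keys illustrative).
route_model() {
  case "$1" in
    tests|boilerplate|formatting|docs|mechanical) echo "claude-haiku-4-5-20251001" ;;
    debugging|review|integration|multifile)       echo "claude-sonnet-4-6" ;;
    architecture|security-review|adversarial)     echo "claude-opus-4-7" ;;
    # Unknown work defaults cheap; escalate only when the cheap tier falls short.
    *)                                            echo "claude-haiku-4-5-20251001" ;;
  esac
}

route_model docs   # → claude-haiku-4-5-20251001
```

The default arm encodes the section's point: reach for Haiku first and escalate only on failure, rather than reaching for Opus by default.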
The single biggest cost win in this stack is **routing cheap things to Haiku rather than reaching for Opus by default**. ## Token Budget Discipline diff --git a/.claude/rules/desloppify.md b/.claude/rules/desloppify.md index 891e9e139..96b64edf0 100644 --- a/.claude/rules/desloppify.md +++ b/.claude/rules/desloppify.md @@ -1,23 +1,64 @@ # Code Quality — desloppify -Run desloppify to systematically identify and fix code quality issues. Install and configure before scanning (requires Python 3.11+): +Run desloppify to systematically identify and fix code quality issues. Install from the **Open Paws fork** (Python 3.11+): ```bash -pip install --upgrade "desloppify[full]" +# Install from this fork — NEVER from PyPI / upstream +pip install "git+https://github.com/Open-Paws/desloppify.git#egg=desloppify[full]" desloppify update-skill claude ``` -Add `.desloppify/` to `.gitignore` — it contains local state that should not be committed. Before scanning, exclude directories that should not be analyzed (vendor, build output, generated code, worktrees) with `desloppify exclude `. Share questionable candidates with the project owner before excluding. +**Canonical install command.** This file is the single source of truth for how to install desloppify. Every other file in this stack (`skills/desloppify-playbook/SKILL.md`, `agents/desloppifier.md`, `$OP_CONTEXT_REPO/.claude/rules/desloppify.md`, every `Open-Paws/*/.claude/rules/desloppify.md`) links here rather than restating the command — duplication is how the multi-repo drift that existed pre-2026-04-25 happened. If you're editing an install command anywhere other than this file, stop. + +**OP fork only — never upstream.** The git install above pulls from `github.com/Open-Paws/desloppify`, which carries the movement conventions (no-speciesist-language rules, type-safety patterns, gateway response shape discipline, compassionate language enforcement, persona-QA browser testing) that upstream desloppify lacks. 
`pip install desloppify` from PyPI pulls upstream and is a hard-rule violation per `~/.claude/rules/pipeline-nevers.md`. + +Add `.desloppify/` to `.gitignore` — it contains local state that should not be committed. Before scanning, exclude generated / vendor / build dirs: ```bash +# Exclude generated directories, then scan +desloppify exclude node_modules dist desloppify scan --path . -desloppify next ``` `--path` is the directory to scan (use `.` for the whole project, or a subdirectory like `src/`). Your goal is to get the strict score as high as possible. The scoring resists gaming — the only way to improve it is to actually make the code better. -**The loop:** run `next`. It is the execution queue from the living plan, not the whole backlog. It tells you what to fix now, which file, and the resolve command to run when done. Fix it, resolve it, run `next` again. This is your main job. Use `desloppify backlog` only when you need to inspect broader open work not currently driving execution. +## The fix loop + +Run `next` → fix → resolve → repeat: + +```bash +desloppify next # get the top-priority item; shows which file and the resolve command +# fix the code +desloppify plan resolve # mark it done +desloppify next # get the next item +``` + +It is the execution queue from the living plan, not the whole backlog. It tells you what to fix now, which file, and the resolve command to run when done. Use `desloppify backlog` only when you need to inspect broader open work not currently driving execution. Do not be lazy. Large refactors and small detailed fixes — do both with equal energy. No task is too big or too small. Fix things properly, not minimally. Use `plan` / `plan queue` to reorder priorities or cluster related issues. Rescan periodically. The scan output includes agent instructions — follow them, do not substitute your own analysis. 
+ +## Persona-QA workflow (UI repos with persona-driven testing) + +```bash +desloppify persona-qa --prepare --url https://example.com # generate agent instructions +# agent runs browser testing and captures findings in JSON +desloppify persona-qa --import findings.json # merge into state +desloppify persona-qa --status # per-persona summary +desloppify next # persona QA items now appear in the queue +``` + +## Baseline Capture Process + +**At plan time (STAGE 3):** Capture desloppify baseline against branch point: + +```bash +desloppify status --json > .desloppify/baseline.json +``` + +Post baseline JSON as GitHub issue comment for durable storage (`.desloppify/` is gitignored). + +**Recovery if missing:** STAGE 9 uses `git merge-base HEAD main` to recapture against the branch point. + +**Score-cannot-regress gate.** STAGE 9 blocks merge if the strict score drops below baseline. Regression requires `override:allow-score-drop` label (human-only — agents cannot apply it). diff --git a/.claude/rules/emotional-safety.md b/.claude/rules/emotional-safety.md deleted file mode 100644 index 622a08f04..000000000 --- a/.claude/rules/emotional-safety.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -paths: - - "**/content/**" - - "**/media/**" - - "**/display/**" - - "**/upload/**" ---- -# Emotional Safety Rules for Animal Advocacy Projects - -Animal advocacy software routinely handles content documenting extreme suffering: factory farm conditions, slaughterhouse footage, animal testing documentation, and witness testimony of abuse. This content is necessary for the movement's work — it drives investigations, legal cases, public campaigns, and policy change. But uncontrolled exposure to this content causes measurable psychological harm. Every display decision must balance the operational need for access against the human cost of exposure. Emotional safety is not a UX preference — it is a duty of care to the people doing this work. 
- -## Progressive Disclosure of Traumatic Content - -NEVER display graphic content by default. Every piece of investigation footage, slaughter documentation, or exploitation imagery must be behind at least one intentional interaction. The default state is always safe: blurred, hidden, or represented by a text description. Users escalate to more graphic content through deliberate choices, never through automatic loading, scrolling, or navigation. - -## Configurable Detail Levels - -Implement user-controlled detail settings that persist across sessions. At minimum, provide three tiers: (1) text-only descriptions with no imagery, (2) blurred or low-detail representations with contextual descriptions, (3) full-resolution content. Each user chooses their own default level. The system MUST remember this preference and never reset it. Different roles need different defaults: a legal reviewer may need full-resolution evidence access; a campaign coordinator may only need text summaries. - -## Content Warnings — Mandatory Before Display - -Every piece of content involving animal suffering, investigation footage, or slaughter documentation MUST be preceded by a specific content warning describing what the content contains. Generic warnings like "sensitive content" are insufficient — the warning must indicate whether the content includes: graphic injury, death, distress vocalizations, confined living conditions, or slaughter processes. Users must be able to make an informed decision about whether to view specific content, not just "something sensitive." - -## Investigation Footage Handling - -Investigation footage is the most operationally important and psychologically dangerous content in the system. 
Implementation requirements: -- NEVER auto-play video or audio from investigations -- ALWAYS display footage in blurred state by default -- Require explicit opt-in for full resolution — a deliberate click, not a hover or scroll -- Provide frame-by-frame navigation for reviewers who need to examine specific moments without watching continuous footage -- Strip audio by default — distress vocalizations cause acute stress responses; audio should be a separate opt-in from video -- Support annotation without full-resolution viewing (reviewers can mark timestamps and regions on blurred preview) - -## Witness Testimony Display - -Before displaying any witness testimony: (1) verify that display consent is current and has not been withdrawn, (2) anonymize by default — display pseudonyms, not legal names, (3) require explicit opt-in to view identifying details, (4) log access to testimony for audit purposes while protecting the identity of who accessed it. When testimony includes descriptions of animal suffering, apply the same progressive disclosure and content warning rules as for visual media. 
- -## Burnout Prevention Patterns - -Advocacy software should actively support user wellbeing during extended content review sessions: -- **Session time awareness** — track continuous exposure time to traumatic content and surface non-intrusive reminders after configurable intervals (default: 30 minutes of active content review) -- **"Take a break" prompts** — for content reviewers who have been processing investigation footage or testimony for extended periods; these are suggestions, not blocks -- **Session summaries** — at the end of a content review session, provide a summary of what was reviewed so the reviewer does not need to re-expose themselves to verify completeness -- **Workload distribution** — when multiple reviewers are available, the system should support distributing traumatic content review across the team rather than concentrating it - -## Secondary Trauma Mitigation - -Secondary trauma affects not just end users but also **developers** building and testing this software. Design the development workflow to minimize unnecessary exposure: use abstract test data (described references, not actual footage) in automated tests, provide mock data generators that produce realistic metadata without graphic content, and document which test suites involve real content so developers can prepare. The CI/CD pipeline must never display graphic content in test output, logs, or failure reports. - -## Opt-In Escalation of Graphic Content - -When a user needs to access full-resolution graphic content, require multiple confirmation steps proportional to content severity. A single click is insufficient for the most graphic content. Implement a confirmation dialog that names what the user is about to see, requires an explicit "I understand" acknowledgment, and provides an alternative (text description, blurred summary) alongside the full-access option. This is not friction for friction's sake — it is informed consent applied to content exposure. 
diff --git a/.claude/rules/external-contribution-safety.md b/.claude/rules/external-contribution-safety.md new file mode 100644 index 000000000..3685ce10c --- /dev/null +++ b/.claude/rules/external-contribution-safety.md @@ -0,0 +1,80 @@ +# External Contribution Safety + +When helping a developer contribute to a third-party repository — any repo whose git remote does not belong to your organization — the AI tool must suppress all identity, attribution, and advocacy framing from commits, pull requests, and branches. This is last-line-of-defense enforcement: configure your tool to disable attribution trailers as the first line, and rely on these instructions when configuration alone is insufficient. + +## Repo Classification + +Before producing any commit message, PR description, or branch name, check the remote: + +```bash +git remote get-url origin +``` + +- If the remote matches `github.com/Open-Paws/` or `github.com:Open-Paws/` (SSH), you are in an **internal repo** — full identity is fine. +- If the remote is anything else, treat it as **external** — apply all rules below. +- If there is no remote configured, treat it as **external** (safe default). + +You must be positively confident that you are in an internal repo. Absence of a matching remote is not ambiguous — it defaults to external mode. + +## What to Suppress in External Repos + +**Tool identity** — Do not mention Claude Code, Cursor, Copilot, Windsurf, Cline, or any other AI coding tool in commits, PR descriptions, branch names, or code comments. + +**Organization** — Do not mention Open Paws, any Open Paws project name, or any advocacy organization in any contribution artifact. + +**Mission framing** — Do not use advocacy language (liberation, compassionate, animal rights, plant-based, vegan) in code, commits, PR titles, or PR bodies unless the target project is explicitly about those topics. + +**Attribution trailers** — No `Co-Authored-By: Claude` lines. No "Generated with Claude Code" footers.
No AI attribution in commit messages or PR descriptions. + +## Commit Message Rules in External Repos + +Read the last five commits in the target repo before writing a commit message: + +```bash +git log --oneline -5 +``` + +Match their exact style: length, format (conventional vs prose vs terse), tense, specificity. Then apply these constraints: + +- **Proportional to diff** — a one-line change gets a one-line commit. A 50-line change gets two or three sentences maximum. +- **Imperative mood** — "Fix", "Add", "Update", not "Fixed", "Added", "Updated". +- **No AI-generated patterns** — reject "This commit introduces...", "Updated X to support Y", "Added support for Z", "Improve overall...". +- **Self-check** — Read the message. Does it look indistinguishable from the existing commits in this repo? If not, shorten it and remove any summary language. + +## PR Description Rules in External Repos + +Check how merged PRs are described in the target repo before writing a PR body: + +```bash +gh pr list --state merged --limit 5 +``` + +Then apply: + +- **Match the target repo's style** — if merged PRs are two sentences, write two sentences. If they use headers, use headers. If they use none, use none. +- **No section headers for small changes** — omit `##`, `###`, "Summary:", "Motivation:", "Background:", "Approach:" unless the target repo uses them. +- **No bullet lists of benefits** — a list of what this improves is an AI tell. One explanation of what changed and why is sufficient. +- **Length** — most good external PRs are one to three sentences. Longer is rarely better. +- **Self-check** — read it aloud. Does it sound like a developer who works on this codebase, without a toolkit or an agenda? If not, cut it by half. + +## Branch Naming in External Repos + +Check existing open PRs for branch naming conventions: + +```bash +gh pr list --state open --limit 10 +``` + +Use that convention. 
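Detecting that convention can be scripted — a sketch, assuming a POSIX shell; `headRefName` is a standard `gh` JSON field, and `dominant_prefix` is a hypothetical helper name:

```shell
# Most common branch-name prefix in a list of branch names (sketch).
dominant_prefix() {
  # "fix/foo" -> "fix", count occurrences, keep the winner
  sed 's|/.*||' | sort | uniq -c | sort -rn | head -1 | awk '{print $2}'
}
```

Feed it the open PRs' head branches with `gh pr list --state open --limit 10 --json headRefName --jq '.[].headRefName' | dominant_prefix`; if the winner is not a clear majority, fall back to this rule's default prefixes.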
If no clear pattern exists, default to `fix/short-description` or `add/short-description`. Keep the branch name under 40 characters. Do not include advocacy language, org identifiers, or tool names in branch names. + +## Defense-in-Depth Principle + +These instructions are the last line of defense, not the first. Before contributing to any external repo, configure your tool to disable attribution trailers: + +- Claude Code: set `"attribution": { "commit": "", "pr": "" }` in `~/.claude/settings.json` (deprecates `includeCoAuthoredBy`; already wired on this machine) +- Cursor: disable "Add AI attribution" in settings +- Copilot: no attribution trailers are inserted by default in commit flows + +Git author identity (who shows up in `git log`, not the AI-tool trailer) is governed separately by `git-identity.md` — external repos inherit the same bot identity by default. If a target repo requires a different author for some reason, raise it before committing; do not quietly switch. + +Instructions to the AI are what you rely on when tool configuration fails or when the tool generates surrounding prose (PR descriptions, branch names) that configuration does not control. diff --git a/.claude/rules/geo-seo.md b/.claude/rules/geo-seo.md deleted file mode 100644 index 5ff43d3e5..000000000 --- a/.claude/rules/geo-seo.md +++ /dev/null @@ -1,616 +0,0 @@ ---- -paths: - - "**/*.html" - - "**/robots.txt" - - "**/sitemap.xml" - - "**/llms.txt" - - "**/head/**" - - "**/seo/**" - - "**/meta/**" - - "**/schema/**" - - "**/structured-data/**" - - "**/layout.*" - - "**/Layout.*" - - "**/BaseHead.*" - - "**/Head.*" ---- -# SEO + GEO Rules for Animal Advocacy Websites - -Websites built for animal advocacy serve two discovery channels: traditional search engines and AI answer systems (ChatGPT, Perplexity, Google AI Overviews, Claude, Gemini, Bing Copilot). 
The game has shifted from optimizing for keyword matching to optimizing for **intent satisfaction** (does your content completely solve the user's problem?), **entity authority** (does Google recognize your brand as a trusted entity in its knowledge graph?), and **technical excellence** (can crawlers efficiently process your site?). Approximately 60% of searches end without a click — appearing in search results is no longer enough; you need to be the source Google trusts enough to synthesize into AI-generated answers. - -**How AI citation works:** Google generates an answer first, then scores content against it using embedding distance. Only 17-32% of AI Overview citations come from pages ranking in the organic top 10 — lower-authority pages can win with the right structure (source: Authoritas AI Overviews study, 2024). Domain Authority correlates with AI citations at only r=0.18; topical authority (r=0.40) and branded web mentions (r=0.664) are the real predictors (source: Kalicube GEO correlation study, 2025). 80% of URLs cited by AI assistants do not rank in Google's top search results for the same queries (source: Semrush AI citation analysis, 2024). - ---- - -## HTML Structure - -Every page needs exactly one `<h1>` tag containing the primary topic. Use a logical heading hierarchy (`h1 > h2 > h3`), never skipping levels. Phrase `<h2>` headings as questions when the section answers something — question-based headings improve Featured Snippet selection, People Also Ask appearance, and produce 7× more AI citations for smaller sites. The first paragraph after any heading must directly answer that question in 40-60 words. AI systems pull from the first 30% of content 44% of the time — lead with the answer. - -Keep paragraphs to 2-4 sentences (40-60 words). Structure content as self-contained 120-180 word modules — this generates 70% more ChatGPT citations than unstructured prose. - -Use semantic HTML correctly: `