RFC: integrate PiPNN as an alternative graph-index build algorithm #1049

SeliMeli wants to merge 14 commits
Conversation
@SeliMeli please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.
Per rfcs/README.md step 4: rename from 00000-short-title.md to NNNNN-short-title.md using the zero-padded PR number (microsoft#1049). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pull request overview
Note: Copilot was unable to run its full agentic suite in this review.
Adds an RFC proposing PiPNN as an opt-in, feature-gated alternative to Vamana for DiskANN disk-index graph construction, keeping disk format and search API unchanged.
Changes:
- Introduces RFC 01049 detailing PiPNN’s algorithm, integration plan, and two-stage rollout
- Specifies a `BuildAlgorithm` selector design and feature-gating strategy (`pipnn`)
- Documents benchmark results and Stage-1 milestones gating potential Stage-2 deprecation of Vamana full rebuilds
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1049      +/-   ##
==========================================
- Coverage   90.60%   89.51%    -1.09%
==========================================
  Files         461      461
  Lines       85494    85920     +426
==========================================
- Hits        77462    76912     -550
- Misses       8032     9008     +976
```
Adds an RFC proposing PiPNN (arXiv:2602.21247) as a second graph-index build algorithm for DiskANN's disk index. Integration is two-stage: Stage 1 lands PiPNN behind a build-algorithm selector with Vamana as default; Stage 2 (conditional on Stage 1 milestones) retires Vamana's full-rebuild path while keeping it for incremental inserts via the hybrid update model. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Per rfcs/README.md step 4: rename from 00000-short-title.md to NNNNN-short-title.md using the zero-padded PR number (microsoft#1049). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…tones

- Add new M1 for in-memory build/search parity with Vamana (PiPNN today only feeds into DiskIndexWriter; a path that populates a DiskANNIndex directly for in-mem-only consumers is missing).
- Renumber M1-M7 → M2-M8.
- Convert each milestone's plain-text paragraph into bullet lists (Scope / Validation / etc.) for readability per RFC reviewer feedback.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Explicitly document feature-gated deserialization behavior: configs with "algorithm": "PiPNN" fail at parse time in non-pipnn binaries with a serde unknown-variant error. Not a backward-compatibility regression; configs without build_algorithm parse identically across feature combinations.
- Add an explanation for the disk-edges path being not-slower than one-shot despite extra I/O (smaller working set; sequential append spills overlap with compute).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
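The parse-time behavior this commit documents can be illustrated with a small sketch. Everything below is hypothetical: the real crates parse a JSON config with serde, while this stand-in uses a hand-rolled `FromStr` so it stays dependency-free; the `BuildAlgorithm` name and `pipnn` feature follow the RFC text, but the variant payload is made up.

```rust
// Hypothetical sketch of the feature-gated selector described above.
// In a binary built without the `pipnn` feature, the PiPNN variant
// does not exist, so "PiPNN" hits the unknown-variant error — the
// same failure mode serde produces for the real JSON config.
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum BuildAlgorithm {
    Vamana,
    #[cfg(feature = "pipnn")]
    PiPNN { num_threads: usize },
}

impl FromStr for BuildAlgorithm {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "Vamana" => Ok(BuildAlgorithm::Vamana),
            #[cfg(feature = "pipnn")]
            "PiPNN" => Ok(BuildAlgorithm::PiPNN { num_threads: 1 }),
            // Without the feature, "PiPNN" falls through to here.
            other => Err(format!("unknown variant `{other}`")),
        }
    }
}

fn main() {
    assert_eq!("Vamana".parse::<BuildAlgorithm>(), Ok(BuildAlgorithm::Vamana));
    // Compiled without `--cfg feature="pipnn"`, PiPNN is a parse-time error:
    assert!("PiPNN".parse::<BuildAlgorithm>().is_err());
}
```

Because the rejection happens at config-parse time rather than mid-build, a non-pipnn binary fails fast and loudly, which matches the "not a backward-compatibility regression" framing: configs that never mention the algorithm parse identically either way.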
Force-pushed from 943fa74 to 4fe210f
- Add three index-update modes (incremental/full/partitioned) to background
- Clarify leaf k-NN is pairwise GEMM (N×N), distinct from 1×N flat scan
- Note HashPrune is streamable; clarify M5 disk-edges variants
- Add concrete trade-off hypothesis with fixed-resource build-time table; state honestly that neither algorithm converts surplus RAM into faster builds
- Reframe hybrid update model: Vamana inserts apply to the in-memory graph; the disk index is rebuilt rather than mutated
- Replace "quality decay → rebuild" with operational triggers (embedding rotation, schema/param retuning, batch insert, safety rebuilds)
- Add num_threads to BuildAlgorithm::PiPNN with a bounded RAM-impact note
- Scope the feature-gated deserialization restriction to JSON config only; index files are byte-identical and load without the pipnn feature
- Move checkpoint/resume (was M1's M2) to Stage 2 with rationale — streaming checkpoint doesn't fit PiPNN's batch phases
- Use the full hyperlinked arXiv URL throughout

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
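The GEMM-vs-flat-scan distinction above reduces to computing a full Gram matrix per leaf: d(i,j)² = |xᵢ|² + |xⱼ|² − 2⟨xᵢ, xⱼ⟩ for every pair at once, rather than scanning one query against N points. A dependency-free sketch, with the inner-product block written as plain loops (function name illustrative, not a DiskANN API):

```rust
// Sketch: leaf-local pairwise squared distances via the identity
//   d(i, j)^2 = |xi|^2 + |xj|^2 - 2 <xi, xj>.
// In a real build the <xi, xj> block is a single N x N GEMM (BLAS)
// over the leaf's point matrix; here it is expanded as loops.
fn pairwise_sq_dists(points: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let n = points.len();
    // Squared norms, computed once per point.
    let norms: Vec<f32> = points
        .iter()
        .map(|p| p.iter().map(|v| v * v).sum())
        .collect();
    // Gram matrix G[i][j] = <xi, xj> — the GEMM part.
    let mut d = vec![vec![0.0f32; n]; n];
    for i in 0..n {
        for j in 0..n {
            let dot: f32 = points[i].iter().zip(&points[j]).map(|(a, b)| a * b).sum();
            d[i][j] = norms[i] + norms[j] - 2.0 * dot;
        }
    }
    d
}

fn main() {
    let pts = vec![vec![0.0, 0.0], vec![3.0, 4.0]];
    let d = pairwise_sq_dists(&pts);
    assert_eq!(d[0][1], 25.0); // 3-4-5 triangle, squared
    assert_eq!(d[1][1], 0.0);
}
```

Batching all N² distances as one matrix product is what a 1×N flat scan cannot amortize, which is the point of the leaf k-NN clarification.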
Address arrayka's "concrete hypothesis" comment on PR microsoft#1049 by adding an explicit Stage-1 experiment plan that validates the trade-off framing from the Problem Statement.

- New M6 — Fixed-resource trade-off validation: cgroup-locked RAM-budget sweep on BigANN 10M / Enron 10M / 100M-scale, with explicit cells for Vamana one-shot, Vamana partitioned, PiPNN one-shot, PiPNN disk-edges, and PiPNN merged-shards. Captures wall-clock, peak RSS, CPU utilization, SSD bytes, recall, and QPS — reported as vectors/min/worker so different worker shapes compare directly.
- Four explicit hypotheses to confirm or falsify, including the "surplus RAM doesn't buy speed" claim from the Problem Statement.
- Pass criterion: a documented matrix with a clearly-better algorithm (or tie) per budget at matching recall; surprises are Stage-1 blockers.
- Renumber subsequent milestones: production-matrix → M7, hybrid-update → M8, operational readiness → M9.
- Forward-link the Problem Statement hypothesis to M6.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Fix workspace structure: drop the spurious diskann-pipnn → diskann-disk dependency from the description; PiPNN produces Vec<Vec<u32>> consumed by diskann-disk behind its pipnn feature. The actual Cargo.toml never had this edge — the RFC text was wrong and described a dependency cycle.
- Redraw the dep diagram showing the one-way produces/consumes flow.
- Add a determinism note to the deferred checkpoint/resume section: PiPNN is rayon-parallel, so byte-identical output is not a free property and would require extra determinism work. Recall parity is the right validation criterion for any future resumed-build test, not byte-identity.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
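The one-way produces/consumes flow this commit restores can be sketched as two functions with no reverse dependency. Both names are stand-ins: the real producer is the PiPNN builder crate and the real consumer is diskann-disk's writer; only the `Vec<Vec<u32>>` interchange type comes from the commit text.

```rust
// Sketch of the one-way flow: the builder produces plain adjacency
// lists (Vec<Vec<u32>>) and never calls back into the disk crate;
// the writer-side consumer takes the lists as input. Names are
// illustrative, not the real crate APIs.

/// Stand-in for the PiPNN builder's output: node i's out-neighbors.
/// Emits a trivial ring graph as a placeholder for the real algorithm.
fn build_graph_stub(num_points: u32, degree: u32) -> Vec<Vec<u32>> {
    (0..num_points)
        .map(|i| (1..=degree).map(|k| (i + k) % num_points).collect())
        .collect()
}

/// Stand-in for a DiskIndexWriter-style consumer: here it just
/// counts the edges it would serialize.
fn write_edges(adjacency: &[Vec<u32>]) -> usize {
    adjacency.iter().map(|nbrs| nbrs.len()).sum()
}

fn main() {
    let graph = build_graph_stub(8, 2);
    // Data flows producer → consumer only; no dependency cycle.
    assert_eq!(write_edges(&graph), 16);
}
```

Keeping the interchange type to a plain adjacency list is what lets the Cargo graph stay acyclic: the builder needs no knowledge of the disk format, and the disk crate gates its PiPNN-consuming path behind the `pipnn` feature.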
Per reviewer suggestion, narrow the RFC's Stage-1 commitment so it covers only the disk-index build path. The in-memory DiskANNIndex builder exists primarily to support streaming per-point construction, which is exactly what PiPNN's batch algorithm cannot do efficiently — adding a PiPNN in-mem builder would offer no incremental capability and would duplicate the disk path's value.

- Summary: PiPNN is "for the disk index full-rebuild path"; in-mem build explicitly out of scope.
- Two-stage rollout: Stage 1 = disk-index full-rebuild only; in-memory PiPNN is not part of any stage.
- Goals: scoped to the disk-index full-rebuild path; in-memory construction is out of scope.
- Future Work: remove M1 (in-memory build / search) from Stage 1 milestones; renumber the introduction accordingly.
- Out of scope: add an explicit "In-memory PiPNN build (was M1)" entry with the rationale (streaming use-case mismatch, no incremental capability, would force a runtime dep on the in-mem graph crate).
- Rename "Out of scope (intentionally not on this list)" to "Out of scope: not part of any stage" for clarity.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Previous wording said "full-rebuild path", which is misleading — PiPNN serves both initial builds (no prior index on disk) and full rebuilds (replacing an existing index). The algorithm doesn't care whether an older index exists; both produce a fresh disk graph from a dataset snapshot.

Reword "full-rebuild path" to "full-build path" in the algorithm-capability statements (Summary, Two-stage rollout, Goals intro, Goal 4, Stage-1 milestones intro). Where the text refers to operational reruns of an existing index (hybrid update model triggers, M8 hybrid-loop validation), keep "rebuild".

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Restructure the RFC for readability without losing substance. Every reviewer-comment fix from prior iterations is preserved. Changes:

- Front-load the value proposition; replace verbose paragraphs with tables (update modes, RAM-budget trade-off, alternatives).
- Compress the four-phase algorithm description from long bullets to one-line essences while keeping the GEMM-vs-flat-scan distinction and the HashPrune-streamable note.
- Collapse the "batch-only" trade-off section from a bullet list of per-phase observations into a single tight paragraph that keeps the load-bearing claim (batching needs leaf membership; batch-size-1 PiPNN ≈ Vamana, so Vamana keeps the incremental role).
- Trim repetition where the same point is made in Summary, Background, Two-stage rollout, and Trade-offs — keep one canonical statement and reference it from the other sections.
- Tighten the milestone section: each milestone is now a short paragraph or compact list rather than a Scope/Boundaries/Behavior/Validation matrix.
- Drop overly granular implementation notes (per-thread KB breakdowns, enumeration of every PQ training reuse point, etc.) — these belong in code comments, not the RFC body.

Net effect: ~408 → ~287 lines.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Stage 1 covers only build-from-scratch and full rebuilds with PiPNN. Validating the Stage-2 hybrid loop (PiPNN build → incremental Vamana inserts → rebuild) doesn't belong in Stage 1: there is no in-production hybrid behavior to characterize until Stage 2 actually adopts the model.

- Remove M8 (Hybrid update model validation) from the milestone list.
- Move it to "Deferred to Stage 2" with the rationale; note that the one-shot disk-format-compatibility check is sufficient at Stage 2 entry.
- Update the milestone-list intro to call out that M1, M2, and now M8 are intentionally absent, and add a sentence stating Stage 1 exercises no hybrid behavior.

The "Hybrid update model (Stage 2 direction)" motivation section is preserved — it explains why PiPNN doesn't need insert(point), which remains true regardless of when hybrid validation runs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The RFC describes the current plan, not its revision history. Strip "(was M1)" / "(was M8)" / "intentionally absent" phrasing and renumber milestones contiguously M0–M6.

- M0 — Skeleton integration (this RFC)
- M1 — Quantization parity (was M3)
- M2 — Label-filtered indexes (was M4)
- M3 — Three-tier memory dispatch (was M5)
- M4 — Fixed-resource validation (was M6)
- M5 — Production matrix (was M7)
- M6 — Operational readiness (was M9)

The "Deferred to Stage 2" and "Out of scope" sections describe what's not in the plan in their own terms, not as removed milestones.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Drop "scales linearly with point count" — the table speaks for itself.
- Drop the memory-channel caveat and the "honest framing:" meta-tag.
- Drop "is the working set the algorithm needs, not a bug" — defensive.
- Drop "A search-only consumer cannot tell..." — restates the compat table.
- Drop "This is why Vamana keeps the incremental role" — already said.
- Drop "Data flows one-way" — the diagram already shows this.
- Trim verbose elaborations on M-deferred entries to one sentence each.
- Demote "In-memory PiPNN build is not in any stage" callout to inline.

Update milestone reference (the validation hypothesis now points to M4).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
hildebrandmw left a comment
Most of this PR is dedicated to how PiPNN will integrate into diskann-disk, and I have neither the review capacity nor much stake in that side of the integration.

What I am curious about, though, is how we plan to integrate PiPNN into the project as a whole. #856 integrates the algorithm as a separate crate with more or less its own set of abstractions and its own quantization. I have a very strong preference for keeping diskann homogeneous, even if that means working at lower levels of the stack to build what we need in a more reusable way (e.g., quantized matrix multiply, which is currently being worked on for multi-vectors). Basically, investing in shared capability where everyone can benefit.

I'm not necessarily asking that this block forward progress, but I would like to see milestones that commit to converging on shared abstractions/implementations somewhere in the rollout of late Stage 1 or early Stage 2. I would love for the PiPNN integration to leave the whole project stronger and not just be a faster disk build.
Summary
This RFC proposes adding PiPNN (Pick-in-Partitions Nearest Neighbors, arXiv:2602.21247) as a second graph-construction algorithm for DiskANN's disk index, alongside the existing Vamana builder.
PiPNN produces a graph byte-compatible with Vamana's disk format and search API, at up to 6.3× lower build time on our measured workloads (Enron 10M, BigANN 10M). Vamana remains the default and the only algorithm supported for incremental inserts; PiPNN is the proposed faster path for full rebuilds.
Two-stage integration
A `BuildAlgorithm` selector with `Vamana` as the default. PiPNN is opt-in via a `pipnn` Cargo feature. Existing build sites see no behavior change. Stage 1 defines explicit milestones (M0–M6) gating Stage 2 readiness.

Highlights
A `build_ram_limit_gb` knob, bringing PiPNN's peak RSS to or below Vamana's at a configurable build-time cost.

Reviewers
Please read the full RFC for trade-offs, benchmark tables, and milestone definitions.
🤖 Generated with Claude Code