
various new clickhouse syncs #1267

Closed

BilalG1 wants to merge 24 commits into dev from clickhouse-sync-improve-test-speed

Conversation

@BilalG1
Collaborator

@BilalG1 BilalG1 commented Mar 18, 2026

Summary by CodeRabbit

  • New Features

    • External DB sync expanded to cover teams, team members, permissions, invitations, email outboxes, session replays, refresh tokens, connected accounts, notification preferences, and related entities.
    • Dashboard sequencer now shows per-entity sync stats for the newly supported tables.
  • Chores

    • Added sequence-tracking columns and indexes across multiple tables to improve synchronization and consistency.
    • Expanded end-to-end tests and sync infrastructure to validate these flows.

BilalG1 added 18 commits March 16, 2026 10:53
…invites' into clickhouse-sync-email-outbox-table

# Conflicts:
#	apps/backend/scripts/clickhouse-migrations.ts
#	apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts
#	apps/backend/src/lib/external-db-sync.ts
#	apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts
#	apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts
@vercel

vercel Bot commented Mar 18, 2026

The latest updates on your projects.

Project | Deployment | Actions | Updated (UTC)
stack-auth-hosted-components | Ready | Preview, Comment | Apr 7, 2026 5:52pm
stack-backend | Ready | Preview, Comment | Apr 7, 2026 5:52pm
stack-dashboard | Ready | Preview, Comment | Apr 7, 2026 5:52pm
stack-demo | Ready | Preview, Comment | Apr 7, 2026 5:52pm
stack-docs | Ready | Preview, Comment | Apr 7, 2026 5:52pm

@coderabbitai
Contributor

coderabbitai Bot commented Mar 18, 2026

📝 Walkthrough

Walkthrough

Adds sequenceId and shouldUpdateSequenceId fields and indexes to multiple models, extends external-db-sync to cover teams/permissions/email outbox/session replays/refresh tokens/oauth accounts/verification codes, updates sequencer/status/routes, ClickHouse mappings/migrations, CRUD handlers to record sync events, and extensive e2e tests and migration scripts.

Changes

Cohort / File(s) Summary
Prisma migrations
apps/backend/prisma/migrations/20260316000000_add_team_team_member_sequence_columns/migration.sql, .../20260316000001_add_email_outbox_sequence_columns/migration.sql, .../20260316000002_add_session_replay_sequence_columns/migration.sql, .../20260317000000_add_team_permission_invitation_sequence_columns/migration.sql, .../20260317000001_add_project_permission_notification_preference_sequence_columns/migration.sql, .../20260318000000_add_sequence_id_to_refresh_tokens_and_oauth_accounts/migration.sql, .../20260318000001_add_sequence_indexes_concurrently/migration.sql
Add sequenceId BIGINT (nullable/unique) and shouldUpdateSequenceId BOOLEAN NOT NULL DEFAULT true columns and create indexes across multiple tables; index creation done concurrently in a separate migration.
Prisma schema
apps/backend/prisma/schema.prisma
Add `sequenceId BigInt? @unique` and `shouldUpdateSequenceId Boolean @default(true)` to 10 models and add composite indexes for tenancy/sequence and shouldUpdateSequenceId/tenancy.
External DB sync core
apps/backend/src/lib/external-db-sync.ts
Expand ExternalDbSyncTarget to many tables; replace single deletion recorder with table-driven recorder and add many specialized recorders; add withExternalDbSyncUpdate, normalization map for ClickHouse, adjust sequence parsing and remove ensureClickhouseSchema from mapping sync.
Sequencer & status APIs
apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts, .../status/route.ts
Backfill/enqueue logic extended to 10+ entities (Team, TeamMember, permissions, invitations, email outbox, session replay, refresh tokens, oauth accounts, notification prefs); capture per-entity counts, cascade verification-code re-sync for team invitations, and expose new sequencer stats in status responses.
ClickHouse migrations & mappings
apps/backend/scripts/clickhouse-migrations.ts, packages/stack-shared/src/config/db-sync-mappings.ts
Parallelized ClickHouse DDL/DCL execution; add many analytics MergeTree tables and default.* views, row-level isolation policy generation per view, and add 11 new db-sync mappings (Postgres + ClickHouse DDL, internal fetch queries, external upsert/delete CTEs).
API CRUD updates (external-sync hooks)
apps/backend/src/app/api/.../ (auth/password/update/route.tsx, auth/sessions/*, emails/notification-preference/crud.tsx, emails/outbox/crud.tsx, oauth-providers/crud.tsx, session-replays/batch/route.tsx, team-memberships/crud.tsx, teams/crud.tsx, users/crud.tsx, emails/unsubscribe-link/route.tsx, team-member-profiles/crud.tsx)
Wrap create/update/upsert payloads with withExternalDbSyncUpdate(...), add pre-delete calls to recordExternalDbSyncDeletion(...) or specialized recorders, include shouldUpdateSequenceId updates for EmailOutbox and session replays.
Permissions logic
apps/backend/src/lib/permissions.tsx
Wrap grant/upsert/updateMany payloads in withExternalDbSyncUpdate(...); pre-fetch rows and call recordExternalDbSyncDeletion(...) before deletes for permission revokes/deletions.
Email pipeline
apps/backend/src/lib/email-queue-step.tsx
All email state transition/update SQL now also set shouldUpdateSequenceId = TRUE; parsing returns sequenceId and shouldUpdateSequenceId.
Sequencer/cron behavior
apps/backend/scripts/run-cron-jobs.ts
Replace randomized post-run delay with fixed 1s delay.
E2E tests & helpers
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts, .../external-db-sync-basics.test.ts, analytics-query.test.ts
Add many wait/poll helpers for new mappings and extensive tests verifying sync and deletion behaviors across Postgres and ClickHouse; update analytics-query expectations for limited_user grants to include new views.
Verification code handler
apps/backend/src/route-handlers/verification-code-handler.tsx
Record external-db-sync deletion for TEAM_INVITATION codes before deleting.
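
The withExternalDbSyncUpdate(...) wrapper recurs across the CRUD handlers above. As a rough sketch of what such a helper likely does (the real implementation lives in apps/backend/src/lib/external-db-sync.ts and may differ), it merges the resync flag into a Prisma-style data payload so the sequencer's next pass picks the row up:

```typescript
// Hypothetical sketch: attach the resync flag to any create/update payload.
// The field name shouldUpdateSequenceId matches the schema additions described
// above; the exact generic signature is assumed, not taken from the repo.
function withExternalDbSyncUpdate<T extends Record<string, unknown>>(
  data: T,
): T & { shouldUpdateSequenceId: true } {
  return { ...data, shouldUpdateSequenceId: true };
}

// Usage: wrap the payload before handing it to a Prisma create/update/upsert.
const payload = withExternalDbSyncUpdate({ displayName: "New team name" });
// payload carries shouldUpdateSequenceId: true alongside the original fields
```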

Sequence Diagram(s)

sequenceDiagram
    actor Client
    participant Backend as Backend API
    participant Prisma as Prisma Client
    participant DeletedRow as DeletedRow Table
    participant Sequencer as Sync Sequencer
    participant ClickHouse as ClickHouse
    participant ExtDB as External Postgres

    Client->>Backend: Create/Update entity (e.g., Team) with upsert
    Backend->>Prisma: upsert/create wrapped by withExternalDbSyncUpdate(...)
    Prisma-->>Backend: Return row (may include sequenceId/flags)

    alt Deletion path
        Client->>Backend: Delete entity
        Backend->>DeletedRow: recordExternalDbSyncDeletion(...) (insert DeletedRow)
        Backend->>Prisma: delete entity
        Prisma-->>Backend: Confirm deletion
    end

    Sequencer->>Prisma: SELECT ... FOR UPDATE SKIP LOCKED (shouldUpdateSequenceId = TRUE)
    Prisma-->>Sequencer: Batch of rows
    Sequencer->>Prisma: UPDATE set sequenceId = nextval(...), shouldUpdateSequenceId = FALSE
    Sequencer->>DeletedRow: include deleted events in batch
    Sequencer->>ClickHouse: fetch internal rows by sequence range
    ClickHouse-->>Sequencer: Return rows
    Sequencer->>ExtDB: Execute CTE upsert/delete to apply batch
    ExtDB-->>Sequencer: Confirm and update _stack_sync_metadata
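The sequencing step in the diagram (claim flagged rows, assign the next sequence value, clear the flag) can be modeled in memory. This is an illustrative simulation only, not the SELECT ... FOR UPDATE SKIP LOCKED SQL the backend actually runs:

```typescript
// Illustrative in-memory model of one sequencer pass: rows flagged
// shouldUpdateSequenceId receive the next monotonically increasing sequenceId
// and the flag is cleared, mirroring the UPDATE ... nextval(...) step above.
interface SyncRow {
  id: string;
  sequenceId: bigint | null;
  shouldUpdateSequenceId: boolean;
}

function runSequencerPass(rows: SyncRow[], nextSeq: bigint): bigint {
  for (const row of rows) {
    if (row.shouldUpdateSequenceId) {
      row.sequenceId = nextSeq++;
      row.shouldUpdateSequenceId = false;
    }
  }
  return nextSeq; // next unused sequence value, persisted for the next pass
}

const teams: SyncRow[] = [
  { id: "team-1", sequenceId: null, shouldUpdateSequenceId: true },
  { id: "team-2", sequenceId: 4n, shouldUpdateSequenceId: false },
  { id: "team-3", sequenceId: null, shouldUpdateSequenceId: true },
];
runSequencerPass(teams, 5n);
// team-1 and team-3 receive sequenceIds 5 and 6; team-2 is untouched
```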

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Suggested reviewers

  • N2D4
  • nams1570

"🐰
I hopped through rows both big and small,
added IDs and flags to track them all.
ClickHouse tables stretch and gleam,
sequencer hums its steady dream.
Deletions noted, syncs set right — a carrot-coded night!"

🚥 Pre-merge checks | ❌ 3 failed (2 warnings, 1 inconclusive)

  • Description check (⚠️ Warning): The pull request description is empty except for the CONTRIBUTING.md template reminder, providing no meaningful information about the changes, objectives, or rationale. Resolution: add a detailed description explaining the PR objectives, key changes across database migrations, schema updates, sync handlers, and test coverage, along with any relevant context or breaking changes.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 13.51%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check (❓ Inconclusive): The title "various new clickhouse syncs" is vague and generic, and does not clearly convey the main purpose or scope of the extensive changes. Resolution: provide a more specific and descriptive title, such as "Add external DB sync support for teams, permissions, and other entities" or "Implement comprehensive ClickHouse sync mappings and backfill logic".

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@greptile-apps
Contributor

greptile-apps Bot commented Mar 18, 2026

Greptile Summary

This PR improves the ClickHouse sync speed by removing the ensureClickhouseSchema DDL call (CREATE TABLE IF NOT EXISTS) that was executed on every mapping per sync invocation. Since the target tables are internal and pre-created, the call was always a no-op but still incurred a slow DDL round-trip to ClickHouse for every mapping.

Key changes:

  • Removes the ensureClickhouseSchema call in syncClickhouseMapping, eliminating one DDL network round-trip per mapping per sync cycle.
  • Adds an explanatory comment justifying the removal.
  • Side effect: tableSchema (lines 1424–1427) is now declared, validated, and guarded by a throw when missing, but it is never actually read, making it dead code that should be removed alongside the deleted call.

Confidence Score: 4/5

  • Safe to merge with a minor cleanup — the optimization is sound but leaves behind dead code.
  • The change is a straightforward single-line removal with a clear performance rationale. The only real issue is the now-unused tableSchema variable and its associated null-guard, which are dead code. There is no functional regression risk since the removed DDL call was always a no-op against pre-existing tables. The assumption that tables always exist is reasonable for an internal ClickHouse instance, but adds a small operational risk if a new mapping is ever added without a corresponding migration.
  • No files require special attention beyond the dead-code cleanup in apps/backend/src/lib/external-db-sync.ts.

Important Files Changed

Filename | Overview
apps/backend/src/lib/external-db-sync.ts | Removes the ensureClickhouseSchema DDL round-trip call in syncClickhouseMapping for a performance win; leaves tableSchema declared and validated but now entirely unused (dead code that should be cleaned up).

Sequence Diagram

sequenceDiagram
    participant Sync as syncClickhouseMapping
    participant CH as ClickHouse

    Note over Sync,CH: Before this PR (per mapping, per invocation)
    Sync->>CH: ensureClickhouseSchema (CREATE TABLE IF NOT EXISTS — DDL round trip)
    Sync->>CH: getClickhouseLastSyncedSequenceId
    loop Batch loop
        Sync->>CH: pushRowsToClickhouse (INSERT)
        Sync->>CH: updateClickhouseSyncMetadata
    end

    Note over Sync,CH: After this PR (DDL call removed)
    Sync->>CH: getClickhouseLastSyncedSequenceId
    loop Batch loop
        Sync->>CH: pushRowsToClickhouse (INSERT)
        Sync->>CH: updateClickhouseSyncMetadata
    end

Comments Outside Diff (1)

  1. apps/backend/src/lib/external-db-sync.ts, line 1424-1427 (link)

    P2 Dead code — tableSchema is now unused

    Since ensureClickhouseSchema was removed, tableSchema is assigned and validated but never read anywhere in the function body. The null-guard that throws a StackAssertionError also has no effect because the value it protects is never consumed.

    This should either be removed entirely, or — if the intent is to keep a sanity check that the mapping is correctly configured — the comment should make that explicit. As-is, the dead assignment will confuse future readers.

    (Remove lines 1424-1427; the assertNonEmptyString(fetchQuery, ...) on line 1428 can stay as-is, since fetchQuery is still used.)

Last reviewed commit: "clickhouse improve s..."

@BilalG1 BilalG1 requested a review from N2D4 March 18, 2026 21:41
@BilalG1 BilalG1 assigned N2D4 and unassigned BilalG1 Mar 18, 2026
Comment thread apps/backend/src/lib/external-db-sync.ts Outdated
@github-actions github-actions Bot assigned BilalG1 and unassigned N2D4 Mar 21, 2026
@N2D4 N2D4 changed the base branch from clickhouse-sync-sessions-and-connected-accounts to dev March 22, 2026 01:37
@N2D4 N2D4 changed the title clickhouse improve sync speed various new clickhouse syncs Mar 22, 2026
@N2D4 N2D4 self-requested a review March 22, 2026 01:38
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 18

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
apps/backend/src/app/api/latest/users/crud.tsx (1)

1193-1203: ⚠️ Potential issue | 🟠 Major

Wrap the refresh-token sync write and deleteMany() together.

These tombstones are recorded before the delete and outside any transaction. If projectUserRefreshToken.deleteMany() fails, external-db-sync will emit deletions for tokens that are still present; please execute both calls inside one globalPrismaClient.$transaction(...).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/app/api/latest/users/crud.tsx` around lines 1193 - 1203,
Wrap the tombstone write and the actual deletion in a single transaction so the
external-db-sync deletions are only emitted if the DB delete succeeds: move the
calls to recordExternalDbSyncRefreshTokenDeletionsForUser(...) and
globalPrismaClient.projectUserRefreshToken.deleteMany(...) into a single
globalPrismaClient.$transaction(...) and run them there (ensure you pass both
operations as transactional actions so they either both commit or both roll
back). Make sure to preserve the same parameters (auth.tenancy.id and
params.user_id) and return/await the transaction result as appropriate.
apps/backend/src/app/api/latest/auth/sessions/current/route.tsx (1)

37-48: ⚠️ Potential issue | 🔴 Critical

Don't record the refresh-token tombstone before the delete outside a transaction.

recordExternalDbSyncDeletion() throws when it inserts 0 DeletedRow rows for ProjectUserRefreshToken, so an already-missing token now fails here before deleteMany() can translate it to KnownErrors.RefreshTokenNotFoundOrExpired(). It also commits the tombstone before the actual delete, so a later delete failure leaves external-db-sync ahead of the source DB. Please run both operations in one globalPrismaClient transaction and preserve the 0-row mapping.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/app/api/latest/auth/sessions/current/route.tsx` around lines
37 - 48, Run the delete and the tombstone insertion inside a single Prisma
transaction so the tombstone cannot be committed without the source delete: wrap
globalPrismaClient.projectUserRefreshToken.deleteMany(...) and the call to
recordExternalDbSyncDeletion(...) in a single
globalPrismaClient.$transaction(...) and ensure you pass the delete result (the
deleted count) or an explicit expected-deleted-count to
recordExternalDbSyncDeletion so it can insert the 0-row mapping instead of
throwing; if needed, update recordExternalDbSyncDeletion to accept an
expectedCount/allowZero flag and avoid throwing when expectedCount === 0 while
still inserting the corresponding DeletedRow record for tenancy.id and
refreshTokenId.
apps/backend/src/lib/permissions.tsx (2)

433-460: ⚠️ Potential issue | 🟠 Major

Record team-permission tombstones before deleteMany.

The project-scope branch now records external deletions, but the team-scope branch still bulk-deletes TeamMemberDirectPermission rows directly. Deleting a team permission definition will therefore leave stale team_permissions rows in external DBs.

Possible fix
   if (options.scope === "team") {
+    const teamPermissions = await sourceOfTruthTx.teamMemberDirectPermission.findMany({
+      where: {
+        tenancyId: options.tenancy.id,
+        permissionId: options.permissionId,
+      },
+      select: { id: true },
+    });
+    for (const perm of teamPermissions) {
+      await recordExternalDbSyncDeletion(sourceOfTruthTx, {
+        tableName: "TeamMemberDirectPermission",
+        tenancyId: options.tenancy.id,
+        permissionDbId: perm.id,
+      });
+    }
     await sourceOfTruthTx.teamMemberDirectPermission.deleteMany({
       where: {
         tenancyId: options.tenancy.id,
         permissionId: options.permissionId,
       },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/lib/permissions.tsx` around lines 433 - 460, The team-scope
branch currently calls
sourceOfTruthTx.teamMemberDirectPermission.deleteMany(...) directly and thus
skips recording tombstones; mirror the project-scope logic by first fetching
matching teamMemberDirectPermission rows (use
sourceOfTruthTx.teamMemberDirectPermission.findMany with where { tenancyId:
options.tenancy.id, permissionId: options.permissionId } and select { id: true
}), loop over each result and call recordExternalDbSyncDeletion(sourceOfTruthTx,
{ tableName: "TeamMemberDirectPermission", tenancyId: options.tenancy.id,
permissionDbId: perm.id }), then call the existing
teamMemberDirectPermission.deleteMany(...) to remove them.

330-348: ⚠️ Potential issue | 🟠 Major

Mark permission renames for resync.

These updateMany calls change permissionId in place but leave shouldUpdateSequenceId untouched, so synced team_permissions / project_permissions rows can keep the old permission id until some unrelated write touches them.

Possible fix
   await sourceOfTruthTx.teamMemberDirectPermission.updateMany({
     where: {
       tenancyId: options.tenancy.id,
       permissionId: options.oldId,
     },
-    data: {
-      permissionId: newId,
-    },
+    data: withExternalDbSyncUpdate({
+      permissionId: newId,
+    }),
   });

   await sourceOfTruthTx.projectUserDirectPermission.updateMany({
     where: {
       tenancyId: options.tenancy.id,
       permissionId: options.oldId,
     },
-    data: {
-      permissionId: newId,
-    },
+    data: withExternalDbSyncUpdate({
+      permissionId: newId,
+    }),
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/lib/permissions.tsx` around lines 330 - 348, The updateMany
calls on sourceOfTruthTx.teamMemberDirectPermission.updateMany and
sourceOfTruthTx.projectUserDirectPermission.updateMany only change permissionId
and must also mark those rows for resync; update the data payload in both calls
to set shouldUpdateSequenceId (e.g., to null or the sentinel value your sync
uses) alongside permissionId: newId so the downstream sync will re-evaluate and
propagate the renamed permission immediately.
apps/backend/src/app/api/latest/internal/external-db-sync/status/route.ts (1)

236-283: ⚠️ Potential issue | 🟠 Major

Include contact_channels in mapping_status.

Line 369 still computes contactChannelStats, but this helper only emits users and email_outboxes. As a result, sync_engine.mappings and external_databases[].mapping_status will never report backlog for contact_channels, even though the endpoint already exposes sequencer.contact_channels.

Also applies to: 368-378

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/app/api/latest/internal/external-db-sync/status/route.ts`
around lines 236 - 283, buildMappingInternalStats currently only sets mappings
for "users" and "email_outboxes", so "contact_channels" never appears in
mapping_status; add a new mapping entry for contact_channels using the existing
contactChannelStats (and corresponding deletedRowsByTable lookup for
ContactChannel similar to deletedProjectUserStats) and compute
internal_min_sequence_id/internal_max_sequence_id/internal_pending_count with
minBigIntString, maxBigIntString and addBigIntStrings just like you do for
"users", then ensure mappingInternalStats.set("contact_channels", { mapping_id:
"contact_channels", internal_min_sequence_id: ..., internal_max_sequence_id:
..., internal_pending_count: ... }) so contact_channels shows up in mappings and
mappingStatuses.
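The minBigIntString / maxBigIntString / addBigIntStrings helpers named in this prompt presumably operate on sequence ids serialized as decimal strings, since those values can exceed Number.MAX_SAFE_INTEGER. A plausible sketch (the actual implementations in the repo may differ):

```typescript
// Hypothetical sketches of the helpers referenced above: sequence ids are
// carried as decimal strings, so comparisons and sums route through native
// BigInt instead of lossy Number arithmetic.
function minBigIntString(a: string, b: string): string {
  return BigInt(a) < BigInt(b) ? a : b;
}

function maxBigIntString(a: string, b: string): string {
  return BigInt(a) > BigInt(b) ? a : b;
}

function addBigIntStrings(a: string, b: string): string {
  return (BigInt(a) + BigInt(b)).toString();
}

// e.g. addBigIntStrings("9007199254740993", "1") === "9007199254740994",
// a sum that plain Number arithmetic would round incorrectly
```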
🧹 Nitpick comments (1)
apps/backend/scripts/run-cron-jobs.ts (1)

33-33: Make the 1s loop test-only or configurable.

This now drives both internal cron endpoints at a fixed 1Hz in every environment. That's a large steady-state traffic increase over the old jittered delay, and removing the jitter also makes replica bursts line up again. An env-controlled fast path would preserve the test-speed win without baking the hot polling cadence into normal deployments.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/scripts/run-cron-jobs.ts` at line 33, The loop currently
awaiting wait(1000) forces a 1s fixed poll in all environments; change it to use
a configurable interval and restore non-deterministic/backoff behavior for
production. Replace the hardcoded wait(1000) in run-cron-jobs.ts with a value
derived from an env var (e.g., CRON_POLL_MS or a boolean TEST_FAST_CRON) so
tests can opt into 1s polling while normal runs use a larger, jittered/default
interval; or compute the delay when the loop starts (use wait(getCronDelayMs()))
where getCronDelayMs() returns test-configured value if
process.env.TEST_FAST_CRON is set, otherwise returns a randomized/backoff delay
to avoid aligned bursts. Ensure references to wait(...) and the loop calling it
are updated and documented in env usage.
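An env-gated delay along the lines this comment suggests could look like the following. TEST_FAST_CRON, CRON_POLL_MS, and the 10s default base are the comment's hypothetical names and assumed values, not existing configuration:

```typescript
// Sketch of the suggested fix: tests opt into the fixed 1s cadence via an env
// var, while normal runs keep a longer jittered delay so replica polling does
// not line up in aligned bursts. All names and defaults are assumptions.
function getCronDelayMs(env: Record<string, string | undefined>): number {
  if (env.TEST_FAST_CRON) return 1000;
  const base = Number(env.CRON_POLL_MS ?? 10_000);
  return base + Math.floor(Math.random() * 5000); // jitter: [base, base + 5s)
}

// In the cron loop, the fixed wait(1000) would become:
//   await wait(getCronDelayMs(process.env));
```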
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/backend/prisma/migrations/20260316000000_add_team_team_member_sequence_columns/migration.sql`:
- Around line 5-12: This migration creates several indexes for Team
(Team_sequenceId_key, Team_tenancyId_sequenceId_idx,
Team_shouldUpdateSequenceId_idx) (and similarly for TeamMember at lines 18-25)
inside the Prisma transaction which will block writes on large tables; split
these index creations into one or more follow-up migrations that run OUTSIDE the
migration transaction and build the indexes CONCURRENTLY (use CREATE INDEX
CONCURRENTLY / CREATE UNIQUE INDEX CONCURRENTLY where appropriate), and add the
CONDITIONALLY_REPEAT_MIGRATION_SENTINEL pattern and any temporary marker columns
needed to perform the change safely on >1,000,000-row tables so the original
transactional migration no longer includes these plain index builds.

In
`@apps/backend/prisma/migrations/20260316000001_add_email_outbox_sequence_columns/migration.sql`:
- Around line 5-12: The three indexes on EmailOutbox
("EmailOutbox_sequenceId_key", "EmailOutbox_tenancyId_sequenceId_idx",
"EmailOutbox_shouldUpdateSequenceId_idx") must not be created inside this
transactional migration; instead remove these CREATE INDEX statements from this
migration and add separate non-transactional/concurrent migration files
following the repo's large-table pattern (use CREATE INDEX CONCURRENTLY and the
project’s concurrent/sentinel migration flow) so the index builds run outside a
transaction and as their own migration steps for the EmailOutbox table.

In
`@apps/backend/prisma/migrations/20260316000002_add_session_replay_sequence_columns/migration.sql`:
- Around line 5-12: The migration currently creates three indexes
("SessionReplay_sequenceId_key", "SessionReplay_tenancyId_sequenceId_idx",
"SessionReplay_shouldUpdateSequenceId_idx") inside the transactional migration
on table "SessionReplay"; remove these CREATE INDEX statements from this
migration and instead add one or more follow-up migrations that build the
indexes using CREATE INDEX CONCURRENTLY (or the DB-specific non-blocking
concurrent option), ensuring you follow the repository's safe-migration pattern
(use CONDITIONALLY_REPEAT_MIGRATION_SENTINEL and any temporary marking columns
needed) so the index builds run outside the transaction and can be retried
safely for large tables.

In
`@apps/backend/prisma/migrations/20260317000000_add_team_permission_invitation_sequence_columns/migration.sql`:
- Around line 5-22: The current migration creates heavy indexes inline
(TeamMemberDirectPermission_sequenceId_key,
TeamMemberDirectPermission_shouldUpdateSequenceId_idx,
TeamMemberDirectPermission_tenancyId_sequenceId_idx,
VerificationCode_sequenceId_key,
VerificationCode_shouldUpdateSequenceId_type_idx) inside a transaction which can
block large tables; instead keep only safe DDL in this migration (e.g., ALTER
TABLE "VerificationCode" ADD COLUMN "sequenceId" BIGINT and ADD COLUMN
"shouldUpdateSequenceId" BOOLEAN DEFAULT true) and move all index creation into
separate migration files that follow the large-table/sentinel pattern: create a
sentinel column if needed, run CREATE INDEX CONCURRENTLY ... for each index (use
the exact index names above), then run a small follow-up migration to set the
sentinel flags/visibility and finally drop the sentinel column; ensure the
concurrent index migrations are not wrapped in a transaction (use CREATE INDEX
CONCURRENTLY) and split them into distinct migration sql files.

In
`@apps/backend/prisma/migrations/20260317000001_add_project_permission_notification_preference_sequence_columns/migration.sql`:
- Around line 5-25: This migration adds two new columns ("sequenceId",
"shouldUpdateSequenceId") to UserNotificationPreference and then creates six
indexes (ProjectUserDirectPermission_sequenceId_key,
ProjectUserDirectPermission_shouldUpdateSequenceId_idx,
ProjectUserDirectPermission_tenancyId_sequenceId_idx,
UserNotificationPreference_sequenceId_key,
UserNotificationPreference_shouldUpdateSequenceId_idx,
UserNotificationPreference_tenancyId_sequenceId_idx) inside the same
transactional migration, which risks long-running locks on large tables; split
this up by keeping only the fast, transactional schema change (ALTER TABLE to
add the "sequenceId" and "shouldUpdateSequenceId" columns in the current
migration) and move all CREATE INDEX statements into one or more separate
non-transactional large-table migrations following the repo's
concurrent/sentinel pattern (use CREATE INDEX CONCURRENTLY in those migrations
and the repo’s sentinel workflow for safe rollouts) so the index builds run
outside the transaction for ProjectUserDirectPermission and
UserNotificationPreference.

In
`@apps/backend/prisma/migrations/20260318000000_add_sequence_id_to_refresh_tokens_and_oauth_accounts/migration.sql`:
- Around line 5-25: This migration must be split into safe, non-transactional
index-creation steps: remove the six CREATE INDEX / CREATE UNIQUE INDEX
statements from this transactional file (the lines creating
ProjectUserRefreshToken_sequenceId_key,
ProjectUserRefreshToken_shouldUpdateSequenceId_idx,
ProjectUserRefreshToken_tenancyId_sequenceId_idx,
ProjectUserOAuthAccount_sequenceId_key,
ProjectUserOAuthAccount_shouldUpdateSequenceId_idx,
ProjectUserOAuthAccount_tenancyId_sequenceId_idx) and keep only the ALTER TABLE
that adds sequenceId and shouldUpdateSequenceId; then add separate migration
files that (a) use the repo’s CONDITIONALLY_REPEAT_MIGRATION_SENTINEL pattern
and a temporary marking column if needed, (b) run outside the transaction or
explicitly create indexes CONCURRENTLY on ProjectUserRefreshToken and
ProjectUserOAuthAccount (creating the unique index on sequenceId and the two
non-unique indexes referencing shouldUpdateSequenceId and tenancyId), and (c)
ensure each index build is in its own migration to avoid Prisma transaction
timeouts.

In `@apps/backend/src/app/api/latest/auth/password/update/route.tsx`:
- Around line 82-88: Wrap the tombstone recording and the deleteMany into a
single serializable Prisma transaction so they commit or abort together: call
globalPrismaClient.$transaction([...], { isolationLevel: 'Serializable' }) and
inside it invoke recordExternalDbSyncRefreshTokenDeletionsForUser (passing
tenancyId, projectUserId and excludeRefreshToken) and then
globalPrismaClient.projectUserRefreshToken.deleteMany with the same predicate;
this ensures the recorded tombstones and the deleted set are aligned and
prevents phantom-insert windows.

In `@apps/backend/src/app/api/latest/auth/sessions/crud.tsx`:
- Around line 75-81: The tombstone insert (recordExternalDbSyncDeletion) is
currently committed before deleting the refresh token, risking inconsistent
external sync if the delete fails; wrap both operations in a single Prisma
transaction so they succeed or fail together. Modify the code to run
recordExternalDbSyncDeletion and
globalPrismaClient.projectUserRefreshToken.deleteMany inside a single
transaction (e.g., use globalPrismaClient.$transaction with an async callback or
pass both statements to $transaction) so the tombstone insert and delete are
atomic and reference the same tenancyId/refreshTokenId parameters.

In
`@apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts`:
- Around line 157-174: The UPDATE's subquery collapses changed teams to only
{projectId, branchId} (changed_teams), causing EVERY TEAM_INVITATION in that
tenancy to be marked; modify the subquery and WHERE join to preserve team-level
specificity by selecting "Team"."id" (or the appropriate team identifier used on
VerificationCode, e.g., "teamId") in changed_teams and add a matching condition
like AND "VerificationCode"."teamId" = changed_teams."id" (or the actual column
name) so only verification codes for the specific changed teams are toggled;
update the SELECT DISTINCT and the WHERE clause that references changed_teams
accordingly (symbols: "Team", "Tenancy", changed_teams, "VerificationCode").
- Around line 83-98: The cascade UPDATE currently re-queries all sequenced
ProjectUser rows per tenancy; change it to join only the exact (tenancyId,
projectUserId) pairs produced by the first CTE (the rows_to_update result) so
TeamMember rows are marked only for users updated in this batch. Locate the
globalPrismaClient.$executeRaw block that updates "TeamMember" and replace the
FROM subquery that selects from "ProjectUser" with a join to the first CTE (the
rows_to_update CTE name used earlier in this query) or propagate the updated
(tenancyId, projectUserId) pairs into a temp/table expression and join on those
pairs, keeping the existing filters (shouldUpdateSequenceId = FALSE, sequenceId
IS NOT NULL) but applying them only to the rows_to_update set.

In `@apps/backend/src/lib/external-db-sync.ts`:
- Around line 274-302: The permission tombstone INSERT currently filters only by
permission id, which can record a deletion for the wrong tenant; update the
WHERE clause in the tx.$executeRaw calls for the "TeamMemberDirectPermission"
branch (and the similar branch at lines 312-339) to include tenancy ID filtering
by adding AND "tenancyId" = ${target.tenancyId}::uuid, and ensure
target.tenancyId is asserted/validated (target.tenancyId /
target.permissionDbId) before running tx.$executeRaw so the lookup only affects
the intended tenant.
- Around line 454-485: The INSERT builds tombstones for VerificationCode but
scopes Tenancy only by projectId/branchId, so sibling tenancies can produce
extra rows; ensure the query restricts to the intended tenancy by validating and
using target.tenancyId and adding a Tenancy.id filter. Add a validation for
target.tenancyId (e.g., assertUuid or assertNonEmptyString depending on type)
and modify the tx.$executeRaw SQL (the JOIN/WHERE involving "Tenancy" and
"VerificationCode") to include AND "Tenancy"."id" = ${target.tenancyId} (or move
it into the JOIN condition) so the DeletedRow insert creates at most one row for
the specified tenancy; keep references to VerificationCode, Tenancy, DeletedRow,
and target.verificationCodeProjectId/verificationCodeBranchId/verificationCodeId
when making the change.
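A minimal validation helper of the kind these prompts suggest might look like the following; `assertUuid` is a hypothetical name taken from the review text, not necessarily the codebase's actual helper:

```typescript
// Validate an id before interpolating it into a raw tombstone query, so the
// write can only ever target the intended tenant.
function assertUuid(value: unknown, name: string): asserts value is string {
  const uuidPattern = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
  if (typeof value !== "string" || !uuidPattern.test(value)) {
    throw new Error(`${name} must be a UUID, got: ${String(value)}`);
  }
}
```

Called as `assertUuid(target.tenancyId, "tenancyId")` immediately before the `tx.$executeRaw` that adds the `AND "tenancyId" = ...` filter.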

In `@apps/backend/src/route-handlers/verification-code-handler.tsx`:
- Around line 280-299: The external DB tombstone write
(recordExternalDbSyncDeletion) is done before and outside the verificationCode
delete, which can lead to inconsistent state if the delete fails; wrap the
conditional tombstone call and the globalPrismaClient.verificationCode.delete
together inside a single globalPrismaClient.$transaction so both operations
succeed or fail atomically (move the call to recordExternalDbSyncDeletion into
the transaction callback alongside the delete, using the same transaction
client), referencing recordExternalDbSyncDeletion,
globalPrismaClient.verificationCode.delete and globalPrismaClient.$transaction
to locate the changes.
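The all-or-nothing property being requested can be illustrated with an in-memory stand-in for a transaction: operations run against a working copy, and only a fully successful result is committed. This is a sketch of the pattern, not Prisma's API:

```typescript
// Apply every operation to a copy of the state; if any operation throws, the
// caller keeps the original state, mirroring how a DB transaction discards
// partial writes on failure.
function runTransaction(state: string[], ops: Array<(s: string[]) => void>): string[] {
  const working = [...state];
  for (const op of ops) op(working); // a throw here aborts before commit
  return working;
}
```

With the tombstone insert and the verification-code delete passed as two `ops`, a failing delete also discards the tombstone.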

In `@apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts`:
- Around line 699-741: Add a post-membership user-update case to the "TeamMember
CRUD sync (Postgres)" test: after creating the membership (the addMemberResponse
and waitForSyncedTeamMember(client, teamId, user.userId) steps), update the
ProjectUser/User record (e.g., change display_name and/or primary_email via
User.update or the API using niceBackendFetch), then wait for the sync (reuse
waitForSyncedTeamMember or add a new wait helper) and query
"team_member_profiles" to assert the row reflects the updated
display_name/primary_email; finally, proceed to the existing remove-member and
waitForSyncedTeamMemberDeletion checks so the test covers insert,
post-membership update, and deletion.

In `@packages/stack-shared/src/config/db-sync-mappings.ts`:
- Around line 303-320: The tombstone rows are emitting NULL into non-null
ClickHouse columns which causes the first deletion sync to fail; update the
affected table definitions so deleted-row placeholders can be written by either
making the problematic columns nullable (e.g., LowCardinality(Nullable(String))
or Nullable(String)) or giving them non-null defaults (e.g., DEFAULT '') as
appropriate. Specifically, modify the analytics_internal.contact_channels CREATE
TABLE to allow NULL/default for contact_channels.type and contact_channels.value
(and likewise adjust teams.display_name and connected_accounts.provider and
connected_accounts.provider_account_id in their respective CREATE TABLEs
referenced in the review) so tombstone rows no longer violate the schema while
preserving existing ordering/partitioning keys.
- Around line 1111-1132: The query currently scopes by "Tenancy" via
projectId/branchId which can include sibling tenancies; instead make the "Team"
join authoritative by changing the LEFT JOIN "Team" to an INNER JOIN and adding
a tenancy filter on the team (e.g. add AND "Team"."tenancyId" = $1::uuid to the
"Team" ON clause) so rows are only returned for teams that belong to the target
tenancy; apply the same change to the other query block that starts at the
second occurrence (around lines 1160-1179) to ensure both queries are scoped by
"Team"."tenancyId" rather than only by "Tenancy".
- Around line 819-826: The mapping for last_active_at_millis incorrectly
extracts the createdAt timestamp; update the CASE expression in db-sync-mappings
so that when "ProjectUser"."lastActiveAt" IS NOT NULL you EXTRACT(EPOCH FROM
"ProjectUser"."lastActiveAt") * 1000 instead of using "ProjectUser"."createdAt"
(keep the existing NULL branch and overall structure that builds the "user"
jsonb). This change targets the CASE producing "last_active_at_millis" in the
db-sync-mappings file.
- Line 667: The team_member_profiles sync currently lists ProjectUser in
sourceTables but always advances the cursor using
TeamMember.sequenceId/sequence_id, so updates to ProjectUser never move its
cursor and embedded user payloads go stale; update the team_member_profiles
fetch/advance logic (where you reference sourceTables, TeamMember, ProjectUser,
sequenceId/sequence_id) to advance the cursor based on the originating source:
when the row came from ProjectUser use ProjectUser.sequenceId/sequence_id, when
from TeamMember use TeamMember.sequenceId/sequence_id (or maintain separate
per-source cursors), and ensure the SELECT/WHERE and subsequent cursor-update
statements reference the correct sequence column so ProjectUser changes trigger
re-sync of the embedded user payload.
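The per-source cursor fix described in the last item can be sketched as plain bookkeeping; the row shape is an assumption for illustration, and numeric cursors stand in for the real bigint sequence ids:

```typescript
type SyncedRow = { source: "TeamMember" | "ProjectUser", sequenceId: number };

// Advance a separate cursor per source table so that a ProjectUser change is
// never skipped just because TeamMember's cursor already moved past it.
function advanceCursors(
  rows: SyncedRow[],
  cursors: Record<string, number>,
): Record<string, number> {
  const next = { ...cursors };
  for (const r of rows) {
    if (r.sequenceId > (next[r.source] ?? -1)) next[r.source] = r.sequenceId;
  }
  return next;
}
```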

---

Outside diff comments:
In `@apps/backend/src/app/api/latest/auth/sessions/current/route.tsx`:
- Around line 37-48: Run the delete and the tombstone insertion inside a single
Prisma transaction so the tombstone cannot be committed without the source
delete: wrap globalPrismaClient.projectUserRefreshToken.deleteMany(...) and the
call to recordExternalDbSyncDeletion(...) in a single
globalPrismaClient.$transaction(...) and ensure you pass the delete result (the
deleted count) or an explicit expected-deleted-count to
recordExternalDbSyncDeletion so it can insert the 0-row mapping instead of
throwing; if needed, update recordExternalDbSyncDeletion to accept an
expectedCount/allowZero flag and avoid throwing when expectedCount === 0 while
still inserting the corresponding DeletedRow record for tenancy.id and
refreshTokenId.

In `@apps/backend/src/app/api/latest/internal/external-db-sync/status/route.ts`:
- Around line 236-283: buildMappingInternalStats currently only sets mappings
for "users" and "email_outboxes", so "contact_channels" never appears in
mapping_status; add a new mapping entry for contact_channels using the existing
contactChannelStats (and corresponding deletedRowsByTable lookup for
ContactChannel similar to deletedProjectUserStats) and compute
internal_min_sequence_id/internal_max_sequence_id/internal_pending_count with
minBigIntString, maxBigIntString and addBigIntStrings just like you do for
"users", then ensure mappingInternalStats.set("contact_channels", { mapping_id:
"contact_channels", internal_min_sequence_id: ..., internal_max_sequence_id:
..., internal_pending_count: ... }) so contact_channels shows up in mappings and
mappingStatuses.
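The shape of the per-mapping entry described above might be computed as follows; the field names mirror the review text, while the numeric sequence ids (in place of the bigint-string helpers) are a simplification for illustration:

```typescript
type MappingInternalStat = {
  mapping_id: string,
  internal_min_sequence_id: string | null,
  internal_max_sequence_id: string | null,
  internal_pending_count: string,
};

// Compute min/max/pending for one mapping from its pending sequence ids,
// analogous to what the "users" branch already does with minBigIntString etc.
function buildStat(mappingId: string, pendingSequenceIds: number[]): MappingInternalStat {
  if (pendingSequenceIds.length === 0) {
    return {
      mapping_id: mappingId,
      internal_min_sequence_id: null,
      internal_max_sequence_id: null,
      internal_pending_count: "0",
    };
  }
  return {
    mapping_id: mappingId,
    internal_min_sequence_id: Math.min(...pendingSequenceIds).toString(),
    internal_max_sequence_id: Math.max(...pendingSequenceIds).toString(),
    internal_pending_count: pendingSequenceIds.length.toString(),
  };
}
```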

In `@apps/backend/src/app/api/latest/users/crud.tsx`:
- Around line 1193-1203: Wrap the tombstone write and the actual deletion in a
single transaction so the external-db-sync deletions are only emitted if the DB
delete succeeds: move the calls to
recordExternalDbSyncRefreshTokenDeletionsForUser(...) and
globalPrismaClient.projectUserRefreshToken.deleteMany(...) into a single
globalPrismaClient.$transaction(...) and run them there (ensure you pass both
operations as transactional actions so they either both commit or both roll
back). Make sure to preserve the same parameters (auth.tenancy.id and
params.user_id) and return/await the transaction result as appropriate.

In `@apps/backend/src/lib/permissions.tsx`:
- Around line 433-460: The team-scope branch currently calls
sourceOfTruthTx.teamMemberDirectPermission.deleteMany(...) directly and thus
skips recording tombstones; mirror the project-scope logic by first fetching
matching teamMemberDirectPermission rows (use
sourceOfTruthTx.teamMemberDirectPermission.findMany with where { tenancyId:
options.tenancy.id, permissionId: options.permissionId } and select { id: true
}), loop over each result and call recordExternalDbSyncDeletion(sourceOfTruthTx,
{ tableName: "TeamMemberDirectPermission", tenancyId: options.tenancy.id,
permissionDbId: perm.id }), then call the existing
teamMemberDirectPermission.deleteMany(...) to remove them.
- Around line 330-348: The updateMany calls on
sourceOfTruthTx.teamMemberDirectPermission.updateMany and
sourceOfTruthTx.projectUserDirectPermission.updateMany only change permissionId
and must also mark those rows for resync; update the data payload in both calls
to set shouldUpdateSequenceId (e.g., to null or the sentinel value your sync
uses) alongside permissionId: newId so the downstream sync will re-evaluate and
propagate the renamed permission immediately.
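An in-memory sketch of the tombstone-then-delete sequence the first item asks to mirror; the row shape here is illustrative, not the Prisma model:

```typescript
type PermissionRow = { id: string, tenancyId: string, permissionId: string };

// Record a tombstone id for every matching row before removing it, so the
// downstream sync can observe the deletion.
function deleteWithTombstones(
  rows: PermissionRow[],
  tenancyId: string,
  permissionId: string,
  tombstones: string[],
): PermissionRow[] {
  const matching = rows.filter((r) => r.tenancyId === tenancyId && r.permissionId === permissionId);
  for (const row of matching) tombstones.push(row.id); // tombstone first
  return rows.filter((r) => !matching.includes(r)); // then delete
}
```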

---

Nitpick comments:
In `@apps/backend/scripts/run-cron-jobs.ts`:
- Line 33: The loop currently awaiting wait(1000) forces a 1s fixed poll in all
environments; change it to use a configurable interval and restore
non-deterministic/backoff behavior for production. Replace the hardcoded
wait(1000) in run-cron-jobs.ts with a value derived from an env var (e.g.,
CRON_POLL_MS or a boolean TEST_FAST_CRON) so tests can opt into 1s polling while
normal runs use a larger, jittered/default interval; or compute the delay when
the loop starts (use wait(getCronDelayMs())) where getCronDelayMs() returns
test-configured value if process.env.TEST_FAST_CRON is set, otherwise returns a
randomized/backoff delay to avoid aligned bursts. Ensure references to wait(...)
and the loop calling it are updated and documented in env usage.
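One way to realize the configurable interval, sketched with the hypothetical env-var names (`TEST_FAST_CRON`, `CRON_POLL_MS`) taken from the comment:

```typescript
// Fast fixed polling when tests opt in; otherwise a jittered delay around
// CRON_POLL_MS (default 10s) so workers don't fire in aligned bursts.
function getCronDelayMs(env: Record<string, string | undefined>): number {
  if (env.TEST_FAST_CRON) return 1000;
  const base = Number(env.CRON_POLL_MS ?? "10000");
  return Math.floor(base + Math.random() * base * 0.2);
}
```

The loop would then call `wait(getCronDelayMs(process.env))` instead of the hardcoded `wait(1000)`.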

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d3c429cf-f64b-4a0a-9826-3cec8b2c92f7

📥 Commits

Reviewing files that changed from the base of the PR and between 1d00ed2 and b969f45.

📒 Files selected for processing (29)
  • apps/backend/prisma/migrations/20260316000000_add_team_team_member_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260316000001_add_email_outbox_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260316000002_add_session_replay_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260317000000_add_team_permission_invitation_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260317000001_add_project_permission_notification_preference_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260318000000_add_sequence_id_to_refresh_tokens_and_oauth_accounts/migration.sql
  • apps/backend/prisma/schema.prisma
  • apps/backend/scripts/clickhouse-migrations.ts
  • apps/backend/scripts/run-cron-jobs.ts
  • apps/backend/src/app/api/latest/auth/password/update/route.tsx
  • apps/backend/src/app/api/latest/auth/sessions/crud.tsx
  • apps/backend/src/app/api/latest/auth/sessions/current/route.tsx
  • apps/backend/src/app/api/latest/emails/notification-preference/crud.tsx
  • apps/backend/src/app/api/latest/emails/outbox/crud.tsx
  • apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts
  • apps/backend/src/app/api/latest/internal/external-db-sync/status/route.ts
  • apps/backend/src/app/api/latest/oauth-providers/crud.tsx
  • apps/backend/src/app/api/latest/session-replays/batch/route.tsx
  • apps/backend/src/app/api/latest/team-memberships/crud.tsx
  • apps/backend/src/app/api/latest/teams/crud.tsx
  • apps/backend/src/app/api/latest/users/crud.tsx
  • apps/backend/src/lib/email-queue-step.tsx
  • apps/backend/src/lib/external-db-sync.ts
  • apps/backend/src/lib/permissions.tsx
  • apps/backend/src/route-handlers/verification-code-handler.tsx
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts
  • apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts
  • packages/stack-shared/src/config/db-sync-mappings.ts

Contributor

@N2D4 N2D4 left a comment


should be good, but please create the verify-data-integrity stuff in a separate PR

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/backend/src/app/api/latest/internal/external-db-sync/status/route.ts (1)

252-291: ⚠️ Potential issue | 🟠 Major

Populate mappingStatuses for the rest of the new sync mappings.

This function still emits internal checkpoints for only users and email_outboxes. Since packages/stack-shared/src/config/db-sync-mappings.ts now also defines contact_channels, teams, team_member_profiles, team_permissions, team_invitations, project_permissions, notification_preferences, refresh_tokens, and connected_accounts, /internal/external-db-sync/status will under-report backlog for most of the new sync surfaces.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/app/api/latest/internal/external-db-sync/status/route.ts`
around lines 252 - 291, buildMappingInternalStats currently only emits entries
for "users" and "email_outboxes"; add entries for the missing mappings
(contact_channels, teams, team_member_profiles, team_permissions,
team_invitations, project_permissions, notification_preferences, refresh_tokens,
connected_accounts) by following the same pattern used for "users" and
"email_outboxes": for each mapping compute
internal_min_sequence_id/internal_max_sequence_id using
minBigIntString/maxBigIntString and internal_pending_count using
addBigIntStrings (or fall back to the deletedRowsByTable find for that
table_name when appropriate), then call
mappingInternalStats.set("<mapping_name>", { mapping_id: "<mapping_name>",
internal_min_sequence_id, internal_max_sequence_id, internal_pending_count }).
If you need dedicated SequenceStats per mapping, extend the
buildMappingInternalStats signature to accept them and use those variables (same
pattern as projectUsersStats/emailOutboxStats).
♻️ Duplicate comments (2)
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts (1)

699-741: ⚠️ Potential issue | 🟡 Minor

Cover the post-membership user update here.

This only proves membership insert/delete. team_member_profiles also changes when the underlying user changes, so a display-name or profile-image update after membership creation can still regress without tripping this test. As per coding guidelines: **/apps/e2e/**/*.{ts,tsx}: ALWAYS add new E2E tests when you change the API or SDK interface. Generally, err on the side of creating too many tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts`
around lines 699 - 741, The test currently only verifies team_member_profiles on
membership insert/delete; add a step to update the underlying user after
membership creation (e.g., update User via the same model used in the test:
User.update or modify the user instance created with User.create) and then call
waitForSyncedTeamMember(client, teamId, user.userId) again to assert that the
profile fields changed in the external DB (check display_name and/or
profile_image via client.query on "team_member_profiles"); keep the existing
membership-add and delete assertions and reuse functions like niceBackendFetch,
waitForSyncedTeamMember, and the client.query to validate the persisted user
update sync.
apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts (1)

140-157: ⚠️ Potential issue | 🟠 Major

Scope the invitation cascade to the updated teams.

changed_teams currently collapses to {projectId, branchId}, so backfilling one team flips shouldUpdateSequenceId for every TEAM_INVITATION row in that tenancy. That can create a lot of unnecessary re-sync churn on unrelated invites. Carry the updated (tenancyId, teamId) pairs forward and join on the invite’s team id instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts`
around lines 140 - 157, The cascade currently groups changed_teams by
projectId/branchId which causes any TEAM_INVITATION in the whole tenancy to be
marked; modify the subquery used in the raw SQL (the block that builds
changed_teams in route.ts) to SELECT DISTINCT tenancyId, teamId (or map
teamTenants to include both tenancyId and teamId) and then change the outer
WHERE to join VerificationCode on its teamId (or tenancyId AND teamId) instead
of only projectId/branchId; update the Prisma.join usage so the raw query
carries the (tenancyId, teamId) pairs and ensure the WHERE includes
"VerificationCode"."teamId" = changed_teams."teamId" to scope updates to only
invites for the changed teams.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/backend/scripts/clickhouse-migrations.ts`:
- Around line 57-62: The backfill can run before the row-format mutation
finishes because Promise.all launches
TOKEN_REFRESH_EVENT_ROW_FORMAT_MUTATION_SQL and
BACKFILL_REFRESH_TOKEN_ID_COLUMN_SQL concurrently; fix by either making
BACKFILL_REFRESH_TOKEN_ID_COLUMN_SQL tolerant to both data.refresh_token_id and
data.refreshTokenId (use a COALESCE/ifEmpty/JSON extract that checks both keys)
or by serializing the operations: await client.command({ query:
TOKEN_REFRESH_EVENT_ROW_FORMAT_MUTATION_SQL }) before calling client.command({
query: BACKFILL_REFRESH_TOKEN_ID_COLUMN_SQL }) instead of running them inside
Promise.all. Ensure you update the Promise.all block that references
TOKEN_REFRESH_EVENT_ROW_FORMAT_MUTATION_SQL and
BACKFILL_REFRESH_TOKEN_ID_COLUMN_SQL accordingly.
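The "tolerant to both keys" option can be sketched as a small extractor that accepts either spelling until the row-format mutation completes; the field names come from the comment above:

```typescript
// Read the refresh token id from either the old camelCase key or the new
// snake_case key, mirroring a COALESCE over both JSON paths.
function extractRefreshTokenId(data: Record<string, unknown>): string {
  const value = data["refresh_token_id"] ?? data["refreshTokenId"] ?? "";
  return typeof value === "string" ? value : "";
}
```

The alternative, serializing the two `client.command` calls with `await`, avoids the dual-key handling entirely at the cost of a longer migration.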

In
`@apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/external-db-sync/page-client.tsx`:
- Around line 548-549: The sequencer table is displaying rows using
status?.sequencer while loadStatus() is always called with scope='all', causing
a mix of global totals in the summary cards and tenancy-scoped rows in the
table; update the sequencer table to use the same scope source as the summary
cards by deriving the scope from the same variable/state/prop used when calling
loadStatus() (or change loadStatus to accept and use the current scope used by
the cards), and ensure the value passed into loadStatus() and the value used to
read status (status?.sequencer) are the identical scope so both cards and table
reflect the same tenancy scope (also apply same change to the other occurrences
around the second block that reads status?.sequencer).

In `@apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts`:
- Around line 1131-1174: Add a second team and a rename-after-invite scenario to
this TeamInvitation sync test: after creating the first team and sending its
invite (using niceBackendFetch and waitForSyncedTeamInvitation), create a second
team and send an invite for it, then rename the first team (PATCH/PUT the team
via the same API used elsewhere) and wait for the sequencer sync (reuse
waitForSyncedTeamInvitation) to assert the first invite's team_display_name
updated to the new name while the second invite's team_display_name remains
unchanged; also assert deletions still work using
waitForSyncedTeamInvitationDeletion and reference the existing helpers and
symbols like niceBackendFetch, waitForSyncedTeamInvitation,
waitForSyncedTeamInvitationDeletion, dbManager.getClient and the
"team_invitations" query to locate where to add these steps.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c659f109-0950-499e-babb-cccd2e4b297f

📥 Commits

Reviewing files that changed from the base of the PR and between b969f45 and 82fabef.

📒 Files selected for processing (21)
  • apps/backend/prisma/migrations/20260316000000_add_team_team_member_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260316000001_add_email_outbox_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260316000002_add_session_replay_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260317000000_add_team_permission_invitation_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260317000001_add_project_permission_notification_preference_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260318000000_add_sequence_id_to_refresh_tokens_and_oauth_accounts/migration.sql
  • apps/backend/prisma/migrations/20260318000001_add_sequence_indexes_concurrently/migration.sql
  • apps/backend/prisma/schema.prisma
  • apps/backend/scripts/clickhouse-migrations.ts
  • apps/backend/src/app/api/latest/emails/unsubscribe-link/route.tsx
  • apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts
  • apps/backend/src/app/api/latest/internal/external-db-sync/status/route.ts
  • apps/backend/src/app/api/latest/oauth-providers/crud.tsx
  • apps/backend/src/app/api/latest/team-member-profiles/crud.tsx
  • apps/backend/src/app/api/latest/users/crud.tsx
  • apps/backend/src/lib/email-queue-step.tsx
  • apps/backend/src/lib/external-db-sync.ts
  • apps/backend/src/lib/permissions.tsx
  • apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/external-db-sync/page-client.tsx
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts
✅ Files skipped from review due to trivial changes (4)
  • apps/backend/prisma/migrations/20260316000000_add_team_team_member_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260318000001_add_sequence_indexes_concurrently/migration.sql
  • apps/backend/src/lib/external-db-sync.ts
  • apps/backend/prisma/schema.prisma
🚧 Files skipped from review as they are similar to previous changes (8)
  • apps/backend/prisma/migrations/20260317000001_add_project_permission_notification_preference_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260316000002_add_session_replay_sequence_columns/migration.sql
  • apps/backend/src/app/api/latest/users/crud.tsx
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/backend/prisma/migrations/20260316000001_add_email_outbox_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260317000000_add_team_permission_invitation_sequence_columns/migration.sql
  • apps/backend/prisma/migrations/20260318000000_add_sequence_id_to_refresh_tokens_and_oauth_accounts/migration.sql
  • apps/backend/src/lib/permissions.tsx

@N2D4 N2D4 requested a review from mantrakp04 April 7, 2026 16:55
BilalG1 added a commit that referenced this pull request Apr 8, 2026
- Narrow team invitation cascade to specific changed teamId instead of
  entire tenancy
- Populate status dashboard stats for all 11 sync mappings, not just
  users and email_outboxes
- Wrap all OAuth provider updates with withExternalDbSyncUpdate so
  allowSignIn and allowConnectedAccounts changes trigger sync
- Remove session_replays from sequencer and column type map since no
  mapping exists yet
- Add DeletedRow union to email_outboxes fetch queries for future safety
@BilalG1
Collaborator Author

BilalG1 commented Apr 8, 2026

Closing in favor of #1304

@BilalG1 BilalG1 closed this Apr 8, 2026
