
feat(analytics): gzip event batch body to bypass adblockers#1407

Merged
BilalG1 merged 6 commits into dev from gzip-analytics-batch-body on May 5, 2026

Conversation


BilalG1 (Collaborator) commented May 4, 2026

Summary

The POST /api/latest/analytics/events/batch endpoint was being dropped by content-blocking browser extensions (adblockers) because the JSON request body literally contains the substring $click. Many filter lists pattern-match on tokens like that and silently kill the request — analytics events from anyone with an adblocker enabled never reached our backend.

This PR encodes the request body so keyword-matching filters can't see those tokens, while keeping the URL path unchanged (only the body was being matched here) and keeping older SDK clients working.

Approach

  • Client: gzip the JSON payload via the browser-native CompressionStream("gzip") API and POST it as application/octet-stream. Falls back to plain JSON if CompressionStream isn't available (very old browsers / non-browser runtimes). See the sketch below.
  • Server: a yup .transform() on the body schema detects an ArrayBuffer/Uint8Array input, gunzips it, and JSON.parses before normal schema validation runs. The existing JSON path is untouched, so requests from older SDK versions in the wild continue to work without changes — and all existing schema-error snapshot tests still pass verbatim.
  • Safety: hard caps on compressed (1 MB) and decompressed (8 MB) sizes guard against zip-bomb-style abuse. node:zlib's maxOutputLength enforces the latter at the C++ layer.

Bonus: gzip also gives a meaningful bandwidth win — click/page-view events compress very well — and keepalive bodies (which have a 64 KB cap in browsers) get more headroom.
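
For concreteness, a minimal sketch of the client-side encoding described above. The helper name and keepalive option come from this PR; the shipped code in client-interface.ts may differ in detail.

async function encodeAnalyticsBody(
  jsonBody: string,
  options: { keepalive?: boolean } = {},
): Promise<{ body: string | Uint8Array, contentType: string }> {
  const CompressionStreamCtor = (globalThis as any).CompressionStream;
  // keepalive flushes (pagehide/visibilitychange) must dispatch synchronously,
  // so they skip async compression; a missing API also falls back to JSON.
  if (options.keepalive || typeof CompressionStreamCtor !== "function") {
    return { body: jsonBody, contentType: "application/json" };
  }
  try {
    // Pipe the JSON bytes through the browser-native gzip transform.
    const stream = new Blob([jsonBody]).stream().pipeThrough(new CompressionStreamCtor("gzip"));
    const buffer = await new Response(stream).arrayBuffer();
    return { body: new Uint8Array(buffer), contentType: "application/octet-stream" };
  } catch {
    // Partial/broken CompressionStream support: fall back to plain JSON
    // rather than dropping the batch.
    return { body: jsonBody, contentType: "application/json" };
  }
}

The caller forwards body and contentType to fetch unchanged, so the outer request logic is identical for both encodings.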

Files

  • apps/backend/src/app/api/latest/analytics/events/batch/route.tsx — body schema gains a .transform() that gunzips binary inputs; size limits added; everything else unchanged (see the sketch after this list).
  • packages/stack-shared/src/interface/client-interface.ts — sendAnalyticsEventBatch now routes through a new module-level encodeAnalyticsBody helper that gzips and switches the Content-Type. Same outer signature; encoding is internal.
  • apps/e2e/tests/backend/backend-helpers.ts — niceBackendFetch gains optional rawBody/rawContentType params so tests can send non-JSON payloads. Existing JSON callers unaffected.
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts — adds two tests:
    • happy path: gzipped binary body returns inserted: 1
    • sad path: garbage bytes return 400
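
For reference, a minimal sketch of the server-side decode under this approach. Names mirror the PR description (maybeDecodeBinaryBody, the two caps, StatusError.BadRequest); the size-cap error strings are illustrative, while "Invalid encoded analytics body" is the message the new test asserts.

import zlib from "node:zlib";

// Shape assumed for the repo's shared error class; illustrative only.
declare class StatusError extends Error {
  static BadRequest: number;
  constructor(status: number, message: string);
}

const MAX_COMPRESSED_BYTES = 1024 * 1024;        // 1 MB cap on the wire
const MAX_DECOMPRESSED_BYTES = 8 * 1024 * 1024;  // 8 MB cap after gunzip

function maybeDecodeBinaryBody(originalValue: unknown): unknown {
  // Non-binary input takes the existing JSON path untouched (back-compat).
  if (!(originalValue instanceof ArrayBuffer) && !(originalValue instanceof Uint8Array)) {
    return originalValue;
  }
  const bytes = originalValue instanceof ArrayBuffer ? new Uint8Array(originalValue) : originalValue;
  if (bytes.byteLength > MAX_COMPRESSED_BYTES) {
    throw new StatusError(StatusError.BadRequest, "Compressed analytics body too large");
  }
  try {
    // maxOutputLength makes zlib abort past the decompressed cap, so a zip
    // bomb never materializes in JS memory.
    const decompressed = zlib.gunzipSync(bytes, { maxOutputLength: MAX_DECOMPRESSED_BYTES });
    return JSON.parse(decompressed.toString("utf-8"));
  } catch (e) {
    if ((e as { code?: string } | null)?.code === "ERR_BUFFER_TOO_BIG") {
      // Node's error code when maxOutputLength is exceeded.
      throw new StatusError(StatusError.BadRequest, "Decompressed analytics body too large");
    }
    throw new StatusError(StatusError.BadRequest, "Invalid encoded analytics body");
  }
}

// Wired into the body schema so the decoded JSON flows into normal validation:
//   bodySchema.transform((_value, original) => maybeDecodeBinaryBody(original))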

Out of scope (intentional)

  • URL path renaming: not all adblockers match on /analytics/, but some do. We're shipping the body fix first and will revisit if requests still get blocked after deployment.
  • Encryption: gzip is enough to defeat keyword filters. Encryption adds key-management cost with no real adversary.
  • SDK regen: only client-interface.ts (in stack-shared) was touched; event-tracker.ts (the caller) is unchanged because it already passes a JSON string. No pnpm -w run generate-sdks needed.

Test plan

  • pnpm typecheck — green
  • pnpm lint — green
  • Manually verify in dev: enable adblocker, click around with analytics enabled, confirm batch requests now go through
  • Spot-check ClickHouse analytics_internal.events shows the expected rows
  • Run the new e2e tests (pnpm test run apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts) and confirm both new cases plus all preexisting snapshots pass (the new tests are sketched after this list)
  • Confirm the JSON back-compat path still works by hitting the route with the existing JSON-body curl/test payloads
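
The two new e2e tests could look roughly like this; everything except rawBody/rawContentType and the quoted sad-path test name is an assumption about the surrounding test setup.

import { randomBytes } from "node:crypto";
import { gzipSync } from "node:zlib";
// it/expect come from the e2e framework; niceBackendFetch from backend-helpers.

it("accepts a gzipped event batch body", async () => {
  // Event shape elided; mirror whatever the existing JSON-body test sends.
  const json = JSON.stringify({ events: [/* one $click event */] });
  const res = await niceBackendFetch("/api/latest/analytics/events/batch", {
    method: "POST",
    rawBody: new Uint8Array(gzipSync(json)),
    rawContentType: "application/octet-stream",
    // plus the same auth/project options the existing tests pass (assumed)
  });
  expect(res.status).toBe(200);  // response body reports { inserted: 1 }
});

it("rejects a binary body that isn't valid gzip", async () => {
  const res = await niceBackendFetch("/api/latest/analytics/events/batch", {
    method: "POST",
    rawBody: new Uint8Array(randomBytes(64)),  // garbage bytes, not a gzip stream
    rawContentType: "application/octet-stream",
  });
  expect(res.status).toBe(400);  // "Invalid encoded analytics body"
});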

Summary by CodeRabbit

  • New Features

    • Analytics batch uploads now accept gzipped binary payloads; clients can send compressed bytes and the server will detect and decompress.
    • Client sender can gzip event batches (falling back to JSON when needed) and uses the keepalive flag to choose between JSON and compressed bytes.
  • Bug Fixes

    • Malformed, non-gzip, or overly large compressed payloads now return a clear 400 response.
  • Tests

    • Added E2E and unit tests plus test-helper support for raw/gzipped request bodies and encoding behaviors.

BilalG1 added 2 commits May 4, 2026 16:38
Adblockers were dropping /analytics/events/batch requests because the
JSON body contains the substring "$click". The client now gzips the
payload and sends it as application/octet-stream; the server's body
schema gunzips via a yup .transform() before validation, and still
accepts plain JSON for older SDK clients.

vercel Bot commented May 4, 2026

The latest updates on your projects.

Project                        Deployment   Updated (UTC)
stack-auth-hosted-components   Ready        May 5, 2026 4:27pm
stack-backend                  Ready        May 5, 2026 4:27pm
stack-dashboard                Ready        May 5, 2026 4:27pm
stack-demo                     Ready        May 5, 2026 4:27pm
stack-docs                     Ready        May 5, 2026 4:27pm
stack-preview-backend          Ready        May 5, 2026 4:27pm
stack-preview-dashboard        Ready        May 5, 2026 4:27pm


coderabbitai Bot commented May 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Accept gzipped JSON analytics batches as binary (application/octet-stream); the backend decodes them, enforces compressed and decompressed size limits, and validates the parsed JSON; the client can gzip batches via a new helper; tests and the E2E helper support sending/asserting raw gzipped payloads and rejection cases.

Changes

Analytics Compression Support

  • Backend constants & imports — apps/backend/src/app/api/latest/analytics/events/batch/route.tsx: adds a node:zlib import and defines MAX_COMPRESSED_BYTES and MAX_DECOMPRESSED_BYTES.
  • Binary decode transform — same route.tsx: adds maybeDecodeBinaryBody(originalValue) to detect ArrayBuffer/Uint8Array input, enforce the compressed/decompressed caps, gunzip via zlib.gunzipSync, JSON.parse the result, and map errors to StatusError(StatusError.BadRequest, ...).
  • Schema wiring — same route.tsx: applies .transform((_v, original) => maybeDecodeBinaryBody(original)) to the request body schema so decoded JSON is validated downstream.
  • Client encoding helper — packages/stack-shared/src/interface/client-interface.ts: adds encodeAnalyticsBody(jsonBody, { keepalive }), which returns either a gzipped Uint8Array with Content-Type: application/octet-stream (using CompressionStream/Blob/Response) or falls back to plain JSON with application/json.
  • Client usage — same client-interface.ts: sendAnalyticsEventBatch now uses encodeAnalyticsBody() and forwards the helper's body and Content-Type to the request.
  • Test helper raw body support — apps/e2e/tests/backend/backend-helpers.ts: niceBackendFetch gains rawBody?: Uint8Array and rawContentType?: string; enforces mutual exclusivity with body, requires rawContentType only alongside rawBody, and forwards rawBody with rawContentType ?? "application/octet-stream" (the guard is sketched below).
  • E2E tests — apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts: adds gzipSync/randomBytes imports and tests that accept gzipped JSON (200 { inserted: 1 }), reject invalid gzip bytes (400 "Invalid encoded analytics body"), and reject compressed-size and decompressed-size cap exceedance with 400 and appropriate messages.
  • Unit tests — packages/stack-shared/src/interface/client-interface.test.ts: adds tests verifying encodeAnalyticsBody/sendAnalyticsEventBatch behavior for gzipped body vs keepalive and for compression API absence/failure falling back to JSON.
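
The raw-body handling in niceBackendFetch reduces to a small guard, roughly as below; the error strings are illustrative, and the real helper threads these options through many more parameters.

type RawBodyOptions = {
  body?: unknown,
  rawBody?: Uint8Array,
  rawContentType?: string,
};

function resolveRequestBody(options: RawBodyOptions): { body?: BodyInit, contentType?: string } {
  if (options.body !== undefined && options.rawBody !== undefined) {
    throw new Error("Pass either body or rawBody, not both");
  }
  if (options.rawContentType !== undefined && options.rawBody === undefined) {
    throw new Error("rawContentType requires rawBody");
  }
  if (options.rawBody !== undefined) {
    // A Uint8Array is an ArrayBufferView, hence directly assignable to BodyInit.
    return { body: options.rawBody, contentType: options.rawContentType ?? "application/octet-stream" };
  }
  // Existing JSON path, unchanged for current callers.
  return options.body !== undefined
    ? { body: JSON.stringify(options.body), contentType: "application/json" }
    : {};
}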

Sequence Diagram(s)

sequenceDiagram
participant Client as Client
participant Network as Network
participant Server as Server
participant Decompressor as Decompressor
participant DB as DB

Client->>Client: encodeAnalyticsBody(json) (optional gzip)
Client->>Network: POST /api/.../events/batch (body + Content-Type)
Network->>Server: deliver request
Server->>Decompressor: maybeDecodeBinaryBody(originalBody)
Decompressor-->>Server: parsed JSON or error
Server->>DB: insert events (on success)
DB-->>Server: insert result
Server-->>Network: HTTP 200 or 400
Network-->>Client: response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

"I hop and gzip with glee,
small bytes tucked snug as can be,
through tunnels of zlib I dart,
events arrive — a compact art,
carrot-coded cheers for the new start!"

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 12.50%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Title check — ✅ Passed: the title clearly and concisely summarizes the main change, adding gzip encoding to analytics event batch bodies to bypass adblockers.
  • Description check — ✅ Passed: the description is comprehensive and well-structured, covering summary, approach, files, scope, and test plan; it clearly explains the problem, solution, and implementation details.
  • Linked Issues check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: check skipped because no linked issues were found for this pull request.



greptile-apps Bot commented May 4, 2026

Greptile Summary

This PR gzips the analytics event batch body before sending it to the server, preventing keyword-matching adblockers from silently dropping requests that contain tokens like $click. The server gains a yup .transform() that detects ArrayBuffer/Uint8Array bodies (sent as application/octet-stream), gunzips them, and JSON-parses before normal schema validation; the existing JSON path is untouched for backward compatibility.

  • Server (route.tsx): maybeDecodeBinaryBody is added as a yup transform on the body schema; hard caps of 1 MB compressed / 8 MB decompressed guard against zip-bomb abuse via zlib.gunzipSync's maxOutputLength.
  • Client (client-interface.ts): encodeAnalyticsBody uses the browser-native CompressionStream API to gzip before sending; it correctly falls back to plain JSON when CompressionStream is unavailable or when keepalive: true is requested (page-unload flushes must start synchronously and cannot await async compression).
  • Tests: Four new e2e tests cover the gzip happy path, invalid gzip bytes, the 1 MB compressed cap, and the 8 MB decompressed cap; four unit tests cover the encoding logic including CompressionStream fallback paths.

Confidence Score: 5/5

Safe to merge — the gzip encoding path is well-guarded, the JSON back-compat path is untouched, and all error branches are covered by new tests.

The server-side transform correctly propagates StatusError through yup's cast chain (non-ValidationError errors are re-thrown as-is in both yupValidate and validateSmartRequest), so the custom 400 messages reach the client intact. Size limits are enforced at the C++ layer via maxOutputLength. The client-side fallback for keepalive requests is intentional and well-documented. No logic errors, security gaps, or missing guards were found in the changed paths.

No files require special attention.
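
Schematically, that propagation relies on the validation wrappers rethrowing anything that isn't a yup ValidationError. A sketch under that assumption (yupValidate/validateSmartRequest are named in the review; the body below is illustrative, not the repo's code):

import * as yup from "yup";

// Hypothetical stand-in for the repo's schema-error mapping (snapshot-tested).
declare function toSchemaErrorResponse(e: yup.ValidationError): Error;

async function yupValidateSketch<T>(schema: yup.Schema<T>, value: unknown): Promise<T> {
  try {
    return await schema.validate(value);
  } catch (e) {
    if (e instanceof yup.ValidationError) {
      throw toSchemaErrorResponse(e);  // the usual schema-error 4xx
    }
    // Anything else, e.g. the StatusError(400) thrown inside the gunzip
    // transform, is re-thrown as-is, so its message reaches the client intact.
    throw e;
  }
}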

Important Files Changed

  • apps/backend/src/app/api/latest/analytics/events/batch/route.tsx — adds the maybeDecodeBinaryBody transform to the body schema to gunzip application/octet-stream payloads; includes the 1 MB compressed and 8 MB decompressed size caps and correct StatusError propagation through the yup transform chain.
  • packages/stack-shared/src/interface/client-interface.ts — adds the encodeAnalyticsBody helper that gzips the JSON payload via CompressionStream for non-keepalive flushes; keepalive=true intentionally falls back to plain JSON to avoid losing page-unload batches.
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts — adds four new e2e tests (happy-path gzip, invalid gzip, oversized compressed body, and zip-bomb decompression), covering both size-cap branches added in the server route.
  • packages/stack-shared/src/interface/client-interface.test.ts — adds unit tests for sendAnalyticsEventBatch encoding: gzip + octet-stream for normal flushes, plain JSON for keepalive, and two fallback cases (missing/broken CompressionStream).
  • apps/e2e/tests/backend/backend-helpers.ts — adds rawBody/rawContentType options to niceBackendFetch to allow binary payloads in tests; includes a guard that prevents body and rawBody from being passed together.

Reviews (2): Last reviewed commit: "Merge origin/dev into gzip-analytics-bat..."

Comment thread packages/stack-shared/src/interface/client-interface.ts
coderabbitai Bot left a comment

🧹 Nitpick comments (2)
packages/stack-shared/src/interface/client-interface.ts (1)

139-145: ⚡ Quick win

Silent catch {} violates the "NEVER try-catch-all" guideline

The bare catch swallows any CompressionStream runtime error without logging or surfacing it. The feature-unavailability case is already handled by the guard at line 136 (returns plain JSON when CompressionStreamCtor is absent). Actual compression errors (e.g., mid-stream failure) should propagate; the outer try/catch in sendAnalyticsEventBatch already converts them to Result.error, which is the correct silent-fail path for analytics.

As per coding guidelines: "NEVER try-catch-all … Use runAsynchronously or runAsynchronouslyWithAlert instead."

♻️ Proposed fix — remove the internal catch and rely on the outer error boundary
-  try {
-    const stream = new Blob([jsonBody]).stream().pipeThrough(new CompressionStreamCtor("gzip"));
-    const buffer = await new Response(stream).arrayBuffer();
-    return { body: new Uint8Array(buffer), contentType: "application/octet-stream" };
-  } catch {
-    return { body: jsonBody, contentType: "application/json" };
-  }
+  const stream = new Blob([jsonBody]).stream().pipeThrough(new CompressionStreamCtor("gzip"));
+  const buffer = await new Response(stream).arrayBuffer();
+  return { body: new Uint8Array(buffer), contentType: "application/octet-stream" };

sendAnalyticsEventBatch already wraps the call in try { … } catch (e) { return Result.error(e) }, so a propagated compression error will be swallowed there rather than here — maintaining the silent-failure semantic for analytics without hiding the error path from the type system.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@packages/stack-shared/src/interface/client-interface.ts` around lines 139 -
145, Remove the bare catch that swallows compression errors inside the
blob-compression block and let exceptions propagate to the outer error boundary
in sendAnalyticsEventBatch; specifically, in the block that uses
CompressionStreamCtor to gzip the Blob (the try that creates `stream = new
Blob([jsonBody]).stream().pipeThrough(new CompressionStreamCtor("gzip"))` and
builds the Uint8Array), delete the internal catch so runtime compression
failures surface to the existing try/catch in sendAnalyticsEventBatch (which
converts them to Result.error), while keeping the earlier guard that handles
missing CompressionStreamCtor and the fallback JSON return when that guard
triggers.
apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts (1)

206-218: 💤 Low value

Consider using toMatchInlineSnapshot for consistency once the error body shape is confirmed

The other error-path tests all use toMatchInlineSnapshot. This one asserts only res.status. Once you've run the test and seen the actual 400 body, locking it with an inline snapshot would align with the guideline ("prefer .toMatchInlineSnapshot over other selectors, if possible").

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts`
around lines 206 - 218, The test "rejects a binary body that isn't valid gzip"
only asserts res.status but other error-path tests use toMatchInlineSnapshot;
update this test (the it block named "rejects a binary body that isn't valid
gzip") to assert the full response body using expect(await
res.json()).toMatchInlineSnapshot(...) (or expect(await res.text()...) if the
body is text) after running the test once to capture the actual 400 response
shape, and replace the current expect(res.status).toBe(400) with the inline
snapshot assertion to keep consistency with the other error-path tests.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0abc892c-5217-4f5c-a2b3-32a49009cc75

📥 Commits

Reviewing files that changed from the base of the PR and between 9f79bfb and fe34b10.

📒 Files selected for processing (4)
  • apps/backend/src/app/api/latest/analytics/events/batch/route.tsx
  • apps/e2e/tests/backend/backend-helpers.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts
  • packages/stack-shared/src/interface/client-interface.ts

- Remove inner try/catch in encodeAnalyticsBody; runtime compression
  errors now propagate to the existing Result.error in
  sendAnalyticsEventBatch instead of being swallowed silently.
- Convert the invalid-gzip test to toMatchInlineSnapshot for parity
  with other error-path tests.
- Add a regression test that gzips a 9 MB zero buffer to exercise the
  MAX_DECOMPRESSED_BYTES guard.
Unintentional drift from a mid-task file rewrite — the literal U+FFFD
character was written into the source instead of the original \uFFFD
escape. Functionally identical, but restores the original bytes.
- Skip gzip when keepalive=true: pagehide/visibilitychange flushes
  must dispatch fetch synchronously before page tear-down; awaiting
  async compression first lets the browser cancel the request.
- Re-add a runtime fallback in encodeAnalyticsBody so
  partial/broken CompressionStream support falls back to plain JSON
  instead of dropping the batch via Result.error.
- Add e2e regression test for the MAX_COMPRESSED_BYTES (1 MB) cap.
- Add unit tests in client-interface.test.ts covering gzip happy
  path, missing CompressionStream, keepalive skip, and runtime
  compression error.
- Throw in niceBackendFetch when rawContentType is provided without
  rawBody, mirroring the body/rawBody guard.
coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@apps/e2e/tests/backend/backend-helpers.ts`:
- Line 158: Remove the unnecessary type cast on rawBody in the request options
object: rawBody is already a Uint8Array (an ArrayBufferView) and is assignable
to BodyInit, so drop the "as BodyInit" cast and return the object with body:
rawBody when rawBody !== undefined; update the expression that currently reads
"...rawBody !== undefined ? { body: rawBody as BodyInit } : {}" to use rawBody
directly to satisfy the no-cast guideline and preserve typing.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a2e48ceb-9b48-430b-bd0d-ed0c98292047

📥 Commits

Reviewing files that changed from the base of the PR and between 52e480b and 81ed062.

📒 Files selected for processing (4)
  • apps/e2e/tests/backend/backend-helpers.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts
  • packages/stack-shared/src/interface/client-interface.test.ts
  • packages/stack-shared/src/interface/client-interface.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/stack-shared/src/interface/client-interface.ts

Comment thread apps/e2e/tests/backend/backend-helpers.ts
coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@packages/stack-shared/src/interface/client-interface.ts`:
- Around line 148-156: The catch is too broad and hides unexpected errors;
change the bare catch to catch (err) and only swallow expected
compression-support failures (e.g., err instanceof TypeError or (typeof
DOMException !== "undefined" && err instanceof DOMException) or err.name ===
"NotSupportedError"/"TypeError"), then return the JSON fallback (body: jsonBody,
contentType: "application/json"); for any other error rethrow it so failures in
stream creation or response handling are not silently ignored. Reference the
CompressionStreamCtor/stream/new Response(jsonBody) block and
EventTracker._flush() fallback behavior when applying this check.
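
A sketch of the narrowed catch this prompt asks for; the error-name checks follow the comment's suggestion and are illustrative.

// Swallow only expected "compression unsupported" failures; rethrow the rest.
function isCompressionUnsupportedError(err: unknown): boolean {
  const name = (err as { name?: string } | null)?.name;
  return err instanceof TypeError
    || (typeof DOMException !== "undefined" && err instanceof DOMException)
    || name === "NotSupportedError";
}

// Inside encodeAnalyticsBody's compression block:
//   } catch (err) {
//     if (!isCompressionUnsupportedError(err)) throw err;
//     return { body: jsonBody, contentType: "application/json" };  // JSON fallback
//   }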

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e6bd7ae7-9da5-426e-8aff-5e14b8daa352

📥 Commits

Reviewing files that changed from the base of the PR and between 81ed062 and cd4a451.

📒 Files selected for processing (4)
  • apps/backend/src/app/api/latest/analytics/events/batch/route.tsx
  • apps/e2e/tests/backend/backend-helpers.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts
  • packages/stack-shared/src/interface/client-interface.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/backend/src/app/api/latest/analytics/events/batch/route.tsx
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts

@BilalG1 BilalG1 requested a review from mantrakp04 May 5, 2026 17:32
@BilalG1 BilalG1 merged commit b0812c8 into dev May 5, 2026
40 of 45 checks passed
@BilalG1 BilalG1 deleted the gzip-analytics-batch-body branch May 5, 2026 23:38