(do not review or merge yet) Bun migration #951
📝 Walkthrough

The changes migrate the project from Node/npm/tsx to the Bun runtime across CI, build, test, packaging, and runtime artifacts. GitHub Actions workflows and the Dockerfile use pinned Bun v1.3.13 and `bun install`/`bun build`/`bun test` commands. Numerous package.json scripts, build scripts, worker bootstrap modules, and test files are updated to Bun APIs (`bun:test`, `mock`/`spyOn`) and `bun build` outputs; new Bun-specific files (bunfig.toml, bun-entry, build-for-bun.sh) and binary packaging/zipping steps are added. Several synchronous LMDB calls, native addon resolution, and small API surface adjustments (exports, enum addition) are included.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/core/pvm-interpreter/ops-dispatchers/three-regs-dispatcher.test.ts (1)
30-36: ⚠️ Potential issue | 🔴 Critical: Remove duplicate `mockAllMethods(bitOps)` call.

Line 35 duplicates the spyOn setup from line 31. Bun throws "spyOn(target, prop) does not support accessor properties yet" when spyOn is applied twice to the same object method before `mock.restore()`. This causes test initialization to fail.

Proposed change
```diff
 before(() => {
   mockAllMethods(bitOps);
   mockAllMethods(booleanOps);
   mockAllMethods(moveOps);
   mockAllMethods(mathOps);
-  mockAllMethods(bitOps);
   mockAllMethods(shiftOps);
   mockAllMethods(bitRotationOps);
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/pvm-interpreter/ops-dispatchers/three-regs-dispatcher.test.ts` around lines 30 - 36, The test setup calls mockAllMethods(bitOps) twice causing duplicate spies; remove the redundant second invocation of mockAllMethods(bitOps) in the before() block so bitOps is only mocked once, leaving the other mocks (booleanOps, moveOps, mathOps, shiftOps) unchanged and preventing duplicate spyOn calls that cause the accessor property error.
🧹 Nitpick comments (15)
patches/@MatrixAI%2Fquic@2.0.9.patch (1)
45-48: Consider lowering `ECONNREFUSED` log level to reduce noise.

This condition can be frequent in UDP workflows; `info` may become noisy in production. `debug` (or rate-limited logging) would be safer operationally.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@patches/@MatrixAI%2Fquic@2.0.9.patch` around lines 45 - 48, Lower the noisy ECONNREFUSED log by changing the event handler on this.socket: in the error listener registered via this.socket.on('error', (err) => { ... }), replace this.logger.info(...) with this.logger.debug(...) (or use your project's rate-limited logging helper) so ECONNREFUSED messages are logged at debug level instead of info; keep the existing message text (e.g., `Ignoring ECONNREFUSED on socket: ${err.message}`) for context.

packages/core/pvm-interpreter/ops-dispatchers/one-offset-dispatcher.test.ts (2)
24-25: Filter prototype entries before spying.

`Object.getOwnPropertyNames` includes `"constructor"` and may include non-callable properties. Guarding these keeps the setup robust and avoids accidental spy targets.

Proposed diff
```diff
 function mockAllMethods(obj: object) {
   const target = obj as Record<string, (...args: unknown[]) => unknown>;
   for (const method of Object.getOwnPropertyNames(Object.getPrototypeOf(obj))) {
+    if (method === "constructor") continue;
+    if (typeof target[method] !== "function") continue;
     spyOn(target, method).mockImplementation(mockFn);
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/pvm-interpreter/ops-dispatchers/one-offset-dispatcher.test.ts` around lines 24 - 25, The test currently iterates over Object.getOwnPropertyNames(Object.getPrototypeOf(obj)) and calls spyOn(target, method).mockImplementation(mockFn) for every entry, which can include "constructor" and non-callable properties; update the loop to skip non-function keys and "constructor" by checking each method from Object.getOwnPropertyNames(Object.getPrototypeOf(obj)) and only calling spyOn when method !== 'constructor' and typeof target[method] === 'function' (and optionally ensure method in target), so spyOn/ mockImplementation is only applied to actual callable methods on target.
1-3: Unify test lifecycle imports under `bun:test`.

Line 3 mixes `node:test` hooks with `bun:test` mocks (line 1). In Bun 1.2, this mixing is not officially supported; `node:test` is only partially implemented and lacks mocking support. Consolidate all test utilities to `bun:test` for consistency and reliability.

Proposed diff
```diff
-import { mock, spyOn } from "bun:test";
+import { afterAll, beforeAll, beforeEach, describe, it, mock, spyOn } from "bun:test";
 import assert from "node:assert";
-import { after, before, beforeEach, describe, it } from "node:test";
@@
-  before(() => {
+  beforeAll(() => {
     mockAllMethods(branchOps);
   });
@@
-  after(() => {
+  afterAll(() => {
     mock.restore();
   });
```

Also applies to: 29-35
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/pvm-interpreter/ops-dispatchers/one-offset-dispatcher.test.ts` around lines 1 - 3, The test file mixes bun:test and node:test imports; consolidate all test utilities to bun:test by replacing the node:test import of hooks (after, before, beforeEach, describe, it) and node:assert with a single import from "bun:test" (e.g., import { mock, spyOn, after, before, beforeEach, describe, it, assert } from "bun:test") so all mocks, hooks and assertions come from bun:test and remove the separate node imports; update any references to these symbols (mock, spyOn, after, before, beforeEach, describe, it, assert) accordingly.

packages/core/pvm-interpreter/ops-dispatchers/one-reg-one-ext-imm-dispatcher.test.ts (1)
23-25: Harden `mockAllMethods` to filter callable properties.

Line 24 iterates all prototype keys without filtering. Spying on `constructor` and future non-callable properties (getters/setters) will be brittle. Filter to function-type properties and skip `constructor`.

Suggested patch
```diff
 function mockAllMethods(obj: object) {
   const target = obj as Record<string, (...args: unknown[]) => unknown>;
-  for (const method of Object.getOwnPropertyNames(Object.getPrototypeOf(obj))) {
-    spyOn(target, method).mockImplementation(mockFn);
-  }
+  const proto = Object.getPrototypeOf(obj);
+  for (const method of Object.getOwnPropertyNames(proto)) {
+    if (method === "constructor") continue;
+    const descriptor = Object.getOwnPropertyDescriptor(proto, method);
+    if (!descriptor || typeof descriptor.value !== "function") continue;
+    spyOn(target, method).mockImplementation(mockFn);
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/pvm-interpreter/ops-dispatchers/one-reg-one-ext-imm-dispatcher.test.ts` around lines 23 - 25, The helper mockAllMethods currently iterates prototype keys without filtering, which causes spies to be created for non-callable properties like 'constructor' or getters/setters; update the loop that uses Object.getOwnPropertyNames(Object.getPrototypeOf(obj)) to skip the 'constructor' key and only call spyOn(target, method).mockImplementation(mockFn) when typeof target[method] === 'function' (and the property exists on target), ensuring you filter to callable properties before mocking; keep the existing target and mockFn usage.

bin/rpc/src/server.ts (1)
236-249: Optional: guard shutdown promise settlement for clarity.

Current logic can resolve from both timeout and close callback paths. A small guard makes lifecycle behavior easier to reason about.
♻️ Suggested cleanup
```diff
-    await new Promise<void>((resolve) => {
+    await new Promise<void>((resolve) => {
+      let settled = false;
+      const finish = () => {
+        if (settled) return;
+        settled = true;
+        resolve();
+      };
       for (const ws of this.wss.clients) {
         ws.terminate();
       }
       const timer = setTimeout(() => {
         this.logger.warn`wss.close() callback did not fire within 1s; proceeding`;
-        resolve();
+        finish();
       }, 1000);
       this.wss.close(() => {
         clearTimeout(timer);
-        resolve();
+        finish();
       });
     });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/rpc/src/server.ts` around lines 236 - 249, The shutdown promise for this.wss can call resolve from both the timeout and the wss.close callback; add a simple guard (e.g., a local boolean like "settled") so only the first path resolves and the other is ignored, and ensure the timer is cleared when settled; update the block that iterates this.wss.clients, the timeout handler, and the this.wss.close callback to check/set that guard before calling resolve and logging.

bin/rpc/test/e2e-setup.ts (2)
22-30: Consider whether import should halt on first block failure.

The loop continues processing blocks even after an error. If the intent is to set up a consistent test state, a single failed block import may invalidate subsequent tests. Consider whether to `break` or `process.exit(-1)` on the first error.

If partial imports are acceptable for the E2E test suite, this is fine as-is.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/rpc/test/e2e-setup.ts` around lines 22 - 30, The import loop using startBlocksReader and api.importBlock currently logs errors via resultToString but continues processing subsequent blocks; decide whether a failed import should halt setup—if so, modify the loop around reader and api.importBlock to stop on first error (e.g., break the loop or call process.exit(-1)) after logging the failure; if the preferred behavior is to fail fast in the E2E setup, ensure the code path after detecting res.isError triggers an immediate exit (or throws) so downstream tests aren't run against a partially imported chain.
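The fail-fast variant this prompt describes can be sketched as follows. `importBlock` and the `{ isError }` result shape are illustrative stand-ins taken from the comment, not the real reader/API from the codebase:

```typescript
// Hypothetical fail-fast import loop: stop at the first failed block so
// downstream tests never run against a partially imported chain.
type ImportResult = { isError: boolean };

function importAll(
  blocks: string[],
  importBlock: (block: string) => ImportResult,
): { imported: number; failed?: string } {
  let imported = 0;
  for (const block of blocks) {
    const res = importBlock(block);
    if (res.isError) {
      // Fail fast: report which block broke instead of continuing.
      return { imported, failed: block };
    }
    imported += 1;
  }
  return { imported };
}
```

If partial imports are acceptable, the same helper could instead collect failures and report them at the end.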
44-145: Consider generating the file list dynamically.

The hardcoded 100-element array could be replaced with a glob or range-based generation to reduce maintenance burden when test vectors change.
♻️ Optional: dynamic file list generation
```ts
const blocksToImport = Array.from({ length: 100 }, (_, i) =>
  withRelPath(`test-vectors/w3f-davxy_072/traces/storage/${String(i + 1).padStart(8, "0")}.json`),
);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/rpc/test/e2e-setup.ts` around lines 44 - 145, The blocksToImport array is hardcoded with 100 entries; replace it with a dynamic generator to avoid maintenance (use the existing withRelPath helper). Update the declaration of blocksToImport to programmatically produce the sequence (e.g., Array.from or a small for loop that builds paths with String(i+1).padStart(8,'0') and calls withRelPath) or use a glob library to discover files; ensure the new implementation still produces the same ordered list and keeps the variable name blocksToImport unchanged so callers work as-is.

bin/convert/build-for-npm.sh (1)
10-12: Script assumes execution from the `bin/convert` directory.

The `ROOT=$(cd ../.. && pwd)` computation assumes the script is run from the `bin/convert` directory. If invoked from elsewhere, paths will be incorrect. Consider using the script's own location:

♻️ More robust path resolution
```diff
-# Resolve the top-level project directory (two levels up from bin/convert)
-ROOT=$(cd ../.. && pwd)
+# Resolve the top-level project directory (relative to this script's location)
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/convert/build-for-npm.sh` around lines 10 - 12, The script computes ROOT with ROOT=$(cd ../.. && pwd) which breaks if the script is invoked from another CWD; update ROOT calculation to derive the script's directory from the script path (use BASH_SOURCE or $0), then resolve ROOT relative to that directory (e.g. script-dir/../..) and recompute DIST_FOLDER from that ROOT so ROOT and DIST_FOLDER are always correct regardless of current working directory; update references in build-for-npm.sh where ROOT and DIST_FOLDER are set.

packages/workers/api-node/protocol.ts (1)
92-103: Consider adding a timeout for the Ready handshake.

If the worker fails to send the `Ready` message (e.g., crashes during bootstrap), `spawnWorker` will hang indefinitely waiting for `onReady`. Consider adding a timeout that rejects with a clear error.

♻️ Optional: Add timeout for robustness
```diff
+    const readyTimeout = setTimeout(() => {
+      worker.off("message", onReady);
+      // Optionally reject via workerFinished or log a warning
+      logger.warn`(${protocol.name}) Worker did not send Ready within timeout`;
+    }, 10000); // 10 second timeout
+
     const onReady = (incoming: unknown) => {
       if (
         incoming !== null &&
         typeof incoming === "object" &&
         "kind" in incoming &&
         incoming.kind === WorkerControlPlaneMsg.Ready
       ) {
+        clearTimeout(readyTimeout);
         worker.off("message", onReady);
         worker.postMessage(msg, [msg.parentPort, ...transferredPorts]);
       }
     };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/workers/api-node/protocol.ts` around lines 92 - 103, The Ready handshake can hang if the worker never sends WorkerControlPlaneMsg.Ready; update the spawnWorker flow by adding a timeout when you register onReady: start a timer (e.g., setTimeout) alongside registering onReady, and if the timeout fires reject the spawnWorker promise with a clear error message, remove the message listener via worker.off("message", onReady), and perform any cleanup (e.g., worker.terminate() if appropriate); when onReady receives the Ready message clear the timer and proceed to call worker.off("message", onReady) and worker.postMessage(msg, [msg.parentPort, ...transferredPorts]). Ensure the timeout duration is configurable or documented and that the rejection includes context (worker id/role) for debugging.

packages/core/codec/index.test.ts (1)
17-28: Usage of `instanceof` violates coding guidelines, but appears justified here.

Per CODESTYLE.md, the codebase avoids `instanceof`/type detection via the `in` operator. However, this helper is specifically addressing Bun's `deepStrictEqual` compatibility issues with private fields, as noted in the comment. For test utilities working around runtime quirks, this pragmatic approach is reasonable.

Consider adding a note that this pattern should not be copied to production code.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/codec/index.test.ts` around lines 17 - 28, The test helper assertCodecEqual uses instanceof checks for BitVec and BytesBlob to work around Bun's deepStrictEqual private-field issues; add an explicit comment above the function (or above the instanceof branches) stating this is a test-only pragmatic exception to the CODESTYLE.md rule and must not be copied into production code, referencing assertCodecEqual, BitVec, BytesBlob and CODESTYLE.md so reviewers understand the rationale and the exception scope.

.github/workflows/utils-benchmarks.yml (1)
28-42: Manual Bun installation for self-hosted runner is appropriate.

The script handles version pinning and reinstallation correctly. The comment at lines 44-46 clearly documents why Node.js is retained for benchmark execution (Bun JIT timing noise of ±50-175% vs under ±7% for Node).
Minor: SC2086 suggests quoting `$GITHUB_PATH` on line 42. While low-risk in this context, quoting prevents edge cases.

🛡️ Quote variable for safety
```diff
-          echo "$INSTALL_DIR" >> $GITHUB_PATH
+          echo "$INSTALL_DIR" >> "$GITHUB_PATH"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/utils-benchmarks.yml around lines 28 - 42, The echo append is not quoting the target path which can trigger SC2086 word-splitting; update the echo invocation that writes INSTALL_DIR to GITHUB_PATH to quote the target variable (change echo "$INSTALL_DIR" >> $GITHUB_PATH to echo "$INSTALL_DIR" >> "$GITHUB_PATH") and ensure other uses of INSTALL_DIR/GITHUB_PATH (e.g., mkdir -p "$INSTALL_DIR") remain quoted for safety.

.github/workflows/release-bump-version.yml (1)
81-91: For loop over `find` output is fragile (SC2044).

The static analysis correctly flags this pattern. Files with spaces or special characters in paths could cause issues. Consider using `find -exec` or a while-read loop.

♻️ Safer iteration pattern
```diff
       - name: Update workspace packages versions
         run: |
           VERSION=${{ steps.version.outputs.version }}
-          for pkg in $(find . -name "package.json" -not -path "./node_modules/*" -not -path "./dist/*" -not -path "./test-vectors/*" -not -path "./package.json"); do
+          find . -name "package.json" \
+            -not -path "./node_modules/*" \
+            -not -path "./dist/*" \
+            -not -path "./test-vectors/*" \
+            -not -path "./package.json" \
+            -print0 | while IFS= read -r -d '' pkg; do
             echo "Updating $pkg"
             bun -e "
               const fs = require('fs');
               const p = JSON.parse(fs.readFileSync('$pkg', 'utf8'));
               p.version = '$VERSION';
               fs.writeFileSync('$pkg', JSON.stringify(p, null, 2) + '\n');
             "
           done
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release-bump-version.yml around lines 81 - 91, The for-loop over the output of find (the block that sets VERSION and iterates with for pkg in $(find ...)) is fragile with filenames containing spaces/special chars; replace it with a safe iteration using null-delimited output (e.g., use find ... -print0 and a while IFS= read -r -d '' pkg; do ... done loop) or use find -exec to update each package.json, and keep the inner update logic that reads JSON and sets p.version = '$VERSION' (the bun -e update block) but reference $pkg safely via the chosen safe iteration method.

.github/workflows/vectors-test-suites.yml (1)
75-89: Quote shell variables to prevent word splitting.

The static analysis tool flagged unquoted variables. While these specific variables are unlikely to contain spaces, quoting them is a defensive best practice.
🔧 Proposed fix for shell quoting
```diff
       - name: Setup Bun
         run: |
           BUN_VERSION="1.3.13"
           INSTALL_DIR="$HOME/.bun/bin"
           mkdir -p "$INSTALL_DIR"
           # Force pinned version: reinstall if missing or version mismatch
-          if ! "$INSTALL_DIR/bun" --version 2>/dev/null | grep -qx "$BUN_VERSION"; then
+          if ! "$INSTALL_DIR/bun" --version 2>/dev/null | grep -qxF "$BUN_VERSION"; then
             rm -f /tmp/bun.zip
             rm -rf /tmp/bun-extract
-            curl -fsSL -o /tmp/bun.zip "https://github.com/oven-sh/bun/releases/download/bun-v$BUN_VERSION/bun-linux-x64.zip"
+            curl -fsSL -o /tmp/bun.zip "https://github.com/oven-sh/bun/releases/download/bun-v${BUN_VERSION}/bun-linux-x64.zip"
             python3 -c "import zipfile; zipfile.ZipFile('/tmp/bun.zip').extractall('/tmp/bun-extract')"
             mv -f /tmp/bun-extract/bun-linux-x64/bun "$INSTALL_DIR/bun"
             chmod +x "$INSTALL_DIR/bun"
           fi
           echo "$INSTALL_DIR" >> $GITHUB_PATH
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/vectors-test-suites.yml around lines 75 - 89, Quote all shell variable usages and file paths to prevent word splitting: wrap BUN_VERSION where interpolated in strings, and quote INSTALL_DIR and GITHUB_PATH in assignments and commands (mkdir -p "$INSTALL_DIR", if ! "$INSTALL_DIR/bun" --version 2>/dev/null | grep -qx "$BUN_VERSION"; then, rm -f "/tmp/bun.zip", python3 extract from "/tmp/bun.zip" to "/tmp/bun-extract", curl -fsSL -o "/tmp/bun.zip" "...$BUN_VERSION..." , mv -f "/tmp/bun-extract/bun-linux-x64/bun" "$INSTALL_DIR/bun", chmod +x "$INSTALL_DIR/bun", and echo "$INSTALL_DIR" >> "$GITHUB_PATH") so all uses of INSTALL_DIR, GITHUB_PATH, and temp file paths are quoted.

packages/core/pvm-interpreter/ops-dispatchers/two-regs-two-imms-dispatcher.test.ts (2)
1-3: Unify this file on `bun:test` APIs.

Lines 1 and 3 mix `bun:test` (for mocks) with `node:test` (for hooks). Bun's mocking features are not officially supported when imported from `node:test`; consolidate to `bun:test` for all test imports to avoid unsupported interactions.

Proposed change
```diff
-import { mock, spyOn } from "bun:test";
+import { after, before, beforeEach, describe, it, mock, spyOn } from "bun:test";
 import assert from "node:assert";
-import { after, before, beforeEach, describe, it } from "node:test";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/pvm-interpreter/ops-dispatchers/two-regs-two-imms-dispatcher.test.ts` around lines 1 - 3, The test file mixes bun:test and node:test imports which can break Bun's mocking; change all test-related imports to bun:test so mocks and hooks come from the same runtime: replace the node:test import of after, before, beforeEach, describe, it with bun:test equivalents and ensure any uses of after/before/beforeEach/describe/it and imported helpers (mock, spyOn) reference the bun:test module (look for the top-level import lines and any tests using after/before/beforeEach/describe/it in two-regs-two-imms-dispatcher.test.ts).
82-85: Consider adding type guards in `mockAllMethods` to avoid spying on non-method prototype properties.

The loop currently spies on all prototype property names returned by `Object.getOwnPropertyNames`, including `constructor`. While LoadOps and DynamicJumpOps currently expose only methods, the code is brittle to future class shape changes. Adding a type check and skipping `constructor` makes the setup more robust:

Suggested change
```diff
 function mockAllMethods(obj: object) {
   const target = obj as Record<string, (...args: unknown[]) => unknown>;
   for (const method of Object.getOwnPropertyNames(Object.getPrototypeOf(obj))) {
+    if (method === "constructor") continue;
+    const candidate = (target as Record<string, unknown>)[method];
+    if (typeof candidate === "function") {
       spyOn(target, method).mockImplementation(mockFn);
+    }
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/pvm-interpreter/ops-dispatchers/two-regs-two-imms-dispatcher.test.ts` around lines 82 - 85, The test helper currently calls spyOn for every prototype property name, which can include non-methods like "constructor"; update the mockAllMethods logic (the loop that uses Object.getOwnPropertyNames(Object.getPrototypeOf(obj)) and calls spyOn(target, method).mockImplementation(mockFn)) to first skip the "constructor" property and to check that target[method] is a function (typeof target[method] === 'function') before calling spyOn, ensuring only actual methods are spied on.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmarks/README.md`:
- Line 10: Remove the leading shell prompt character from the command sample so
it conforms to markdownlint MD014; change the line containing "$ bun run --cwd
packages/misc/benchmark start" to "bun run --cwd packages/misc/benchmark start"
(remove the "$ " prefix) in the README so the example is a plain code/command
snippet.
In `@bin/jam/build-for-npm.sh`:
- Around line 66-68: Update the pinned dependency versions in the dependency
list: change "lmdb" from "3.1.3" to "3.5.4" and update "@typeberry/native" from
"0.2.0-74dd7d7" to "0.2.0-e2a2c06" while leaving "@matrixai/quic" at "2.0.9";
locate the entries for "lmdb" and "@typeberry/native" in the dependency block
and replace their version strings accordingly so the build-for-npm script
packages the updated releases.
- Around line 27-37: The bootstrap build outputs currently use ".ts" but should
be ".js": update the bun build invocations that write
"$DIST_FOLDER/bootstrap-importer.ts", "bootstrap-network.ts", and
"bootstrap-generator.ts" to use ".js" extensions so the compiled JS matches the
main entry point; then update the WORKER constant values in the three worker
index modules (the WORKER exports in importer/index.ts, jam-network/index.ts,
and block-authorship/index.ts) to reference "./bootstrap-*.js" instead of
"./bootstrap-*.ts" so runtime imports point to the compiled files.
In `@bin/lib/package.json`:
- Around line 12-43: The package.json export entries are still pointing at .ts
files because createDistPackageJson only prepends "./bin/lib/" and doesn't
convert .ts to .js, and the dtsPath logic incorrectly tries to replace /\.js$/
on a .ts path; update the createDistPackageJson function to map each export
value by first constructing the dist path used at runtime (e.g. produce a jsPath
from the original value by prepending "./bin/lib/" and replacing a trailing
".ts" with ".js") and then derive the dtsPath by replacing the trailing ".ts"
with ".d.ts" (or replacing ".js" with ".d.ts" if you already converted),
ensuring all exported paths in the generated package.json reference .js for
runtime and .d.ts for typings.
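The path mapping described in that prompt can be sketched as a pure helper. The `./bin/lib/` prefix and the single-string export shape are assumptions taken from this comment, not the real `createDistPackageJson` signature:

```typescript
// Hypothetical mapping for one export entry: prepend the dist prefix,
// rewrite the runtime path from .ts to .js, and derive .d.ts from the
// same source path (never from an already-converted .js path).
function mapExportPath(value: string): { jsPath: string; dtsPath: string } {
  const prefixed = `./bin/lib/${value.replace(/^\.\//, "")}`;
  const jsPath = prefixed.replace(/\.ts$/, ".js");
  const dtsPath = prefixed.replace(/\.ts$/, ".d.ts");
  return { jsPath, dtsPath };
}
```

Applying this per export value keeps runtime imports on `.js` and typings on `.d.ts`, which is the invariant the review comment asks for.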
In `@bin/test-runner/state-transition/state-transition.ts`:
- Line 141: Update the warning message string in the logger.warn call (the
template literal that references options.path) to fix the typo and punctuation:
change "Successfuly run a test vector with empty post state!." to a clear,
grammatically correct phrase such as "Successfully ran a test vector with an
empty post state. Please verify: ${options.path}". Ensure the corrected message
replaces the existing template literal in state-transition.ts where logger.warn
is invoked.
- Around line 83-87: The code calling findIndex to locate myBlockIndex may
return -1 if no matching .bin file is found, causing allBlocks.slice(0, -1) to
produce an incorrect previousBlocks array. To fix this, add a guard after
determining myBlockIndex in the state transition logic to check if it is -1 and
handle this case appropriately, such as throwing an error or skipping
processing, before calling slice. This change should be applied around the code
dealing with myBlockIndex and previousBlocks.
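A minimal sketch of the guard this prompt asks for, using placeholder types rather than the real test-runner structures:

```typescript
// Guard findIndex before slicing: findIndex returns -1 when no entry
// matches, and slice(0, -1) would then silently drop only the last
// element instead of signalling that the block was not found.
function previousBlocksFor(allBlocks: string[], myBlockFile: string): string[] {
  const myBlockIndex = allBlocks.findIndex((b) => b === myBlockFile);
  if (myBlockIndex === -1) {
    throw new Error(`Block file not found: ${myBlockFile}`);
  }
  return allBlocks.slice(0, myBlockIndex);
}
```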
In `@packages/jam/database-lmdb/states.ts`:
- Around line 107-112: The write is non-atomic because states.putSync is
executed outside the values.transactionSync; wrap both the states and values
updates inside a single root.db.transactionSync so either both commit or none
do. Specifically, move the call to this.states.putSync(headerHash.raw,
stateLeafs.raw) and the loop that puts into this.values into one
root.db.transactionSync callback (similar to the pattern in blocks.ts), and
preserve the existing error handling that returns a StateUpdateError if the
transaction throws; this ensures getState won't see a persisted state row
without its required values.
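The atomicity argument can be illustrated with a toy in-memory transaction. `Db` here only models the rollback semantics of lmdb's `transactionSync`; it is not the lmdb API, and the key layout is invented for the example:

```typescript
// Toy transactional store: a throw inside the callback restores the
// pre-transaction snapshot, so the states row and its values commit
// together or not at all.
class Db {
  data = new Map<string, string>();
  transactionSync(fn: () => void): void {
    const snapshot = new Map(this.data);
    try {
      fn();
    } catch (e) {
      this.data = snapshot; // roll back every write from this transaction
      throw e;
    }
  }
}

function updateState(db: Db, headerHash: string, leafs: string, values: [string, string][]): void {
  db.transactionSync(() => {
    db.data.set(`state:${headerHash}`, leafs);
    for (const [key, value] of values) {
      if (value === "") throw new Error("bad value"); // simulated mid-write failure
      db.data.set(`value:${key}`, value);
    }
  });
}
```

With both writes inside one transaction, a reader can never observe a persisted state row whose values are missing, which is exactly the invariant the review comment wants `getState` to rely on.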
In `@packages/jam/fuzz-proto/v1/handler.test.ts`:
- Around line 1-3: The test file imports 'mock' from "bun:test" but pulls
'describe' and 'it' from "node:test", which causes incompatible mock behavior;
update the imports in handler.test.ts to use only the bun:test API by importing
describe and it (and any other test helpers you need) from "bun:test" instead of
"node:test", and remove the node:assert import (or switch assertions to
bun:test's expect) so that mock, describe, and it all come from the same
runtime.
In `@packages/jam/safrole/safrole.test.ts`:
- Around line 1-3: The test imports use bun's mock/spyOn but pull describe, it,
beforeEach, afterEach from node:test; update the import so all test utilities
come from "bun:test" instead: replace the separate imports so mock, spyOn,
describe, it, beforeEach, afterEach are all imported from "bun:test" (locate the
top-level import line in safrole.test.ts and consolidate the named imports
accordingly) to ensure consistent test runtime and mocking behavior.
---
Outside diff comments:
In `@packages/core/pvm-interpreter/ops-dispatchers/three-regs-dispatcher.test.ts`:
- Around line 30-36: The test setup calls mockAllMethods(bitOps) twice causing
duplicate spies; remove the redundant second invocation of
mockAllMethods(bitOps) in the before() block so bitOps is only mocked once,
leaving the other mocks (booleanOps, moveOps, mathOps, shiftOps) unchanged and
preventing duplicate spyOn calls that cause the accessor property error.
---
Nitpick comments:
In @.github/workflows/release-bump-version.yml:
- Around line 81-91: The for-loop over the output of find (the block that sets
VERSION and iterates with for pkg in $(find ...)) is fragile with filenames
containing spaces/special chars; replace it with a safe iteration using
null-delimited output (e.g., use find ... -print0 and a while IFS= read -r -d ''
pkg; do ... done loop) or use find -exec to update each package.json, and keep
the inner update logic that reads JSON and sets p.version = '$VERSION' (the bun
-e update block) but reference $pkg safely via the chosen safe iteration method.
In @.github/workflows/utils-benchmarks.yml:
- Around line 28-42: The echo append is not quoting the target path which can
trigger SC2086 word-splitting; update the echo invocation that writes
INSTALL_DIR to GITHUB_PATH to quote the target variable (change echo
"$INSTALL_DIR" >> $GITHUB_PATH to echo "$INSTALL_DIR" >> "$GITHUB_PATH") and
ensure other uses of INSTALL_DIR/GITHUB_PATH (e.g., mkdir -p "$INSTALL_DIR")
remain quoted for safety.
In @.github/workflows/vectors-test-suites.yml:
- Around line 75-89: Quote all shell variable usages and file paths to prevent
word splitting: wrap BUN_VERSION where interpolated in strings, and quote
INSTALL_DIR and GITHUB_PATH in assignments and commands (mkdir -p
"$INSTALL_DIR", if ! "$INSTALL_DIR/bun" --version 2>/dev/null | grep -qx
"$BUN_VERSION"; then, rm -f "/tmp/bun.zip", python3 extract from "/tmp/bun.zip"
to "/tmp/bun-extract", curl -fsSL -o "/tmp/bun.zip" "...$BUN_VERSION..." , mv -f
"/tmp/bun-extract/bun-linux-x64/bun" "$INSTALL_DIR/bun", chmod +x
"$INSTALL_DIR/bun", and echo "$INSTALL_DIR" >> "$GITHUB_PATH") so all uses of
INSTALL_DIR, GITHUB_PATH, and temp file paths are quoted.
In `@bin/convert/build-for-npm.sh`:
- Around line 10-12: The script computes ROOT with ROOT=$(cd ../.. && pwd) which
breaks if the script is invoked from another CWD; update ROOT calculation to
derive the script's directory from the script path (use BASH_SOURCE or $0), then
resolve ROOT relative to that directory (e.g. script-dir/../..) and recompute
DIST_FOLDER from that ROOT so ROOT and DIST_FOLDER are always correct regardless
of current working directory; update references in build-for-npm.sh where ROOT
and DIST_FOLDER are set.
In `@bin/rpc/src/server.ts`:
- Around line 236-249: The shutdown promise for this.wss can call resolve from
both the timeout and the wss.close callback; add a simple guard (e.g., a local
boolean like "settled") so only the first path resolves and the other is
ignored, and ensure the timer is cleared when settled; update the block that
iterates this.wss.clients, the timeout handler, and the this.wss.close callback
to check/set that guard before calling resolve and logging.
In `@bin/rpc/test/e2e-setup.ts`:
- Around line 22-30: The import loop using startBlocksReader and api.importBlock
currently logs errors via resultToString but continues processing subsequent
blocks; decide whether a failed import should halt setup—if so, modify the loop
around reader and api.importBlock to stop on first error (e.g., break the loop
or call process.exit(-1)) after logging the failure; if the preferred behavior
is to fail fast in the E2E setup, ensure the code path after detecting
res.isError triggers an immediate exit (or throws) so downstream tests aren't
run against a partially imported chain.
- Around line 44-145: The blocksToImport array is hardcoded with 100 entries;
replace it with a dynamic generator to avoid maintenance (use the existing
withRelPath helper). Update the declaration of blocksToImport to
programmatically produce the sequence (e.g., Array.from or a small for loop that
builds paths with String(i+1).padStart(8,'0') and calls withRelPath) or use a
glob library to discover files; ensure the new implementation still produces the
same ordered list and keeps the variable name blocksToImport unchanged so
callers work as-is.
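The dynamic generation requested here can be sketched as follows; `withRelPath` is a hypothetical stand-in for the repo's helper, and the `blocks/00000001.bin` naming scheme is an assumption for illustration:

```typescript
// Hypothetical stand-in for the repo's withRelPath helper.
const withRelPath = (p: string): string => `../data/${p}`;

// Generate the 100 ordered block paths instead of hardcoding them.
const blocksToImport: string[] = Array.from({ length: 100 }, (_, i) =>
  withRelPath(`blocks/${String(i + 1).padStart(8, "0")}.bin`),
);
```

Keeping the `blocksToImport` name unchanged means no caller needs to change.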
In `@packages/core/codec/index.test.ts`:
- Around line 17-28: The test helper assertCodecEqual uses instanceof checks for
BitVec and BytesBlob to work around Bun's deepStrictEqual private-field issues;
add an explicit comment above the function (or above the instanceof branches)
stating this is a test-only pragmatic exception to the CODESTYLE.md rule and
must not be copied into production code, referencing assertCodecEqual, BitVec,
BytesBlob and CODESTYLE.md so reviewers understand the rationale and the
exception scope.
In `@packages/core/pvm-interpreter/ops-dispatchers/one-offset-dispatcher.test.ts`:
- Around line 24-25: The test currently iterates over
Object.getOwnPropertyNames(Object.getPrototypeOf(obj)) and calls spyOn(target,
method).mockImplementation(mockFn) for every entry, which can include
"constructor" and non-callable properties; update the loop to skip non-function
keys and "constructor" by checking each method from
Object.getOwnPropertyNames(Object.getPrototypeOf(obj)) and only calling spyOn
when method !== 'constructor' and typeof target[method] === 'function' (and
optionally ensure method in target), so spyOn/ mockImplementation is only
applied to actual callable methods on target.
- Around line 1-3: The test file mixes bun:test and node:test imports;
consolidate all test utilities to bun:test by replacing the node:test import of
hooks (after, before, beforeEach, describe, it) and node:assert with a single
import from "bun:test" (e.g., import { mock, spyOn, after, before, beforeEach,
describe, it, assert } from "bun:test") so all mocks, hooks and assertions come
from bun:test and remove the separate node imports; update any references to
these symbols (mock, spyOn, after, before, beforeEach, describe, it, assert)
accordingly.
In
`@packages/core/pvm-interpreter/ops-dispatchers/one-reg-one-ext-imm-dispatcher.test.ts`:
- Around line 23-25: The helper mockAllMethods currently iterates prototype keys
without filtering, which causes spies to be created for non-callable properties
like 'constructor' or getters/setters; update the loop that uses
Object.getOwnPropertyNames(Object.getPrototypeOf(obj)) to skip the 'constructor'
key and only call spyOn(target, method).mockImplementation(mockFn) when typeof
target[method] === 'function' (and the property exists on target), ensuring you
filter to callable properties before mocking; keep the existing target and
mockFn usage.
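The same filtering applies across all the dispatcher test helpers. A pure-TypeScript sketch of the filtered loop, with `spyOn(target, method).mockImplementation(mockFn)` from `bun:test` replaced by a plain stub assignment so the sketch runs anywhere:

```typescript
// Filtered mockAllMethods: skips the constructor and any non-callable
// prototype properties (getters/setters, data fields) before stubbing.
function mockAllMethods<T extends object>(
  target: T,
  mockFn: (...args: unknown[]) => unknown,
): string[] {
  const mocked: string[] = [];
  const proto = Object.getPrototypeOf(target);
  for (const method of Object.getOwnPropertyNames(proto)) {
    if (method === "constructor") continue; // never mock the constructor
    if (typeof (target as Record<string, unknown>)[method] !== "function") continue;
    // In the real tests this is spyOn(target, method).mockImplementation(mockFn).
    (target as Record<string, unknown>)[method] = mockFn;
    mocked.push(method);
  }
  return mocked;
}
```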
In
`@packages/core/pvm-interpreter/ops-dispatchers/two-regs-two-imms-dispatcher.test.ts`:
- Around line 1-3: The test file mixes bun:test and node:test imports which can
break Bun's mocking; change all test-related imports to bun:test so mocks and
hooks come from the same runtime: replace the node:test import of after, before,
beforeEach, describe, it with bun:test equivalents and ensure any uses of
after/before/beforeEach/describe/it and imported helpers (mock, spyOn) reference
the bun:test module (look for the top-level import lines and any tests using
after/before/beforeEach/describe/it in two-regs-two-imms-dispatcher.test.ts).
- Around line 82-85: The test helper currently calls spyOn for every prototype
property name, which can include non-methods like "constructor"; update the
mockAllMethods logic (the loop that uses
Object.getOwnPropertyNames(Object.getPrototypeOf(obj)) and calls spyOn(target,
method).mockImplementation(mockFn)) to first skip the "constructor" property and
to check that target[method] is a function (typeof target[method] ===
'function') before calling spyOn, ensuring only actual methods are spied on.
In `@packages/workers/api-node/protocol.ts`:
- Around line 92-103: The Ready handshake can hang if the worker never sends
WorkerControlPlaneMsg.Ready; update the spawnWorker flow by adding a timeout
when you register onReady: start a timer (e.g., setTimeout) alongside
registering onReady, and if the timeout fires reject the spawnWorker promise
with a clear error message, remove the message listener via
worker.off("message", onReady), and perform any cleanup (e.g.,
worker.terminate() if appropriate); when onReady receives the Ready message
clear the timer and proceed to call worker.off("message", onReady) and
worker.postMessage(msg, [msg.parentPort, ...transferredPorts]). Ensure the
timeout duration is configurable or documented and that the rejection includes
context (worker id/role) for debugging.
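The timeout-guarded handshake can be sketched like this. The worker is reduced to an EventEmitter-based stand-in, the `"ready"` message tag stands in for `WorkerControlPlaneMsg.Ready`, and the default timeout is an assumption; the real code would call `worker.postMessage(msg, [msg.parentPort, ...transferredPorts])` after resolving:

```typescript
import { EventEmitter } from "node:events";

const READY_TIMEOUT_MS = 5_000; // assumed default; make configurable in real code

interface WorkerLike extends EventEmitter {
  terminate(): void;
}

function waitForReady(worker: WorkerLike, role: string, timeoutMs = READY_TIMEOUT_MS): Promise<void> {
  return new Promise((resolve, reject) => {
    let timer: ReturnType<typeof setTimeout>;
    const onReady = (msg: { type?: string }) => {
      if (msg?.type !== "ready") return; // ignore unrelated messages
      clearTimeout(timer);
      worker.off("message", onReady);
      resolve();
    };
    timer = setTimeout(() => {
      worker.off("message", onReady);
      worker.terminate();
      reject(new Error(`Worker '${role}' did not send Ready within ${timeoutMs}ms`));
    }, timeoutMs);
    worker.on("message", onReady);
  });
}
```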
In `@patches/`@matrixai%2Fquic@2.0.9.patch:
- Around line 45-48: Lower the noisy ECONNREFUSED log by changing the event
handler on this.socket: in the error listener registered via
this.socket.on('error', (err) => { ... }), replace this.logger.info(...) with
this.logger.debug(...) (or use your project's rate-limited logging helper) so
ECONNREFUSED messages are logged at debug level instead of info; keep the
existing message text (e.g., `Ignoring ECONNREFUSED on socket: ${err.message}`)
for context.
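The level demotion can be isolated into a small helper, sketched below; the logger method names (`debug`/`info`) are assumptions about the project's logger API, and wiring it in would look like `this.logger[socketErrorLogLevel(err)](...)`:

```typescript
// ECONNREFUSED is expected noise on the socket, so it drops to debug level;
// every other error keeps the current info level.
function socketErrorLogLevel(err: { code?: string }): "debug" | "info" {
  return err.code === "ECONNREFUSED" ? "debug" : "info";
}
```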
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: dc8b2d09-8d5f-4e2f-94db-420b3491b93e
⛔ Files ignored due to path filters (2)
- `bun.lock` is excluded by `!**/*.lock`
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (142)
.github/workflows/build-qa.yml, .github/workflows/build-src.yml, .github/workflows/e2e-jam.yml, .github/workflows/e2e-rpc.yml, .github/workflows/publish-pages.yml, .github/workflows/publish.yml, .github/workflows/release-bump-version.yml, .github/workflows/utils-benchmarks.yml, .github/workflows/vectors-test-suites.yml, Dockerfile, README.md, benchmarks/README.md, bin/convert/build-for-npm.sh, bin/convert/package.json, bin/ipc2rpc/package.json, bin/jam/build-for-bun.sh, bin/jam/build-for-npm.sh, bin/jam/bun-entry.ts, bin/jam/package.json, bin/jam/test/e2e.ts, bin/jam/tiny-network.ts, bin/lib/package.json, bin/lib/scripts/build-lib.ts, bin/lib/scripts/extract-examples.ts, bin/pvm/package.json, bin/rpc/package.json, bin/rpc/src/server.ts, bin/rpc/test/e2e-setup.ts, bin/tci/package.json, bin/test-runner/common.test.ts, bin/test-runner/common.ts, bin/test-runner/index.ts, bin/test-runner/jam-conformance-071.ts, bin/test-runner/jam-conformance-072.ts, bin/test-runner/javajam-071.ts, bin/test-runner/package.json, bin/test-runner/reporter.ts, bin/test-runner/state-transition/state-transition.ts, bin/test-runner/w3f-072.ts, bin/test-runner/w3f-davxy-071.ts, bin/test-runner/w3f-davxy-072.ts, bin/test-runner/w3f/ed25519.ts, bin/test-runner/w3f/erasure-coding.ts, bin/test-runner/w3f/shuffling.ts, bin/test-runner/w3f/trie.ts, bunfig.toml, package.json, packages/configs/package.json, packages/core/bytes/package.json, packages/core/codec/descriptors.ts, packages/core/codec/index.test.ts, packages/core/codec/package.json, packages/core/collections/package.json, packages/core/collections/sized-array.test.ts, packages/core/concurrent/package.json, packages/core/crypto/package.json, packages/core/erasure-coding/package.json, packages/core/hash/package.json, packages/core/json-parser/index.test.ts, packages/core/json-parser/package.json, packages/core/listener/package.json, packages/core/logger/package.json, packages/core/mmr/package.json, packages/core/networking/certificate.ts, packages/core/networking/package.json, packages/core/numbers/package.json, packages/core/ordering/package.json, packages/core/pvm-host-calls/package.json, packages/core/pvm-interface/package.json, packages/core/pvm-interpreter-ananas/package.json, packages/core/pvm-interpreter/ops-dispatchers/no-args-dispatcher.test.ts, packages/core/pvm-interpreter/ops-dispatchers/one-imm-dispatcher.test.ts, packages/core/pvm-interpreter/ops-dispatchers/one-offset-dispatcher.test.ts, packages/core/pvm-interpreter/ops-dispatchers/one-reg-one-ext-imm-dispatcher.test.ts, packages/core/pvm-interpreter/ops-dispatchers/one-reg-one-imm-dispatcher.test.ts, packages/core/pvm-interpreter/ops-dispatchers/one-reg-one-imm-one-offset-dispatcher.test.ts, packages/core/pvm-interpreter/ops-dispatchers/one-reg-two-imms-dispatcher.test.ts, packages/core/pvm-interpreter/ops-dispatchers/three-regs-dispatcher.test.ts, packages/core/pvm-interpreter/ops-dispatchers/two-imms-dispatcher.test.ts, packages/core/pvm-interpreter/ops-dispatchers/two-regs-dispatcher.test.ts, packages/core/pvm-interpreter/ops-dispatchers/two-regs-one-imm-dispatcher.test.ts, packages/core/pvm-interpreter/ops-dispatchers/two-regs-one-offset-dispatcher.test.ts, packages/core/pvm-interpreter/ops-dispatchers/two-regs-two-imms-dispatcher.test.ts, packages/core/pvm-interpreter/package.json, packages/core/shuffling/package.json, packages/core/trie/package.json, packages/core/utils/package.json, packages/extensions/ipc/package.json, packages/jam/block-json/package.json, packages/jam/block/package.json, packages/jam/config-node/node-config.test.ts, packages/jam/config-node/package.json, packages/jam/config/package.json, packages/jam/database-lmdb/blocks.ts, packages/jam/database-lmdb/package.json, packages/jam/database-lmdb/states.test.ts, packages/jam/database-lmdb/states.ts, packages/jam/database/package.json, packages/jam/executor/package.json, packages/jam/fuzz-proto/package.json, packages/jam/fuzz-proto/v1/handler.test.ts, packages/jam/in-core/package.json, packages/jam/jam-host-calls/general/log.ts, packages/jam/jam-host-calls/package.json, packages/jam/jamnp-s/package.json, packages/jam/node/index.ts, packages/jam/node/package.json, packages/jam/rpc-client/package.json, packages/jam/rpc-validation/package.json, packages/jam/safrole/package.json, packages/jam/safrole/safrole.test.ts, packages/jam/state-json/package.json, packages/jam/state-merkleization/package.json, packages/jam/state-vectors/package.json, packages/jam/state/package.json, packages/jam/transition/disputes/package.json, packages/jam/transition/externalities/accumulate-fetch-externalities.test.ts, packages/jam/transition/package.json, packages/misc/benchmark/package.json, packages/workers/api-node/package.json, packages/workers/api-node/protocol.ts, packages/workers/api/package.json, packages/workers/block-authorship/bootstrap-generator.mjs, packages/workers/block-authorship/bootstrap-generator.ts, packages/workers/block-authorship/bootstrap-main.ts, packages/workers/block-authorship/generator.test.ts, packages/workers/block-authorship/index.ts, packages/workers/block-authorship/package.json, packages/workers/block-authorship/ticket-generator.test.ts, packages/workers/comms-authorship-network/package.json, packages/workers/importer/bootstrap-importer.mjs, packages/workers/importer/bootstrap-importer.ts, packages/workers/importer/bootstrap-main.ts, packages/workers/importer/index.ts, packages/workers/importer/package.json, packages/workers/jam-network/bootstrap-main.ts, packages/workers/jam-network/bootstrap-network.mjs, packages/workers/jam-network/bootstrap-network.ts, packages/workers/jam-network/index.ts, packages/workers/jam-network/package.json, patches/@matrixai%2Fquic@2.0.9.patch, start.sh
💤 Files with no reviewable changes (8)
- packages/workers/importer/bootstrap-importer.mjs
- packages/core/collections/sized-array.test.ts
- packages/workers/block-authorship/bootstrap-generator.mjs
- packages/workers/jam-network/bootstrap-network.mjs
- packages/workers/block-authorship/bootstrap-main.ts
- packages/workers/jam-network/bootstrap-main.ts
- packages/workers/importer/bootstrap-main.ts
- bin/test-runner/reporter.ts
```diff
-$ npm start -w @typeberry/benchmark
+$ bun run --cwd packages/misc/benchmark start
```
Remove shell prompt prefix to satisfy markdownlint MD014.
Line 10 uses $ before the command; this triggers the documented lint warning.
💡 Suggested doc fix
-$ bun run --cwd packages/misc/benchmark start
+bun run --cwd packages/misc/benchmark start
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-$ bun run --cwd packages/misc/benchmark start
+bun run --cwd packages/misc/benchmark start
```
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)
[warning] 10-10: Dollar signs used before commands without showing output
(MD014, commands-show-output)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmarks/README.md` at line 10, Remove the leading shell prompt character
from the command sample so it conforms to markdownlint MD014; change the line
containing "$ bun run --cwd packages/misc/benchmark start" to "bun run --cwd
packages/misc/benchmark start" (remove the "$ " prefix) in the README so the
example is a plain code/command snippet.
```sh
bun build --target=node $EXTERNALS \
  --outfile "$DIST_FOLDER/bootstrap-importer.ts" \
  "$ROOT/packages/workers/importer/bootstrap-importer.ts"

# Build all workers separately and then the main binary
$BUILD ./packages/workers/importer/index.ts -o $DIST_FOLDER/importer
$BUILD ./packages/workers/jam-network/index.ts -o $DIST_FOLDER/jam-network
$BUILD ./packages/workers/block-authorship/index.ts -o $DIST_FOLDER/block-authorship
bun build --target=node $EXTERNALS \
  --outfile "$DIST_FOLDER/bootstrap-network.ts" \
  "$ROOT/packages/workers/jam-network/bootstrap-network.ts"

# copy some files that should be there
cp ./LICENSE $DIST_FOLDER/
cp ./README.md $DIST_FOLDER/
bun build --target=node $EXTERNALS \
  --outfile "$DIST_FOLDER/bootstrap-generator.ts" \
  "$ROOT/packages/workers/block-authorship/bootstrap-generator.ts"
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for references to these bootstrap worker files
rg -n "bootstrap-importer|bootstrap-network|bootstrap-generator" --type=ts -C2
Repository: FluffyLabs/typeberry
Length of output: 841
🏁 Script executed:
# Find ALL references to bootstrap worker files across the entire codebase
rg "bootstrap-importer|bootstrap-network|bootstrap-generator" -g '!node_modules' -C3
Repository: FluffyLabs/typeberry
Length of output: 1946
🏁 Script executed:
# Check what the actual output looks like - examine the build process more
cat -n bin/jam/build-for-npm.sh | head -50
Repository: FluffyLabs/typeberry
Length of output: 2119
🏁 Script executed:
# Look for any references that might indicate how these files are used in production
rg "WORKER|worker" packages/workers --type=ts -B2 -A2
Repository: FluffyLabs/typeberry
Length of output: 31816
Bootstrap files have inconsistent extensions with the main entry point.
The build script outputs bootstrap-*.ts files with a .ts extension, while the main entry point uses .js. Since bun build --target=node produces compiled JavaScript, the bootstrap outputs should use .js for consistency with the main entry point.
Changing the extension requires updating both the build script output filenames and the source files that reference them:
Required changes
- Build script (`bin/jam/build-for-npm.sh` lines 28, 32, 36): change `.ts` to `.js`
- Source files (`packages/workers/*/index.ts`): update all `WORKER` constant references from `./bootstrap-*.ts` to `./bootstrap-*.js`:
  - `packages/workers/importer/index.ts` line 4
  - `packages/workers/jam-network/index.ts` line 3
  - `packages/workers/block-authorship/index.ts` line 4
🧰 Tools
🪛 Shellcheck (0.11.0)
[info] 27-27: Double quote to prevent globbing and word splitting.
(SC2086)
[info] 31-31: Double quote to prevent globbing and word splitting.
(SC2086)
[info] 35-35: Double quote to prevent globbing and word splitting.
(SC2086)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/jam/build-for-npm.sh` around lines 27 - 37, The bootstrap build outputs
currently use ".ts" but should be ".js": update the bun build invocations that
write "$DIST_FOLDER/bootstrap-importer.ts", "bootstrap-network.ts", and
"bootstrap-generator.ts" to use ".js" extensions so the compiled JS matches the
main entry point; then update the WORKER constant values in the three worker
index modules (the WORKER exports in importer/index.ts, jam-network/index.ts,
and block-authorship/index.ts) to reference "./bootstrap-*.js" instead of
"./bootstrap-*.ts" so runtime imports point to the compiled files.
```diff
 "lmdb": "3.1.3",
-"@matrixai/quic": "2.0.9"
+"@matrixai/quic": "2.0.9",
+"@typeberry/native": "0.2.0-74dd7d7"
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check latest versions and security advisories for the pinned dependencies
echo "=== Checking npm registry for latest versions ==="
curl -s https://registry.npmjs.org/lmdb/latest | jq -r '.version'
curl -s https://registry.npmjs.org/@matrixai/quic/latest | jq -r '.version'
echo "=== Checking GitHub security advisories ==="
gh api graphql -f query='
{
securityVulnerabilities(first: 5, ecosystem: NPM, package: "lmdb") {
nodes {
advisory { summary severity publishedAt }
vulnerableVersionRange
firstPatchedVersion { identifier }
}
}
}'
Repository: FluffyLabs/typeberry
Length of output: 219
🏁 Script executed:
echo "=== Checking npm registry for `@typeberry/native` latest version ==="
curl -s https://registry.npmjs.org/@typeberry/native/latest | jq -r '.version'
echo "=== Checking for security advisories for `@matrixai/quic` ==="
curl -s "https://api.osv.dev/v1/query?package=@matrixai/quic&ecosystem=npm" | jq '.vulnerabilities'
echo "=== Checking lmdb changelog for major changes between 3.1.3 and 3.5.4 ==="
curl -s https://registry.npmjs.org/lmdb | jq '.versions | keys[] | select(. >= "3.1.3" and . <= "3.5.4")' | tail -20
Repository: FluffyLabs/typeberry
Length of output: 444
🏁 Script executed:
echo "=== Fetching lmdb release notes from npm ==="
curl -s https://registry.npmjs.org/lmdb | jq '.versions["3.5.4"].description' | head -20
echo "=== Checking lmdb GitHub releases for recent major changes ==="
curl -s https://api.github.com/repos/kriszyp/lmdb-js/releases | jq -r '.[] | select(.tag_name >= "v3.2.0") | "\(.tag_name): \(.name)\n\(.body | split("\n")[0:3] | join(" "))"' | head -40
Repository: FluffyLabs/typeberry
Length of output: 523
🏁 Script executed:
echo "=== Fetching lmdb CHANGELOG from GitHub ==="
curl -s https://raw.githubusercontent.com/kriszyp/lmdb-js/main/CHANGELOG.md 2>/dev/null | head -100 || echo "No CHANGELOG found"
echo "=== Checking lmdb package.json for breaking changes markers ==="
curl -s https://registry.npmjs.org/lmdb/3.5.4 | jq '.keywords, .engines'
Repository: FluffyLabs/typeberry
Length of output: 306
Update lmdb to version 3.5.4 and @typeberry/native to 0.2.0-e2a2c06.
The pinned dependency lmdb is 4 minor versions behind (3.1.3 → 3.5.4), and @typeberry/native is on an older prerelease commit (0.2.0-74dd7d7 → 0.2.0-e2a2c06). @matrixai/quic is current at 2.0.9. While no security vulnerabilities exist for any of these versions, updating to the latest releases ensures compatibility with current standards and bug fixes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/jam/build-for-npm.sh` around lines 66 - 68, Update the pinned dependency
versions in the dependency list: change "lmdb" from "3.1.3" to "3.5.4" and
update "@typeberry/native" from "0.2.0-74dd7d7" to "0.2.0-e2a2c06" while leaving
"@matrixai/quic" at "2.0.9"; locate the entries for "lmdb" and
"@typeberry/native" in the dependency block and replace their version strings
accordingly so the build-for-npm script packages the updated releases.
```diff
 "./block": "./exports/block.ts",
 "./block-json": "./exports/block-json.ts",
 "./bytes": "./exports/bytes.ts",
 "./codec": "./exports/codec.ts",
 "./collections": "./exports/collections.ts",
 "./config": "./exports/config.ts",
 "./config-node": "./exports/config-node.ts",
 "./crypto": "./exports/crypto.ts",
 "./database": "./exports/database.ts",
 "./erasure-coding": "./exports/erasure-coding.ts",
 "./fuzz-proto": "./exports/fuzz-proto.ts",
 "./hash": "./exports/hash.ts",
 "./importer": "./exports/importer.ts",
 "./jam-host-calls": "./exports/jam-host-calls.ts",
 "./json-parser": "./exports/json-parser.ts",
 "./logger": "./exports/logger.ts",
 "./mmr": "./exports/mmr.ts",
 "./numbers": "./exports/numbers.ts",
 "./ordering": "./exports/ordering.ts",
 "./package.json": "./package.json",
-"./pvm-host-calls": "./exports/pvm-host-calls.js",
-"./pvm-interface": "./exports/pvm-interface.js",
-"./pvm-interpreter": "./exports/pvm-interpreter.js",
-"./shuffling": "./exports/shuffling.js",
-"./state": "./exports/state.js",
-"./state-json": "./exports/state-json.js",
-"./state-merkleization": "./exports/state-merkleization.js",
-"./state-vectors": "./exports/state-vectors.js",
-"./transition": "./exports/transition.js",
-"./trie": "./exports/trie.js",
-"./utils": "./exports/utils.js",
-"./workers-api": "./exports/workers-api.js"
+"./pvm-host-calls": "./exports/pvm-host-calls.ts",
+"./pvm-interface": "./exports/pvm-interface.ts",
+"./pvm-interpreter": "./exports/pvm-interpreter.ts",
+"./shuffling": "./exports/shuffling.ts",
+"./state": "./exports/state.ts",
+"./state-json": "./exports/state-json.ts",
+"./state-merkleization": "./exports/state-merkleization.ts",
+"./state-vectors": "./exports/state-vectors.ts",
+"./transition": "./exports/transition.ts",
+"./trie": "./exports/trie.ts",
+"./utils": "./exports/utils.ts",
+"./workers-api": "./exports/workers-api.ts"
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if the build script transforms exports for the published package
cat bin/lib/scripts/build-lib.ts 2>/dev/null | head -100
# Check the tsconfig for output settings
cat tsconfig.lib.json 2>/dev/null | head -50
Repository: FluffyLabs/typeberry
Length of output: 3791
🏁 Script executed:
# Look for the export transformation logic in the build script
grep -A 20 "exports" bin/lib/scripts/build-lib.ts | head -60
# Also check if there's a .npmignore or any other publishing configuration
ls -la bin/lib/ | grep -E "(npmignore|\.gitignore)"
Repository: FluffyLabs/typeberry
Length of output: 1983
🏁 Script executed:
# Find the section that actually transforms the exports field
sed -n '/function.*[Pp]ackage\|exports.*=/,/^}/p' bin/lib/scripts/build-lib.ts | head -150
Repository: FluffyLabs/typeberry
Length of output: 5595
🏁 Script executed:
# Get the rest of the createDistPackageJson function
sed -n '/function createDistPackageJson/,/^}/p' bin/lib/scripts/build-lib.ts | tail -80
Repository: FluffyLabs/typeberry
Length of output: 2365
The build script doesn't transform .ts exports to .js—the published package will have broken exports.
The source package.json exports point to .ts files, but the createDistPackageJson function only prepends ./bin/lib/ without converting .ts to .js. Since TypeScript compilation produces .js files in dist/lib/, the published package.json will reference non-existent .ts paths. The dtsPath logic also fails because it tries to replace .js$ when the path contains .ts.
The build script needs to replace .ts with .js in export paths:
const jsPath = `./bin/lib${value.startsWith("./") ? value.slice(1) : value}`.replace(/\.ts$/, '.js');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/lib/package.json` around lines 12 - 43, The package.json export entries
are still pointing at .ts files because createDistPackageJson only prepends
"./bin/lib/" and doesn't convert .ts to .js, and the dtsPath logic incorrectly
tries to replace /\.js$/ on a .ts path; update the createDistPackageJson
function to map each export value by first constructing the dist path used at
runtime (e.g. produce a jsPath from the original value by prepending
"./bin/lib/" and replacing a trailing ".ts" with ".js") and then derive the
dtsPath by replacing the trailing ".ts" with ".d.ts" (or replacing ".js" with
".d.ts" if you already converted), ensuring all exported paths in the generated
package.json reference .js for runtime and .d.ts for typings.
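The rewriting described above can be sketched as a small pure function; the helper name and return shape are illustrative, not the repo's actual API:

```typescript
// Map a source export path to its dist runtime path (.ts -> .js) and
// typings path (.ts -> .d.ts), prefixed with ./bin/lib as in the build script.
function distExportPaths(value: string): { jsPath: string; dtsPath: string } {
  const prefixed = `./bin/lib${value.startsWith("./") ? value.slice(1) : value}`;
  return {
    jsPath: prefixed.replace(/\.ts$/, ".js"),
    dtsPath: prefixed.replace(/\.ts$/, ".d.ts"),
  };
}
```

Non-`.ts` entries such as `./package.json` pass through with only the prefix applied.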
```ts
// Match by filename - each .json test file has a corresponding .bin block file.
const testBinName = `${path.basename(options.path, ".json")}.bin`;
const myBlockIndex = allBlocks.findIndex((block) => block._fileName === testBinName);
const previousBlocks = allBlocks.slice(0, myBlockIndex);
```
Guard against unmatched .bin test file before computing history.
If findIndex returns -1, Line 86 becomes allBlocks.slice(0, -1), which silently builds an incorrect previousBlocks set and can skew transition verification.
Suggested fix
// Match by filename - each .json test file has a corresponding .bin block file.
const testBinName = `${path.basename(options.path, ".json")}.bin`;
const myBlockIndex = allBlocks.findIndex((block) => block._fileName === testBinName);
+ assert.ok(myBlockIndex >= 0, `Missing matching block file: ${testBinName} for ${options.path}`);
 const previousBlocks = allBlocks.slice(0, myBlockIndex);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/test-runner/state-transition/state-transition.ts` around lines 83 - 87,
The code calling findIndex to locate myBlockIndex may return -1 if no matching
.bin file is found, causing allBlocks.slice(0, -1) to produce an incorrect
previousBlocks array. To fix this, add a guard after determining myBlockIndex in
the state transition logic to check if it is -1 and handle this case
appropriately, such as throwing an error or skipping processing, before calling
slice. This change should be applied around the code dealing with myBlockIndex
and previousBlocks.
```diff
 // some conformance test vectors have an empty state, we run them, yet do not perform any assertions.
 if (testContent.post_state.keyvals.length === 0) {
-  options.test.skip(`Successfuly run a test vector with empty post state!. Please verify: ${options.path}`);
+  logger.warn`Successfuly run a test vector with empty post state!. Please verify: ${options.path}`;
```
Fix warning message typo and punctuation.
Line 141 contains a typo (Successfuly) and awkward punctuation (state!.), reducing log clarity.
Suggested fix
- logger.warn`Successfuly run a test vector with empty post state!. Please verify: ${options.path}`;
+ logger.warn`Successfully ran a test vector with empty post state. Please verify: ${options.path}`;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-logger.warn`Successfuly run a test vector with empty post state!. Please verify: ${options.path}`;
+logger.warn`Successfully ran a test vector with empty post state. Please verify: ${options.path}`;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/test-runner/state-transition/state-transition.ts` at line 141, Update the
warning message string in the logger.warn call (the template literal that
references options.path) to fix the typo and punctuation: change "Successfuly
run a test vector with empty post state!." to a clear, grammatically correct
phrase such as "Successfully ran a test vector with an empty post state. Please
verify: ${options.path}". Ensure the corrected message replaces the existing
template literal in state-transition.ts where logger.warn is invoked.
```ts
this.states.putSync(headerHash.raw, stateLeafs.raw);
this.values.transactionSync(() => {
  for (const [hash, val] of values) {
    this.values.putSync(hash.raw, val.raw);
  }
});
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
rg -n -C4 'updateAndCommit|states\.putSync|values\.transactionSync|root\.db\.transactionSync' packages/jam/database-lmdb/states.ts
Repository: FluffyLabs/typeberry
Length of output: 1694
🏁 Script executed:
# Check if root.db.transactionSync exists and how it's used
rg -n 'root\.db\.transactionSync' packages/jam/database-lmdb/ -A 3 -B 1
# Also search for transactionSync usage patterns
rg -n '\.transactionSync\(' packages/jam/database-lmdb/ -B 1 -A 3
Repository: FluffyLabs/typeberry
Length of output: 1369
🏁 Script executed:
# Find and examine the getState method
rg -n 'getState' packages/jam/database-lmdb/states.ts -A 10
Repository: FluffyLabs/typeberry
Length of output: 703
🏁 Script executed:
# Check the class structure to understand root.db, states, values relationship
ast-grep --pattern $'class $_ {
$$$
root: $_
states: $_
values: $_
$$$
}' packages/jam/database-lmdb/states.ts
Repository: FluffyLabs/typeberry
Length of output: 46
🏁 Script executed:
# Get the complete getState implementation, especially the error handling
rg -n 'getState' packages/jam/database-lmdb/states.ts -A 20 | head -40
Repository: FluffyLabs/typeberry
Length of output: 1084
🏁 Script executed:
# Also check if there's any error handling or try-catch around the values.transactionSync in updateAndCommit
rg -n 'updateAndCommit' packages/jam/database-lmdb/states.ts -A 25 | head -50
Repository: FluffyLabs/typeberry
Length of output: 2195
Make state and value writes atomic to prevent partial commits.
Lines 107–112 execute two separate DB operations: states.putSync on line 107 succeeds first, then values.transactionSync on line 108–112. If the values transaction fails (e.g., I/O error), the state key is already durable in the states store while values may be incomplete. The catch block (lines 113–116) logs the error and returns a StateUpdateError, but the state row persists. On the next getState call, it retrieves the leaf nodes successfully but throws "Missing required value" (line 150) when the getter callback accesses a missing value.
Wrap both operations in a single root.db.transactionSync call, matching the pattern used in blocks.ts (lines 64–67), to ensure atomicity:
Proposed fix
- this.states.putSync(headerHash.raw, stateLeafs.raw);
- this.values.transactionSync(() => {
- for (const [hash, val] of values) {
- this.values.putSync(hash.raw, val.raw);
- }
- });
+ this.root.db.transactionSync(() => {
+ this.states.putSync(headerHash.raw, stateLeafs.raw);
+ for (const [hash, val] of values) {
+ this.values.putSync(hash.raw, val.raw);
+ }
+ });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/jam/database-lmdb/states.ts` around lines 107 - 112, The write is
non-atomic because states.putSync is executed outside the
values.transactionSync; wrap both the states and values updates inside a single
root.db.transactionSync so either both commit or none do. Specifically, move the
call to this.states.putSync(headerHash.raw, stateLeafs.raw) and the loop that
puts into this.values into one root.db.transactionSync callback (similar to the
pattern in blocks.ts), and preserve the existing error handling that returns a
StateUpdateError if the transaction throws; this ensures getState won't see a
persisted state row without its required values.
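The failure mode and the fix can be demonstrated with a toy in-memory stand-in for the two stores. This is not the lmdb API; the real fix wraps `states.putSync` and the `values` loop in `root.db.transactionSync`, but the rollback behavior it relies on is the same:

```typescript
// Toy two-store "database" whose transactionSync snapshots both stores and
// rolls them back if the callback throws.
class ToyDb {
  states = new Map<string, string>();
  values = new Map<string, string>();

  transactionSync<T>(fn: () => T): T {
    const states = new Map(this.states);
    const values = new Map(this.values);
    try {
      return fn();
    } catch (e) {
      // Roll back: neither store keeps a partial write.
      this.states = states;
      this.values = values;
      throw e;
    }
  }
}

function updateAndCommit(db: ToyDb, headerHash: string, leafs: string, entries: [string, string][]): void {
  // Both writes happen inside one transaction, so a failure in the values
  // loop also discards the state-row write.
  db.transactionSync(() => {
    db.states.set(headerHash, leafs);
    for (const [hash, val] of entries) {
      if (val === "BAD") throw new Error("simulated I/O error");
      db.values.set(hash, val);
    }
  });
}
```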
```diff
+import { mock } from "bun:test";
 import assert from "node:assert";
-import { describe, it, type Mock, mock } from "node:test";
+import { describe, it } from "node:test";
```
🧩 Analysis chain
🌐 Web query:
In Bun 1.2, is it fully supported to mix `mock` from `bun:test` with `describe`/`it` from `node:test` in one test file, and are mock lifecycle/reset semantics identical to pure `bun:test` usage?
💡 Result:
No, in Bun 1.2, it is not fully supported to mix mock from bun:test with describe/it from node:test in one test file. Bun's test runner primarily supports its own bun:test API for full mocking capabilities, including mock, while node:test support is partial and lacks proper mock implementation (e.g., mock.fn throws TypeError). No examples or documentation show successful mixing in a single file, and open issues confirm node:test mock is unimplemented even in later versions like 1.3.1. Mock lifecycle/reset semantics cannot be identical since node:test mock does not function properly, while bun:test mock uses mock.reset, mock.restore, etc., with known issues like module mock leakage across tests/files requiring workarounds.
Citations:
- 1: Support node:test mock oven-sh/bun#24255
- 2: https://bun.com/docs/test/mocks
- 3: https://bun.com/docs/test
- 4: https://bun.com/reference/node/test
- 5: Support the `node:test` built-in API oven-sh/bun#5090
Move test imports to use only the bun:test API.
Mixing mock from bun:test with describe/it from node:test is not fully supported in Bun. The node:test runner lacks proper mock implementation (e.g., mock.fn throws TypeError), and mock lifecycle/reset semantics differ from bun:test, causing runtime failures and potential test leakage.
Required fix
```diff
-import { mock } from "bun:test";
+import { describe, it, mock } from "bun:test";
 import assert from "node:assert";
-import { describe, it } from "node:test";
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
import { describe, it, mock } from "bun:test";
import assert from "node:assert";
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/jam/fuzz-proto/v1/handler.test.ts` around lines 1 - 3, The test file
imports 'mock' from "bun:test" but pulls 'describe' and 'it' from "node:test",
which causes incompatible mock behavior; update the imports in handler.test.ts
to use only the bun:test API by importing describe and it (and any other test
helpers you need) from "bun:test" instead of "node:test", and remove the
node:assert import (or switch assertions to bun:test's expect) so that mock,
describe, and it all come from the same runtime.
```diff
+import { mock, spyOn } from "bun:test";
 import assert from "node:assert";
-import { afterEach, beforeEach, describe, it, mock } from "node:test";
+import { afterEach, beforeEach, describe, it } from "node:test";
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Check if other test files in the repo also mix bun:test and node:test imports
rg -l "from ['\"]bun:test['\"]" --type ts | xargs -I{} sh -c 'rg "from ['\''\"](node:test|bun:test)['\''\"]\b" {} | sort -u' | sort | uniq -c | sort -rn | head -20
```
Repository: FluffyLabs/typeberry
Length of output: 46
🏁 Script executed:
```bash
#!/bin/bash
# First, check if the file exists and see its actual imports
echo "=== Checking safrole.test.ts ==="
head -10 "packages/jam/safrole/safrole.test.ts"
echo -e "\n=== Searching for files with bun:test imports ==="
fd -e test.ts -e test.js | head -20 | xargs -I{} sh -c 'grep -l "bun:test" {} 2>/dev/null && echo " ^ uses bun:test"' || true
echo -e "\n=== Checking what test utilities bun:test provides ==="
find . -name "*.ts" -type f | xargs grep -h "from ['\"]bun:test['\"]" | sort -u | head -10
```
Repository: FluffyLabs/typeberry
Length of output: 740
🏁 Script executed:
```bash
#!/bin/bash
# Check what other test files import and their patterns
echo "=== Sample of test file imports ==="
fd -e test.ts | head -10 | xargs -I{} sh -c 'echo "File: {}" && head -5 {}'
```
Repository: FluffyLabs/typeberry
Length of output: 2423
Consolidate test utilities to use bun:test for consistency.
The file imports mock and spyOn from bun:test but describe, it, beforeEach, and afterEach from node:test. Since bun:test provides all of these utilities and the file is already using Bun's mocking API, consolidate all test utilities to bun:test for consistency.
🔧 Proposed fix
```diff
-import { mock, spyOn } from "bun:test";
+import { afterEach, beforeEach, describe, it, mock, spyOn } from "bun:test";
 import assert from "node:assert";
-import { afterEach, beforeEach, describe, it } from "node:test";
```
📝 Committable suggestion
```ts
import { afterEach, beforeEach, describe, it, mock, spyOn } from "bun:test";
import assert from "node:assert";
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/jam/safrole/safrole.test.ts` around lines 1 - 3, The test imports
use bun's mock/spyOn but pull describe, it, beforeEach, afterEach from
node:test; update the import so all test utilities come from "bun:test" instead:
replace the separate imports so mock, spyOn, describe, it, beforeEach, afterEach
are all imported from "bun:test" (locate the top-level import line in
safrole.test.ts and consolidate the named imports accordingly) to ensure
consistent test runtime and mocking behavior.
@fluffylabs-bot benchmark
✅ Benchmark workflow triggered successfully! 🎉 🔗 Check the Actions tab for workflow progress.
Picofuzz Benchmark Results
fallback
safrole
storage
storage_light
🤖 Automated benchmark
Actionable comments posted: 1
🧹 Nitpick comments (1)
Dockerfile (1)
4-6: Update the build-stage comment to match the actual Bun invocation. This stage now delegates to `bin/jam/build-for-bun.sh`, and that script uses `bun build --compile` without `--bytecode`. Keeping the comment in sync will avoid confusion when tracking artifact differences.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` around lines 4 - 6, Update the build-stage comment to accurately describe the current invocation: note that the stage delegates to bin/jam/build-for-bun.sh and that the script runs `bun build --compile` (not `--compile --bytecode`); change the comment text in the Dockerfile's build stage to reflect delegation to bin/jam/build-for-bun.sh and the absence of `--bytecode` in the actual bun invocation so readers aren't misled about how the artifact is produced.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/jam/build-for-bun.sh`:
- Around line 9-11: The VERSION line currently reads package.json relative to
SCRIPT_DIR which points to bin/jam; change it to read the repo root manifest
instead by referencing ROOT_DIR (use ROOT_DIR/package.json in the bun -e require
call) so VERSION is derived from the root manifest regardless of where the
script is invoked; update the VERSION assignment that references SCRIPT_DIR to
use ROOT_DIR and ensure the command still prints the version.
---
Nitpick comments:
In `@Dockerfile`:
- Around line 4-6: Update the build-stage comment to accurately describe the
current invocation: note that the stage delegates to bin/jam/build-for-bun.sh
and that the script runs `bun build --compile` (not `--compile --bytecode`);
change the comment text in the Dockerfile's build stage to reflect delegation to
bin/jam/build-for-bun.sh and the absence of `--bytecode` in the actual bun
invocation so readers aren't misled about how the artifact is produced.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 4bc48b64-4f03-437e-ac45-e55cb99f83fa
📒 Files selected for processing (3)
- Dockerfile
- bin/jam/build-for-bun.sh
- start.sh
💤 Files with no reviewable changes (1)
- start.sh
```bash
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(cd -- "$SCRIPT_DIR/../.." && pwd)"
VERSION="$(cd "$SCRIPT_DIR" && bun -e 'console.log(require("./package.json").version)')"
```
Read the version from the repo root manifest.
SCRIPT_DIR points at bin/jam, so Line 11 resolves ./package.json relative to that directory. In this PR, the Docker build copies the root manifest to /build/package.json and runs this script from /build/bin/jam, so this can fail outright or pick up the wrong version if a local manifest ever appears.
🐛 Proposed fix
```diff
-VERSION="$(cd "$SCRIPT_DIR" && bun -e 'console.log(require("./package.json").version)')"
+VERSION="$(cd "$ROOT_DIR" && bun -e 'console.log(require("./package.json").version)')"
```
📝 Committable suggestion
```bash
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(cd -- "$SCRIPT_DIR/../.." && pwd)"
VERSION="$(cd "$ROOT_DIR" && bun -e 'console.log(require("./package.json").version)')"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/jam/build-for-bun.sh` around lines 9 - 11, The VERSION line currently
reads package.json relative to SCRIPT_DIR which points to bin/jam; change it to
read the repo root manifest instead by referencing ROOT_DIR (use
ROOT_DIR/package.json in the bun -e require call) so VERSION is derived from the
root manifest regardless of where the script is invoked; update the VERSION
assignment that references SCRIPT_DIR to use ROOT_DIR and ensure the command
still prints the version.
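The intent of the fix is to resolve `package.json` against a known root directory rather than whatever directory the script happens to run from. A minimal sketch of that idea (the `readVersion` helper and the temp-dir demo are illustrative, not the actual build script):

```typescript
// Sketch: read the version from an explicitly supplied root directory
// instead of a cwd-relative require. Paths here are illustrative.
import { mkdtempSync, readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

function readVersion(rootDir: string): string {
  // Resolve the manifest against rootDir, never against process.cwd().
  const manifest = JSON.parse(readFileSync(join(rootDir, "package.json"), "utf8"));
  return manifest.version as string;
}

// Demo against a throwaway directory standing in for the repo root.
const root = mkdtempSync(join(tmpdir(), "repo-"));
writeFileSync(join(root, "package.json"), JSON.stringify({ version: "0.0.1" }));
console.log(readVersion(root)); // 0.0.1
```

In the shell script this corresponds to `cd "$ROOT_DIR"` before invoking `bun -e`, so version resolution no longer depends on the invocation directory.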
Benchmarks summary: 83/83 OK ✅
@fluffylabs-bot benchmark
✅ Benchmark workflow triggered successfully! 🎉 🔗 Check the Actions tab for workflow progress.
AI summary of the migration attempt for posterity.
Bun migration — findings & decision
Branch:
TL;DR
Performance findings
Benchmarks run via picofuzz over a UDS, no Docker, both binaries built from the same working tree (
Picofuzz
Default PVM:
| Dataset | bun median | node median | bun/node |
|---|---|---|---|
| safrole | 9.37 ms | 5.09 ms | 1.84× |
| storage | 20.90 ms | 10.63 ms | 1.97× |
| storage_light | 18.27 ms | 7.74 ms | 2.36× |
--pvm built-in (pure TypeScript)
| Dataset | bun median | node median | bun/node |
|---|---|---|---|
| safrole | 11.02 ms | 5.50 ms | 2.00× |
| storage | 55.08 ms | 45.00 ms | 1.22× |
| storage_light | 25.96 ms | 18.33 ms | 1.42× |
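As a sanity check, the bun/node ratio column in both tables can be recomputed from the quoted medians:

```typescript
// Recompute the bun/node ratio column from the medians quoted above.
// Rows: [dataset, bun median ms, node median ms].
const rows: Array<[string, number, number]> = [
  ["safrole (default PVM)", 9.37, 5.09],
  ["storage (default PVM)", 20.9, 10.63],
  ["storage_light (default PVM)", 18.27, 7.74],
  ["safrole (built-in)", 11.02, 5.5],
  ["storage (built-in)", 55.08, 45.0],
  ["storage_light (built-in)", 25.96, 18.33],
];
const ratios = rows.map(([name, bunMs, nodeMs]) => [name, (bunMs / nodeMs).toFixed(2)] as const);
for (const [name, r] of ratios) console.log(`${name}: ${r}x`);
```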
Interpretation
- The pure-JS PVM (built-in) on the heavy storage workload shows only a 1.22× gap. That's the JS-execution gap between JSC and V8 on this codebase: real, but not catastrophic.
- The WASM PVM (ananas) on the same data shows 1.97×. The increase from 1.22× → 1.97× when switching from built-in to ananas can only come from WASM execution being slower under JSC than V8.
- Industry-aligned: V8's WASM (Liftoff baseline + TurboFan) is one of the most aggressively tuned WASM implementations anywhere; JSC's BBQ/OMG is fine but materially behind.
- safrole stays around 2× regardless of PVM. Likely worker-thread postMessage overhead for the bandersnatch path, which sits on a different thread than the importer worker.
What this rules out
- It is not a JIT-disabled-in-Docker problem. The compiled binary has full JSC tier-up. We verified this with the same setup that ships in the perf rig.
- It is not a `NODE_ENV=development` / OTEL / startup overhead problem. We removed those in the multi-stage Dockerfile and the gap persists at steady state.
- `--bytecode`, `--minify`, JSC env vars (`BUN_JSC_*`) — none of these can close a WASM-execution gap, which is where most of the time goes.
Migration costs encountered
Independent of the perf result, the migration accumulated real operational tax:
Test infrastructure
- Test vectors had to run with `--smol` (effectively reduces JIT tier-up to keep memory low) because JSC under-counts WebAssembly memory for GC pressure. With 1000+ tests in one process, stale wasm instances pile up until the JS heap hits `RangeError: Out of memory`. See `bin/test-runner/index.ts:18-44`.
- Test sharding had to be added to keep each process short-lived enough not to OOM. Default 4 shards, configurable via `BUN_TEST_SHARDS`. See `bin/test-runner/index.ts`.
- Flaky tests under bun's WASM runtime appeared and had to be carved into a separate `continue-on-error` job. (Removed in `ae9ad841` after carving was no longer needed; the underlying flakiness was masked rather than fixed.)
- Bandersnatch wasm reset hack: `(bandersnatchWasm as { resetWasm?: () => Promise<void> }).resetWasm?.()` is called before every test to work around bun WASM state leakage. See `bin/test-runner/common.ts:483`.
- Inter-shard delays (`BUN_TEST_SHARD_DELAY_MS`) needed on self-hosted runners because the kernel doesn't reclaim pages fast enough between shards — without the delay the last shard sometimes hits `mprotect failed: Cannot allocate memory`.
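The sharding described above can be sketched as deterministic bucketing of test files. The FNV-1a hash and the helper names are assumptions; the real `bin/test-runner` may partition files differently.

```typescript
// Sketch of deterministic test sharding: assign each test file to one of N
// shards by hashing its path, so every run produces the same partition.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // keep the hash in uint32 range
  }
  return h;
}

// Map a test file to one of `shards` buckets; stable across runs.
function shardOf(file: string, shards: number): number {
  return fnv1a(file) % shards;
}

const shards = 4; // the real runner reads BUN_TEST_SHARDS, defaulting to 4
const files = ["a.test.ts", "b.test.ts", "c.test.ts", "d.test.ts"];
const buckets: string[][] = Array.from({ length: shards }, () => [] as string[]);
for (const f of files) buckets[shardOf(f, shards)].push(f);
```

Each shard then runs in its own short-lived process, which is what keeps per-process wasm memory growth bounded.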
Native packaging
- `@matrixai/quic` needed a patch to work with bun (`patches/@matrixai%2Fquic@2.0.9.patch`). One vendor patch on a critical native dep is a long-term maintenance liability whenever quic upgrades.
- `@matrixai/quic` does not ship `linux-arm64`, so the bun-compiled binary cannot be built for that platform. CI today only builds `linux-x64` so it hasn't surfaced, but it constrains future multi-arch.
- Native addon resolution had to be patched at runtime in `bin/jam/bun-entry.ts` because `bun build --compile` puts the bundled JS inside `/$bunfs/root/` where ordinary `require()` resolution fails for sibling `.node` files. We patch both `Module._resolveFilename` and `process.dlopen`.
- lmdb prebuild filename differs by libc: `node.napi.glibc.node` on linux vs `node.napi.node` on darwin. Original `build-for-bun.sh` only worked on darwin; the fix needed a per-platform case statement (added in this branch's Docker work; the bug had been latent on `main` because no release-event had triggered the binary build job yet).
- Bandersnatch native filename uses `${PLATFORM_GNU}` (with `-gnu` suffix on linux), not `${PLATFORM}`. Same latent-bug story.
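The runtime path fix can be sketched as a small remapping helper. This is illustrative only; the actual patch in `bin/jam/bun-entry.ts` hooks `Module._resolveFilename` and `process.dlopen` rather than exposing a helper like this, and the prefix constant is an assumption based on the summary above.

```typescript
// Sketch of the native-addon path remap: requests for .node files under
// bun's virtual bundle root are redirected to a real directory (e.g. the
// directory next to the compiled executable) before resolution.
import { join } from "node:path";

const BUNFS_PREFIX = "/$bunfs/root/";

function mapBunfsPath(request: string, execDir: string): string {
  if (request.startsWith(BUNFS_PREFIX) && request.endsWith(".node")) {
    return join(execDir, request.slice(BUNFS_PREFIX.length));
  }
  // Non-addon requests resolve normally.
  return request;
}

// Example: a glibc lmdb prebuild shipped next to the binary.
const mapped = mapBunfsPath("/$bunfs/root/node.napi.glibc.node", "/opt/typeberry");
```

In the real patch, this mapping runs inside the `Module._resolveFilename` hook so that `require("./node.napi.glibc.node")` from bundled code lands on the file on disk, and `process.dlopen` is hooked for the same reason.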
Benchmarking
- Benchmarks couldn't run under bun. benny's sample timing is wildly noisy under bun's JIT (±50–175%). Forced to keep `packages/misc/benchmark` on node + tsx even after the rest of the codebase moved to bun. See comment in `.github/workflows/utils-benchmarks.yml:43-46`.
Build / dist
- Triple-nested `bun run` chain in the original Docker image (`start.sh` → `bun run start` → `bun run --cwd bin/jam start` → `bun run ./index.ts`), each fork re-parses TS. Replaced with a multi-stage Dockerfile that ships the compiled binary directly — but that uncovered the lmdb/bandersnatch latent bugs above.
- `--bytecode` is incompatible with the current entry point because of the top-level `await import(...)` in `bin/jam/bun-entry.ts`. Could be refactored, but didn't ship since perf gain is startup-only.
- `NODE_ENV=development` was hardcoded in the dev convenience script and propagated into the Docker container. Fixed by switching to the compiled binary as ENTRYPOINT.
Why staying on node is the right call
- Direct measured cost is 1.2×–2.4× slower fuzzing depending on workload. There is no Docker / config / flag combination that closes that — the dominant component is JSC's WASM engine vs V8's, and we have no leverage over either upstream.
- The expected upside of bun (faster startup, smaller binary, better DX) doesn't outweigh the perf regression for a fuzzer / node implementation whose primary published number is import throughput.
- The maintenance overhead of the migration is real and ongoing: vendored quic patch, sharding infra, JIT-off carve-out, native-addon path patching, latent build bugs surfaced by Docker. Each of these is recurring tax, not a one-time cost.
- Tests already had to run with JIT effectively off to fit in memory. So even on bun we don't get bun's headline runtime characteristics for the test suite — we just get a different set of tradeoffs.
- Future-proofing is worse: blocked on `linux-arm64` for `@matrixai/quic`; benchmarks have to stay on node anyway because of JIT noise; the WASM gap is unlikely to close without a substantial JSC investment.
Closes #941