Commit 6ab1327

Authored by prestwich and claude
fix(rpc): consolidate hot storage reads and document consistency model (#118)
* fix(rpc): consolidate hot_reader_at_block to single MDBX transaction

  Opens the read transaction before resolving the block ID so that hash→number lookup and subsequent data query share the same MDBX snapshot. Eliminates a race where a reorg between the two transactions could produce inconsistent results.

* fix(rpc): consolidate resolve_evm_block to single MDBX transaction

  Header lookup and RevmRead construction now share one MDBX read transaction. This eliminates a race where a reorg between two separate transactions could yield a header from one fork and EVM state from another. Follows the same pattern used by the block-processor crate.

* docs(rpc): document atomic ordering guarantees on BlockTags

  Adds module-level and struct-level rustdoc explaining Acquire/Release ordering, multi-tag update order, rewind safety, and the race window between tag resolution and storage queries.

* docs(rpc): document hot/cold consistency model on StorageRpcCtx

  Adds module-level and struct-level rustdoc explaining the two-tier architecture, query routing table, resolve-then-query pattern, cold lag characteristics, and deferred hash-based verification.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Parent: e58c656

3 files changed: 162 additions & 14 deletions

crates/rpc/src/config/ctx.rs

Lines changed: 94 additions & 7 deletions
```diff
@@ -1,4 +1,60 @@
 //! RPC context wrapping [`UnifiedStorage`].
+//!
+//! # Consistency Model
+//!
+//! The RPC layer reads from two storage tiers with different consistency
+//! guarantees:
+//!
+//! - **Hot storage** (MDBX): synchronous writes, MVCC snapshot isolation,
+//!   authoritative source for state and headers. Reads open an MDBX read
+//!   transaction that provides a consistent point-in-time snapshot.
+//!
+//! - **Cold storage** (async task): eventually consistent. Writes are
+//!   dispatched asynchronously after hot storage commits via
+//!   `append_blocks()`, so cold may lag by milliseconds to seconds under
+//!   normal operation.
+//!
+//! # Query Routing
+//!
+//! | Query type | Resolution | Data source | Staleness risk |
+//! |---|---|---|---|
+//! | State (`getBalance`, `getStorageAt`, `getCode`, `getTransactionCount`) | Hot tag → height | Hot | Low — single tier, single transaction |
+//! | EVM execution (`eth_call`, `estimateGas`) | Hot tag → height | Hot | Low — single tier, single transaction |
+//! | Block/header queries | Hot tag → height | Cold | Medium — cold lag |
+//! | Transaction queries | Hot tag or hash → height | Cold | Medium — cold lag |
+//! | Receipt queries | Hot tag or hash → height | Cold | Medium — cold lag |
+//! | Log queries (`getLogs`) | Hot tag → height range | Cold | Medium — cold lag |
+//! | Filter changes | Hot tag → latest | Cold | Medium — reorg detection via ring buffer mitigates |
+//!
+//! # Resolve-then-Query Pattern
+//!
+//! Most endpoints follow a two-step pattern:
+//!
+//! 1. **Resolve** a block tag or hash to a concrete block number (reads
+//!    atomic tag values or queries hot storage's `HeaderNumbers` table).
+//! 2. **Query** the resolved block number against hot or cold storage.
+//!
+//! For hot-only queries (state, EVM), both steps share a single MDBX
+//! read transaction, eliminating races between resolution and query.
+//!
+//! For cold queries, the resolved number is passed to cold storage.
+//! Between resolution and cold query, tags can advance or a reorg can
+//! replace the block at that height. The caller gets a consistent view
+//! of the **resolved** height but may miss a newer block. This is
+//! acceptable per JSON-RPC spec — clients retry on stale data.
+//!
+//! # Future Work: Hash-based Consistency Verification
+//!
+//! For queries where correctness is critical (e.g., `eth_call`), a
+//! stronger guarantee is possible: after resolving the block number,
+//! read the block hash from hot storage and pass both (number, hash) to
+//! cold. Cold verifies the hash matches before returning data, catching
+//! reorgs that replaced the block between resolution and query.
+//!
+//! This was deferred because: (a) the reorg window is small
+//! (milliseconds), (b) it adds one extra hot storage read per query,
+//! and (c) for read-only queries the impact of returning stale data is
+//! low. See ENG-1901 for the full trade-off analysis.
 
 use crate::{
     config::{
@@ -51,8 +107,16 @@ pub(crate) struct EvmBlockContext<Db> {
 
 /// RPC context backed by [`UnifiedStorage`].
 ///
-/// Provides access to hot storage (state), cold storage (blocks/txs/receipts),
-/// block tag resolution, and optional transaction forwarding.
+/// Provides access to hot storage (state, headers), cold storage
+/// (blocks, transactions, receipts, logs), block tag resolution,
+/// filter/subscription management, and optional transaction forwarding.
+///
+/// Hot-only queries (state reads, EVM execution) open a single MDBX
+/// read transaction for both block resolution and data access. Cold
+/// queries resolve a block number from hot storage, then pass it to
+/// the cold read handle.
+///
+/// See the module-level documentation for the full consistency model.
 ///
 /// # Construction
 ///
@@ -283,8 +347,12 @@ impl<H: HotKv> StorageRpcCtx<H> {
 
     /// Resolve a [`BlockId`] to a header and revm database in one pass.
     ///
-    /// Fetches the header from hot storage and creates a revm-compatible
-    /// database snapshot at the resolved block height.
+    /// Opens a single MDBX read transaction and uses it for both the
+    /// header lookup and the revm-compatible database snapshot. This
+    /// ensures the header and EVM state come from the same database
+    /// snapshot, eliminating races where a reorg between two separate
+    /// transactions could yield a header from one fork and state from
+    /// another.
     ///
     /// For `Pending` block IDs, remaps to `Latest` and synthesizes a
    /// next-block header (incremented number, timestamp +12s, projected
@@ -294,14 +362,29 @@ impl<H: HotKv> StorageRpcCtx<H> {
         id: BlockId,
     ) -> Result<EvmBlockContext<RevmRead<H::RoTx>>, EthError>
     where
-        <H::RoTx as HotKvRead>::Error: DBErrorMarker,
+        <H::RoTx as HotKvRead>::Error: DBErrorMarker + std::error::Error + Send + Sync + 'static,
     {
         let pending = id.is_pending();
         let id = if pending { BlockId::latest() } else { id };
 
-        let sealed = self.resolve_header(id)?.ok_or(EthError::BlockNotFound(id))?;
-        let db = self.revm_state_at_height(sealed.number)?;
+        // Single MDBX read transaction for both header lookup and
+        // RevmRead construction. `hot_reader()` returns
+        // `Result<H::RoTx, StorageError>` — the `?` converts through
+        // `StorageError -> EthError::Hot`.
+        let reader = self.hot_reader()?;
+
+        let sealed = match id {
+            BlockId::Hash(h) => {
+                reader.header_by_hash(&h.block_hash).map_err(|e| ResolveError::Db(Box::new(e)))?
+            }
+            BlockId::Number(tag) => {
+                let height = self.resolve_block_tag(tag);
+                reader.get_header(height).map_err(|e| ResolveError::Db(Box::new(e)))?
+            }
+        }
+        .ok_or(EthError::BlockNotFound(id))?;
 
+        let height = sealed.number;
         let parent_hash = sealed.hash();
         let mut header = sealed.into_inner();
 
@@ -315,6 +398,10 @@ impl<H: HotKv> StorageRpcCtx<H> {
 
         let spec_id = self.spec_id_for_header(&header);
 
+        // Wrap the same reader in RevmRead — no range validation needed
+        // because we already proved the height exists by fetching the header.
+        let db = StateBuilder::new_with_database(RevmRead::at_height(reader, height)).build();
+
         Ok(EvmBlockContext { header, db, spec_id })
     }
 }
```
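The single-snapshot pattern in this diff can be sketched with a toy key-value store, where cloning the maps plays the role of opening an MDBX read transaction. All names below (`ToyHotKv`, `resolve_and_query`, the fork labels) are hypothetical stand-ins for illustration, not the crate's API:

```rust
use std::collections::HashMap;

// Toy stand-in for the MDBX environment: cloning the maps plays the
// role of opening a read transaction (a point-in-time snapshot).
#[derive(Clone, Default)]
struct ToyHotKv {
    header_numbers: HashMap<&'static str, u64>, // block hash -> height
    headers: HashMap<u64, &'static str>,        // height -> header payload
}

impl ToyHotKv {
    fn reader(&self) -> ToyHotKv {
        self.clone() // one "read transaction" = one immutable snapshot
    }
}

// Resolve a block hash and fetch its header from the SAME snapshot,
// mirroring the consolidated resolve-then-query pattern.
fn resolve_and_query(kv: &ToyHotKv, hash: &str) -> Option<(u64, &'static str)> {
    let reader = kv.reader(); // opened once, used for both steps
    let height = *reader.header_numbers.get(hash)?;
    let header = reader.headers.get(&height).copied()?;
    Some((height, header))
}

fn main() {
    let mut kv = ToyHotKv::default();
    kv.header_numbers.insert("0xabc", 7);
    kv.headers.insert(7, "fork-a-header");

    // Open a snapshot, then simulate a reorg on the live store.
    let snap = kv.reader();
    kv.headers.insert(7, "fork-b-header");

    // The snapshot still sees fork A for both lookup steps.
    let height = *snap.header_numbers.get("0xabc").unwrap();
    assert_eq!(snap.headers[&height], "fork-a-header");

    // A fresh resolve-and-query sees fork B, again consistently.
    assert_eq!(resolve_and_query(&kv, "0xabc"), Some((7, "fork-b-header")));
    println!("header and state resolved from one snapshot");
}
```

The point the commit makes is the same: with two separate readers, the reorg could land between the hash→number lookup and the data read, mixing forks; with one reader, both steps observe the same snapshot.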

crates/rpc/src/config/resolve.rs

Lines changed: 49 additions & 3 deletions
```diff
@@ -1,8 +1,42 @@
-//! Block tag tracking and BlockId resolution.
+//! Block tag tracking and `BlockId` resolution.
 //!
 //! [`BlockTags`] holds externally-updated atomic values for Latest, Safe,
 //! and Finalized block numbers. The RPC context owner is responsible for
 //! updating these as the chain progresses.
+//!
+//! # Atomic Ordering Guarantees
+//!
+//! All tag loads use [`Acquire`] ordering and all stores use [`Release`]
+//! ordering. This establishes a happens-before relationship: any data
+//! written before a tag update is visible to readers who observe the new
+//! tag value.
+//!
+//! Multi-tag updates ([`BlockTags::update_all`]) store in order:
+//! finalized → safe → latest. This ensures readers never observe a
+//! state where `latest` is behind `finalized` — they may see a
+//! slightly stale view (old latest with new finalized), but never an
+//! inverted one.
+//!
+//! During reorgs, [`BlockTags::rewind_to`] uses [`fetch_min`] with the
+//! same finalized → safe → latest order, atomically capping each tag
+//! without risk of a reader seeing `latest > finalized` while values
+//! are being decreased.
+//!
+//! # Race Window
+//!
+//! Between [`StorageRpcCtx::resolve_block_tag`] and the subsequent
+//! storage query, tags can advance. The caller gets a point-in-time
+//! snapshot of the tag values — not a guarantee that those values are
+//! still current by the time the query executes. For hot-only queries,
+//! the MDBX read transaction provides snapshot isolation. For cold
+//! queries, the resolved number may reference a block that cold storage
+//! has not yet received (returning `None`) or that a reorg has since
+//! replaced (returning stale data).
+//!
+//! [`Acquire`]: std::sync::atomic::Ordering::Acquire
+//! [`Release`]: std::sync::atomic::Ordering::Release
+//! [`fetch_min`]: std::sync::atomic::AtomicU64::fetch_min
+//! [`StorageRpcCtx::resolve_block_tag`]: crate::StorageRpcCtx::resolve_block_tag
 
 use alloy::primitives::B256;
 use signet_storage::StorageError;
@@ -29,8 +63,20 @@ pub struct SyncStatus {
 
 /// Externally-updated block tag tracker.
 ///
-/// Each tag is an `Arc<AtomicU64>` that the caller updates as the chain
-/// progresses. The RPC layer reads these atomically for tag resolution.
+/// Each tag is an [`Arc<AtomicU64>`] that the caller updates as the
+/// chain progresses. The RPC layer reads these atomically for tag
+/// resolution.
+///
+/// Tags are updated in a specific order to maintain consistency:
+/// - **Writes** ([`update_all`](Self::update_all)): finalized → safe →
+///   latest, so readers never see `latest` behind `finalized`.
+/// - **Rewinds** ([`rewind_to`](Self::rewind_to)): same order, using
+///   [`fetch_min`](AtomicU64::fetch_min) for atomic capping.
+/// - **Reads**: each tag is loaded independently with [`Acquire`]
+///   ordering. A reader may see a slightly stale combination (e.g.,
+///   new `finalized` but old `latest`) but never an inverted one.
+///
+/// [`Acquire`]: std::sync::atomic::Ordering::Acquire
 ///
 /// # Example
 ///
```
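The tag-ordering rules documented in this diff can be sketched with plain `std::sync::atomic` primitives. `ToyBlockTags` and its methods below are assumptions modeled on the rustdoc, not the crate's actual `BlockTags` type:

```rust
use std::sync::atomic::{AtomicU64, Ordering::{Acquire, Release}};

// Minimal sketch of the documented ordering rules (hypothetical type).
#[derive(Default)]
struct ToyBlockTags {
    finalized: AtomicU64,
    safe: AtomicU64,
    latest: AtomicU64,
}

impl ToyBlockTags {
    // Store finalized -> safe -> latest, so a concurrent reader can
    // observe a stale `latest` with a new `finalized`, but never
    // `latest` behind `finalized` in the inverted sense described above.
    fn update_all(&self, finalized: u64, safe: u64, latest: u64) {
        self.finalized.store(finalized, Release);
        self.safe.store(safe, Release);
        self.latest.store(latest, Release);
    }

    // Cap every tag at `height` during a reorg, in the same order,
    // using fetch_min so a concurrent writer cannot resurrect a
    // higher value mid-rewind.
    fn rewind_to(&self, height: u64) {
        self.finalized.fetch_min(height, Release);
        self.safe.fetch_min(height, Release);
        self.latest.fetch_min(height, Release);
    }

    fn latest_tag(&self) -> u64 {
        self.latest.load(Acquire)
    }
}

fn main() {
    let tags = ToyBlockTags::default();
    tags.update_all(90, 95, 100);

    // Reorg back to 97: latest is capped, finalized/safe are untouched
    // because they were already below the cap.
    tags.rewind_to(97);
    assert_eq!(tags.latest_tag(), 97);
    assert_eq!(tags.finalized.load(Acquire), 90);
}
```

The Release store paired with an Acquire load is what gives the happens-before edge the rustdoc describes: anything written before `update_all` publishes a tag is visible to a reader that observes the new value.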

crates/rpc/src/eth/helpers.rs

Lines changed: 19 additions & 4 deletions
```diff
@@ -15,6 +15,7 @@ use alloy::{
 };
 use serde::Deserialize;
 use signet_cold::ColdReceipt;
+use signet_hot::db::HotDbRead;
 use signet_storage_types::ConfirmationMeta;
 use trevm::MIN_TRANSACTION_GAS;
 
@@ -129,9 +130,17 @@ pub(crate) use response_tri;
 
 /// Resolve a block ID and open a hot storage reader at that height.
 ///
-/// Shared by account-state endpoints (`balance`, `storage_at`,
-/// `addr_tx_count`, `code_at`) which all follow the same
-/// resolve → open reader → query pattern.
+/// Opens a single MDBX read transaction and resolves the block ID
+/// within it, ensuring the resolution and subsequent data query share
+/// the same database snapshot.
+///
+/// For hash-based [`BlockId`]s, the hash→number lookup happens inside
+/// the returned transaction. For tag-based IDs, the atomic tag read
+/// is outside MDBX (unavoidable), but the snapshot is opened first so
+/// data is guaranteed to be at least as fresh as the resolved tag.
+///
+/// Used by account-state endpoints (`balance`, `storage_at`,
+/// `addr_tx_count`, `code_at`).
 pub(crate) fn hot_reader_at_block<H>(
     ctx: &crate::config::StorageRpcCtx<H>,
     id: BlockId,
@@ -140,8 +149,14 @@ where
     H: signet_hot::HotKv,
     <H::RoTx as signet_hot::model::HotKvRead>::Error: std::error::Error + Send + Sync + 'static,
 {
-    let height = ctx.resolve_block_id(id).map_err(|e| e.to_string())?;
     let reader = ctx.hot_reader().map_err(|e| e.to_string())?;
+    let height = match id {
+        BlockId::Number(tag) => ctx.resolve_block_tag(tag),
+        BlockId::Hash(h) => reader
+            .get_header_number(&h.block_hash)
+            .map_err(|e| e.to_string())?
+            .ok_or_else(|| format!("block hash not found: {}", h.block_hash))?,
+    };
     Ok((reader, height))
 }
 
```