Commit 0b85ec9

fix(cache-with-ttl): close TOCTOU in getOrFetch inflight dedupe
`getOrFetch` did `const cached = await get(key)` *before* consulting the inflight map. `get()` itself awaits a cacache disk read, so two callers landing on a cold key could both suspend on the same I/O, both resolve `cached === undefined`, both skip the inflight check, and both start a fresh fetch. The `set()`-before-await comment was accurate, but the check ordering was the actual race: the inflight guarantee was invisibly broken for every concurrent first hit, defeating the feature the map exists to provide.

Moves the inflight check before the persistent-cache lookup so a later caller joins an in-flight fetch instead of racing it. Adds a second check after the disk read for the case where a peer registered an inflight entry while we were suspended. Regression test asserts fetchCount === 1 for three parallel cold callers.
1 parent 496017d commit 0b85ec9
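The fixed ordering can be sketched standalone. This is a minimal illustration, not the repo's actual implementation: the names `getOrFetch` and `inflightRequests` mirror the diff, but cacache is replaced by an in-memory Map behind an artificial async read, `buildKey` is omitted, and the `demo` helper is invented for the example.

```typescript
// Sketch only: in-memory stand-ins for the repo's cacache-backed cache.
type Fetcher<T> = () => Promise<T>

const store = new Map<string, unknown>()
const inflightRequests = new Map<string, Promise<unknown>>()

// Stand-in for the awaited cacache disk read: forces a suspension point,
// like the real get() does.
async function get<T>(key: string): Promise<T | undefined> {
  await new Promise(resolve => setTimeout(resolve, 0))
  return store.get(key) as T | undefined
}

async function getOrFetch<T>(key: string, fetcher: Fetcher<T>): Promise<T> {
  // 1. Join an in-flight fetch BEFORE any await.
  const preexisting = inflightRequests.get(key)
  if (preexisting) {
    return (await preexisting) as T
  }

  const cached = await get<T>(key)
  if (cached !== undefined) {
    return cached
  }

  // 2. Re-check: a peer may have registered while we were suspended in get().
  const rechecked = inflightRequests.get(key)
  if (rechecked) {
    return (await rechecked) as T
  }

  const promise = (async () => {
    try {
      const value = await fetcher()
      store.set(key, value)
      return value
    } finally {
      inflightRequests.delete(key)
    }
  })()
  // 3. Register synchronously, before this caller yields again.
  inflightRequests.set(key, promise)
  return (await promise) as T
}

export async function demo(): Promise<{ results: string[]; fetchCount: number }> {
  let fetchCount = 0
  const fetcher = async () => {
    fetchCount++
    await new Promise(resolve => setTimeout(resolve, 5))
    return 'value'
  }
  // Three parallel cold callers should share one upstream fetch.
  const results = await Promise.all([
    getOrFetch<string>('race-key', fetcher),
    getOrFetch<string>('race-key', fetcher),
    getOrFetch<string>('race-key', fetcher),
  ])
  return { results, fetchCount }
}
```

With this ordering, whichever cold caller resumes first registers the promise synchronously (no await between the re-check and `set`), so every later caller joins it at step 1 or step 2 and `fetchCount` stays at 1.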

2 files changed

Lines changed: 49 additions & 9 deletions

src/cache-with-ttl.ts

Lines changed: 18 additions & 7 deletions
@@ -421,17 +421,28 @@ export function createTtlCache(options?: TtlCacheOptions): TtlCache {
     key: string,
     fetcher: () => Promise<T>,
   ): Promise<T> {
+    const fullKey = buildKey(key)
+
+    // Join an in-flight fetch before touching the persistent cache. If we
+    // did the `await get(key)` first, two concurrent callers on a cold
+    // key would both suspend on the same disk read, both see no cached
+    // value, both skip this check, and both fire `fetcher()` — the exact
+    // thundering-herd the inflight map is supposed to prevent.
+    const preexisting = inflightRequests.get(fullKey)
+    if (preexisting) {
+      return (await preexisting) as T
+    }
+
     const cached = await get<T>(key)
     if (cached !== undefined) {
       return cached
     }
 
-    const fullKey = buildKey(key)
-
-    // Check if another request is already in flight
-    const existing = inflightRequests.get(fullKey)
-    if (existing) {
-      return (await existing) as T
+    // Re-check after the await: another caller may have registered an
+    // in-flight fetch while we were reading the persistent cache.
+    const rechecked = inflightRequests.get(fullKey)
+    if (rechecked) {
+      return (await rechecked) as T
     }
 
     // Create promise with cleanup handlers
@@ -446,7 +457,7 @@ export function createTtlCache(options?: TtlCacheOptions): TtlCache {
       }
     })()
 
-    // Set the promise IMMEDIATELY before any await to prevent race
+    // Register before awaiting so subsequent callers join this fetch.
     inflightRequests.set(fullKey, promise)
 
     // Await and return (cleanup happens in finally block)

test/unit/cache-with-ttl.test.mts

Lines changed: 31 additions & 2 deletions
@@ -554,8 +554,37 @@ describe.sequential('cache-with-ttl', () => {
 
     // All should return the value
     expect(results).toEqual(['value', 'value', 'value'])
-    // But fetcher might be called multiple times due to race conditions
-    expect(fetchCount).toBeGreaterThan(0)
+    // And the inflight map dedupes to a single upstream call.
+    // (Previously gated on `> 0` because getOrFetch let two concurrent
+    // cold-cache callers both fire fetcher — the check-then-inflight
+    // ordering was wrong. Tightened once the TOCTOU was closed.)
+    expect(fetchCount).toBe(1)
+  })
+
+  it('dedupes concurrent cold-cache fetches', async () => {
+    // Regression: previously `await get(key)` ran BEFORE the inflight
+    // map was consulted. Two callers could both suspend on the disk
+    // lookup, both see no cached value, and both start a fresh fetch.
+    let fetchCount = 0
+    let resolveFetcher: (value: string) => void = () => {}
+    const pendingValue = new Promise<string>(resolve => {
+      resolveFetcher = resolve
+    })
+    const fetcher = async () => {
+      fetchCount++
+      return pendingValue
+    }
+
+    const p1 = cache.getOrFetch('race-key', fetcher)
+    const p2 = cache.getOrFetch('race-key', fetcher)
+    const p3 = cache.getOrFetch('race-key', fetcher)
+    // Microtask gap so the first caller can register the inflight
+    // entry, even though its internal `await get()` is still pending.
+    await Promise.resolve()
+    resolveFetcher('joined')
+    const [v1, v2, v3] = await Promise.all([p1, p2, p3])
+    expect([v1, v2, v3]).toEqual(['joined', 'joined', 'joined'])
+    expect(fetchCount).toBe(1)
   })
 })
 