fix(cache-with-ttl): cap in-memory memo layer with LRU eviction
`memoCache` was a plain `Map` with no size limit and no eviction. Every
`get()` cache-hit, every `set()`, and every `getAll()` wrote to it
unconditionally; expired entries were only reclaimed when the same key
was read again. In a long-lived process (dev server, VS Code extension,
daemon) that queries many distinct keys, this grows without bound: the
docstring explicitly calls `memoCache` "in-memory memoization for hot
data", but the implementation kept every key it had ever seen.
Adds a `memoMaxSize` option (default 1000) and routes all memo writes
through a `memoSet` helper that evicts the least-recently-used entry
(oldest Map insertion) when the cap is reached. Memo hits in `get()`
re-insert to bump recency so hot keys survive churn. The persistent
(cacache) layer is unaffected.
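A minimal sketch of the eviction scheme described above, assuming the `memoSet` name and `memoMaxSize` default from this description; the actual helper in the patch may differ in detail. It relies on the fact that a JavaScript `Map` iterates keys in insertion order, so deleting and re-inserting a key marks it most-recently-used, and the first key returned by the iterator is the LRU candidate.

```javascript
const memoMaxSize = 1000; // default from the `memoMaxSize` option

// Plain Map used as the memo layer; insertion order doubles as recency order.
const memoCache = new Map();

function memoSet(key, value) {
  // Delete before set so an existing key moves to the end of the
  // Map's insertion order (i.e. becomes most-recently-used).
  memoCache.delete(key);
  memoCache.set(key, value);

  // Over the cap: drop the first key in iteration order, which is
  // the least-recently-used entry under this scheme.
  if (memoCache.size > memoMaxSize) {
    const oldest = memoCache.keys().next().value;
    memoCache.delete(oldest);
  }
}
```

A memo hit in `get()` would then call `memoSet(key, value)` again with the cached value, bumping the key's recency so hot keys survive churn even as new keys push out cold ones.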