Server Design
Low-level design for @stateloom/server — the memory-bounded server scope manager. Covers the LRU cache architecture, managed scope lifecycle, fork/getScope/dispose pattern, TTL eviction, request isolation, and framework SSR integration.
Overview
The server package provides a ServerScope that wraps a root Scope from @stateloom/core with LRU eviction and TTL-based expiration. Each fork() call creates a tracked child scope for per-request isolation, preventing memory leaks in long-running Node.js servers. The server scope itself implements Scope, so it can hold shared server-wide state while forked children hold request-specific state.
Architecture
Three-Layer Structure
The server package is organized into three internal layers:
- Public API: `createServerScope()` — the only exported factory function
- Scope Management: `ServerScope` (interface), `ManagedScope` (interface), and `ManagedScopeImpl` (class) manage the lifecycle of forked scopes
- Cache Infrastructure: `LruCache<V>` — a generic O(1) LRU cache with TTL support, used internally
LRU Cache Internals
The LruCache is backed by a Map (for O(1) key lookups) and a doubly-linked list (for O(1) eviction and reordering). Sentinel head/tail nodes simplify boundary logic:
Each node stores:
```ts
interface LruNode<V> {
  readonly key: string;
  value: V;
  expiry: number; // Date.now() + ttl
  prev: LruNode<V>;
  next: LruNode<V>;
}
```

All operations are O(1):
| Operation | Implementation |
|---|---|
| `set(key, value)` | Map lookup + detach/attach head + evict tail if at capacity |
| `get(key)` | Map lookup + detach/attach head (touch) + refresh expiry |
| `delete(key)` | Map lookup + detach + fire `onEvict` |
| `sweep(now)` | Walk from tail, removing expired nodes |
| `clear()` | Walk all nodes, fire `onEvict` for each, reset sentinels |
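The detach/attach mechanics behind those O(1) bounds can be sketched in a few dozen lines. This is an illustrative standalone version (the `LruCacheSketch` name and simplified constructor are assumptions for this sketch, not the actual `@stateloom/server` source):

```ts
interface LruNode<V> {
  key: string;
  value: V;
  expiry: number;
  prev: LruNode<V>;
  next: LruNode<V>;
}

class LruCacheSketch<V> {
  #map = new Map<string, LruNode<V>>();
  #head: LruNode<V>; // sentinel on the most-recent side
  #tail: LruNode<V>; // sentinel on the least-recent side

  constructor(
    private maxEntries: number,
    private ttl: number,
    private onEvict?: (key: string, value: V) => void,
  ) {
    // Sentinels are never evicted, so detach/attach never need null checks.
    const h = { key: '', value: undefined as V, expiry: 0 } as LruNode<V>;
    const t = { key: '', value: undefined as V, expiry: 0 } as LruNode<V>;
    h.next = t;
    t.prev = h;
    this.#head = h;
    this.#tail = t;
  }

  #detach(n: LruNode<V>): void {
    n.prev.next = n.next;
    n.next.prev = n.prev;
  }

  #attachHead(n: LruNode<V>): void {
    n.next = this.#head.next;
    n.prev = this.#head;
    this.#head.next.prev = n;
    this.#head.next = n;
  }

  set(key: string, value: V): void {
    const existing = this.#map.get(key);
    if (existing) {
      this.#detach(existing);
      this.#map.delete(key);
    }
    if (this.#map.size >= this.maxEntries) {
      const lru = this.#tail.prev; // real node: sentinels guarantee non-null
      this.#detach(lru);
      this.#map.delete(lru.key);
      this.onEvict?.(lru.key, lru.value);
    }
    const node = { key, value, expiry: Date.now() + this.ttl } as LruNode<V>;
    this.#attachHead(node);
    this.#map.set(key, node);
  }

  get(key: string): V | undefined {
    const node = this.#map.get(key);
    if (!node) return undefined;
    this.#detach(node); // touch: move to head
    this.#attachHead(node);
    node.expiry = Date.now() + this.ttl; // refresh TTL
    return node.value;
  }
}
```

Because every operation is a Map lookup plus a constant number of pointer swaps, cost is independent of cache size.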
Implementation Details
createServerScope Factory
The factory creates a root scope, an LRU cache, and returns a ServerScope object:
```ts
export function createServerScope(options?: ServerScopeOptions): ServerScope {
  const ttl = options?.ttl ?? DEFAULT_TTL; // 300_000 (5 min)
  const maxEntries = options?.maxEntries ?? DEFAULT_MAX_ENTRIES; // 10_000
  const onEvict = options?.onEvict;
  const root: Scope = createScope();
  let nextId = 0;
  let destroyed = false;
  const cache = new LruCache<ManagedScope>({ maxEntries, ttl, onEvict });
  // ...
}
```

The root scope holds server-wide shared state. The cache tracks forked child scopes. The `destroyed` flag protects against use-after-destroy.
fork() — Creating Request Scopes
Key behaviors:
- Lazy TTL sweep: `cache.sweep(Date.now())` is called on every `fork()`, removing expired entries. This avoids background timers that would block process exit in serverless environments.
- Monotonic IDs: IDs follow the pattern `ss_0`, `ss_1`, `ss_2`, ... for fast generation and deterministic test output. No UUID overhead.
- Capacity enforcement: If the cache is at `maxEntries`, the least recently used scope is evicted before inserting the new one.
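As a self-contained toy, the three behaviors compose like this. A plain `Map` stands in for the LRU cache here (JavaScript Maps iterate in insertion order, so the oldest entry approximates the LRU victim), and the `TTL`/`MAX` constants are assumptions for the sketch, not the library's defaults:

```ts
type Entry = { createdAt: number };

const TTL = 1000; // sketch value, not the library default
const MAX = 2;    // sketch value, not the library default
const cache = new Map<string, Entry>();
let nextId = 0;

function fork(now: number): string {
  // 1. Lazy TTL sweep: expired entries are dropped here, on demand,
  //    instead of by a background timer that would keep the process alive.
  for (const [key, entry] of cache) {
    if (now - entry.createdAt > TTL) cache.delete(key);
  }
  // 2. Capacity enforcement: evict the oldest entry when at capacity.
  if (cache.size >= MAX) {
    const oldest = cache.keys().next().value!;
    cache.delete(oldest);
  }
  // 3. Monotonic ID: counter + concatenation, no UUID overhead.
  const id = `ss_${nextId++}`;
  cache.set(id, { createdAt: now });
  return id;
}
```

The real implementation does the same three steps against the `LruCache` and wraps the forked core scope in a `ManagedScopeImpl` before inserting it.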
ManagedScopeImpl — Thin Wrapper
ManagedScopeImpl is a thin delegation wrapper that adds an id property to a core Scope:
```ts
export class ManagedScopeImpl implements ManagedScope {
  readonly id: string;
  readonly #inner: Scope;

  constructor(id: string, inner: Scope) {
    this.id = id;
    this.#inner = inner;
  }

  fork(): Scope {
    return this.#inner.fork();
  }

  get<T>(subscribable: Subscribable<T>): T {
    return this.#inner.get(subscribable);
  }

  set<T>(sig: Signal<T>, value: T): void {
    this.#inner.set(sig, value);
  }

  serialize(): Record<string, unknown> {
    return this.#inner.serialize();
  }
}
```

The inner scope is stored as a private field (`#inner`), preventing external access. Calling `fork()` on a managed scope returns a plain `Scope` — grandchildren are not tracked by the server scope's LRU cache.
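The delegation shape can be demonstrated standalone. The `Counter`/`ManagedCounter` names below are illustrative only; the point is that an ECMAScript `#` private field keeps the wrapped object unreachable from outside the class:

```ts
interface Counter {
  increment(): number;
}

// Wrapper adds an `id` while delegating behavior to a hidden inner object.
class ManagedCounter implements Counter {
  readonly #inner: Counter;

  constructor(readonly id: string, inner: Counter) {
    this.#inner = inner;
  }

  increment(): number {
    return this.#inner.increment();
  }
}

let n = 0;
const mc = new ManagedCounter('ss_0', { increment: () => ++n });
```

Unlike a conventional `private` TypeScript modifier, the `#` field is also private at runtime: no property access or reflection on `mc` can reach the inner counter.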
getScope() — Scope Retrieval with Touch
```ts
getScope(id: string): ManagedScope | undefined {
  assertAlive();
  return cache.get(id);
}
```

The `cache.get()` call touches the entry, moving it to the head of the LRU list and refreshing its TTL. As a result, actively used scopes are the last candidates for eviction under capacity pressure.
dispose() — Explicit Cleanup
```ts
dispose(id: string): boolean {
  assertAlive();
  return cache.delete(id);
}
```

Explicit disposal removes the scope from the cache and fires the `onEvict` callback. This is the preferred cleanup path for request handlers that know when they're done (e.g., after sending the response).
destroy() — Server Shutdown
After destruction, all methods throw Error('ServerScope has been destroyed'). The root scope's children are garbage-collected when the cache is cleared.
assertAlive Guard
```ts
function assertAlive(): void {
  if (destroyed) {
    throw new Error('ServerScope has been destroyed');
  }
}
```

Every public method calls `assertAlive()` first. This catches bugs where server code continues to use a scope after shutdown, rather than silently returning stale data.
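A minimal standalone version of the guard pattern, using a hypothetical `createResource` factory (not the stateloom API) to show the flag-plus-throw lifecycle:

```ts
function createResource() {
  let destroyed = false;

  const assertAlive = (): void => {
    if (destroyed) throw new Error('ServerScope has been destroyed');
  };

  return {
    read(): string {
      assertAlive(); // every public method checks the flag first
      return 'data';
    },
    destroy(): void {
      destroyed = true; // later calls throw instead of returning stale data
    },
  };
}
```

The closure-captured boolean costs one comparison per call, which is negligible next to the cache and scope work each method already does.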
Root Scope Delegation
The ServerScope itself implements Scope by delegating to the root scope:
```ts
get<T>(subscribable: Subscribable<T>): T {
  assertAlive();
  return root.get(subscribable);
},
set<T>(sig: Signal<T>, value: T): void {
  assertAlive();
  root.set(sig, value);
},
serialize(): Record<string, unknown> {
  assertAlive();
  return root.serialize();
}
```

This allows the server scope to hold shared server-wide state (e.g., configuration signals) that forked request scopes inherit via scope prototypal inheritance.
Request Isolation Pattern
Each request gets its own scope that:
- Inherits shared server state from the root scope
- Overrides request-specific values without affecting other requests
- Serializes its combined state for client-side hydration
- Is disposed explicitly after the response, or lazily via TTL/LRU eviction
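The inheritance half of this pattern is ordinary JavaScript prototype chains. A toy version with plain objects (illustrative only — real scopes wrap signals, not raw records, and the field names here are invented):

```ts
// Shared server-wide defaults live on one base object.
const shared: Record<string, unknown> = {
  apiBase: 'https://api.example.com', // hypothetical config value
  user: 'anonymous',
};

// Each "request scope" inherits prototypally: reads fall through to the
// base object until the request writes its own shadowing value.
function forkScope(parent: Record<string, unknown>): Record<string, unknown> {
  return Object.create(parent);
}

const reqA = forkScope(shared);
const reqB = forkScope(shared);
reqA.user = 'alice'; // shadows the shared value only in request A
```

Writes on a fork never touch the parent, so one request can never leak state into another; unshadowed reads still see live updates to the shared base.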
Design Decisions
Why Lazy TTL Sweep Instead of Background Timer
A setInterval-based sweep would keep the Node.js event loop active, preventing graceful process exit in serverless environments (AWS Lambda, Vercel Edge). Lazy sweeping on fork() means the process can exit cleanly when no requests are pending. The trade-off is that expired scopes may linger briefly until the next fork() call, but this is acceptable because the LRU capacity limit provides an absolute upper bound on memory.
Why Monotonic IDs Instead of UUIDs
Scope IDs are internal identifiers used for cache lookup and debugging. UUID generation adds overhead and randomness that provides no benefit in this context. Monotonic IDs (ss_0, ss_1) are faster to generate, shorter in log output, and produce deterministic ordering in tests.
Why ManagedScopeImpl Is a Class
Unlike the closure-based factories used in other packages, ManagedScopeImpl is a class because it needs to satisfy the ManagedScope interface (which extends Scope) while adding an id property. A class with private fields (#inner) is the cleanest way to express delegation without exposing the inner scope. The class has no inheritance hierarchy — it's a leaf class with pure delegation.
Why Grandchildren Are Not Tracked
Calling fork() on a ManagedScope returns a plain Scope, not another ManagedScope. Tracking grandchildren would add complexity (nested LRU, cascading eviction) with minimal benefit. If a request handler needs sub-scopes, it can fork the managed scope directly and manage the plain child scopes within the request lifecycle.
Why assertAlive Throws Instead of No-Op
After destroy(), returning default values (undefined, empty objects) would hide bugs in server shutdown sequences. Throwing forces the developer to fix the lifecycle ordering. Since destroy() is called during server shutdown, any subsequent scope access is a programming error, not a recoverable condition.
Performance Considerations
| Concern | Strategy | Cost |
|---|---|---|
| Cache operations | All LRU operations are O(1) via Map + doubly-linked list | O(1) per fork/get/dispose |
| TTL sweep | Walks from tail; stops at first non-expired node in most cases | O(e) where e = expired entries |
| Scope fork | Delegates to core Scope.fork() — shallow prototype chain | O(1) |
| ID generation | Monotonic counter + string concatenation | O(1) |
| Memory overhead | LruNode adds ~64 bytes per entry (key, value, expiry, prev, next) | ~640 KB at 10,000 entries |
| Sentinel nodes | Two empty objects — avoid null checks on every list operation | O(1) constant overhead |
Cross-References
- Architecture Overview — where the server package fits in the layer structure
- Core Design — scope and signal internals that server builds on
- Store Design — store middleware pipeline (server scopes can host stores)
- Adapters Overview — how framework SSR adapters integrate with server scopes
- API Reference: `@stateloom/server` — consumer-facing documentation