@stateloom/server
Memory-bounded server scope for long-running Node.js servers. Prevents memory leaks from per-request scopes with LRU eviction and TTL-based expiration.
Install
```sh
pnpm add @stateloom/core @stateloom/server
npm install @stateloom/core @stateloom/server
yarn add @stateloom/core @stateloom/server
```

Size: ~0.5 KB gzipped
Overview
The package provides a createServerScope factory that wraps @stateloom/core's Scope with LRU capacity limits and TTL-based expiration. Each fork() creates a tracked child scope for a single request. Expired or overflowing scopes are automatically evicted.
Quick Start
```ts
import { createServerScope } from '@stateloom/server';
import { signal, runInScope } from '@stateloom/core';

const server = createServerScope({ ttl: 60_000, maxEntries: 1_000 });
const count = signal(0);

// Per-request handler
const reqScope = server.fork();
runInScope(reqScope, () => {
  reqScope.set(count, 42);
  reqScope.get(count); // 42
});

count.get(); // 0 -- global state unchanged

// Cleanup
server.dispose(reqScope.id);
```

Guide
Creating a Server Scope
Call createServerScope once at server startup. The returned scope manages all per-request child scopes:
```ts
import { createServerScope } from '@stateloom/server';

const serverScope = createServerScope({
  ttl: 300_000, // 5 minutes (default)
  maxEntries: 10_000, // LRU capacity (default)
});
```

Per-Request Scopes
Each HTTP request gets its own isolated scope via fork(). The forked scope inherits parent values but can override them independently:
```ts
import { signal, runInScope } from '@stateloom/core';

const userSignal = signal<string | null>(null);

app.get('/api/data', async (req, res) => {
  const reqScope = serverScope.fork();

  runInScope(reqScope, () => {
    reqScope.set(userSignal, req.user.id);
    // All signal reads within this scope see the request-specific value
  });

  const data = reqScope.serialize();
  res.json(data);

  // Explicit cleanup (optional -- TTL/LRU handles it automatically)
  serverScope.dispose(reqScope.id);
});
```

Shared Server State
The server scope itself acts as a root Scope. Values set on it are inherited by all forked children:
```ts
import { signal } from '@stateloom/core';

const config = signal({ apiUrl: '' });

// Set shared state on the root scope
serverScope.set(config, { apiUrl: 'https://api.example.com' });

// All forked scopes inherit this value
const child = serverScope.fork();
child.get(config); // { apiUrl: 'https://api.example.com' }
```

Eviction and Cleanup
Scopes are evicted in two ways:
- TTL expiration: scopes older than `ttl` milliseconds are swept lazily on the next `fork()` call
- LRU overflow: when the cache exceeds `maxEntries`, the least-recently-used scope is evicted
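The interaction of the two rules can be sketched as a pure function. The `Entry` shape and `evictable` helper below are invented for illustration; the package tracks this state internally in its LRU cache:

```ts
interface Entry {
  id: string;
  lastTouched: number; // ms timestamp, refreshed whenever the scope is touched
}

// Given entries ordered most-recently-used first, return the IDs that
// would be evicted: anything past its TTL, plus the least-recently-used
// live entries beyond maxEntries.
function evictable(
  entries: Entry[],
  now: number,
  ttl: number,
  maxEntries: number,
): string[] {
  const expired = entries.filter((e) => now - e.lastTouched > ttl);
  const live = entries.filter((e) => now - e.lastTouched <= ttl);
  const overflow = live.slice(maxEntries); // LRU overflow, oldest last
  return [...expired.map((e) => e.id), ...overflow.map((e) => e.id)];
}

const entries: Entry[] = [
  { id: 'ss_3', lastTouched: 9_000 },
  { id: 'ss_2', lastTouched: 8_000 },
  { id: 'ss_1', lastTouched: 2_000 }, // past TTL
];

evictable(entries, 10_000, 5_000, 2); // ['ss_1']
```

With `maxEntries: 1`, the same input would additionally evict `ss_2` as LRU overflow.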
Track evictions with the onEvict callback:
```ts
const serverScope = createServerScope({
  ttl: 60_000,
  maxEntries: 5_000,
  onEvict: (scopeId) => {
    console.log(`Evicted scope: ${scopeId}`);
  },
});
```

Graceful Shutdown
Call destroy() to clean up all managed scopes. After destruction, all methods throw:
```ts
process.on('SIGTERM', () => {
  serverScope.destroy();
  process.exit(0);
});
```

API Reference
createServerScope(options?: ServerScopeOptions): ServerScope
Create a memory-bounded server scope.
Parameters:
| Parameter | Type | Description | Default |
|---|---|---|---|
| `options` | `ServerScopeOptions` | Configuration object. | `undefined` |
| `options.ttl` | `number` | Time-to-live in milliseconds for managed scopes. | `300_000` (5 min) |
| `options.maxEntries` | `number` | Maximum tracked scopes before LRU eviction. | `10_000` |
| `options.onEvict` | `(scopeId: string) => void` | Callback when a scope is evicted. | `undefined` |
Returns: ServerScope -- a new server scope.
```ts
import { createServerScope } from '@stateloom/server';

const server = createServerScope({
  ttl: 60_000,
  maxEntries: 1_000,
  onEvict: (id) => console.log(`Evicted: ${id}`),
});
```

Key behaviors:
- TTL eviction is lazy -- expired scopes are swept on the next `fork()` call, not via a background timer; this prevents blocking process exit in serverless environments
- Managed scope IDs are monotonic (`ss_0`, `ss_1`, ...) for deterministic output
- After `destroy()`, all methods throw `Error('ServerScope has been destroyed')`
See also: ServerScope, ManagedScope
ServerScope (interface)
Memory-bounded scope manager extending Scope.
```ts
interface ServerScope extends Scope {
  fork(): ManagedScope;
  getScope(id: string): ManagedScope | undefined;
  dispose(id: string): boolean;
  readonly size: number;
  destroy(): void;
  get<T>(subscribable: Subscribable<T>): T;
  set<T>(sig: Signal<T>, value: T): void;
  serialize(): Record<string, unknown>;
}
```

fork(): ManagedScope
Create a tracked child scope. Performs a lazy TTL sweep, then evicts the LRU entry if at capacity.
Returns: ManagedScope -- a new managed scope with a unique ID.
```ts
const reqScope = server.fork();
reqScope.id; // "ss_0"
```

getScope(id: string): ManagedScope | undefined
Retrieve a managed scope by ID. Touches the entry (moves to head of LRU list).
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `id` | `string` | The scope ID. |
Returns: ManagedScope | undefined
dispose(id: string): boolean
Explicitly dispose a managed scope. Removes from cache and fires onEvict.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `id` | `string` | The scope ID to dispose. |
Returns: boolean -- true if found and removed.
size: number
The number of currently tracked managed scopes.
destroy(): void
Shut down the server scope. Clears all managed scopes (firing onEvict for each), then marks as destroyed.
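The "all methods throw after destruction" behavior is a standard destroyed-flag guard. A minimal sketch of the pattern (illustrative only, with placeholder method bodies; not the package's source):

```ts
// Illustrative destroyed-flag guard: every public method checks the
// flag before doing any work, so calls after destroy() always throw.
class GuardedScope {
  private destroyed = false;

  private assertAlive(): void {
    if (this.destroyed) throw new Error('ServerScope has been destroyed');
  }

  fork(): { id: string } {
    this.assertAlive();
    return { id: 'ss_0' }; // placeholder body
  }

  destroy(): void {
    this.assertAlive();
    this.destroyed = true; // every later call hits assertAlive() and throws
  }
}

const scope = new GuardedScope();
scope.destroy();
// scope.fork() now throws Error('ServerScope has been destroyed')
```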
ManagedScope (interface)
A tracked child scope with a unique identifier. Extends Scope.
```ts
interface ManagedScope extends Scope {
  readonly id: string;
}
```

| Property | Type | Description |
|---|---|---|
| `id` | `string` | Unique identifier (e.g., `"ss_0"`). |
Delegates fork(), get(), set(), and serialize() to the inner scope. Calling fork() on a ManagedScope returns a plain Scope -- grandchildren are not tracked by the server scope.
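This delegation can be sketched with simplified stand-ins for the core types (the `Scope` shape, `createScope`, and `manage` below are invented for illustration; the real `Scope` carries signal storage, not a bare values map):

```ts
// Simplified stand-in for the core Scope shape (illustrative only)
interface Scope {
  values: Map<string, unknown>;
  fork(): Scope;
}

function createScope(parent?: Scope): Scope {
  const values = new Map(parent?.values);
  return {
    values,
    fork() {
      return createScope(this);
    },
  };
}

interface ManagedScope extends Scope {
  readonly id: string;
}

let nextId = 0;

// Wrap a plain scope with an ID; fork() delegates to the inner scope,
// so the children it produces are plain, untracked Scopes.
function manage(inner: Scope): ManagedScope {
  return { ...inner, id: `ss_${nextId++}`, fork: () => inner.fork() };
}

const managed = manage(createScope());
const grandchild = managed.fork();
'id' in grandchild; // false -- grandchildren carry no tracked ID
```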
ServerScopeOptions (interface)
```ts
interface ServerScopeOptions {
  readonly ttl?: number;
  readonly maxEntries?: number;
  readonly onEvict?: (scopeId: string) => void;
}
```

| Property | Type | Description | Default |
|---|---|---|---|
| `ttl` | `number` | Time-to-live in milliseconds. | `300_000` |
| `maxEntries` | `number` | LRU capacity limit. | `10_000` |
| `onEvict` | `(scopeId: string) => void` | Eviction callback. | `undefined` |
Patterns
Express Middleware
```ts
import express from 'express';
import { createServerScope } from '@stateloom/server';
import { signal, runInScope } from '@stateloom/core';

const app = express();
const server = createServerScope({ ttl: 60_000 });
const userId = signal<string | null>(null);

app.use((req, res, next) => {
  const scope = server.fork();
  req.scope = scope;
  runInScope(scope, () => {
    scope.set(userId, req.headers['x-user-id'] as string);
    next();
  });
});

app.get('/api/data', (req, res) => {
  const data = req.scope.serialize();
  server.dispose(req.scope.id);
  res.json(data);
});
```

Fastify Plugin
```ts
import Fastify from 'fastify';
import { createServerScope } from '@stateloom/server';

const fastify = Fastify();
const server = createServerScope({ ttl: 30_000, maxEntries: 5_000 });

fastify.decorateRequest('scope', null);

fastify.addHook('onRequest', async (req) => {
  req.scope = server.fork();
});

fastify.addHook('onResponse', async (req) => {
  if (req.scope) server.dispose(req.scope.id);
});
```

Monitoring Scope Usage
```ts
import { createServerScope } from '@stateloom/server';

// `metrics` is assumed to be your metrics client (StatsD, Datadog, etc.)
const server = createServerScope({
  ttl: 60_000,
  maxEntries: 10_000,
  onEvict: (id) => {
    metrics.increment('stateloom.scope.evicted');
  },
});

// Periodic health check
setInterval(() => {
  metrics.gauge('stateloom.scope.active', server.size);
}, 10_000);
```

How It Works
LRU Cache with TTL
Internally, the server scope uses a doubly-linked list + Map LRU cache:
- `set`, `get`, and `delete` are O(1); `sweep` is amortized O(1), since each entry is evicted at most once
- Sentinel head/tail nodes simplify boundary logic (no null checks)
- TTL sweep walks from the tail (oldest) and evicts expired entries
- LRU eviction removes the tail entry when capacity is exceeded
- `get()` touches the entry -- moves it to the head, resetting its TTL
Lazy TTL Sweep
TTL eviction happens lazily during `fork()`, not via a background timer. This design choice:
- Avoids a `setInterval` timer that could prevent Node.js process exit
- Works correctly in serverless environments (no dangling timers)
- Adds negligible overhead to `fork()`, since sweeping expired entries is O(k) where k is the number of expired entries
Scope Hierarchy
- The root scope holds shared server state
- Managed scopes (from `fork()`) are tracked for eviction
- Grandchild scopes (from `managedScope.fork()`) are plain `Scope` objects -- not tracked
TypeScript
```ts
import { createServerScope } from '@stateloom/server';
import type { ServerScope, ManagedScope, ServerScopeOptions } from '@stateloom/server';
import { expectTypeOf } from 'vitest';

// createServerScope returns ServerScope
const server = createServerScope();
expectTypeOf(server).toMatchTypeOf<ServerScope>();

// fork returns ManagedScope
const child = server.fork();
expectTypeOf(child).toMatchTypeOf<ManagedScope>();
expectTypeOf(child.id).toEqualTypeOf<string>();

// getScope returns ManagedScope | undefined
const found = server.getScope('ss_0');
expectTypeOf(found).toEqualTypeOf<ManagedScope | undefined>();
```

When to Use
| Scenario | Why @stateloom/server |
|---|---|
| Express/Fastify per-request state isolation | fork() creates isolated scopes automatically |
| Long-running Node.js servers | LRU + TTL prevents memory leaks |
| SSR with Next.js/Nuxt/SvelteKit | Per-request scope isolation |
| Serverless (Lambda, Cloudflare Workers) | Lazy TTL sweep -- no background timers |
| Monitoring scope lifecycle | onEvict callback for metrics |
For client-side scope isolation, use @stateloom/core's createScope() directly (no TTL/LRU needed). For state persistence across restarts, combine with @stateloom/persist or @stateloom/persist-redis.