@stateloom/server

Memory-bounded server scope for long-running Node.js servers. Prevents memory leaks from per-request scopes with LRU eviction and TTL-based expiration.

Install

bash
pnpm add @stateloom/core @stateloom/server
bash
npm install @stateloom/core @stateloom/server
bash
yarn add @stateloom/core @stateloom/server

Size: ~0.5 KB gzipped

Overview

The package provides a createServerScope factory that wraps @stateloom/core's Scope with LRU capacity limits and TTL-based expiration. Each fork() creates a tracked child scope for a single request. Expired or overflowing scopes are automatically evicted.

Quick Start

typescript
import { createServerScope } from '@stateloom/server';
import { signal, runInScope } from '@stateloom/core';

const server = createServerScope({ ttl: 60_000, maxEntries: 1_000 });
const count = signal(0);

// Per-request handler
const reqScope = server.fork();
runInScope(reqScope, () => {
  reqScope.set(count, 42);
  reqScope.get(count); // 42
});

count.get(); // 0 — global state unchanged

// Cleanup
server.dispose(reqScope.id);

Guide

Creating a Server Scope

Call createServerScope once at server startup. The returned scope manages all per-request child scopes:

typescript
import { createServerScope } from '@stateloom/server';

const serverScope = createServerScope({
  ttl: 300_000, // 5 minutes (default)
  maxEntries: 10_000, // LRU capacity (default)
});

Per-Request Scopes

Each HTTP request gets its own isolated scope via fork(). The forked scope inherits parent values but can override them independently:

typescript
import { signal, runInScope } from '@stateloom/core';

const userSignal = signal<string | null>(null);

app.get('/api/data', async (req, res) => {
  const reqScope = serverScope.fork();

  runInScope(reqScope, () => {
    reqScope.set(userSignal, req.user.id);
    // All signal reads within this scope see the request-specific value
  });

  const data = reqScope.serialize();
  res.json(data);

  // Explicit cleanup (optional — TTL/LRU handles it automatically)
  serverScope.dispose(reqScope.id);
});

Shared Server State

The server scope itself acts as a root Scope. Values set on it are inherited by all forked children:

typescript
import { signal } from '@stateloom/core';

const config = signal({ apiUrl: '' });

// Set shared state on the root scope
serverScope.set(config, { apiUrl: 'https://api.example.com' });

// All forked scopes inherit this value
const child = serverScope.fork();
child.get(config); // { apiUrl: 'https://api.example.com' }

Eviction and Cleanup

Scopes are evicted in two ways:

  1. TTL expiration: Scopes older than ttl milliseconds are swept lazily on the next fork() call
  2. LRU overflow: When the cache exceeds maxEntries, the least-recently-used scope is evicted
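The combined policy can be sketched as a small standalone function (illustrative only, not the package's internals): expired entries are always evicted first, then least-recently-used survivors are dropped until the cache fits within maxEntries.

```typescript
// Illustrative sketch of the eviction policy — not the library's actual code.
interface Entry {
  id: string;
  createdAt: number; // ms timestamp
}

// `entries` is ordered from least- to most-recently used.
function selectEvictions(
  entries: Entry[],
  now: number,
  ttl: number,
  maxEntries: number,
): string[] {
  // 1. TTL expiration: anything older than ttl is evicted.
  const expired = entries.filter((e) => now - e.createdAt > ttl);
  const survivors = entries.filter((e) => now - e.createdAt <= ttl);
  // 2. LRU overflow: drop from the least-recently-used end until we fit.
  const overflow = Math.max(0, survivors.length - maxEntries);
  const lru = survivors.slice(0, overflow);
  return [...expired, ...lru].map((e) => e.id);
}
```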

Track evictions with the onEvict callback:

typescript
const serverScope = createServerScope({
  ttl: 60_000,
  maxEntries: 5_000,
  onEvict: (scopeId) => {
    console.log(`Evicted scope: ${scopeId}`);
  },
});

Graceful Shutdown

Call destroy() to clean up all managed scopes. After destruction, all methods throw:

typescript
process.on('SIGTERM', () => {
  serverScope.destroy();
  process.exit(0);
});

API Reference

createServerScope(options?: ServerScopeOptions): ServerScope

Create a memory-bounded server scope.

Parameters:

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| options | ServerScopeOptions | Configuration object. | undefined |
| options.ttl | number | Time-to-live in milliseconds for managed scopes. | 300_000 (5 min) |
| options.maxEntries | number | Maximum tracked scopes before LRU eviction. | 10_000 |
| options.onEvict | (scopeId: string) => void | Callback when a scope is evicted. | undefined |

Returns: ServerScope -- a new server scope.

typescript
import { createServerScope } from '@stateloom/server';

const server = createServerScope({
  ttl: 60_000,
  maxEntries: 1_000,
  onEvict: (id) => console.log(`Evicted: ${id}`),
});

Key behaviors:

  • TTL eviction is lazy -- expired scopes are swept on the next fork() call, not via a background timer
  • This prevents blocking process exit in serverless environments
  • Managed scope IDs are monotonic (ss_0, ss_1, ...) for deterministic output
  • After destroy(), all methods throw Error('ServerScope has been destroyed')
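The monotonic-ID and post-destroy behaviors above can be sketched as follows; ScopeRegistry is a hypothetical name for illustration, not an actual export of the package.

```typescript
// Hypothetical sketch of the bookkeeping behind monotonic scope IDs and the
// destroyed-state guard. `ScopeRegistry` is an illustrative name only.
class ScopeRegistry {
  private counter = 0;
  private destroyed = false;

  // IDs are monotonic: ss_0, ss_1, ... — deterministic across runs.
  nextId(): string {
    this.assertAlive();
    return `ss_${this.counter++}`;
  }

  destroy(): void {
    this.destroyed = true;
  }

  private assertAlive(): void {
    if (this.destroyed) throw new Error('ServerScope has been destroyed');
  }
}
```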

See also: ServerScope, ManagedScope


ServerScope (interface)

Memory-bounded scope manager extending Scope.

typescript
interface ServerScope extends Scope {
  fork(): ManagedScope;
  getScope(id: string): ManagedScope | undefined;
  dispose(id: string): boolean;
  readonly size: number;
  destroy(): void;
  get<T>(subscribable: Subscribable<T>): T;
  set<T>(sig: Signal<T>, value: T): void;
  serialize(): Record<string, unknown>;
}

fork(): ManagedScope

Create a tracked child scope. Performs a lazy TTL sweep, then evicts the LRU entry if at capacity.

Returns: ManagedScope -- a new managed scope with a unique ID.

typescript
const reqScope = server.fork();
reqScope.id; // "ss_0"

getScope(id: string): ManagedScope | undefined

Retrieve a managed scope by ID. Touches the entry (moves to head of LRU list).

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| id | string | The scope ID. |

Returns: ManagedScope | undefined

dispose(id: string): boolean

Explicitly dispose a managed scope. Removes from cache and fires onEvict.

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| id | string | The scope ID to dispose. |

Returns: boolean -- true if found and removed.

size: number

The number of currently tracked managed scopes.

destroy(): void

Shut down the server scope. Clears all managed scopes (firing onEvict for each), then marks as destroyed.


ManagedScope (interface)

A tracked child scope with a unique identifier. Extends Scope.

typescript
interface ManagedScope extends Scope {
  readonly id: string;
}
| Property | Type | Description |
| --- | --- | --- |
| id | string | Unique identifier (e.g., "ss_0"). |

Delegates fork(), get(), set(), and serialize() to the inner scope. Calling fork() on a ManagedScope returns a plain Scope -- grandchildren are not tracked by the server scope.


ServerScopeOptions (interface)

typescript
interface ServerScopeOptions {
  readonly ttl?: number;
  readonly maxEntries?: number;
  readonly onEvict?: (scopeId: string) => void;
}
| Property | Type | Description | Default |
| --- | --- | --- | --- |
| ttl | number | Time-to-live in milliseconds. | 300_000 |
| maxEntries | number | LRU capacity limit. | 10_000 |
| onEvict | (scopeId: string) => void | Eviction callback. | undefined |

Patterns

Express Middleware

typescript
import express from 'express';
import { createServerScope } from '@stateloom/server';
import { signal, runInScope } from '@stateloom/core';

const app = express();
const server = createServerScope({ ttl: 60_000 });
const userId = signal<string | null>(null);

// Note: in TypeScript, extend Express's Request type with a `scope` property
// via declaration merging before assigning to req.scope.
app.use((req, res, next) => {
  const scope = server.fork();
  req.scope = scope;
  runInScope(scope, () => {
    scope.set(userId, req.headers['x-user-id'] as string);
    next();
  });
});

app.get('/api/data', (req, res) => {
  const data = req.scope.serialize();
  server.dispose(req.scope.id);
  res.json(data);
});

Fastify Plugin

typescript
import Fastify from 'fastify';
import { createServerScope } from '@stateloom/server';
import { signal, runInScope } from '@stateloom/core';

const fastify = Fastify();
const server = createServerScope({ ttl: 30_000, maxEntries: 5_000 });

fastify.decorateRequest('scope', null);

fastify.addHook('onRequest', async (req) => {
  req.scope = server.fork();
});

fastify.addHook('onResponse', async (req) => {
  if (req.scope) server.dispose(req.scope.id);
});

Monitoring Scope Usage

typescript
import { createServerScope } from '@stateloom/server';

// `metrics` stands in for your app's metrics client (StatsD, Datadog, etc.)
const server = createServerScope({
  ttl: 60_000,
  maxEntries: 10_000,
  onEvict: (id) => {
    metrics.increment('stateloom.scope.evicted');
  },
});

// Periodic health check
setInterval(() => {
  metrics.gauge('stateloom.scope.active', server.size);
}, 10_000);

How It Works

LRU Cache with TTL

Internally, the server scope uses a doubly-linked list + Map LRU cache:

  • All operations are O(1): set, get, delete, and sweep
  • Sentinel head/tail nodes simplify boundary logic (no null checks)
  • TTL sweep walks from the tail (oldest) and evicts expired entries
  • LRU eviction removes the tail entry when capacity is exceeded
  • get() touches the entry -- moves it to the head, resetting its TTL

Lazy TTL Sweep

TTL eviction happens lazily during fork(), not via a background timer. This design choice:

  1. Avoids setInterval that could prevent Node.js process exit
  2. Works correctly in serverless environments (no dangling timers)
  3. Adds negligible overhead to fork() since sweeping expired entries is O(k) where k is the number of expired entries

Scope Hierarchy

  • The root scope holds shared server state
  • Managed scopes (from fork()) are tracked for eviction
  • Grandchild scopes (from managedScope.fork()) are plain Scope objects -- not tracked

TypeScript

typescript
import { createServerScope } from '@stateloom/server';
import type { ServerScope, ManagedScope, ServerScopeOptions } from '@stateloom/server';
import { expectTypeOf } from 'vitest';

// createServerScope returns ServerScope
const server = createServerScope();
expectTypeOf(server).toMatchTypeOf<ServerScope>();

// fork returns ManagedScope
const child = server.fork();
expectTypeOf(child).toMatchTypeOf<ManagedScope>();
expectTypeOf(child.id).toEqualTypeOf<string>();

// getScope returns ManagedScope | undefined
const found = server.getScope('ss_0');
expectTypeOf(found).toEqualTypeOf<ManagedScope | undefined>();

When to Use

| Scenario | Why @stateloom/server |
| --- | --- |
| Express/Fastify per-request state isolation | fork() creates isolated scopes automatically |
| Long-running Node.js servers | LRU + TTL prevents memory leaks |
| SSR with Next.js/Nuxt/SvelteKit | Per-request scope isolation |
| Serverless (Lambda, Cloudflare Workers) | Lazy TTL sweep -- no background timers |
| Monitoring scope lifecycle | onEvict callback for metrics |

For client-side scope isolation, use @stateloom/core's createScope() directly (no TTL/LRU needed). For state persistence across restarts, combine with @stateloom/persist or @stateloom/persist-redis.