Core Design
Low-level design for @stateloom/core — the reactive kernel that every other package in the StateLoom ecosystem builds on.
Overview
The core package implements a push-pull hybrid reactive system with four primitives: signal, computed, effect, and batch, plus Scope for SSR isolation. At its heart is a dependency graph built from doubly-linked Link nodes that connect sources (signals, computed) to consumers (computed, effects). This document covers the internal data structures, algorithms, and scheduling strategy.
Dependency Graph Data Structure
The graph is built from three node types connected by Link objects. Each Link participates in two doubly-linked lists simultaneously — the source's subscriber list and the consumer's dependency list — enabling O(1) add/remove without Set allocation overhead.
Link Node
```typescript
interface Link {
  source: SourceNode;
  consumer: ConsumerNode;
  prevSub: Link | undefined; // source's subscriber list
  nextSub: Link | undefined;
  prevDep: Link | undefined; // consumer's dependency list
  nextDep: Link | undefined;
  version: number; // source version when last validated
}
```
SourceNode and ConsumerNode
```typescript
interface SourceNode {
  firstSub: Link | undefined;
  lastSub: Link | undefined;
  version: number; // bumped on each value change
}

interface ConsumerNode {
  firstDep: Link | undefined;
  lastDep: Link | undefined;
  flags: number; // dirty state + scheduling flags
  notify(): void;
}
```
A computed is both a SourceNode (downstream nodes depend on it) and a ConsumerNode (it tracks upstream dependencies). A signal is only a SourceNode. An effect is only a ConsumerNode.
Dirty State Machine
Each consumer carries a flags field that encodes its current state as a bitmask. The dirty state determines whether the consumer needs to recompute or re-execute.
Flag constants:
| Flag | Value | Meaning |
|---|---|---|
| CLEAN | 0 | Up-to-date, no recomputation needed |
| MAYBE_DIRTY | 1 | Upstream computed notified — needs version check |
| DIRTY | 2 | Upstream signal changed — must recompute |
| NOTIFIED | 4 | Effect is queued for execution (scheduling dedup) |
| DISPOSED | 8 | Permanently stopped, should never run again |
Flags are combined with bitwise operations. For example, an effect that is dirty and queued has flags = DIRTY | NOTIFIED = 6.
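The bit arithmetic can be sketched directly; the constant values mirror the table above:

```typescript
// Flag constants from the table above.
const CLEAN = 0;
const MAYBE_DIRTY = 1 << 0; // 1
const DIRTY = 1 << 1;       // 2
const NOTIFIED = 1 << 2;    // 4
const DISPOSED = 1 << 3;    // 8

// A dirty effect that is also queued for execution:
let flags = DIRTY | NOTIFIED; // 6

// Typical checks and transitions:
const needsWork = (flags & (MAYBE_DIRTY | DIRTY)) !== 0; // any dirty bit set?
flags &= ~NOTIFIED; // dequeue: clear only the NOTIFIED bit
flags = CLEAN;      // reset after a successful run
```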
Push-Pull Hybrid Algorithm
The core reactivity algorithm has two distinct phases:
Push Phase (Eager Dirty Marking)
When a signal value changes, dirty marks propagate eagerly through the subscriber list. This happens synchronously inside signal.set():
- Signal increments its `version`
- `propagateChange(source)` walks the subscriber list
- Each subscriber transitions: `CLEAN -> DIRTY` (with a `notify()` call), or `MAYBE_DIRTY -> DIRTY` (silent upgrade)
- Computed nodes that receive a notification call `propagateMaybeDirty()` on their own subscriber list — this marks downstream consumers as `MAYBE_DIRTY` (not confirmed dirty yet)
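The push phase can be sketched over pared-down node shapes (the `Mini*` types here are simplifications of the Link/SourceNode/ConsumerNode structures above, not the real implementation):

```typescript
// Sketch of the push phase over the subscriber list.
const CLEAN = 0;
const MAYBE_DIRTY = 1;
const DIRTY = 2;

interface MiniConsumer {
  flags: number;
  notify(): void;
}

interface MiniLink {
  consumer: MiniConsumer;
  nextSub: MiniLink | undefined;
}

interface MiniSource {
  firstSub: MiniLink | undefined;
  version: number;
}

// Called from signal.set() after the version bump: every subscriber
// becomes DIRTY, but notify() fires only on the CLEAN -> DIRTY edge.
function propagateChange(source: MiniSource): void {
  for (let link = source.firstSub; link !== undefined; link = link.nextSub) {
    const consumer = link.consumer;
    if (consumer.flags === CLEAN) {
      consumer.flags = DIRTY;
      consumer.notify();
    } else if (consumer.flags & MAYBE_DIRTY) {
      consumer.flags = DIRTY; // silent upgrade, no second notify()
    }
  }
}
```

A consumer that is already `DIRTY` falls through both branches, which is what makes repeated writes to the same signal cheap.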
Pull Phase (Lazy Recomputation)
Computed values recompute only when read. When a MAYBE_DIRTY computed is accessed via .get(), it walks its dependency list checking whether any source's version has actually changed:
```typescript
// Simplified refresh logic in ComputedImpl
// (field access is abbreviated: flags, fn, cachedValue, etc. are private fields)
#refresh(): void {
  if (flags === CLEAN) return; // up-to-date, nothing to do
  if (flags & MAYBE_DIRTY) {
    if (!checkSourcesChanged(consumer)) {
      flags = CLEAN;
      return; // no recomputation needed
    }
  }
  // Recompute
  flags = CLEAN;
  startTracking(consumer);
  const newValue = fn();
  endTracking(consumer);
  if (newValue !== cachedValue) {
    cachedValue = newValue;
    version++;
    propagateChange(source);
  }
}
```
checkSourcesChanged() is the key pull operation. For each dependency link, it calls refresh() on any computed sources (ensuring they are up-to-date first), then compares link.version against source.version. Any mismatch means the consumer must recompute.
Diamond Problem Resolution
Consider: signal A feeds computed B and computed C, both of which feed computed D.
- Push: `A.set()` marks `B` DIRTY and `C` DIRTY
- Push: `B.notify()` marks `D` as MAYBE_DIRTY; `C.notify()` sees `D` already has a dirty flag, so no re-notification
- Pull: When `D.get()` is called, `checkSourcesChanged()` refreshes `B` first (which recomputes), then refreshes `C` (which recomputes), then `D` recomputes with both updated values
This guarantees D never sees an intermediate state where only B or only C has updated.
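The diamond behavior can be simulated with a self-contained simplification, using per-dependency version comparison in place of the Link chains. All names here (`makeComputed`, `Dep`, the version-array bookkeeping) are illustrative, not the real implementation:

```typescript
// Diamond A -> (B, C) -> D: D recomputes exactly once per change to A.
let aValue = 2;
let aVersion = 1;
const a = { version: () => aVersion, refresh: () => {} };

interface Dep {
  version: () => number;
  refresh: () => void;
}

function makeComputed<T>(deps: Dep[], fn: () => T) {
  let cached: T | undefined;
  let seen: number[] = [];
  let version = 0;
  let runs = 0;
  const node = {
    runs: () => runs,
    version: () => version,
    refresh(): void {
      for (const d of deps) d.refresh(); // pull: update upstream first
      const current = deps.map((d) => d.version());
      if (seen.length > 0 && current.every((v, i) => v === seen[i])) {
        return; // no source version changed: stay clean
      }
      seen = current;
      const next = fn();
      runs++;
      if (!Object.is(next, cached)) {
        cached = next;
        version++; // value actually changed: bump own version
      }
    },
    get(): T {
      node.refresh();
      return cached as T;
    },
  };
  return node;
}

const b = makeComputed([a], () => aValue * 10);
const c = makeComputed([a], () => aValue + 1);
const d = makeComputed([b, c], () => b.get() + c.get());

d.get();    // first read: B, C, D each compute once
aValue = 3;
aVersion++; // the "A.set()" step
d.get();    // D refreshes B and C first, then recomputes exactly once
```

Because `D` refreshes both of its sources before comparing versions, it can never observe a half-updated diamond.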
Link Recycling
During re-evaluation of a consumer, the dependency list from the previous evaluation is available for reuse. This avoids creating new Link objects (and the associated GC pressure) when the dependency set hasn't changed.
The algorithm uses a cursor-based scan:
- `startTracking(consumer)` saves the old dependency chain and sets a cursor at its head
- Each `track(source)` call during evaluation:
  - Fast path: cursor matches this source — splice from old chain, append to new chain (O(1))
  - Slow path: linear search through old chain for matching source
  - Miss: create new `Link`, subscribe to source
- `endTracking(consumer)` removes all remaining old links (stale dependencies) from their source subscriber lists
The fast path handles the common case where dependencies are read in the same order across evaluations. The tracking context supports nesting via a trackingStack — computed nodes read during another computed's evaluation save and restore the parent's tracking state.
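A simplified, array-based sketch of the cursor scan (the real implementation splices Link nodes between doubly-linked chains; the `Tracker` shape and its counters are illustrative):

```typescript
// Array-based simplification of the cursor scan.
interface Tracker<S> {
  oldDeps: S[];    // dependency list from the previous evaluation
  cursor: number;  // position of the next expected reuse
  newDeps: S[];    // dependency list being built this evaluation
  created: number; // new links "allocated" this run
}

function startTracking<S>(prevDeps: S[]): Tracker<S> {
  return { oldDeps: prevDeps, cursor: 0, newDeps: [], created: 0 };
}

function track<S>(t: Tracker<S>, source: S): void {
  if (t.cursor < t.oldDeps.length && t.oldDeps[t.cursor] === source) {
    t.cursor++; // fast path: same source, same position as last run
  } else if (t.oldDeps.includes(source)) {
    // slow path: out-of-order reuse (splice bookkeeping elided)
  } else {
    t.created++; // miss: would allocate a Link and subscribe
  }
  t.newDeps.push(source);
}

function endTracking<S>(t: Tracker<S>): S[] {
  // Old links past the cursor that were never matched are stale
  // and would be unsubscribed from their sources here.
  return t.newDeps;
}
```

When dependencies are read in the same order on every run, the scan never leaves the fast path and never allocates.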
Signal Implementation
SignalImpl wraps a value with a SourceNode and a subscriber Set:
- `get()`: Calls `track(source)` for dependency tracking, returns value
- `set(value)`: Checks equality via `Object.is` (or custom `equals`). If changed: stores value, bumps `source.version`, wraps propagation in `startBatch`/`endBatch`, calls `propagateChange(source)`, schedules subscriber notification
- `subscribe(callback)`: Adds callback to a `Set`. Returns an unsubscribe function
- `update(fn)`: Shorthand for `set(fn(get()))`
Each set() wraps its propagation in an internal batch. Outside an explicit batch(), this means effects and subscriber callbacks run synchronously at the end of set(). Inside an explicit batch(), the internal batch nests and flushing defers to the outermost batch.
Subscriber notifications use a #notificationQueued flag to deduplicate — multiple set() calls in a batch produce only one notification callback.
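A minimal, self-contained stand-in for the signal surface described above can illustrate the equality short-circuit and version bump. Graph propagation, batching, and notification dedup are elided, and `MiniSignal` is an illustrative name, not the real class:

```typescript
// Minimal stand-in for SignalImpl: value + version + subscriber Set.
class MiniSignal<T> {
  #value: T;
  version = 0;
  #subs = new Set<(value: T) => void>();

  constructor(initial: T) {
    this.#value = initial;
  }

  get(): T {
    return this.#value; // real impl also calls track(source) here
  }

  set(value: T): void {
    if (Object.is(value, this.#value)) return; // equality short-circuit
    this.#value = value;
    this.version++;
    for (const cb of this.#subs) cb(value); // real impl defers via batch
  }

  update(fn: (prev: T) => T): void {
    this.set(fn(this.get()));
  }

  subscribe(cb: (value: T) => void): () => void {
    this.#subs.add(cb);
    return () => this.#subs.delete(cb);
  }
}
```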
Computed Implementation
ComputedImpl acts as both source and consumer. Key internal fields:
- `#source: SourceNode & { refresh() }` — downstream nodes depend on this
- `#consumer: ConsumerNode` — tracks upstream dependencies
- `#fn: () => T` — derivation function
- `#value: T` — cached result
- `#hasValue: boolean` — whether computed at least once
Creation: Consumer starts with flags = DIRTY, so the first get() triggers computation.
get(): Calls track(source) for downstream tracking, then #refresh():
- CLEAN: return cached value (no work)
- MAYBE_DIRTY: call `checkSourcesChanged()` — if unchanged, mark CLEAN and return cached
- DIRTY: `startTracking` -> `fn()` -> `endTracking`. If new value differs from cached, bump `source.version`, `propagateChange()`, notify subscribers
notify() (called by upstream change): calls propagateMaybeDirty(source). This is the key difference from signals — a computed's change is uncertain until recomputation, so downstream gets MAYBE_DIRTY, not DIRTY.
Error handling: If fn() throws, endTracking runs in the catch path so tracking state is properly restored. The error propagates to the caller of get().
Effect Implementation
EffectImpl implements RunnableConsumer — a ConsumerNode with a run() method the scheduler can call.
- Creation: Starts with `flags = DIRTY`, runs synchronously in the constructor
- `notify()`: Calls `scheduleEffect(this)` to queue for async execution
- `run()`: Checks flags — skips if DISPOSED or CLEAN. For MAYBE_DIRTY, checks sources first. Runs cleanup from previous execution, then `startTracking` -> `fn()` -> `endTracking`. If `fn` returns a function, stores it as cleanup
- `dispose()`: Sets DISPOSED flag, runs final cleanup, calls `removeAllDeps()` to disconnect from all sources
Effects support cleanup functions: if the effect body returns a function, that function is called before each re-execution and on disposal. This enables resource management (abort controllers, event listeners, timers).
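The cleanup lifecycle can be sketched with a minimal runner. Scheduling and dependency tracking are elided, and `MiniEffect` is an illustrative name:

```typescript
// Minimal effect runner showing cleanup semantics: a function returned
// by the effect body runs before the next execution and on dispose.
type EffectFn = () => void | (() => void);

class MiniEffect {
  #cleanup: (() => void) | undefined;
  #disposed = false;

  constructor(private fn: EffectFn) {
    this.run(); // effects run synchronously on creation
  }

  run(): void {
    if (this.#disposed) return;     // DISPOSED: never run again
    this.#cleanup?.();              // tear down the previous execution
    const result = this.fn();
    this.#cleanup = typeof result === "function" ? result : undefined;
  }

  dispose(): void {
    if (this.#disposed) return;
    this.#disposed = true;
    this.#cleanup?.();              // final cleanup
  }
}
```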
Batch Implementation
Batching coalesces multiple signal writes into a single notification cycle.
Key behaviors:
- Nested batches: Only the outermost `endBatch()` triggers flushing (depth must reach 0)
- Flush order: Subscriber notifications first, then effects
- Values are immediate: `signal.get()` inside a batch returns the new value before flushing
- Error safety: `batch()` uses try/finally so flushing always happens
- Re-entrant: Effects that schedule more effects during flush are processed in subsequent rounds (uses the `splice(0)` pattern to drain the queue)
The effect scheduler uses queueMicrotask when outside a batch. The NOTIFIED flag prevents duplicate scheduling — an effect already in the queue is not added again. The DISPOSED flag causes queued effects to be skipped during flush.
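The batch mechanics can be sketched with a depth counter and a drained queue. NOTIFIED-based dedup, the microtask path, and subscriber-vs-effect ordering are omitted; names are illustrative:

```typescript
// Minimal batch: a depth counter plus an effect queue that flush()
// drains with splice(0), so effects scheduled during a flush run in
// a later round of the while loop.
let depth = 0;
let flushing = false;
const queue: Array<() => void> = [];

function scheduleEffect(effect: () => void): void {
  queue.push(effect);
  if (depth === 0 && !flushing) flush(); // outside any batch: flush now
}

function flush(): void {
  flushing = true;
  try {
    while (queue.length > 0) {
      // Drain the current round; effects scheduled while these run
      // land in the next iteration of the while loop.
      for (const effect of queue.splice(0)) effect();
    }
  } finally {
    flushing = false;
  }
}

function batch<T>(fn: () => T): T {
  depth++;
  try {
    return fn();
  } finally {
    if (--depth === 0) flush(); // only the outermost batch flushes
  }
}
```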
Scope Implementation
Scopes provide per-request state isolation for SSR environments.
ScopeImpl uses a Map<Subscribable<unknown>, unknown> keyed by subscribable identity, with an optional parent scope for inheritance:
- `fork()`: Creates `new ScopeImpl(this)` — child inherits parent values, can override independently
- `get(subscribable)`: Checks local Map, then parent chain, then global `.get()`
- `set(signal, value)`: Stores in local Map only (does not propagate to parent)
- `serialize()`: Iterates the Map, mapping each subscribable to a stable auto-incrementing key via a module-level `WeakMap`
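The override/inheritance behavior can be sketched with a minimal scope (serialization and the global fallback are elided; `MiniScope` is an illustrative name):

```typescript
// Minimal scope sketch: a Map of local overrides plus a parent chain.
// The real implementation falls through to the global signal value
// when no scope in the chain has an override.
class MiniScope {
  #values = new Map<object, unknown>();

  constructor(private parent?: MiniScope) {}

  fork(): MiniScope {
    return new MiniScope(this); // child inherits, can override
  }

  get(key: object): unknown {
    if (this.#values.has(key)) return this.#values.get(key);
    return this.parent?.get(key);
  }

  set(key: object, value: unknown): void {
    this.#values.set(key, value); // local only; never touches parent
  }
}
```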
runInScope(scope, fn): Sets a module-level currentScope, runs fn, restores previous scope in finally. Supports nesting — inner scopes take precedence.
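The save/restore pattern described above can be sketched as follows (the `Scope` type here is a placeholder for the real scope object):

```typescript
// Sketch of runInScope: a module-level current scope saved and
// restored in a finally block, so nesting works and a throw inside
// fn cannot leak the inner scope.
type Scope = { name: string };
let currentScope: Scope | undefined;

function runInScope<T>(scope: Scope, fn: () => T): T {
  const prev = currentScope;
  currentScope = scope;
  try {
    return fn();
  } finally {
    currentScope = prev; // restore outer scope, even on error
  }
}
```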
Design Decisions
Why Doubly-Linked Lists Instead of Sets
Set<Link> would work but creates per-source overhead. The doubly-linked list gives O(1) add/remove with zero allocation (the Link object itself serves as the list node). This matters for fine-grained reactivity where thousands of signals may exist.
Why MAYBE_DIRTY Exists
Without MAYBE_DIRTY, every upstream change would force recomputation of all downstream computed nodes, even when the intermediate computed's value doesn't actually change (e.g., computed(() => items.get().length) when items are swapped but length stays the same). MAYBE_DIRTY enables the "check before recompute" optimization.
Why Each set() Has an Internal Batch
Wrapping every set() in startBatch/endBatch ensures that subscriber notifications and effects always run after the full propagation of a single write. Without this, subscriber callbacks could fire in the middle of propagation, seeing partially-updated state.
Why Effects Run Synchronously (Not via Microtask)
Each signal.set() wraps propagation in an internal batch, and endBatch() flushes effects synchronously. This means effects re-execute at the end of the triggering set() call. The queueMicrotask path only applies when effects are scheduled outside a batch (which is rare in practice). This design ensures predictable ordering and avoids the "stale read" problem where code after set() runs before effects.
Scope Isolation Model
Scopes provide isolated state universes for SSR. Each server request creates a scope so state is never shared between requests.
Performance Considerations
| Concern | Strategy | Complexity |
|---|---|---|
| Memory allocation | Link recycling reuses nodes across re-evaluations | O(1) per reused link |
| GC pressure | Doubly-linked lists avoid Map/Set overhead per dependency | O(1) add/remove |
| Glitch prevention | Push-pull hybrid ensures consistent state without topological sorting | No O(n log n) sort |
| Notification dedup | NOTIFIED flag prevents duplicate effect scheduling; #notificationQueued on signals prevents duplicate subscriber callbacks | O(1) flag check |
| Batch overhead | Simple counter — zero allocation for nesting | O(1) per nest level |
| Scope overhead | Map.has() per read (fast path); lazy allocation (only stores overridden values) | O(k) where k = scope depth |
| Bundle size | ~1.5 KB gzipped; zero platform APIs; uses only queueMicrotask, Object.is, Set, WeakMap | — |
Memory Patterns
| Primitive | Per-Instance Allocation | When GC-Eligible |
|---|---|---|
signal | 1 object + 1 SourceNode + 1 Set | When no external references remain |
computed | 1 object + 1 SourceNode + 1 ConsumerNode + 1 Set | When no external references and no subscribers |
effect | 1 object (ConsumerNode) + Links per dependency | After dispose() and removeAllDeps() |
Link | 1 object per dependency edge | When consumer is re-evaluated or disposed |
Scope | 1 object + 1 Map (entries only for overrides) | When scope reference is dropped |
Cross-References
- Architecture Overview — where core fits in the layer structure
- Design Philosophy — why push-pull hybrid was chosen
- Store Design — how stores wrap core signals
- Atom Design — how atoms resolve to core signals/computed in a scope
- Proxy Design — how per-property signals integrate with the graph
- API Reference: `@stateloom/core` — consumer-facing documentation