Univault Technologies

PRN Implementation Specification

Scaling Phase-Coherent Resonance Networks: Resonance Routing, Hierarchical Aggregation, Cross-Shard Coordination, and Harmonic State Encoding

Phuong Tran and Anh Tran

Univault Technologies, LLC

[email protected]  ·  paragondao.org

Version 1.0 — March 2026

Abstract

This document is a companion to the Paragon Resonance Network (PRN) whitepaper. Where the whitepaper describes what PRN is — Phase-Coherent Resonance consensus, the dual-layer Kuramoto–BFT architecture, and the security model — this specification describes how it is built. We disclose the complete implementation details required to scale a PRN network from proof-of-concept to production: resonance-modulated transaction processing, content-addressed wave propagation for gossip routing, phase-scored cross-shard coordination via two-phase commit, hierarchical privacy-preserving aggregation through relay pre-computation, semantic shard assignment via k-means clustering, harmonic state encoding with frequency-domain state evolution, and the HFTP wire protocol for coefficient exchange. All algorithms are specified with exact formulas, numeric parameters, data structures, and pseudocode sufficient for independent implementation.

Keywords: Resonance Routing, Cross-Shard Coordination, Wave Propagation, Hierarchical Aggregation, Harmonic State Encoding, Phase-Coupled Oscillators, Semantic Sharding, HFTP Wire Protocol, Defensive Publication

Publication Notice. This document is published as a public technical disclosure to advance open health infrastructure. All techniques described herein are placed into the public record. This document should be cited as: Tran, P. & Tran, A. (2026). “PRN Implementation Specification: Scaling Phase-Coherent Resonance Networks.” Univault Technologies Technical Disclosure, v1.0.

Implementation Status. This specification documents algorithms from the PRN codebase at varying stages of maturity. The majority of systems described (Sections 2–13, 15–16) are implemented and tested. Three subsystems are currently commented out in the codebase and documented here as design specifications: the Kuramoto sinusoidal coupling module (Section 2.3, Eq. 6 — replaced by linear coupling in the active code), the self-healing engine (Section 14.1), and the reputation engine (Section 14.2). Section 1 (resonance-modulated transaction processing) and Section 3.4 (phase-scored peer selection) represent architecture-level design specifications that are defined in the node configuration but not yet exercised in test harnesses. All numeric parameters and formulas for these components are drawn from the codebase as written.


1. Resonance-Modulated Transaction Processing

1.1 Oscillator-Driven Latency Control

In a PRN network, each node operates as a coupled oscillator with a phase value \(\theta \in [0, 1]\). Transaction processing latency is not fixed but modulated by the node's current phase alignment with the network. The resonance factor is computed as:

$$R(\theta) = 0.5 + 0.5 \cdot \cos(\theta \cdot 2\pi)$$ (1)

At perfect resonance (\(\theta = 0\)), \(R = 1.0\) and operations execute at minimum latency. At anti-resonance (\(\theta = 0.5\)), \(R = 0.0\) and operations execute at maximum latency. This creates a natural rhythm in the network where synchronized nodes process transactions faster — an emergent incentive for phase alignment.

1.2 Operation-Specific Delay Formulas

Each transaction type has a specific delay formula parameterized by the resonance factor. Let \(R\) denote the resonance factor from Equation (1):

| Operation | Delay Formula (ms) | Min Delay | Max Delay |
|---|---|---|---|
| Vector search | \(\max(2,\; \lfloor 30 \cdot (1 - 0.9R) \rfloor)\) | 2 ms | 30 ms |
| Vector insert | \(\max(2,\; \lfloor 20 \cdot (1 - 0.9R) \rfloor)\) | 2 ms | 20 ms |
| Key-value GET | \(\max(1,\; \lfloor 5 \cdot (1 - 0.9R) \rfloor)\) | 1 ms | 5 ms |
| Key-value PUT | \(\max(1,\; \lfloor 10 \cdot (1 - 0.9R) \rfloor)\) | 1 ms | 10 ms |
| Cross-shard | \(\max(5,\; \lfloor 50 \cdot (1 - 0.9R) \rfloor)\) | 5 ms | 50 ms |
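The resonance factor and the delay table can be sketched together. This is a minimal TypeScript sketch; the function and constant names are illustrative, not identifiers from the PRN codebase:

```typescript
/** Resonance factor R(θ) for a phase θ in [0, 1) (Eq. 1). */
function resonanceFactor(theta: number): number {
  return 0.5 + 0.5 * Math.cos(theta * 2 * Math.PI);
}

/** Operation delay in ms: max(minMs, floor(baseMs * (1 - 0.9 * R))). */
function operationDelayMs(baseMs: number, minMs: number, theta: number): number {
  const r = resonanceFactor(theta);
  return Math.max(minMs, Math.floor(baseMs * (1 - 0.9 * r)));
}

// (base, min) pairs from the delay table above.
const DELAY_PARAMS = {
  vectorSearch: { base: 30, min: 2 },
  vectorInsert: { base: 20, min: 2 },
  kvGet:        { base: 5,  min: 1 },
  kvPut:        { base: 10, min: 1 },
  crossShard:   { base: 50, min: 5 },
} as const;
```

At \(\theta = 0.5\) (anti-resonance) every operation sits at its maximum delay; as \(\theta \to 0\) the \(\max\) clamp takes over and delays bottom out at the table's minimum.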

1.3 Resonance-Modulated Success Rates

Cross-shard transactions use a two-phase commit protocol where the success probability of each phase is modulated by the resonance factor:

$$P_{\text{prepare}} = 0.95 + 0.05 \cdot R(\theta)$$ (2)
$$P_{\text{commit}} = 0.98 + 0.02 \cdot R(\theta)$$ (3)

At perfect resonance, both phases succeed with probability 1.0. At anti-resonance, prepare succeeds at 95% and commit at 98%. The processing delay for both phases follows: \(\lfloor 20 \cdot (1 - 0.8R) \rfloor\) milliseconds.

Design rationale. Resonance modulation creates an emergent incentive: nodes that synchronize their oscillator phase with the network process transactions faster and more reliably. This replaces explicit staking rewards with a physics-inspired performance gradient. The network self-organizes toward synchronization because synchronized nodes are more useful.

1.4 Batch Processing Parameters

Each node processes up to 100 transactions per processing interval. The processing interval is set by the node's PCR synchronization interval. Individual buffered transactions time out after 5,000 ms (5 seconds). Nodes set a maximum of 1,000 event listeners to handle concurrent transaction streams.


2. Content-Addressed Wave Propagation

2.1 Message-to-Wave Mapping

PRN uses a novel gossip protocol where message content deterministically maps to wave properties, and the interference pattern between the message's wave and each peer's oscillator state determines propagation. This replaces random gossip with topology-aware, content-dependent dissemination.

Given a message payload \(M\), the wave properties are derived as follows:

  1. Compute \(h = \text{SHA-256}(M)\) and extract the first 8 hexadecimal characters as an integer \(n\).
  2. Amplitude: \(A = \frac{n \bmod 1000}{1000} \cdot A_{\max}\), where \(A_{\max} = 10.0\). Range: \([0, 10.0)\).
  3. Phase: \(\phi = \frac{(n \bmod 360) \cdot \pi}{180}\). Range: \([0, 2\pi)\). This maps the hash deterministically to a point on the unit circle.
  4. Frequency: \(f = f_{\text{base}}\), where \(f_{\text{base}} = 1.0\).

The resulting wave message is the tuple \(\langle A, f, \phi, M \rangle\).
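The four derivation steps can be sketched with Node's crypto module; the interface and function names are illustrative, not the codebase's:

```typescript
import { createHash } from "node:crypto";

interface WaveMessage {
  amplitude: number; // [0, 10.0)
  frequency: number; // f_base = 1.0
  phase: number;     // [0, 2π)
  payload: string;
}

const A_MAX = 10.0;
const F_BASE = 1.0;

function toWave(payload: string): WaveMessage {
  // First 8 hex chars of SHA-256(M), read as an integer n.
  const hex = createHash("sha256").update(payload).digest("hex").slice(0, 8);
  const n = parseInt(hex, 16);
  return {
    amplitude: ((n % 1000) / 1000) * A_MAX,      // step 2
    phase: ((n % 360) * Math.PI) / 180,          // step 3
    frequency: F_BASE,                           // step 4
    payload,
  };
}
```

Because the mapping is a pure function of the payload hash, every node derives identical wave properties for the same message without coordination.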

2.2 Interference-Based Peer Selection

For each candidate peer node with oscillator state \((\phi_{\text{node}}, f_{\text{node}})\), the interference score is computed as:

$$I = \cos(\phi_{\text{msg}} - \phi_{\text{node}}) \cdot \left(1 - |1 - \frac{f_{\text{msg}}}{f_{\text{node}}}|\right)$$ (4)

The first factor, \(\cos(\Delta\phi)\), ranges from \(-1\) (destructive interference, exactly anti-phase) to \(+1\) (constructive interference, perfectly in-phase). The second factor is a frequency match term: 1.0 when frequencies are identical, decreasing as they diverge, and turning negative once the frequency ratio \(f_{\text{msg}}/f_{\text{node}}\) exceeds 2.0 (with positive frequencies a negative ratio cannot occur, so this is the only negative branch in practice).

Retransmission rule: A message is forwarded only to peers where \(I > 0\) (constructive interference). Peers with destructive interference (\(I \leq 0\)) do not receive the retransmission. This creates content-dependent routing: the same message propagates along different paths depending on the current phase distribution of the network.
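Equation (4) and the retransmission rule can be sketched as follows; the names are illustrative and positive frequencies are assumed:

```typescript
interface OscillatorState {
  phase: number;     // radians
  frequency: number; // Hz, assumed > 0
}

/** Interference score I per Eq. (4). */
function interference(msg: OscillatorState, node: OscillatorState): number {
  const phaseTerm = Math.cos(msg.phase - node.phase);
  const freqMatch = 1 - Math.abs(1 - msg.frequency / node.frequency);
  return phaseTerm * freqMatch;
}

/** Forward only to peers with constructive interference (I > 0). */
function selectForwardingPeers(
  msg: OscillatorState,
  peers: Map<string, OscillatorState>,
): string[] {
  return [...peers.entries()]
    .filter(([, state]) => interference(msg, state) > 0)
    .map(([id]) => id);
}
```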

2.3 Network Self-Organization

Phase coherence across the network is measured using the Kuramoto order parameter:

$$r = \frac{1}{N}\sqrt{\left(\sum_{j=1}^{N} \sin \theta_j\right)^2 + \left(\sum_{j=1}^{N} \cos \theta_j\right)^2}$$ (5)

When \(r\) drops below the target coherence threshold of 0.8, the network triggers frequency adjustment. Each node's frequency is pulled toward the average of its neighbors via linear coupling:

$$f_i^{(t+1)} = f_i^{(t)} + K \cdot (\bar{f}_{\text{neighbors}} - f_i^{(t)})$$ (6)

where \(K\) is the coupling strength parameter from the node's PCR configuration, range \([0, 1]\). This is a linear coupling model (contrast with the sinusoidal Kuramoto coupling used in the consensus layer) chosen for computational efficiency in the gossip layer.
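The coherence check (Eq. 5) and the linear coupling step (Eq. 6) can be sketched as below; names are illustrative and phases are taken in radians:

```typescript
/** Kuramoto order parameter r in [0, 1] (Eq. 5); phases in radians. */
function orderParameter(phases: number[]): number {
  const sinSum = phases.reduce((s, t) => s + Math.sin(t), 0);
  const cosSum = phases.reduce((s, t) => s + Math.cos(t), 0);
  return Math.sqrt(sinSum ** 2 + cosSum ** 2) / phases.length;
}

/** One linear coupling step (Eq. 6): pull f_i toward the neighbor mean with gain K. */
function coupleFrequency(fi: number, neighborFreqs: number[], K: number): number {
  const mean = neighborFreqs.reduce((s, f) => s + f, 0) / neighborFreqs.length;
  return fi + K * (mean - fi);
}
```

With \(K = 1\) a node jumps straight to the neighbor mean; with small \(K\) it converges gradually, which is the usual trade-off between convergence speed and stability.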


3. Frequency-Based Peer Discovery and Routing

3.1 Oscillator Frequency Space

Each PRN node maintains a center frequency clamped to the range [400, 500]. This bounded frequency space ensures that all nodes operate within a finite resonance domain. A peer is classified as resonant if the absolute difference between its frequency and the querying node's center frequency falls within a configurable tolerance:

$$\text{isResonant}(p) = |f_p - f_{\text{center}}| \leq \tau$$ (7)

3.2 Network Stability Scoring

The network stability metric is derived from a 60-sample moving average of peer frequency drift. When a peer's frequency is updated, the drift is computed as \(\delta = |f_{\text{new}} - f_{\text{old}}|\) and fed into the moving average. Stability is then:

$$S = \begin{cases} 1.0 & \text{if } \bar{\delta} < 0.1 \\ 1 - \bar{\delta} & \text{otherwise} \end{cases}$$ (8)

Note that the stability score is not clamped at zero: if the average drift exceeds 1.0, the score becomes negative, signaling to the system that the network is in an unstable state requiring corrective action.

3.3 Resonant Peer Caching

The list of resonant peers is cached with a TTL of 1,000 ms (1 second). The cache is invalidated on any of: (a) a peer frequency update, (b) a center frequency adjustment, (c) a tolerance change, or (d) stale data cleanup removing any peer. Stale peers are evicted after 300,000 ms (5 minutes) without a frequency update. The PeerId-to-string conversion cache is cleared when it exceeds 10,000 entries to bound memory usage.

3.4 Phase-Scored Peer Selection

When selecting a peer for cross-shard communication, the system scores candidates using circular phase distance:

$$\text{score}(\theta_p, \theta_{\text{self}}) = 1 - \frac{d_{\text{circ}}(\theta_p, \theta_{\text{self}})}{\pi}$$ (9)

where the circular distance is:

$$d_{\text{circ}}(a, b) = \begin{cases} |a - b| & \text{if } |a - b| \leq \pi \\ 2\pi - |a - b| & \text{if } |a - b| > \pi \end{cases}$$ (10)

Scores range from 0 (exactly anti-phase) to 1.0 (perfectly phase-aligned). The highest-scoring peer in the target shard is selected as the routing destination. Shard membership and phase information are encoded in libp2p protocol strings using the format /shard-{id}/ and /phase-{value}/.

Phase convention note: The codebase uses two phase representations. The PCR node configuration initializes phase in the range \([0, 1]\) (fractional oscillator cycle). The vector store and peer selection modules use \([0, 2\pi]\) (radians). Conversion is \(\theta_{\text{rad}} = \theta_{\text{frac}} \cdot 2\pi\). Equations 9–10 above use the radian convention.
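Equations (9)–(10) and the fractional-to-radian conversion can be sketched as follows; function names are illustrative:

```typescript
/** Circular distance on [0, 2π) (Eq. 10). */
function circularDistance(a: number, b: number): number {
  const d = Math.abs(a - b);
  return d <= Math.PI ? d : 2 * Math.PI - d;
}

/** Peer score in [0, 1] (Eq. 9); 1 = phase-aligned, 0 = anti-phase. */
function phaseScore(peerPhase: number, selfPhase: number): number {
  return 1 - circularDistance(peerPhase, selfPhase) / Math.PI;
}

/** Convert the PCR node's fractional phase ([0, 1]) to radians. */
function fracToRad(thetaFrac: number): number {
  return thetaFrac * 2 * Math.PI;
}
```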


4. Cross-Shard Transaction Coordination

4.1 Two-Phase Commit Protocol

Cross-shard transactions use a two-phase commit (2PC) protocol with resonance-enhanced peer routing. The protocol tracks the following state per transaction:

CrossShardTxStatus {
  id:             string          // unique transaction identifier
  originShard:    string          // initiating shard
  targetShards:   string[]        // destination shards
  status:         'pending' | 'prepared' | 'committed' | 'aborted'
  preparedShards: string[]        // shards that completed prepare
  waitingFor:     string[]        // shards still awaited
  startTime:      number          // epoch milliseconds
  completionTime: number          // epoch milliseconds (on finalization)
}

Phase 1 (Prepare): The coordinator creates a transaction status with waitingFor = [...targetShardIds] and forwards the transaction to each target shard. As each shard completes preparation, it sends a SHARD_PREPARED message. The coordinator removes that shard from waitingFor and adds it to preparedShards.

Phase 2 (Commit/Abort): When ALL target shards AND the origin shard appear in preparedShards, the transaction is finalized. The coordinator broadcasts the commit decision to all involved shards.

4.2 Resonance-Weighted Coordinator Selection

When routing a transaction to a target shard, the coordinator selects the optimal peer using a weighted quality score:

$$Q(p) = 0.6 \cdot R_{\text{freq}}(p) + 0.4 \cdot C_{\text{phase}}(p)$$ (11)

where \(R_{\text{freq}}\) is a binary resonance indicator (1.0 if the peer is frequency-resonant per Equation 7, else 0.2) and \(C_{\text{phase}}\) is the peer's phase coherence (continuous, range [0, 1], defaults to 0.5 if unknown). The 60/40 weighting prioritizes frequency resonance while allowing phase coherence to break ties. Non-resonant peers receive a floor score of 0.2 (not zero), ensuring reachability even in degraded conditions. The quality range is [0.12, 0.52] for non-resonant peers (when \(R_{\text{freq}} = 0.2\): \(Q_{\min} = 0.6 \cdot 0.2 + 0.4 \cdot 0 = 0.12\), \(Q_{\max} = 0.6 \cdot 0.2 + 0.4 \cdot 1.0 = 0.52\)) and [0.6, 1.0] for resonant peers.
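Equation (11) reduces to a few lines; a sketch with illustrative names, using the 0.2 floor and 0.5 default from the text:

```typescript
/** Weighted peer quality Q(p) per Eq. (11). */
function peerQuality(isResonant: boolean, phaseCoherence?: number): number {
  const rFreq = isResonant ? 1.0 : 0.2; // binary indicator with 0.2 floor
  const cPhase = phaseCoherence ?? 0.5; // default when coherence is unknown
  return 0.6 * rFreq + 0.4 * cPhase;
}
```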

4.3 Timeout and Deadlock Management

The system employs a three-layer liveness guarantee:

  1. Eager deadlock detection: Every new cross-shard transaction proposal triggers inline deadlock checking before processing.
  2. Periodic sweep: Every 5,000 ms (5 seconds), all pending transactions are scanned for timeouts.
  3. Hard timeout: Any pending transaction older than 15,000 ms (15 seconds) is automatically aborted.

4.4 Wait-For Graph Deadlock Detection

Deadlock detection builds a wait-for graph and performs DFS cycle detection:

  1. Collect all transactions with status === 'pending'.
  2. Build a directed graph: for each pending transaction \(T\), for each shard \(s\) in \(T\).waitingFor, find other pending transactions that have \(s\) in their preparedShards. Add a directed edge \(T \to T'\).
  3. Run DFS with a recursion stack. If a node is encountered that is already in the recursion stack, a cycle is detected.
  4. Resolution policy: Sort all deadlocked transactions by startTime ascending. Abort the newest transaction (last in sorted order). This preserves the transaction that has been running longest.
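Steps 1–3 can be sketched as below. The PendingTx shape is a simplification of CrossShardTxStatus and the function names are illustrative; the cycle marker conservatively flags every transaction on the current recursion stack:

```typescript
interface PendingTx {
  id: string;
  waitingFor: string[];      // shard ids still awaited
  preparedShards: string[];  // shard ids that completed prepare
}

/** Build edges T -> T' when T waits on a shard that T' has already prepared. */
function buildWaitForGraph(txs: PendingTx[]): Map<string, string[]> {
  const graph = new Map<string, string[]>();
  for (const t of txs) {
    const edges: string[] = [];
    for (const shard of t.waitingFor) {
      for (const other of txs) {
        if (other.id !== t.id && other.preparedShards.includes(shard)) {
          edges.push(other.id);
        }
      }
    }
    graph.set(t.id, edges);
  }
  return graph;
}

/** DFS with a recursion stack; returns ids of transactions involved in a cycle. */
function findDeadlocked(graph: Map<string, string[]>): Set<string> {
  const visited = new Set<string>();
  const stack = new Set<string>();
  const deadlocked = new Set<string>();

  function dfs(id: string): void {
    visited.add(id);
    stack.add(id);
    for (const next of graph.get(id) ?? []) {
      if (stack.has(next)) {
        // Cycle detected: flag everything on the current recursion stack
        // (a conservative over-approximation of the cycle members).
        for (const s of stack) deadlocked.add(s);
      } else if (!visited.has(next)) {
        dfs(next);
      }
    }
    stack.delete(id);
  }

  for (const id of graph.keys()) if (!visited.has(id)) dfs(id);
  return deadlocked;
}
```

The resolution policy of step 4 then sorts the returned set by startTime and aborts the newest member.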

4.5 Voting and Finalization

Block proposals require a supermajority vote. The system defines three vote types: APPROVE, REJECT, and ABSTAIN. The implementation defines two voting thresholds: the protocol-level threshold is \(\geq 66\%\) (used in the primary processVotes path) and the standalone tally threshold is \(\geq 66.66\%\) (used in the tallyVotes utility). With 100 voters, 66 approvals pass the first threshold but fail the second: an intentional two-tier acceptance gate where protocol-level voting is slightly more permissive than the standalone tally.

Note: classical BFT requires strictly > 2/3 (66.67%) for quorum intersection safety. The 66% protocol-level threshold is a deliberate engineering choice that trades a marginal safety bound reduction for faster finalization in the health data context, where the cost of delayed health updates is weighed against the probability of > 33% Byzantine nodes in a breathing-attested network.

A vote is valid only if the voter's ID appears in the shard's member list and the vote carries a non-empty signature. In the current implementation, vote signatures are validated for presence (non-empty check); the transport layer (libp2p Noise with Ed25519) provides cryptographic authentication of the peer identity, ensuring that votes received on an authenticated stream originate from the claimed voter. Leader election is deterministic: nodes are sorted lexicographically by ID, and the first node is the leader.

Cross-shard transactions are included in block finalization only if their global status is 'committed'. Regular (intra-shard) transactions are always included. The consensus adapter emits a transactionsFinalized event after a 50 ms asynchronous delay to simulate consensus completion.

4.6 Cross-Shard Metrics

The system tracks the following metrics as a module-level singleton: total transactions, committed transactions, aborted transactions, timed-out transactions (latency exceeding 15,000 ms), deadlocked transactions, running average latency, total latency, and maximum observed latency.


5. Resonance-Based Block Validation

5.1 Block Structure

A PRN block (ResonanceBlock) consists of:

ResonanceBlock {
  header: {
    previousBlockHash:  string
    merkleRoot:         string
    timestamp:          number
    blockHeight:        number
    networkPhaseState:  PhaseState
    validatorSet:       string[]
  }
  transitions:     Transition[]        // ordered transaction list
  resonanceProof:  ResonanceProof      // phase coherence evidence
  hash:            string
}

Each Transition carries a harmonicState — a set of frequency components representing the transaction's harmonic signature. Each transition may also carry a crossShard boolean flag.

5.2 Resonance Proof Validation

The ResonanceProof contains a phaseCoherenceScore, an array of FrequencyAlignment entries, and an array of HarmonicSignature validator signatures. Validation checks each of these fields against the block's transitions and the network's current phase state; the frequency alignment computation is specified in Section 5.3.

5.3 Frequency Alignment Calculation

For each frequency component in the block's transitions, alignment is computed against the network's phase state. Two components are considered matching if their frequency difference is \(< 0.001\). The alignment score is:

$$\text{alignment} = 1 - \frac{|\Delta A| + |\Delta \phi|}{2}$$ (12)

where \(\Delta A\) is the amplitude difference and \(\Delta \phi\) is the phase difference between the transition's component and the network's corresponding component.


6. Harmonic State Encoding

6.1 Data-to-Harmonic Conversion

Arbitrary data is converted to harmonic representations (frequency/amplitude/phase triplets) through the following algorithm:

  1. Vectorization: Serialize the data to JSON and compute its string length \(L\). Generate a vector of length \(C\) (the configured component limit) where each element is: \(v_i = \sin(i \cdot 0.1 + L \cdot 0.01) \cdot 0.5 + 0.5\). Note that \(L\) parameterizes the sine function but the vector dimension is determined by the component limit, not the string length.
  2. Segmentation: Divide the vector into 8 equal segments.
  3. Component extraction: For each segment \(k\) (0-indexed):
    • Amplitude \(A_k\) = mean of absolute values in the segment. Skip if \(A_k < 0.1\).
    • Frequency offset: \(\delta_k = (v_{\text{first}} \cdot v_{\text{last}}) \bmod 0.2\), where \(v_{\text{first}}\) and \(v_{\text{last}}\) are the first and last values of the segment.
    • Frequency: \(f_k = f_{\text{base}} \cdot (1 + k \cdot 0.2 + \delta_k)\), where \(f_{\text{base}}\) defaults to 450.
    • Phase: \(\phi_k = \left(\sum_{i} v_i \cdot i\right) \bmod 1.0\), where the sum is over all values in the segment.
  4. Coherence index: \(\chi = \sum_k A_k / N_{\text{components}}\), where \(N_{\text{components}}\) is the number of segments with amplitude \(\geq 0.1\). (The symbol \(\chi\) avoids a collision with the component limit \(C\) of step 1.)
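The pipeline above can be sketched as follows. This is a minimal TypeScript sketch, not the codebase's implementation: the componentLimit default of 64 is an illustrative assumption, and the phase sum uses segment-local indices (the text leaves the index convention open):

```typescript
interface HarmonicComponent { frequency: number; amplitude: number; phase: number; }

function toHarmonics(
  data: unknown,
  componentLimit = 64, // assumed default for illustration
  fBase = 450,
): { components: HarmonicComponent[]; coherence: number } {
  // 1. Vectorization: dimension = componentLimit, parameterized by JSON length L.
  const L = JSON.stringify(data).length;
  const v = Array.from({ length: componentLimit }, (_, i) =>
    Math.sin(i * 0.1 + L * 0.01) * 0.5 + 0.5,
  );

  // 2. Segmentation into 8 equal segments.
  const segLen = Math.floor(v.length / 8);
  const components: HarmonicComponent[] = [];
  let amplitudeSum = 0;

  for (let k = 0; k < 8; k++) {
    const seg = v.slice(k * segLen, (k + 1) * segLen);
    // 3. Component extraction.
    const amplitude = seg.reduce((s, x) => s + Math.abs(x), 0) / seg.length;
    if (amplitude < 0.1) continue; // skip weak segments
    const delta = (seg[0] * seg[seg.length - 1]) % 0.2;
    const frequency = fBase * (1 + k * 0.2 + delta);
    const phase = seg.reduce((s, x, i) => s + x * i, 0) % 1.0;
    components.push({ frequency, amplitude, phase });
    amplitudeSum += amplitude;
  }

  // 4. Coherence index: mean amplitude of the retained components.
  const coherence = components.length ? amplitudeSum / components.length : 0;
  return { components, coherence };
}
```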

6.2 Content Identification

A deterministic content hash is computed using a DJB2-variant algorithm: starting with hash = 0, for each character \(c\) in the JSON-serialized data, compute hash = ((hash << 5) - hash) + charCode(c). The result is formatted as hid-{hex}.

6.3 State Evolution Rules

Harmonic states evolve through two operations, each with distinct blending rules:

Transition application (applying a new transaction to existing state): For each frequency component in the transition, find a matching component in the existing state (frequency difference \(< 0.001\)). If matched, blend with an 80/20 ratio:

$$f' = 0.8 \cdot f_{\text{existing}} + 0.2 \cdot f_{\text{transition}}$$ (13)

In the canonical PhaseState implementation, the harmonics array stores only frequency values, so the 80/20 blend applies to frequency alone. An alternative implementation path (applyTransitionsToState in the block layer) tracks full frequency/amplitude/phase triplets and uses a 50/50 blend for all three components. Non-matching components are added to the state as new entries in both paths.

State merge (merging two states during synchronization): Matching components are blended with a 50/50 ratio:

$$f' = 0.5 \cdot f_A + 0.5 \cdot f_B$$ (14)

Non-matching components from either state are added to the merged result.

6.4 Harmonic Resonance Scoring

The resonance between two harmonic states is computed by matching their frequency components (within 0.001 tolerance) and scoring each match:

$$\text{score}_k = 1 - \frac{|\Delta A_k| + |\Delta \phi_k|}{2}$$ (15)

The overall resonance is the arithmetic mean of all component scores.

6.5 Harmonic Similarity Search

The state store supports similarity search via harmonics. Candidate vectors are retrieved by sorting on \(|f_{\text{base}} - f_{\text{target}}|\). For each candidate, a resonance score is computed:

$$S = \frac{1}{N} \sum_{k} \left(1 - \frac{\Delta\phi_k}{2\pi}\right) \cdot \frac{\min(A_k, A'_k)}{\max(A_k, A'_k)} \cdot (1 - \Delta f_k)$$ (16)

where components are matched if their frequency difference is \(< 10\%\). Candidates with overall resonance below 0.3 are excluded from results.


7. Phase-Augmented Vector Embeddings

7.1 Extended Vector Storage

The PRN vector store extends standard vector databases by adding phase and natural frequency as first-class attributes on every embedding:

VectorEmbedding {
  id:               string
  vector:           Float32Array     // L2-normalized
  phase:            number           // range [0, 2*PI]
  naturalFrequency: number
  lastUpdated:      number           // epoch ms
  metadata:         Record<string, any>
}

The default index type is HNSW (Hierarchical Navigable Small World). The store supports up to 10,000,000 vectors. The default similarity metric is cosine similarity. All vectors are L2-normalized on insertion.

7.2 Frequency-Domain Query Filters

Queries can filter by phase and frequency using min/max ranges or value/tolerance pairs, enabling frequency-domain filtering alongside traditional vector similarity search. This allows queries such as “find the 10 nearest vectors with phase between 1.0 and 2.0 radians and natural frequency within 5 Hz of 450.”

7.3 Performance Metrics

The vector store maintains a sliding window of the last 1,000 query times and computes percentile statistics (p50, p95, p99) for performance monitoring.


8. Semantic Shard Assignment

8.1 Shard Count Calculation

The number of shards is computed dynamically based on the node count and configuration parameters:

$$n_{\text{shards}} = \min\!\left(\max\!\left(\left\lceil \frac{N}{n_{\max}} \right\rceil,\; \lfloor n_{\text{target}} \rfloor\right),\; \max\!\left(\left\lfloor \frac{N}{n_{\min}} \right\rfloor,\; 1\right)\right)$$ (17)

where \(N\) is the node count, \(n_{\max}\) is the maximum nodes per shard, \(n_{\min}\) is the minimum nodes per shard, and \(n_{\text{target}}\) is the desired shard count.

8.2 K-Means Clustering on Semantic Vectors

Nodes are assigned to shards via k-means clustering on their semantic vectors:

  1. Initialization: Select initial cluster centers by evenly spacing through the node list (deterministic, not random). Specifically, center \(i\) is the node at index \(\lfloor i \cdot N / k \rfloor\).
  2. Assignment: Each node is assigned to the nearest center using Euclidean distance on Float32Array semantic vectors.
  3. Update: Each center is recomputed as the component-wise mean of its assigned nodes' vectors.
  4. Iteration: Steps 2–3 repeat for exactly 5 iterations (hardcoded, no convergence check).
  5. Fallback: Nodes without semantic vectors are assigned round-robin: shardIndex = nodeIndex % shardCount.
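Steps 1–4 can be sketched as below, using plain number[] vectors rather than Float32Array for brevity; the function name is illustrative:

```typescript
/** Deterministic k-means: returns a shard index per node vector. */
function assignShards(vectors: number[][], k: number): number[] {
  const N = vectors.length;
  // 1. Deterministic init: center i = node at index floor(i * N / k).
  let centers = Array.from({ length: k }, (_, i) => [...vectors[Math.floor((i * N) / k)]]);
  let assignment = new Array<number>(N).fill(0);

  // 4. Exactly 5 iterations, no convergence check.
  for (let iter = 0; iter < 5; iter++) {
    // 2. Assign each node to the nearest center (squared Euclidean distance).
    assignment = vectors.map((v) => {
      let best = 0;
      let bestDist = Infinity;
      centers.forEach((c, j) => {
        const d = v.reduce((s, x, i) => s + (x - c[i]) ** 2, 0);
        if (d < bestDist) { bestDist = d; best = j; }
      });
      return best;
    });
    // 3. Recompute each center as the component-wise mean of its members.
    centers = centers.map((c, j) => {
      const members = vectors.filter((_, i) => assignment[i] === j);
      if (members.length === 0) return c; // keep empty centers in place
      return c.map((_, dim) => members.reduce((s, m) => s + m[dim], 0) / members.length);
    });
  }
  return assignment;
}
```

The deterministic initialization means every node computes the same shard map from the same membership list, with no coordination round.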

8.3 Incremental Resharding

On network membership changes, the system computes a change percentage:

$$\Delta = \frac{N_{\text{new}} + N_{\text{departed}}}{N_{\text{existing}}}$$ (18)

If \(\Delta\) exceeds the configured resharding threshold, a full reshard is triggered with forceShardsCount = existingShardCount + 1. Otherwise, an incremental update is performed: departed nodes are removed, and new nodes are assigned to the shard with the fewest current members (greedy load balancing). The shard assignment version number increments on every update.

8.4 Shard Protocol

Shard assignments are distributed via a custom libp2p protocol registered at /paragon/shard-coord/1.0.0. The coordinator broadcasts JSON-encoded shard assignments to all connected peers. Peers accept newer assignments based on version number or timestamp comparison.


9. HFTP Wire Protocol

9.1 Node Types

The HFTP network recognizes four node types: server (registry), haven (crisis intervention endpoint), builder (third-party application), and relay (hierarchical aggregation point).

9.2 Protocol Constants

| Constant | Value | Purpose |
|---|---|---|
| HEARTBEAT_INTERVAL_MS | 15,000 | Heartbeat frequency |
| NODE_TIMEOUT_MS | 45,000 | Eviction threshold (3× heartbeat) |
| AGGREGATE_BROADCAST_MS | 30,000 | Aggregate broadcast frequency |
| MAX_COEFFICIENTS | 128 | Max GLE coefficient vector length |
| MAX_NODE_ID_LENGTH | 64 | Max nodeId string length |
| MAX_REGION_LENGTH | 32 | Max region string length |
| MAX_MESSAGE_SIZE | 4,096 bytes | Wire message size limit |
| MAX_LOCAL_PEER_COUNT | 10,000 | Max peers behind a relay |
| MAX_TIMESTAMP_AGE_MS | 60,000 | Clock drift tolerance |

9.3 Client-to-Registry Messages

Four message types flow from nodes to the registry:

RegisterMessage  { type: 'register',      nodeId, nodeType, region, timestamp }
HealthMessage    { type: 'health',        nodeId, coefficients[128], breathingAttested, healthSummary?, timestamp }
HeartbeatMessage { type: 'heartbeat',     nodeId, timestamp }
RelayStatusMsg   { type: 'relay-status',  nodeId, localPeerCount, localAggregate, timestamp }

The coefficients field carries up to 128 DCT-II coefficients (validated as length ≤ MAX_COEFFICIENTS; serialized as JSON; Float64 in memory, Float32 on wire per the codebase convention of 512 bytes at maximum length). This is the only health data that leaves the device. The optional healthSummary contains derived values: classification (string), stressIndicator (float [0,1]), and breathingDepth (float [0,1]).

9.4 Registry-to-Client Messages

WelcomeMessage   { type: 'welcome',    registryId, nodeCount, timestamp }
PeersMessage     { type: 'peers',      peers: NodeInfo[], timestamp }
AggregateMessage { type: 'aggregate',  activeAgents, avgStressLevel, avgBreathingDepth, dominantClassification, timestamp }
ErrorMessage     { type: 'error',      code, message, timestamp }

9.5 Connection Handshake

The HFTP connection sequence is:

  1. Client opens WebSocket to registry URL.
  2. On open, client immediately sends a RegisterMessage with its nodeId, nodeType, and region.
  3. Registry validates the message, checks for duplicate nodeId across all active WebSocket sessions (rejects with DUPLICATE_NODE error if found), binds the WebSocket session to the nodeId.
  4. Registry responds with WelcomeMessage, then broadcasts an updated PeersMessage to all connected nodes.
  5. Client starts a heartbeat interval at HEARTBEAT_INTERVAL_MS (15s).

Session binding security: After registration, all subsequent messages on that WebSocket must carry the same nodeId. A mismatch triggers a NODE_ID_MISMATCH error. This prevents impersonation without requiring cryptographic key exchange at the transport layer.

9.6 Message Validation

All inbound messages pass through a two-stage validation pipeline: (1) JSON parse, (2) structural validation, which enforces the message shapes of Sections 9.3–9.4 and the size and length limits of Section 9.2.

9.7 Reconnection

Clients implement exponential backoff reconnection: delay = min(baseDelay × 2^attempts, 60000), where baseDelay = 5000 ms and the cap is 60 seconds. The attempt counter resets to 0 on successful connection. Configurable maxReconnectAttempts (0 = unlimited).
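The backoff schedule is a one-liner; a sketch with illustrative names, using the constants from the text:

```typescript
const BASE_DELAY_MS = 5_000;
const MAX_DELAY_MS = 60_000;

/** Reconnection delay: min(baseDelay * 2^attempts, 60000). */
function reconnectDelayMs(attempts: number): number {
  return Math.min(BASE_DELAY_MS * 2 ** attempts, MAX_DELAY_MS);
}
```

The schedule runs 5 s, 10 s, 20 s, 40 s, then holds at the 60 s cap until the attempt counter resets on a successful connection.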


10. Hierarchical Privacy-Preserving Aggregation

10.1 Two-Tier Relay Architecture

The HFTP network implements a two-tier topology where relay nodes act as local registries for their connected peers while simultaneously connecting as clients to the central registry:

[Haven Phones] ──ws──> [Relay Node] ──ws──> [Central Registry]
[Builder Apps] ──ws──> [Relay Node] ──ws──> [Central Registry]

Each relay runs a full HFTPRegistry instance locally (with ID relay-{nodeId}) and an HFTPNodeClient upstream. This dual role enables the relay to serve its local peers even if the upstream connection is lost.

10.2 Relay Aggregate Pre-Computation

Every 10,000 ms (10 seconds), the relay computes a local aggregate from its connected peers and sends it upstream as a RelayStatusMessage carrying its local peer count together with the locally computed aggregate (average stress level, average breathing depth, and dominant classification).

The central registry never receives individual health data from relay-connected peers. It receives only the pre-computed aggregate. This is the core privacy mechanism: individual coefficients stay within the relay's local network.

10.3 Weighted Network Aggregate

The central registry computes the network-wide aggregate using a weighted average across two data sources:

  1. Direct nodes (nodes connected directly to the registry with health data): each contributes weight = 1.
  2. Relay nodes (with local aggregates): each contributes weight = relayPeerCount.
totalSources = directCount + sum(relay.relayPeerCount for each relay with aggregate)
weightedStress = sum(direct.stressIndicator) + sum(relay.avgStressLevel * relay.relayPeerCount)
avgStressLevel = weightedStress / totalSources
dominantClassification = classification with highest count (relay weighted by peerCount)
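The weighted average can be sketched in TypeScript; the record shapes are simplified stand-ins for the registry's internal node state:

```typescript
interface DirectNode { stressIndicator: number; }                  // weight = 1
interface RelayNode  { avgStressLevel: number; relayPeerCount: number; } // weight = peers

/** Network-wide average stress weighted by represented peer count. */
function networkAvgStress(direct: DirectNode[], relays: RelayNode[]): number {
  const totalSources =
    direct.length + relays.reduce((s, r) => s + r.relayPeerCount, 0);
  if (totalSources === 0) return 0;
  const weighted =
    direct.reduce((s, d) => s + d.stressIndicator, 0) +
    relays.reduce((s, r) => s + r.avgStressLevel * r.relayPeerCount, 0);
  return weighted / totalSources;
}
```

Weighting relays by their peer count makes a relay fronting 3 peers count as 3 sources, so the network aggregate is unbiased by the topology.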

Privacy-by-design. The registry stores coefficients internally for aggregate computation but strips them when broadcasting peer lists. The getNodes() method destructures each internal node to exclude coefficients, healthSummary, and localAggregate before returning. No node ever receives another node's raw coefficient data through the registry.

10.4 Merged Peer Visibility

When global peers are received from upstream, the relay merges them with its local peers for broadcast to local clients:

globalFiltered = globalPeers.filter(p => p.nodeId !== selfNodeId)
allPeers = [...localPeers, ...globalFiltered]

This gives local peers visibility of the entire network without requiring direct connections to the central registry. The relay filters its own nodeId from the global list to prevent self-reference.

10.5 Upstream Disconnection Resilience

When the upstream registry becomes unreachable, the relay continues operating: local peers remain connected and can communicate with each other. The upstream client uses the exponential backoff reconnection described in Section 9.7. When the connection is restored, the relay re-syncs automatically. The relay's HTTP status endpoint reports 'online' or 'upstream-disconnected' accordingly.


11. Breathing Attestation as Participation Proof

Each node in the HFTP network carries a breathingAttested boolean that is broadcast to all peers in the peer list. This flag is set to true when a node submits valid health data with the breathingAttested field set to true in its HealthMessage. On fresh registration, the flag defaults to false.

This constitutes a novel form of proof of biological participation: nodes prove they have a living human behind them by submitting breathing-derived GLE coefficients. The attestation status is visible to all peers, creating a network-wide map of which nodes have active human participants versus automated or dormant nodes.


12. Node Eviction and Lifecycle

The registry runs an eviction sweep every \(\frac{\text{NODE\_TIMEOUT\_MS}}{3}\) (i.e., every 15,000 ms with the default 45-second timeout). Any node whose lastSeen timestamp is older than NODE_TIMEOUT_MS is removed. If any nodes are evicted, an updated peer list is broadcast to all remaining nodes.

The heartbeat handler silently ignores heartbeats from unknown nodes (no error response), allowing graceful handling of out-of-order messages during reconnection.

The registry enforces a configurable maxNodes capacity. When at capacity, new registrations from unknown nodeIds are rejected with a REGISTRY_FULL error. Re-registrations from known nodeIds are always accepted.


13. Network Discovery

PRN nodes discover each other through three parallel discovery mechanisms.

The DiscoveryManager aggregates all three mechanisms, synchronizing every 60 seconds. Discovered peers are registered with the node's API, and multiaddresses are constructed as /ip4/{address}/tcp/{port}.


14. Self-Healing and Reputation

14.1 Self-Healing Engine

The self-healing engine runs diagnostic checks every 300,000 ms (5 minutes).

14.2 Reputation Engine

Peer reputation scores use exponential decay with additive updates. The update formula is:

$$R_{\text{new}} = \text{clamp}\!\left(0,\; 100,\; R_{\text{current}} \cdot \gamma + \delta\right)$$ (19)

where \(\gamma = 0.95\) is the decay factor and \(\delta\) is the reputation change (positive or negative). Scores are clamped to [0, 100]. The default initial score is 100. Peers with reputation \(\geq 75\) are classified as trusted.
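Equation (19) transcribes directly into code. The function and constant names below are ours; the values (γ = 0.95, clamp to [0, 100], trusted threshold 75) are those given above.

```typescript
// Direct transcription of Eq. (19): R_new = clamp(0, 100, R * γ + δ).
const GAMMA = 0.95; // decay factor γ

const clamp = (lo: number, hi: number, x: number): number =>
  Math.min(hi, Math.max(lo, x));

// delta is the reputation change δ (positive or negative).
function updateReputation(current: number, delta: number): number {
  return clamp(0, 100, current * GAMMA + delta);
}

// Peers with reputation >= 75 are classified as trusted.
const isTrusted = (score: number): boolean => score >= 75;
```

Note the decay applies on every update: a peer starting at the default score of 100 that receives no positive updates drifts toward zero, so trusted status must be continually re-earned.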

14.3 Node Reconnection

Nodes implement reconnection with exponential backoff: maximum 5 attempts with delays of [1000, 5000, 15000, 30000, 60000] ms. Reconnection status is checked every 30,000 ms (30 seconds).
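The backoff schedule above can be sketched as a simple loop over the fixed delay table. Only the delay values and the 5-attempt limit come from the spec; the function shape and the injectable sleep (useful for testing) are assumptions.

```typescript
// Sketch of the reconnection schedule: at most 5 attempts with fixed delays.
const RECONNECT_DELAYS_MS = [1000, 5000, 15000, 30000, 60000];

async function reconnectWithBackoff(
  connect: () => Promise<boolean>,
  sleep: (ms: number) => Promise<void> = ms => new Promise(r => setTimeout(r, ms)),
): Promise<boolean> {
  for (const delay of RECONNECT_DELAYS_MS) {
    await sleep(delay);
    if (await connect()) return true; // success: resume normal operation
  }
  return false; // all 5 attempts exhausted
}
```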


15. Resource Marketplace

15.1 Resource Types

The PRN marketplace supports five resource types: COMPUTATION, STORAGE, BANDWIDTH, AI_TRAINING, and MODEL_PARAMETERS. Each resource offer and request carries metadata specifying hardware requirements (CPU cores, GPU type, GPU memory, storage type, bandwidth, model type).

15.2 Matching Algorithm

The marketplace uses a greedy first-match algorithm: it iterates over all pending requests (or active offers), checks for a resource-type match, verifies that offer.pricePerUnit ≤ request.maxPricePerUnit and offer.quantity ≥ request.quantity, and then checks the hardware requirements (minimum CPU cores, minimum GPU memory, GPU-required flag, specific model type). The first qualifying match is selected.
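The matching rules above can be sketched as a single predicate plus a first-match scan. The field names pricePerUnit, maxPricePerUnit, and quantity follow the text; the record shapes and hardware field names are assumptions.

```typescript
// Greedy first-match sketch of the marketplace rules (assumed record shapes).
type ResourceType = "COMPUTATION" | "STORAGE" | "BANDWIDTH" | "AI_TRAINING" | "MODEL_PARAMETERS";

interface Offer {
  id: string; type: ResourceType; pricePerUnit: number; quantity: number;
  cpuCores?: number; gpuMemoryGb?: number; hasGpu?: boolean; modelType?: string;
}
interface Request {
  id: string; type: ResourceType; maxPricePerUnit: number; quantity: number;
  minCpuCores?: number; minGpuMemoryGb?: number; requiresGpu?: boolean; modelType?: string;
}

function matches(offer: Offer, req: Request): boolean {
  return offer.type === req.type
    && offer.pricePerUnit <= req.maxPricePerUnit
    && offer.quantity >= req.quantity
    && (req.minCpuCores === undefined || (offer.cpuCores ?? 0) >= req.minCpuCores)
    && (req.minGpuMemoryGb === undefined || (offer.gpuMemoryGb ?? 0) >= req.minGpuMemoryGb)
    && (!req.requiresGpu || offer.hasGpu === true)
    && (req.modelType === undefined || offer.modelType === req.modelType);
}

// First qualifying offer wins; no price optimization across offers.
function findMatch(req: Request, offers: Offer[]): Offer | undefined {
  return offers.find(o => matches(o, req));
}
```

Because the scan stops at the first qualifying offer, a cheaper offer later in the list is never considered; this trades allocation quality for O(n) matching.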

15.3 Escrow Payment Model

On transaction creation, the buyer's funds move from available to reserved. On completion, the reserved funds are released and credited to the seller's available balance. On failure, the reserved funds are returned to the buyer's available balance. Wallet addresses follow the format ew-{first 8 chars of nodeId}. Expired offers and requests are cleaned up every 60,000 ms (60 seconds).


16. PCR Configuration Reference

The complete Phase-Coupled Resonance configuration for a PRN node:

PCRConfig {
  frequency:          number    // oscillator frequency (Hz)
  couplingStrength:   number    // coupling strength, range [0, 1]
  maxPhaseOffset:     number    // max allowed phase offset before correction
  discoveryInterval:  number    // peer discovery interval (ms)
  syncInterval:       number    // phase synchronization interval (ms)
  tolerance:          number    // phase difference tolerance
}

PCRNodeConfig {
  initialPhase:        number    // initial phase value, range [0, 1]
  pcr:                 PCRConfig // optional PCR-specific config
}

17. Message Authentication and Transport Security

All inter-node communication is secured at two layers.

The libp2p transport layer (used for P2P consensus, shard coordination, and wave propagation) provides encrypted channels via the Noise protocol framework with Ed25519 key pairs. Each node's identity is derived from its cryptographic key pair, and all protocol streams are mutually authenticated.


18. Crash Recovery and State Persistence

PRN nodes are designed for graceful degradation and recovery.


19. Sybil Resistance via Coefficient Entropy Analysis

The PRN network employs multiple layers of Sybil resistance derived from its biosignal-native architecture.

The GLE choke point. Ultimately, Sybil resistance in a PRN network reduces to the difficulty of producing valid GLE coefficients. The General Learning Encoder transforms raw biosignals into a 128-dimensional coefficient space using patented encoding techniques. Generating coefficients that pass entropy analysis, produce meaningful harmonic resonance with legitimate nodes, and maintain temporal consistency across heartbeat intervals requires either a real human with real sensors or a successful attack on the GLE encoding itself.


20. References

  1. Tran, P. & Tran, A. (2026). “Paragon Resonance Network: A Compliance-First Distributed Infrastructure for Health AI.” Univault Technologies Whitepaper, v1.0. paragondao.org/docs/PRN_INFRASTRUCTURE_WHITEPAPER.html
  2. Kuramoto, Y. (1975). “Self-entrainment of a population of coupled non-linear oscillators.” International Symposium on Mathematical Problems in Theoretical Physics, Lecture Notes in Physics, Vol. 39, pp. 420–422.
  3. Castro, M. & Liskov, B. (1999). “Practical Byzantine Fault Tolerance.” OSDI 1999.
  4. Bernstein, D. J. (1991). DJB2 hash function. Originally described in a comp.lang.c Usenet post, December 1991.
  5. Malkov, Y. A. & Yashunin, D. A. (2020). “Efficient and Robust Approximate Nearest Neighbor Using Hierarchical Navigable Small World Graphs.” IEEE TPAMI, 42(4), pp. 824–836.
  6. Gray, J. & Lamport, L. (2006). “Consensus on Transaction Commit.” ACM TODS, 31(1), pp. 133–160.
  7. Lloyd, S. P. (1982). “Least Squares Quantization in PCM.” IEEE Transactions on Information Theory, 28(2), pp. 129–137. (k-means algorithm.)
  8. Ahmed, N., Natarajan, T., & Rao, K. R. (1974). “Discrete Cosine Transform.” IEEE Transactions on Computers, C-23(1), pp. 90–93.

© 2026 Univault Technologies, LLC. The text of this document is licensed under CC BY 4.0. The algorithms, methods, and techniques described herein are publicly disclosed and are not subject to copyright protection. This publication establishes the public availability of these techniques as of the date above.