Abstract
This document is a companion to the Paragon Resonance Network (PRN) whitepaper. Where the whitepaper describes what PRN is — Phase-Coherent Resonance consensus, the dual-layer Kuramoto–BFT architecture, and the security model — this specification describes how it is built. We disclose the complete implementation details required to scale a PRN network from proof-of-concept to production: resonance-modulated transaction processing, content-addressed wave propagation for gossip routing, phase-scored cross-shard coordination via two-phase commit, hierarchical privacy-preserving aggregation through relay pre-computation, semantic shard assignment via k-means clustering, harmonic state encoding with frequency-domain state evolution, and the HFTP wire protocol for coefficient exchange. All algorithms are specified with exact formulas, numeric parameters, data structures, and pseudocode sufficient for independent implementation.
Keywords: Resonance Routing, Cross-Shard Coordination, Wave Propagation, Hierarchical Aggregation, Harmonic State Encoding, Phase-Coupled Oscillators, Semantic Sharding, HFTP Wire Protocol, Defensive Publication
Implementation Status. This specification documents algorithms from the PRN codebase at varying stages of maturity. The majority of systems described (Sections 2–13, 15–16) are implemented and tested. Three subsystems are currently commented out in the codebase and documented here as design specifications: the Kuramoto sinusoidal coupling module (Section 2.3, Eq. 6 — replaced by linear coupling in the active code), the self-healing engine (Section 14.1), and the reputation engine (Section 14.2). Section 1 (resonance-modulated transaction processing) and Section 3.4 (phase-scored peer selection) represent architecture-level design specifications that are defined in the node configuration but not yet exercised in test harnesses. All numeric parameters and formulas for these components are drawn from the codebase as written.
1. Resonance-Modulated Transaction Processing
1.1 Oscillator-Driven Latency Control
In a PRN network, each node operates as a coupled oscillator with a phase value \(\theta \in [0, 1]\). Transaction processing latency is not fixed but modulated by the node's current phase alignment with the network. The resonance factor is computed as:
At perfect resonance (\(\theta = 0\)), \(R = 1.0\) and operations execute at minimum latency. At anti-resonance (\(\theta = 0.5\)), \(R = 0.0\) and operations execute at maximum latency. This creates a natural rhythm in the network where synchronized nodes process transactions faster — an emergent incentive for phase alignment.
1.2 Operation-Specific Delay Formulas
Each transaction type has a specific delay formula parameterized by the resonance factor. Let \(R\) denote the resonance factor from Equation (1):
| Operation | Delay Formula (ms) | Min Delay | Max Delay |
|---|---|---|---|
| Vector search | \(\max(2,\; \lfloor 30 \cdot (1 - 0.9R) \rfloor)\) | 2 ms | 30 ms |
| Vector insert | \(\max(2,\; \lfloor 20 \cdot (1 - 0.9R) \rfloor)\) | 2 ms | 20 ms |
| Key-value GET | \(\max(1,\; \lfloor 5 \cdot (1 - 0.9R) \rfloor)\) | 1 ms | 5 ms |
| Key-value PUT | \(\max(1,\; \lfloor 10 \cdot (1 - 0.9R) \rfloor)\) | 1 ms | 10 ms |
| Cross-shard | \(\max(5,\; \lfloor 50 \cdot (1 - 0.9R) \rfloor)\) | 5 ms | 50 ms |
1.3 Resonance-Modulated Success Rates
Cross-shard transactions use a two-phase commit protocol where the success probability of each phase is modulated by the resonance factor:
At perfect resonance, both phases succeed with probability 1.0. At anti-resonance, prepare succeeds at 95% and commit at 98%. The processing delay for both phases follows: \(\lfloor 20 \cdot (1 - 0.8R) \rfloor\) milliseconds.
Design rationale. Resonance modulation creates an emergent incentive: nodes that synchronize their oscillator phase with the network process transactions faster and more reliably. This replaces explicit staking rewards with a physics-inspired performance gradient. The network self-organizes toward synchronization because synchronized nodes are more useful.
1.4 Batch Processing Parameters
Each node processes up to 100 transactions per processing interval. The processing interval is set by the node's PCR synchronization interval. Individual buffered transactions time out after 5,000 ms (5 seconds). Nodes set a maximum of 1,000 event listeners to handle concurrent transaction streams.
2. Content-Addressed Wave Propagation
2.1 Message-to-Wave Mapping
PRN uses a novel gossip protocol where message content deterministically maps to wave properties, and the interference pattern between the message's wave and each peer's oscillator state determines propagation. This replaces random gossip with topology-aware, content-dependent dissemination.
Given a message payload \(M\), the wave properties are derived as follows:
- Compute \(h = \text{SHA-256}(M)\) and extract the first 8 hexadecimal characters as an integer \(n\).
- Amplitude: \(A = \frac{n \bmod 1000}{1000} \cdot A_{\max}\), where \(A_{\max} = 10.0\). Range: \([0, 10.0)\).
- Phase: \(\phi = \frac{(n \bmod 360) \cdot \pi}{180}\). Range: \([0, 2\pi)\). This maps the hash deterministically to a point on the unit circle.
- Frequency: \(f = f_{\text{base}}\), where \(f_{\text{base}} = 1.0\).
The resulting wave message is the tuple \(\langle A, f, \phi, M \rangle\).
2.2 Interference-Based Peer Selection
For each candidate peer node with oscillator state \((\phi_{\text{node}}, f_{\text{node}})\), the interference score is computed as:
The first factor, \(\cos(\Delta\phi)\), ranges from \(-1\) (destructive interference, exactly anti-phase) to \(+1\) (constructive interference, perfectly in-phase). The second factor is a frequency match term: 1.0 when frequencies are identical, decreasing as they diverge, and going negative when the frequency ratio exceeds 2.0 or drops below 0.
Retransmission rule: A message is forwarded only to peers where \(I > 0\) (constructive interference). Peers with destructive interference (\(I \leq 0\)) do not receive the retransmission. This creates content-dependent routing: the same message propagates along different paths depending on the current phase distribution of the network.
2.3 Network Self-Organization
Phase coherence across the network is measured using the Kuramoto order parameter:
When \(r\) drops below the target coherence threshold of 0.8, the network triggers frequency adjustment. Each node's frequency is pulled toward the average of its neighbors via linear coupling:
where \(K\) is the coupling strength parameter from the node's PCR configuration, range \([0, 1]\). This is a linear coupling model (contrast with the sinusoidal Kuramoto coupling used in the consensus layer) chosen for computational efficiency in the gossip layer.
3. Frequency-Based Peer Discovery and Routing
3.1 Oscillator Frequency Space
Each PRN node maintains a center frequency clamped to the range [400, 500]. This bounded frequency space ensures that all nodes operate within a finite resonance domain. A peer is classified as resonant if the absolute difference between its frequency and the querying node's center frequency falls within a configurable tolerance:
3.2 Network Stability Scoring
The network stability metric is derived from a 60-sample moving average of peer frequency drift. When a peer's frequency is updated, the drift is computed as \(\delta = |f_{\text{new}} - f_{\text{old}}|\) and fed into the moving average. Stability is then:
Note that the stability score is not clamped at zero: if the average drift exceeds 1.0, the score becomes negative, signaling to the system that the network is in an unstable state requiring corrective action.
3.3 Resonant Peer Caching
The list of resonant peers is cached with a TTL of 1,000 ms (1 second). The cache is invalidated on any of: (a) a peer frequency update, (b) a center frequency adjustment, (c) a tolerance change, or (d) stale data cleanup removing any peer. Stale peers are evicted after 300,000 ms (5 minutes) without a frequency update. The PeerId-to-string conversion cache is cleared when it exceeds 10,000 entries to bound memory usage.
3.4 Phase-Scored Peer Selection
When selecting a peer for cross-shard communication, the system scores candidates using circular phase distance:
where the circular distance is:
Scores range from 0 (exactly anti-phase) to 1.0 (perfectly phase-aligned). The highest-scoring peer in the target shard is selected as the routing destination. Shard membership and phase information are encoded in libp2p protocol strings using the format /shard-{id}/ and /phase-{value}/.
Phase convention note: The codebase uses two phase representations. The PCR node configuration initializes phase in the range \([0, 1]\) (fractional oscillator cycle). The vector store and peer selection modules use \([0, 2\pi]\) (radians). Conversion is \(\theta_{\text{rad}} = \theta_{\text{frac}} \cdot 2\pi\). Equations 9–10 above use the radian convention.
4. Cross-Shard Transaction Coordination
4.1 Two-Phase Commit Protocol
Cross-shard transactions use a two-phase commit (2PC) protocol with resonance-enhanced peer routing. The protocol tracks the following state per transaction:
CrossShardTxStatus {
id: string // unique transaction identifier
originShard: string // initiating shard
targetShards: string[] // destination shards
status: 'pending' | 'prepared' | 'committed' | 'aborted'
preparedShards: string[] // shards that completed prepare
waitingFor: string[] // shards still awaited
startTime: number // epoch milliseconds
completionTime: number // epoch milliseconds (on finalization)
}
Phase 1 (Prepare): The coordinator creates a transaction status with waitingFor = [...targetShardIds] and forwards the transaction to each target shard. As each shard completes preparation, it sends a SHARD_PREPARED message. The coordinator removes that shard from waitingFor and adds it to preparedShards.
Phase 2 (Commit/Abort): When ALL target shards AND the origin shard appear in preparedShards, the transaction is finalized. The coordinator broadcasts the commit decision to all involved shards.
4.2 Resonance-Weighted Coordinator Selection
When routing a transaction to a target shard, the coordinator selects the optimal peer using a weighted quality score:
where \(R_{\text{freq}}\) is a binary resonance indicator (1.0 if the peer is frequency-resonant per Equation 7, else 0.2) and \(C_{\text{phase}}\) is the peer's phase coherence (continuous, range [0, 1], defaults to 0.5 if unknown). The 60/40 weighting prioritizes frequency resonance while allowing phase coherence to break ties. Non-resonant peers receive a floor score of 0.2 (not zero), ensuring reachability even in degraded conditions. The quality range is [0.12, 0.52] for non-resonant peers (when \(R_{\text{freq}} = 0.2\): \(Q_{\min} = 0.6 \cdot 0.2 + 0.4 \cdot 0 = 0.12\), \(Q_{\max} = 0.6 \cdot 0.2 + 0.4 \cdot 0.8 = 0.52\)) and [0.6, 1.0] for resonant peers.
4.3 Timeout and Deadlock Management
The system employs a three-layer liveness guarantee:
- Eager deadlock detection: Every new cross-shard transaction proposal triggers inline deadlock checking before processing.
- Periodic sweep: Every 5,000 ms (5 seconds), all pending transactions are scanned for timeouts.
- Hard timeout: Any pending transaction older than 15,000 ms (15 seconds) is automatically aborted.
4.4 Wait-For Graph Deadlock Detection
Deadlock detection builds a wait-for graph and performs DFS cycle detection:
- Collect all transactions with
status === 'pending'. - Build a directed graph: for each pending transaction \(T\), for each shard \(s\) in \(T\).
waitingFor, find other pending transactions that have \(s\) in theirpreparedShards. Add a directed edge \(T \to T'\). - Run DFS with a recursion stack. If a node is encountered that is already in the recursion stack, a cycle is detected.
- Resolution policy: Sort all deadlocked transactions by
startTimeascending. Abort the newest transaction (last in sorted order). This preserves the transaction that has been running longest.
4.5 Voting and Finalization
Block proposals require a supermajority vote. The system defines three vote types: APPROVE, REJECT, and ABSTAIN. The implementation defines two voting thresholds: the protocol-level threshold is \(\geq 66\%\) (used in the primary processVotes path) and the standalone tally threshold is \(\geq 66.66\%\) (used in the tallyVotes utility). With 100 voters, 66 approvals pass the first threshold but fail the second — an intentional two-tier acceptance gate where protocol-level voting is slightly more permissive than the standalone tally. Note: classical BFT requires strictly > 2/3 (66.67%) for quorum intersection safety. The 66% protocol-level threshold is a deliberate engineering choice that trades a marginal safety bound reduction for faster finalization in the health data context, where the cost of delayed health updates is weighed against the probability of > 33% Byzantine nodes in a breathing-attested network. A vote is valid only if the voter's ID appears in the shard's member list and the vote carries a non-empty signature. In the current implementation, vote signatures are validated for presence (non-empty check); the transport layer (libp2p Noise with Ed25519) provides cryptographic authentication of the peer identity, ensuring that votes received on an authenticated stream originate from the claimed voter. Leader election is deterministic: nodes are sorted lexicographically by ID, and the first node is the leader.
Cross-shard transactions are included in block finalization only if their global status is 'committed'. Regular (intra-shard) transactions are always included. The consensus adapter emits a transactionsFinalized event after a 50 ms asynchronous delay to simulate consensus completion.
4.6 Cross-Shard Metrics
The system tracks the following metrics as a module-level singleton: total transactions, committed transactions, aborted transactions, timed-out transactions (latency exceeding 15,000 ms), deadlocked transactions, running average latency, total latency, and maximum observed latency.
5. Resonance-Based Block Validation
5.1 Block Structure
A PRN block (ResonanceBlock) consists of:
ResonanceBlock {
header: {
previousBlockHash: string
merkleRoot: string
timestamp: number
blockHeight: number
networkPhaseState: PhaseState
validatorSet: string[]
}
transitions: Transition[] // ordered transaction list
resonanceProof: ResonanceProof // phase coherence evidence
hash: string
}
Each Transition carries a harmonicState — a set of frequency components representing the transaction's harmonic signature. Each transition may also carry a crossShard boolean flag.
5.2 Resonance Proof Validation
The ResonanceProof contains a phaseCoherenceScore, an array of FrequencyAlignment entries, and an array of HarmonicSignature validator signatures. Validation checks:
- Minimum resonance threshold: The average resonance score across all frequency alignments must be \(\geq 0.1\).
- Resonance deviation: The absolute difference between the claimed
phaseCoherenceScoreand the independently computed average resonance must be \(\leq 0.3\).
5.3 Frequency Alignment Calculation
For each frequency component in the block's transitions, alignment is computed against the network's phase state. Two components are considered matching if their frequency difference is \(< 0.001\). The alignment score is:
where \(\Delta A\) is the amplitude difference and \(\Delta \phi\) is the phase difference between the transition's component and the network's corresponding component.
6. Harmonic State Encoding
6.1 Data-to-Harmonic Conversion
Arbitrary data is converted to harmonic representations (frequency/amplitude/phase triplets) through the following algorithm:
- Vectorization: Serialize the data to JSON and compute its string length \(L\). Generate a vector of length \(C\) (the configured component limit) where each element is: \(v_i = \sin(i \cdot 0.1 + L \cdot 0.01) \cdot 0.5 + 0.5\). Note that \(L\) parameterizes the sine function but the vector dimension is determined by the component limit, not the string length.
- Segmentation: Divide the vector into 8 equal segments.
- Component extraction: For each segment \(k\) (0-indexed):
- Amplitude \(A_k\) = mean of absolute values in the segment. Skip if \(A_k < 0.1\).
- Frequency offset: \(\delta_k = (v_{\text{first}} \cdot v_{\text{last}}) \bmod 0.2\), where \(v_{\text{first}}\) and \(v_{\text{last}}\) are the first and last values of the segment.
- Frequency: \(f_k = f_{\text{base}} \cdot (1 + k \cdot 0.2 + \delta_k)\), where \(f_{\text{base}}\) defaults to 450.
- Phase: \(\phi_k = \left(\sum_{i} v_i \cdot i\right) \bmod 1.0\), where the sum is over all values in the segment.
- Coherence index: \(C = \sum A_k / N_{\text{components}}\), where \(N_{\text{components}}\) is the number of segments with amplitude \(\geq 0.1\).
6.2 Content Identification
A deterministic content hash is computed using a DJB2-variant algorithm: starting with hash = 0, for each character \(c\) in the JSON-serialized data, compute hash = ((hash << 5) - hash) + charCode(c). The result is formatted as hid-{hex}.
6.3 State Evolution Rules
Harmonic states evolve through two operations, each with distinct blending rules:
Transition application (applying a new transaction to existing state): For each frequency component in the transition, find a matching component in the existing state (frequency difference \(< 0.001\)). If matched, blend with an 80/20 ratio:
In the canonical PhaseState implementation, the harmonics array stores only frequency values, so the 80/20 blend applies to frequency alone. An alternative implementation path (applyTransitionsToState in the block layer) tracks full frequency/amplitude/phase triplets and uses a 50/50 blend for all three components. Non-matching components are added to the state as new entries in both paths.
State merge (merging two states during synchronization): Matching components are blended with a 50/50 ratio:
Non-matching components from either state are added to the merged result.
6.4 Harmonic Resonance Scoring
The resonance between two harmonic states is computed by matching their frequency components (within 0.001 tolerance) and scoring each match:
The overall resonance is the arithmetic mean of all component scores.
6.5 Harmonic Similarity Search
The state store supports similarity search via harmonics. Candidate vectors are retrieved by sorting on \(|f_{\text{base}} - f_{\text{target}}|\). For each candidate, a resonance score is computed:
where components are matched if their frequency difference is \(< 10\%\). Candidates with overall resonance below 0.3 are excluded from results.
7. Phase-Augmented Vector Embeddings
7.1 Extended Vector Storage
The PRN vector store extends standard vector databases by adding phase and natural frequency as first-class attributes on every embedding:
VectorEmbedding {
id: string
vector: Float32Array // L2-normalized
phase: number // range [0, 2*PI]
naturalFrequency: number
lastUpdated: number // epoch ms
metadata: Record<string, any>
}
The default index type is HNSW (Hierarchical Navigable Small World). The store supports up to 10,000,000 vectors. The default similarity metric is cosine similarity. All vectors are L2-normalized on insertion.
7.2 Frequency-Domain Query Filters
Queries can filter by phase and frequency using min/max ranges or value/tolerance pairs, enabling frequency-domain filtering alongside traditional vector similarity search. This allows queries such as “find the 10 nearest vectors with phase between 1.0 and 2.0 radians and natural frequency within 5 Hz of 450.”
7.3 Performance Metrics
The vector store maintains a sliding window of the last 1,000 query times and computes percentile statistics (p50, p95, p99) for performance monitoring.
8. Semantic Shard Assignment
8.1 Shard Count Calculation
The number of shards is computed dynamically based on the node count and configuration parameters:
where \(N\) is the node count, \(n_{\max}\) is the maximum nodes per shard, \(n_{\min}\) is the minimum nodes per shard, and \(n_{\text{target}}\) is the desired shard count.
8.2 K-Means Clustering on Semantic Vectors
Nodes are assigned to shards via k-means clustering on their semantic vectors:
- Initialization: Select initial cluster centers by evenly spacing through the node list (deterministic, not random). Specifically, center \(i\) is the node at index \(\lfloor i \cdot N / k \rfloor\).
- Assignment: Each node is assigned to the nearest center using Euclidean distance on Float32Array semantic vectors.
- Update: Each center is recomputed as the component-wise mean of its assigned nodes' vectors.
- Iteration: Steps 2–3 repeat for exactly 5 iterations (hardcoded, no convergence check).
- Fallback: Nodes without semantic vectors are assigned round-robin:
shardIndex = nodeIndex % shardCount.
8.3 Incremental Resharding
On network membership changes, the system computes a change percentage:
If \(\Delta\) exceeds the configured resharding threshold, a full reshard is triggered with forceShardsCount = existingShardCount + 1. Otherwise, an incremental update is performed: departed nodes are removed, and new nodes are assigned to the shard with the fewest current members (greedy load balancing). The shard assignment version number increments on every update.
8.4 Shard Protocol
Shard assignments are distributed via a custom libp2p protocol registered at /paragon/shard-coord/1.0.0. The coordinator broadcasts JSON-encoded shard assignments to all connected peers. Peers accept newer assignments based on version number or timestamp comparison.
9. HFTP Wire Protocol
9.1 Node Types
The HFTP network recognizes four node types: server (registry), haven (crisis intervention endpoint), builder (third-party application), and relay (hierarchical aggregation point).
9.2 Protocol Constants
| Constant | Value | Purpose |
|---|---|---|
| HEARTBEAT_INTERVAL_MS | 15,000 | Heartbeat frequency |
| NODE_TIMEOUT_MS | 45,000 | Eviction threshold (3× heartbeat) |
| AGGREGATE_BROADCAST_MS | 30,000 | Aggregate broadcast frequency |
| MAX_COEFFICIENTS | 128 | Max GLE coefficient vector length |
| MAX_NODE_ID_LENGTH | 64 | Max nodeId string length |
| MAX_REGION_LENGTH | 32 | Max region string length |
| MAX_MESSAGE_SIZE | 4,096 bytes | Wire message size limit |
| MAX_LOCAL_PEER_COUNT | 10,000 | Max peers behind a relay |
| MAX_TIMESTAMP_AGE_MS | 60,000 | Clock drift tolerance |
9.3 Client-to-Registry Messages
Four message types flow from nodes to the registry:
RegisterMessage { type: 'register', nodeId, nodeType, region, timestamp }
HealthMessage { type: 'health', nodeId, coefficients[128], breathingAttested, healthSummary?, timestamp }
HeartbeatMessage { type: 'heartbeat', nodeId, timestamp }
RelayStatusMsg { type: 'relay-status', nodeId, localPeerCount, localAggregate, timestamp }
The coefficients field carries up to 128 DCT-II coefficients (validated as length ≤ MAX_COEFFICIENTS; serialized as JSON; Float64 in memory, Float32 on wire per the codebase convention of 512 bytes at maximum length). This is the only health data that leaves the device. The optional healthSummary contains derived values: classification (string), stressIndicator (float [0,1]), and breathingDepth (float [0,1]).
9.4 Registry-to-Client Messages
WelcomeMessage { type: 'welcome', registryId, nodeCount, timestamp }
PeersMessage { type: 'peers', peers: NodeInfo[], timestamp }
AggregateMessage { type: 'aggregate', activeAgents, avgStressLevel, avgBreathingDepth, dominantClassification, timestamp }
ErrorMessage { type: 'error', code, message, timestamp }
9.5 Connection Handshake
The HFTP connection sequence is:
- Client opens WebSocket to registry URL.
- On
open, client immediately sends aRegisterMessagewith its nodeId, nodeType, and region. - Registry validates the message, checks for duplicate nodeId across all active WebSocket sessions (rejects with
DUPLICATE_NODEerror if found), binds the WebSocket session to the nodeId. - Registry responds with
WelcomeMessage, then broadcasts an updatedPeersMessageto all connected nodes. - Client starts a heartbeat interval at
HEARTBEAT_INTERVAL_MS(15s).
Session binding security: After registration, all subsequent messages on that WebSocket must carry the same nodeId. A mismatch triggers a NODE_ID_MISMATCH error. This prevents impersonation without requiring cryptographic key exchange at the transport layer.
9.6 Message Validation
All inbound messages pass through a two-stage validation pipeline: (1) JSON parse, (2) structural validation. The validation rules are:
- nodeId: string, non-empty, ≤ 64 characters, matches
/^[a-zA-Z0-9_-]+$/. - nodeType: must be one of
server,haven,builder,relay. - coefficients: array, length ≤ 128, every element must be a finite number.
- timestamp: finite positive number, within 60 seconds of server time. This prevents replay attacks.
- stressIndicator, breathingDepth: finite numbers in [0, 1].
- localPeerCount: non-negative integer, ≤ 10,000.
9.7 Reconnection
Clients implement exponential backoff reconnection: delay = min(baseDelay × 2^attempts, 60000), where baseDelay = 5000 ms and the cap is 60 seconds. The attempt counter resets to 0 on successful connection. Configurable maxReconnectAttempts (0 = unlimited).
10. Hierarchical Privacy-Preserving Aggregation
10.1 Two-Tier Relay Architecture
The HFTP network implements a two-tier topology where relay nodes act as local registries for their connected peers while simultaneously connecting as clients to the central registry:
[Haven Phones] ──ws──> [Relay Node] ──ws──> [Central Registry]
[Builder Apps] ──ws──> [Relay Node] ──ws──> [Central Registry]
Each relay runs a full HFTPRegistry instance locally (with ID relay-{nodeId}) and an HFTPNodeClient upstream. This dual role enables the relay to serve its local peers even if the upstream connection is lost.
10.2 Relay Aggregate Pre-Computation
Every 10,000 ms (10 seconds), the relay computes a local aggregate from its connected peers and sends it upstream as a RelayStatusMessage. The aggregate contains:
localPeerCount: number of peers connected to this relayavgStressLevel: average of all local peers' stress indicatorsavgBreathingDepth: average of all local peers' breathing depth valuesdominantClassification: most common classification among local peers
The central registry never receives individual health data from relay-connected peers. It receives only the pre-computed aggregate. This is the core privacy mechanism: individual coefficients stay within the relay's local network.
10.3 Weighted Network Aggregate
The central registry computes the network-wide aggregate using a weighted average across two data sources:
- Direct nodes (nodes connected directly to the registry with health data): each contributes weight = 1.
- Relay nodes (with local aggregates): each contributes weight =
relayPeerCount.
totalSources = directCount + sum(relay.relayPeerCount for each relay with aggregate)
weightedStress = sum(direct.stressIndicator) + sum(relay.avgStressLevel * relay.relayPeerCount)
avgStressLevel = weightedStress / totalSources
dominantClassification = classification with highest count (relay weighted by peerCount)
Privacy-by-design. The registry stores coefficients internally for aggregate computation but strips them when broadcasting peer lists. The getNodes() method destructures each internal node to exclude coefficients, healthSummary, and localAggregate before returning. No node ever receives another node's raw coefficient data through the registry.
10.4 Merged Peer Visibility
When global peers are received from upstream, the relay merges them with its local peers for broadcast to local clients:
globalFiltered = globalPeers.filter(p => p.nodeId !== selfNodeId)
allPeers = [...localPeers, ...globalFiltered]
This gives local peers visibility of the entire network without requiring direct connections to the central registry. The relay filters its own nodeId from the global list to prevent self-reference.
10.5 Upstream Disconnection Resilience
When the upstream registry becomes unreachable, the relay continues operating: local peers remain connected and can communicate with each other. The upstream client uses the exponential backoff reconnection described in Section 9.7. When the connection is restored, the relay re-syncs automatically. The relay's HTTP status endpoint reports 'online' or 'upstream-disconnected' accordingly.
11. Breathing Attestation as Participation Proof
Each node in the HFTP network carries a breathingAttested boolean that is broadcast to all peers in the peer list. This flag is set to true when a node submits valid health data with the breathingAttested field set to true in its HealthMessage. On fresh registration, the flag defaults to false.
This constitutes a novel form of proof of biological participation: nodes prove they have a living human behind them by submitting breathing-derived GLE coefficients. The attestation status is visible to all peers, creating a network-wide map of which nodes have active human participants versus automated or dormant nodes.
12. Node Eviction and Lifecycle
The registry runs an eviction sweep every \(\frac{\text{NODE\_TIMEOUT\_MS}}{3}\) (i.e., every 15,000 ms with the default 45-second timeout). Any node whose lastSeen timestamp is older than NODE_TIMEOUT_MS is removed. If any nodes are evicted, an updated peer list is broadcast to all remaining nodes.
The heartbeat handler silently ignores heartbeats from unknown nodes (no error response), allowing graceful handling of out-of-order messages during reconnection.
The registry enforces a configurable maxNodes capacity. When at capacity, new registrations from unknown nodeIds are rejected with a REGISTRY_FULL error. Re-registrations from known nodeIds are always accepted.
13. Network Discovery
PRN nodes discover each other through three parallel mechanisms:
- mDNS/Bonjour: Local network discovery using service type
emergentwaveand advertising nameew-node-{nodeId}. - UPnP: UDP broadcast on port
nodePort + 1000, with broadcast interval of 30 seconds and UPnP port mapping TTL of 1,800 seconds. Message type:ew-node-announce. - Bootstrap server: Cloud discovery via a persistent bootstrap node (e.g.,
bootstrap.emergentwave.net:9545).
The DiscoveryManager aggregates all three mechanisms, synchronizing every 60 seconds. Discovered peers are registered with the node's API, and multiaddresses are constructed as /ip4/{address}/tcp/{port}.
14. Self-Healing and Reputation
14.1 Self-Healing Engine
The self-healing engine runs diagnostic checks every 300,000 ms (5 minutes):
- Connectivity: If the number of connected peers drops below 3, trigger peer discovery.
- Consensus health: If the Kuramoto order parameter drops below 0.8, adjust the node's center frequency by +0.1 to seek better resonance.
- Reputation anomaly: If the weighted average reputation across peers drops below 50, boost self-reputation by +10.
14.2 Reputation Engine
Peer reputation scores use exponential decay with additive updates. The update formula is:
where \(\gamma = 0.95\) is the decay factor and \(\delta\) is the reputation change (positive or negative). Scores are clamped to [0, 100]. The default initial score is 100. Peers with reputation \(\geq 75\) are classified as trusted.
14.3 Node Reconnection
Nodes implement reconnection with exponential backoff: maximum 5 attempts with delays of [1000, 5000, 15000, 30000, 60000] ms. Reconnection status is checked every 30,000 ms (30 seconds).
15. Resource Marketplace
15.1 Resource Types
The PRN marketplace supports five resource types: COMPUTATION, STORAGE, BANDWIDTH, AI_TRAINING, and MODEL_PARAMETERS. Each resource offer and request carries metadata specifying hardware requirements (CPU cores, GPU type, GPU memory, storage type, bandwidth, model type).
15.2 Matching Algorithm
The marketplace uses a greedy first-match algorithm: iterate all pending requests (or active offers), check for resource type match, verify that offer.pricePerUnit ≤ request.maxPricePerUnit and offer.quantity ≥ request.quantity, plus hardware requirements (minimum CPU cores, minimum GPU memory, GPU-required flag, specific model type). The first qualifying match is selected.
15.3 Escrow Payment Model
On transaction creation, the buyer's funds move from available to reserved. On completion, reserved funds transfer to the seller's balance and available. On failure, reserved funds return to the buyer's available. Wallet addresses follow the format ew-{first 8 chars of nodeId}. Expired offers and requests are cleaned up every 60,000 ms (60 seconds).
16. PCR Configuration Reference
The complete Phase-Coupled Resonance configuration for a PRN node:
PCRConfig {
frequency: number // oscillator frequency (Hz)
couplingStrength: number // coupling strength, range [0, 1]
maxPhaseOffset: number // max allowed phase offset before correction
discoveryInterval: number // peer discovery interval (ms)
syncInterval: number // phase synchronization interval (ms)
tolerance: number // phase difference tolerance
}
PCRNodeConfig {
initialPhase: number // initial phase value, range [0, 1]
pcr: PCRConfig // optional PCR-specific config
}
17. Message Authentication and Transport Security
All inter-node communication is secured at two layers:
- Transport layer: WebSocket connections use TLS 1.3 (wss://) for all registry, relay, and client communication. The HFTP server validates the
Originheader and enforces the 4,096-byte message size limit before parsing, preventing oversized payload attacks. - Session binding: On initial registration, the server binds each
nodeIdto its WebSocket session. Subsequent messages from a different session for the samenodeIdare rejected with aNODE_ID_MISMATCHerror. DuplicatenodeIdregistrations from new connections are rejected outright, preventing session hijacking. - Consensus votes: Each vote must carry a non-empty
signaturefield and the voter'snodeIdmust appear in the shard member list. Votes without signatures or from non-members are silently discarded. - Timestamp validation: All incoming messages with timestamps are validated against a 60-second clock drift window (
MAX_TIMESTAMP_AGE_MS). Messages with timestamps in the future or older than 60 seconds are rejected, preventing replay attacks.
The libp2p transport layer (used for P2P consensus, shard coordination, and wave propagation) provides encrypted channels via the Noise protocol framework with Ed25519 key pairs. Each node's identity is derived from its cryptographic key pair, and all protocol streams are mutually authenticated.
18. Crash Recovery and State Persistence
PRN nodes are designed for graceful degradation and recovery:
- Heartbeat-based liveness: Each node sends heartbeats every 15,000 ms. If a node crashes, the registry detects its absence after 45,000 ms (3× heartbeat) and evicts it. Remaining nodes receive an updated peer list automatically.
- Client reconnection: The
HFTPNodeClientimplements exponential backoff reconnection with a base delay of 5,000 ms and a cap of 60,000 ms. On reconnection, the client re-registers with the registry, restoring its presence in the network. - Relay resilience: Relays continue serving local peers during upstream disconnection (Section 10.5). The local registry operates independently, and re-syncs automatically on reconnection.
- Shard state: Shard assignments carry version numbers and timestamps. On rejoining, a node accepts the latest version from the coordinator, ensuring consistency without requiring full state transfer.
- Transaction recovery: Pending cross-shard transactions are swept every 5,000 ms. Any transaction older than 15,000 ms is automatically aborted, preventing indefinite resource locks from crashed coordinators. The deadlock detector runs on every new transaction proposal, catching cycles caused by partial failures.
- Harmonic state checkpointing: Each harmonic state carries a version counter and content hash (
hid-{hex}). On recovery, nodes can verify state integrity by recomputing the deterministic hash from the stored data. The 50/50 merge blend (Eq. 14) enables state reconciliation between divergent replicas.
19. Sybil Resistance via Coefficient Entropy Analysis
The PRN network employs multiple layers of Sybil resistance derived from its biosignal-native architecture:
- Breathing attestation: Each node's
breathingAttestedflag (Section 11) provides a binary proof of biological participation. Nodes that submit GLE coefficients derived from real breathing patterns receive attestation; automated or synthetic nodes do not. The attestation status is visible network-wide. - Coefficient validation: The registry validates that coefficient arrays contain up to 128 values (enforced as
length ≤ MAX_COEFFICIENTS), all of which must be finite numbers (no NaN, no Infinity). This prevents trivial Sybil attacks using random or degenerate coefficient vectors. - Entropy analysis: GLE coefficients derived from genuine biosignals exhibit characteristic statistical properties — non-uniform amplitude distributions, frequency-domain structure from the DCT-II transform, and temporal consistency across updates. Synthetic coefficients (random, constant, or algorithmically generated) lack this structure. The
harmonicResonancescoring function (Eq. 16) naturally penalizes artificial coefficient patterns because they fail to produce meaningful frequency-component matches with legitimate nodes. - Session binding: One nodeId per WebSocket session, enforced at the transport layer. Creating multiple identities requires multiple authenticated connections, each carrying distinct coefficient streams.
- Relay weighting: The weighted aggregate computation (Section 10.3) weights each relay by its
relayPeerCount. A Sybil attacker running many fake nodes behind a single relay would need to produce diverse, biologically plausible coefficient streams for each — a task that requires solving the GLE encoding problem, which is precisely what the GLE patent protects.
The GLE choke point. Ultimately, Sybil resistance in a PRN network reduces to the difficulty of producing valid GLE coefficients. The General Learning Encoder transforms raw biosignals into a 128-dimensional coefficient space using patented encoding techniques. Generating coefficients that pass entropy analysis, produce meaningful harmonic resonance with legitimate nodes, and maintain temporal consistency across heartbeat intervals requires either a real human with real sensors or a successful attack on the GLE encoding itself.
20. References
- Tran, P. & Tran, A. (2026). “Paragon Resonance Network: A Compliance-First Distributed Infrastructure for Health AI.” Univault Technologies Whitepaper, v1.0.
paragondao.org/docs/PRN_INFRASTRUCTURE_WHITEPAPER.html - Kuramoto, Y. (1975). “Self-entrainment of a population of coupled non-linear oscillators.” International Symposium on Mathematical Problems in Theoretical Physics, Lecture Notes in Physics, Vol. 39, pp. 420–422.
- Castro, M. & Liskov, B. (1999). “Practical Byzantine Fault Tolerance.” OSDI 1999.
- Bernstein, D. J. (1991). DJB2 hash function. Originally described in a comp.lang.c Usenet post, December 1991.
- Malkov, Y. A. & Yashunin, D. A. (2020). “Efficient and Robust Approximate Nearest Neighbor Using Hierarchical Navigable Small World Graphs.” IEEE TPAMI, 42(4), pp. 824–836.
- Gray, J. & Lamport, L. (2006). “Consensus on Transaction Commit.” ACM TODS, 31(1), pp. 133–160.
- Lloyd, S. P. (1982). “Least Squares Quantization in PCM.” IEEE Transactions on Information Theory, 28(2), pp. 129–137. (k-means algorithm.)
- Ahmed, N., Natarajan, T., & Rao, K. R. (1974). “Discrete Cosine Transform.” IEEE Transactions on Computers, C-23(1), pp. 90–93.