KAIROS Signal Technical Manual
Comprehensive guide to the KAIROS Signal DePIN Data API & Econophysics Intelligence Platform.
Version 7.5 • Last updated: March 2026
This manual is for quantitative traders, fund managers, data engineers, and institutional clients who want to integrate KAIROS Signal's real-time market intelligence into their trading infrastructure. It covers everything from platform architecture and API integration to signal verification and operational best practices.
Platform Overview
1.1 — What Is KAIROS Signal
KAIROS Signal is a DePIN data API and econophysics intelligence platform that generates institutional-grade trading signals by fusing financial market data with physical-world sensor networks. Unlike traditional quant systems that rely solely on price and volume, KAIROS integrates data from 350+ sources — including 10+ cryptocurrency exchanges, the Solana blockchain, 500+ DePIN infrastructure networks, 100,000+ environmental sensors, and macroeconomic feeds — into a unified prediction engine.
The platform's core thesis is that physical reality leads price. When a DePIN network's node count drops 15% before the token market reacts, that's alpha. When NOAA solar flare data correlates with crypto volatility spikes, that's a signal. KAIROS captures these cross-domain inefficiencies using a multi-layer directed acyclic graph (DAG) of mathematical engines, each contributing a vote to the final signal.
Key capabilities:
- Real-Time Signal Generation — Sub-second signals via WebSocket, covering 80+ trading pairs across crypto, commodities, and forex
- Cryptographic Proof Chain — Every signal is SHA-256 chain-hashed into an immutable ledger before broadcast, enabling independent verification
- DePIN Intelligence API — Real-time health scores, node counts, revenue metrics, and growth rates for 500+ decentralized infrastructure projects
- Multi-Asset Coverage — Crypto majors and alts, DePIN tokens, gold, silver, oil, natural gas, EUR, GBP, JPY, and more
- XGBoost ML Veto — A gradient-boosted classifier trained on 5.17 million embeddings acts as the final gate, requiring ≥70% predicted win probability before any signal is broadcast (current holdout win rate: 81.1%)
1.2 — System Architecture
KAIROS operates on a two-node physical-compute separation architecture:
| Node | Role | Key Services |
|---|---|---|
| Node 1 [REDACTED] | Data Ingestion & API Gateway | Dataslut (Go HFT engine), API Gateway, ClickHouse, Redis, Nginx, 36 collector services |
| Node 2 Compute | Prediction & Execution | Hunter V7 (prediction DAG), Trader V2 (execution), Brain V1 (intelligence), METIS (agent) |
Data Flow
Exchange WebSockets → Dataslut (Go) → Redis Pub/Sub → ClickHouse (market_ticks)
↓
API Gateway (Go)
↓
WebSocket → Clients
↓
Collectors (36 services) → ClickHouse → Hunter V7 (Python DAG)
↓
25 Mathematical Layers + XGBoost Veto
↓
Immutable Proof Ledger (SHA-256)
↓
signals_queue → Trader V2 → Exchanges
1.3 — Data Infrastructure
KAIROS stores all data in ClickHouse, a columnar OLAP database optimized for real-time analytics over time-series data. The database contains 90+ tables with data retention policies ranging from 7 days (ephemeral) to indefinite (proof ledger).
| Category | Tables | Description |
|---|---|---|
| Market Data | market_ticks, candles_1m, candles_5m, candles_1h | Raw tick data and aggregated OHLCV candles |
| Signals | signals_queue, signals, signal_proof_ledger | Generated signals, queue for trader, immutable proof chain |
| Trading | trade_log, open_positions, pending_orders | Executed trades, current positions, pending orders |
| DePIN | depin_stats, depin_deep_intel, depin_health_scores, depin_ai_intel | DePIN project metrics, health scores, AI analysis |
| On-Chain | solana_blocks, solana_dex_volume, solana_whale_activity | Solana chain data from Helius RPC |
| Intelligence | brain_embeddings, regime_live, correlation_snapshots | ML embeddings, market regime, cross-asset correlations |
| Physical World | sensor_readings, aviation_stats, maritime_stats | Environmental sensors, flight tracking, ship tracking |
The market_ticks table uses DoubleDelta + Gorilla + ZSTD compression codecs, achieving ~10:1 compression while maintaining sub-millisecond query times over billions of rows.
The Prediction Engine: Hunter V7
Hunter V7 — codenamed "The Quantum Architect" — is the core prediction engine. It implements a multi-phase directed acyclic graph (DAG) of mathematical engines that process every incoming market tick in parallel through 25 computational layers across 13 integrated nodes.
The DAG is executed in four phases to maximize parallelism while respecting data dependencies:
| Phase | Layers | Purpose | Parallelism |
|---|---|---|---|
| Phase 1 | Sieve | Data quality filter | Sequential (gate) |
| Phase 2 | Quantum, Nash, Stochastic, Harmonic | Core mathematics | 4 threads parallel |
| Phase 3 | Swarm, KSIG, Causal | Data synthesis | 3 threads parallel |
| Phase 4 | Metamorphic, Frontier, Vanguard | Final scoring & gates | Sequential (scoring) |
2.1 — Multi-Layer DAG Architecture
Each layer in the DAG outputs a score between -1.0 and 1.0. Positive values indicate bullish conviction; negative values indicate bearish conviction. The Metamorphic Weighter (a deep combinatorial router) dynamically adjusts the weight of each layer based on the current market regime and recent performance.
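To illustrate the weighting idea, a regime-conditioned combiner can be sketched as below. The weights, regime names, and layer subset are invented for the sketch and are not KAIROS's actual values; the real Metamorphic Weighter also adapts weights from recent performance, which this sketch omits.

```python
# Hypothetical regime weights for four of the core layers (illustrative only)
REGIME_WEIGHTS = {
    "BULL_TRENDING": {"quantum": 0.3, "nash": 0.2, "stochastic": 0.3, "harmonic": 0.2},
    "RANGING_CHOP":  {"quantum": 0.1, "nash": 0.4, "stochastic": 0.3, "harmonic": 0.2},
}

def combine_scores(layer_scores, regime):
    """Regime-weighted sum of per-layer scores in [-1, 1], clamped to [-1, 1]."""
    weights = REGIME_WEIGHTS[regime]
    total = sum(weights[name] * score for name, score in layer_scores.items())
    return max(-1.0, min(1.0, total))

combined = combine_scores(
    {"quantum": 0.6, "nash": -0.2, "stochastic": 0.4, "harmonic": 0.1},
    "BULL_TRENDING",
)
```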
Layer 1: SIEVE (Data Purity)
The Sieve is the first gate. Every tick must pass sanity checks before entering the prediction pipeline:
- Price Floor Validation — Rejects prices below 1/100th of an asset's known floor (e.g., BTC < $10 is a data error)
- Price Ceiling Validation — Rejects prices above 1000x an asset's known floor
- Volume Sanity — Rejects negative volumes
- Symbol Normalization — Converts all exchange-specific formats (BTC-USDT-SWAP, BTC_USDT) to canonical BTCUSDT form
- Allowed Asset Filter — Only 80+ pre-approved symbols are processed
- Buffer Minimum — Requires at least 50 ticks in the price buffer before processing
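A minimal sketch of these checks follows. The floor value and allowed set are placeholders standing in for the real reference data; the thresholds follow the list above.

```python
KNOWN_FLOORS = {"BTCUSDT": 1_000.0}   # hypothetical reference floor
ALLOWED = {"BTCUSDT", "ETHUSDT"}      # stand-in for the 80+ symbol list

def normalize_symbol(sym):
    """BTC-USDT-SWAP / BTC_USDT -> canonical BTCUSDT."""
    return sym.replace("-SWAP", "").replace("-", "").replace("_", "")

def sieve(symbol, price, volume, buffer_len):
    sym = normalize_symbol(symbol)
    if sym not in ALLOWED:
        return False                          # allowed-asset filter
    floor = KNOWN_FLOORS.get(sym)
    if floor and not (floor / 100 <= price <= floor * 1000):
        return False                          # floor/ceiling validation
    if volume < 0:
        return False                          # volume sanity
    if buffer_len < 50:
        return False                          # buffer minimum
    return True
```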
2.2 — Layer 2: Quantum Field (Schrödinger Wave Collapse)
This layer evolves a 64-dimensional Clifford Multivector using a stochastic differential equation (SDE). Instead of treating price as a single point, it models price as a probability distribution — a wave function ψ(x,t) whose squared magnitude |ψ(x)|² gives the probability density at each price level.
Key outputs:
- ⟨H⟩ Energy — Measures volatility potential. High energy = large deviation from equilibrium.
- ⟨p⟩ Momentum — Directional force derived from bivector products in the 64D space.
- Δx Uncertainty — Width of the probability cone (2σ confidence interval for predicted price).
- Signal Strength — The quantum volume of the evolved multivector. Positive = bullish, negative = bearish.
- Expected Price — The Schrödinger field's prediction for the most likely price after 50 time steps.
The SDE evolution uses dynamically calibrated parameter surfaces for different asset classes. Crypto assets receive higher mean-reversion and volatility tuning matrices compared to traditional assets like forex and commodities, reflecting their distinct structural properties.
2.3 — Layer 3: Nash Game Theory (Predatory Logic)
The Nash Equilibrium Solver models the market as a multi-player game between institutional investors, retail traders, and market makers. It detects:
- Liquidity Traps — Patterns where large players are accumulating or distributing
- Pain Index — How much unrealized pain exists in the current order book
- Predatory Strategies — SQUEEZE and DUMP patterns with confidence scores
When a Nash Combinatorial signal exceeds 85% confidence and overlaps with a classic liquidity trap detection, it can bypass the normal probability gates and emit a signal directly — this is for the rare, high-conviction predatory move.
2.4 — Layer 4: Stochastic Engine (Monte Carlo)
The stop-loss and take-profit distances are calculated dynamically based on the asset's current ATR, liquidity profile, and volatility regime. The system automatically structures asymmetric risk-to-reward setups tailored to the specific instrument's real-time behavior.
The win_prob output is the primary probability gate — signals must have a Monte Carlo win probability ≥ 70% to pass.
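The Monte Carlo model itself is not documented here; the sketch below shows the general shape of a barrier-hit win-probability estimate, using a plain driftless random walk. The walk dynamics and parameter values are assumptions for illustration, not the engine's actual model.

```python
import random

def mc_win_probability(price, tp, sl, vol_per_step=0.002, steps=200, paths=2000, seed=7):
    """Fraction of simulated paths that touch take-profit before stop-loss."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(paths):
        p = price
        for _ in range(steps):
            p *= 1 + rng.gauss(0, vol_per_step)
            if p >= tp:       # take-profit barrier hit first: a win
                wins += 1
                break
            if p <= sl:       # stop-loss barrier hit first: a loss
                break
    return wins / paths

prob = mc_win_probability(100.0, tp=101.0, sl=98.0)
passes_gate = prob >= 0.70    # the documented probability gate
```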
2.5 — Layer 5: Harmonic Substrate (FFT / Tensor)
Applies Fast Fourier Transform (FFT) to decompose the price series into frequency components. Identifies:
- Dominant Frequencies — Recurring cycles in price action
- Resonance Score — How strongly current price action aligns with detected harmonics
- Harmonic Boost — A multiplier applied to signal confidence when resonance is detected
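A toy version of the dominant-frequency step, using a naive DFT over a demeaned window (stdlib only; the real layer's FFT windowing and scoring are not specified here):

```python
import cmath, math

def dominant_frequency(series):
    """Index (cycles per window) of the strongest non-DC DFT component."""
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]  # remove the DC offset
    best_k, best_power = 0, 0.0
    for k in range(1, n // 2):
        coeff = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        power = abs(coeff) ** 2
        if power > best_power:
            best_k, best_power = k, power
    return best_k

# A synthetic series with exactly 5 cycles per 128-sample window
prices = [100 + math.sin(2 * math.pi * 5 * t / 128) for t in range(128)]
cycle = dominant_frequency(prices)  # → 5
```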
2.6 — Layer 6: Swarm Intelligence (160M+ Row Data Lake)
The Real Swarm queries the full historical data lake to generate consensus signals from five sub-agents:
- Momentum Agent — Trend-following across multiple timeframes
- Reversal Agent — Mean-reversion detection at statistical extremes
- Breakout Agent — Support/resistance level breach detection
- Volume Agent — Volume anomaly and divergence detection
- Trend Agent — Long-term directional trend classification
Each sub-agent votes, and the swarm consensus is fed into the Metamorphic Weighter.
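The voting step can be sketched as a simple reduction over the five agents. The +1/0/−1 vote encoding is an assumption for illustration, not KAIROS's actual scheme.

```python
def swarm_consensus(votes):
    """Mean of per-agent votes in {-1, 0, +1}: sign gives direction,
    magnitude gives agreement strength."""
    return sum(votes.values()) / len(votes)

consensus = swarm_consensus(
    {"momentum": 1, "reversal": 0, "breakout": 1, "volume": 1, "trend": 1}
)  # → 0.8
```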
2.7 — Layers 7-19: Extended Math Stack
Beyond the core six layers, the DAG includes specialized engines that activate when specific conditions are met:
| Layer | Engine | Purpose |
|---|---|---|
| 7 | Septem (DLT Execution) | Blockchain clock alignment & gas optimization |
| 8 | Synapse (Hebbian Learning) | Self-improving synaptic weights — fires winning math, prunes losers |
| 9 | Co-Evolution KSIG | 17-dimensional state space evolution |
| 10 | Causal Engine | Granger causality + multifractal analysis + information bounds |
| 11 | Meta-Adaptation | Adaptive learning rates + edge decay modeling |
| 12 | Slippage Model | Order book impact estimation + position sizing adjustment |
| 13 | Topology Void (TDA) | Betti number analysis to find liquidity voids in order book |
| 14 | Kolmogorov | Pattern significance testing via algorithmic complexity |
| 15 | Optimal Transport | Wasserstein distance for regime detection |
| 16 | Nash Energy | Manifold stability analysis on the 17D state space |
| 17 | Frontier Pack | Adversarial, category theory, mechanism design, MPC, Bayesian optimization, MAML, Do-calculus, geometric algebra, ergodicity, chaos/Lyapunov, Kalman 17D, RMT, information geometry, rough volatility, Ising sentiment |
| 19 | Humanities Cortex | Behavioral finance: SIR contagion model, Soros reflexivity, Kahneman prospect theory |
2.8 — XGBoost ML Veto Gate
The final computational gate before signal emission is a gradient-boosted decision tree classifier trained on 5.17 million 128-dimensional state embeddings extracted from the signal proof ledger and historical backfill data.
The XGBoost model evaluates the 128D state vector against millions of historical precedents. The required probability threshold (p-score) dynamically adjusts based on the current market volatility regime, the asset's liquidity profile, and macro conditions. A baseline p-score must be achieved for the signal to survive this final gate.
The 128D state embedding encodes the full market state at signal generation time, including outputs from all active math layers, price statistics, volume profiles, regime indicators, and DePIN health scores. This embedding is also stored in the immutable proof ledger for auditability.
Threshold Tiers
The system operates across a spectrum of confidence tiers. Higher probability thresholds yield exponentially lower signal volume but strictly higher historical precision. The "Sniper Mode" threshold requires near-perfect historical alignment, while the standard production threshold balances high precision with sufficient signal flow to maintain portfolio compounding.
2.9 — Signal Synthesis & Veto Chain
After all layers have scored, the signal passes through the Veto Chain — a series of hard gates that can kill the signal at any point:
25-Layer Pipeline → Kelly Criterion → Macro Calendar → XGBoost (p>0.70) → BROADCAST
Gate Hierarchy
- Probability Gate — Monte Carlo win probability must exceed regime-specific threshold (0.45 to 0.70 depending on regime)
- Consent Calculus — Minimum number of agreeing layers (k) based on current volatility
- Knapsack / Opportunity Cost — Expected value must exceed gas/slippage costs
- DePIN Physical Reality — For DePIN tokens, physical network health can veto or boost
- Meta-Adaptation — Edge decay model can veto if recent performance is degrading
- Score Entropy Check — If all layer scores cluster near the same value (low entropy), the signal is penalized as undifferentiated noise
- Dead Hour Veto — Hours with 0% historical win rate are blocked (UTC 00, 03, 20, 21, 23)
- Direction Consensus — BUY requires all 4 directional signals to agree (momentum + wave + quantum + causal); SELL requires 2 of 3
- Active Layer Count — At least 5 engines must register scores above 0.1
- Stochastic Win Probability ≥ 70% — The Monte Carlo probability gate
- Causal Hard Veto — If causal analysis says market is too efficient, kill the signal
- Regime-Aware Confidence Floor — Final confidence must exceed regime-specific threshold
- Daily Trade Cap — Maximum 5 trades per day
- 15-Minute Cooldown — No repeat signals on same asset within 15 minutes
- XGBoost Veto — ML model must predict ≥ 70% win probability
- Macro Calendar Blackout — No trading during FOMC, NFP, CPI, and other nuclear-tier macro events
The veto chain is intentionally aggressive. Most ticks are rejected. Of the ~1.4 million data points ingested daily, only a handful of signals pass all gates. The goal is fewer, better trades — not volume.
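The gate hierarchy reduces to one pattern: an ordered list of predicates where the first failure kills the signal. A sketch with four of the listed gates (the field names on the candidate dict are invented for illustration):

```python
def probability_gate(sig):
    return sig["win_prob"] >= sig["regime_floor"]

def cooldown_gate(sig):
    return sig["minutes_since_last"] >= 15

def dead_hour_gate(sig):
    return sig["hour_utc"] not in {0, 3, 20, 21, 23}

def xgb_gate(sig):
    return sig["xgb_win_probability"] >= 0.70

VETO_CHAIN = [probability_gate, cooldown_gate, dead_hour_gate, xgb_gate]

def run_veto_chain(sig):
    for gate in VETO_CHAIN:
        if not gate(sig):
            return False, gate.__name__  # first failing gate kills the signal
    return True, None

candidate = {"win_prob": 0.72, "regime_floor": 0.55, "minutes_since_last": 30,
             "hour_utc": 14, "xgb_win_probability": 0.71}
ok, killed_by = run_veto_chain(candidate)  # → (True, None)
```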
Data Sources
KAIROS operates 36 active collector services that ingest data from 350+ sources into ClickHouse. The collector fleet is organized by domain:
3.1 — Exchange Streams
The Dataslut service is a custom Go-based HFT data ingestion engine that maintains persistent WebSocket connections to 10+ cryptocurrency exchanges simultaneously:
| Exchange | Connection | Data |
|---|---|---|
| Binance | WebSocket (free) | Spot + Futures ticks |
| Bybit | WebSocket (free) | Spot + Perpetuals |
| OKX | WebSocket (free) | Spot + Swaps |
| Kraken | WebSocket (free) | Spot ticks |
| Coinbase | WebSocket (free) | Spot ticks |
| Bitget | WebSocket (free) | Spot + Futures |
| Gate.io | WebSocket (free) | Spot ticks |
| MEXC | WebSocket (free) | Spot ticks |
| HTX (Huobi) | WebSocket (free) | Spot ticks |
| BitMEX | WebSocket (free) | Perpetuals |
All ticks are normalized to a common market_ticks schema (timestamp, symbol, price, volume, source, sector) and published to Redis Pub/Sub for real-time consumption by the API gateway and Hunter V7.
The Go engine uses lock-free data structures, zero-allocation JSON parsing (via gjson), and atomic performance counters, and is designed for 200,000+ ticks per second of throughput.
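A sketch of the normalization step in Python (the raw field names `ts` and `qty` are invented examples of one exchange's payload; the target fields are the documented market_ticks schema):

```python
from datetime import datetime, timezone

def normalize_tick(raw, source, sector="crypto"):
    """Map an exchange-specific payload onto the market_ticks schema."""
    return {
        "timestamp": datetime.fromtimestamp(raw["ts"] / 1000, tz=timezone.utc),
        "symbol": raw["symbol"].replace("-", "").replace("_", ""),
        "price": float(raw["price"]),
        "volume": float(raw["qty"]),
        "source": source,
        "sector": sector,
    }

tick = normalize_tick(
    {"ts": 1767225600000, "symbol": "BTC-USDT", "price": "67432.50", "qty": "0.5"},
    source="okx",
)
```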
3.2 — On-Chain Data (Solana)
The Helius Firehose maintains an institutional RPC connection to the Solana blockchain via the Helius API. It feeds three ClickHouse tables:
- solana_blocks — Block velocity (TPS, slot timing, validator stats)
- solana_dex_volume — DEX trading volume across Raydium, Orca, Jupiter, etc.
- solana_whale_activity — Large wallet movements and token transfers
The Birdeye Collector provides Solana token prices, volume, and liquidity data via specialized API ingestion.
3.3 — DePIN Intelligence
Eight dedicated DePIN collector services provide the deepest coverage of decentralized physical infrastructure networks available anywhere:
| Service | Coverage | Metrics |
|---|---|---|
| deep-depin-intel | GRASS, RNDR, Hivemapper | GPU utilization, bandwidth, mapping coverage |
| depin-deep-intel-v2 | 50+ projects | Node counts, revenue, utilization, capacity |
| depin-expansion-35 | 35 additional projects | Token prices, market caps, volumes |
| depin-50-prices | Top 50 DePIN tokens | Real-time prices from CoinGecko |
| depin-health | All tracked projects | Network health monitoring |
| depin-master-intel | All sources aggregated | Master intelligence rollup |
| depin-network-v2 | Network stats v2 | 7-day growth, churn, geographic coverage |
| resilient-prices | Multi-source failover | CoinGecko + Birdeye price fallback |
DePIN data is stored across depin_stats, depin_deep_intel, depin_health_scores, depin_ai_intel, depin_project_kpis, and depin_enrichment tables. The DePIN Sensor queries these tables to generate real-time health scores for every tracked project.
3.4 — Macro & TradFi
- FRED Macro — Federal Reserve economic data (interest rates, CPI, unemployment, GDP)
- Global Stocks — Equity market data from Yahoo/free APIs (S&P 500, Nasdaq, international indices)
- Funding Rate Arbitrage — Perpetual funding rates from all major exchanges
- Lead-Lag Scanner — Cross-asset correlation detection for arbitrage opportunities
- Order Flow — Order flow imbalance from exchange APIs
- Regulatory Intelligence — Regulatory news tracking
- Macro Calendar — FOMC, NFP, CPI, ECB decisions — automatically creates blackout windows
3.5 — Physical World Sensors (100,000+)
The Mega Sensor Collector ingests data from 20+ free, open-source sensor networks worldwide. These physical signals provide information that no other financial data provider captures:
| Network | Sensors | Data Types |
|---|---|---|
| OpenAQ | ~40,000 | Air quality — PM2.5, PM10, O3, NO2 |
| EPA AirNow | ~2,300 | Official US air quality — AQI readings |
| NOAA / NWS | ~3,000 | Weather stations — temp, wind, pressure |
| NOAA SWPC | — | Solar flares, geomagnetic storms, proton flux |
| USGS Water | ~13,000 | Stream flow, water levels, discharge rates |
| USGS Earthquake | ~7,000 | Seismic events — magnitude, depth, shaking |
| Safecast | ~5,000 | Radiation monitoring — nuclear proximity |
| NASA FIRMS | Satellite | Wildfire detection — hotspots, fire radiative power |
| OpenSky | Global | Aircraft tracking — 50,000+ flights |
| AISStream | 7 zones | Maritime vessel tracking |
| CAISO / ERCOT | Grid | Power grid load — California & Texas energy demand |
| WeatherXM | ~5,000 | DePIN weather stations |
| Copernicus | EU | Atmospheric composition monitoring |
| Argo Floats | ~4,000 | Ocean temperature, salinity, currents |
| SmartCitizen | ~2,000 | Urban IoT sensors |
| openSenseMap | ~12,000 | Community sensors — multi-parameter |
3.6 — Social & Developer Intelligence
- GitHub Velocity — Developer commit frequency, PR activity, contributor counts for tracked projects
- News Scanner — Sentiment analysis from GDELT and RSS feeds
- Whale Tracker — On-chain whale wallet monitoring
- Social Sentiment — Twitter/Reddit sentiment radar
API Reference
The KAIROS API Gateway is a high-performance Go service that exposes REST, WebSocket, and Proof verification endpoints. All endpoints are served at https://kairossignal.com.
4.1 — REST Endpoints
GET /api/v1/latest-data
Returns the last 5 minutes of cached market ticks from Redis. Supports optional symbol filtering.
# Fetch all recent ticks
curl https://kairossignal.com/api/v1/latest-data
# Filter by symbol
curl "https://kairossignal.com/api/v1/latest-data?symbols=BTCUSDT,ETHUSDT"
Response: JSON array of tick objects with symbol, price, volume, source, timestamp fields.
GET /api/v1/crypto/latest
Returns the latest cryptocurrency market data, filtered to crypto-sector ticks only.
curl https://kairossignal.com/api/v1/crypto/latest
GET /api/v1/fx/latest
Returns the latest forex/commodities data.
GET /api/v1/proof/ledger-stats
Returns aggregated statistics from the immutable signal proof ledger — total signals, win rates, average sigma, sector breakdown, and chain health.
curl https://kairossignal.com/api/v1/proof/ledger-stats
Response Fields:
| Field | Type | Description |
|---|---|---|
| total_signals | int | Total signals in ledger |
| graded_signals | int | Signals with autopsy grades |
| win_rate_1h | float | Win rate at 1-hour mark |
| win_rate_4h | float | Win rate at 4-hour mark |
| win_rate_24h | float | Win rate at 24-hour mark |
| avg_sigma | float | Average signal strength (σ) |
| chain_status | string | "INTACT" or "BROKEN" |
GET /api/v1/proof/recent-signals
Returns the last 25 signals with full SHA-256 proof hashes and chain validation status.
curl -s https://kairossignal.com/api/v1/proof/recent-signals | python3 -m json.tool
Response Example:
{
  "count": 25,
  "signals": [
    {
      "timestamp": "2026-03-09T01:05:16.123Z",
      "asset": "INJUSDT",
      "direction": "SELL",
      "sigma": 2.76,
      "proof_hash": "b629894d939e7351a1c2...",
      "prev_hash": "14f6cc8e29a2a4c1d893...",
      "chain_valid": true
    }
  ]
}
GET /health
Health check endpoint. Returns 200 OK with system status.
POST /api/v1/waitlist
Submit a waitlist signup. Accepts JSON body with name, email, role, company, message fields.
4.2 — WebSocket API
The WebSocket API pushes live market data and signals in real-time. Connect to:
ws://kairossignal.com:8090/ws?symbols=BTCUSDT,ETHUSDT
All payloads are zlib-compressed JSON. You must decompress the binary frame before parsing.
Connection Example (Python)
import websocket, zlib, json

def on_message(ws, msg):
    signal = json.loads(zlib.decompress(msg))
    print(f"{signal['asset']} {signal['direction']}"
          f" p={signal['xgb_win_probability']:.3f}")

ws = websocket.WebSocketApp(
    "ws://kairossignal.com:8090/ws?symbols=BTCUSDT,ETHUSDT",
    on_message=on_message
)
ws.run_forever()
Signal Payload (Per Approved Trade)
{
  "type": "SIGNAL",
  "asset": "BTCUSDT",
  "direction": "BUY",
  "price": 67432.50,
  "confidence": 0.823,
  "xgb_win_probability": 0.714,
  "regime": "BULL_TRENDING",
  "state_128d": "AACAPwAA...",
  "timestamp_utc": "2026-03-08T18:14:25Z"
}
Channel Filtering
Use the symbols query parameter to subscribe to specific assets. Multiple symbols are comma-separated.
4.3 — Proof / Verification Endpoints
These endpoints allow anyone to independently verify signal integrity. See Chapter 6 for procedures.
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/proof/ledger-stats | GET | Aggregated proof stats and chain health |
| /api/v1/proof/recent-signals | GET | Last 25 signals with SHA-256 hashes |
4.4 — DePIN Data API
DePIN data is accessible via the METIS dashboard API, covering 500+ projects with node counts, revenue, earnings per node, device cost, ROI, latency, growth rates, AI conviction scores, and GitHub developer velocity.
4.5 — Authentication
During the current beta phase, all API endpoints are publicly accessible without authentication. REST endpoints are cached for 60 seconds.
Chapter 5 — Signal Format & Schema
5.1 — Signal Payload Structure
When Hunter V7 approves a signal, it is written to two ClickHouse tables and published to the API gateway:
signals_queue (for trader consumption)
| Column | Type | Description |
|---|---|---|
| symbol | String | Canonical symbol (e.g., BTCUSDT) |
| side | String | BUY or SELL |
| suggested_entry | Float64 | Price at signal generation |
| stop_loss | Float64 | ATR-based stop loss price |
| take_profit | Float64 | ATR-based take profit price |
| confidence | Float64 | Weighted confidence (0.0 to 0.99) |
| reason | String | Trigger reason (e.g., NASH_SQUEEZE, SELL_HIGH_CONV) |
| created_at | DateTime | Signal timestamp |
| lyapunov_expiry | DateTime | Chaos engine trajectory decay time |
signal_proof_ledger (immutable)
| Column | Type | Description |
|---|---|---|
| signal_id | String (UUID) | Unique identifier |
| timestamp | DateTime64(3) | Millisecond-precise signal time |
| asset | String | Trading pair symbol |
| direction | String | LONG or SHORT |
| sigma | Float64 | Signal strength (standard deviations) |
| combined_score | Float64 | Weighted confidence from all layers |
| sector | String | "crypto", "forex", "commodities", etc. |
| regime_leader | String | Current regime-leading asset (usually "BTC") |
| math_health | UInt32 | Number of active math engines |
| state_hash | String | Hash of the 128D state vector |
| state_128d | Array(Float64) | 128-dimension market state embedding |
| prev_hash | String | SHA-256 hash of previous entry |
| proof_hash | String | SHA-256 hash of this entry (chain link) |
| graded | Bool | Whether autopsy has been performed |
| grade_result | String | WIN, LOSS, or PENDING |
| grade_return_1h | Float64 | Return at 1-hour mark |
| grade_return_4h | Float64 | Return at 4-hour mark |
| grade_return_24h | Float64 | Return at 24-hour mark |
5.2 — Key Field Reference
Sigma (σ)
Signal strength measured in standard deviations from the mean. A sigma of 2.0 means the signal is 2 standard deviations above the normal noise level. Higher sigma = stronger conviction. Typical production signals range from σ=1.5 to σ=4.0.
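Reading a sigma value is a plain z-score exercise. The toy below is an illustration of the definition, not KAIROS's internal computation; the noise series is invented.

```python
import statistics

def sigma_of(latest, history):
    """Z-score of the latest reading against recent noise."""
    mu = statistics.fmean(history)
    sd = statistics.pstdev(history)
    return (latest - mu) / sd

noise = [0.1, -0.2, 0.05, 0.15, -0.1, 0.0, 0.1, -0.05]
strength = sigma_of(0.3, noise)   # roughly 2.7 standard deviations above noise
```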
Confidence
The weighted output of the Metamorphic Weighter — a composite score from all active math layers, adjusted by regime-specific weights, probability penalties, and entropy checks. Ranges from 0.0 to 0.99. Signals below the regime-specific floor are rejected.
XGB Win Probability
The XGBoost model's predicted probability that this trade will be profitable. This is the final gate — any value below 0.70 is vetoed.
Lyapunov Expiry
The Chaos Engine calculates the Lyapunov exponent of the current price trajectory to estimate how long the prediction remains valid. A highly chaotic market might give a 5-second horizon; a trending market might give 2 hours. Position should be closed before this expiry if it hasn't hit target/stop.
ATR-Based Dynamic Stops
Stop-loss and take-profit levels are calculated dynamically using the Average True Range (ATR) of recent price action:
- ATR Calculation — 3x the average absolute return over the last 60 ticks
- Floor: 1.5% minimum (avoids micro-noise)
- Ceiling: 4.0% maximum (allows volatile alts room)
- Stop Loss = ATR × 1.2 (tighter stop)
- Take Profit = ATR × 2.5 (wider target = 2.08:1 risk/reward)
When the Humanities Cortex detects a Soros Reflexivity Bubble, stops are widened 3x and targets 4x to ride the momentum.
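The list above translates directly to code. This sketch transcribes the stated formulas (floor, ceiling, and multipliers as documented) and leaves out the reflexivity widening; the sample price series is invented.

```python
def atr_stops(prices, entry, side="BUY"):
    """Stop/target per the documented formulas (a transcription, not production code)."""
    returns = [abs(prices[i] / prices[i - 1] - 1) for i in range(1, len(prices))]
    window = returns[-60:]
    atr = 3 * sum(window) / len(window)       # 3x avg abs return over last 60 ticks
    atr = max(0.015, min(0.04, atr))          # 1.5% floor, 4.0% ceiling
    sl_dist, tp_dist = atr * 1.2, atr * 2.5   # 2.5 / 1.2 ≈ 2.08:1 reward/risk
    if side == "BUY":
        return entry * (1 - sl_dist), entry * (1 + tp_dist)
    return entry * (1 + sl_dist), entry * (1 - tp_dist)

prices = [100 + 0.1 * i for i in range(61)]   # quiet tape: ATR clamps to the floor
sl, tp = atr_stops(prices, entry=prices[-1])
```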
5.3 — Regime Classification
The Regime Signal Weighter ("The Spinal Cord") classifies the current market into six regimes. Each regime applies different gate thresholds:
| Regime | Probability Gate | Final Confidence Gate | Description |
|---|---|---|---|
| BULL_TRENDING | 0.45 | 0.55 | Clear uptrend — most permissive |
| BEAR_TRENDING | 0.55 | 0.68 | Clear downtrend — moderate |
| RANGING_CHOP | 0.65 | 0.92 | Sideways chop — near-impossible gate |
| HIGH_VOLATILITY | 0.60 | 0.85 | Extreme volatility — demands conviction |
| CRISIS | 0.70 | 0.95 | Market crisis — only god-tier signals pass |
| UNKNOWN | 0.55 | 0.68 | Regime unclear — moderate caution |
5.4 — Supported Assets
KAIROS trades 80+ instruments across four asset classes:
Crypto Majors
BTCUSDT ETHUSDT SOLUSDT DOGEUSDT ADAUSDT XRPUSDT DOTUSDT LINKUSDT MATICUSDT NEARUSDT UNIUSDT AAVEUSDT SUSHIUSDT
Crypto Alts & DePIN
PEPEUSDT BONKUSDT WIFUSDT INJUSDT SUIUSDT SEIUSDT TIAUSDT APTUSDT ARBUSDT OPUSDT RNDRUSDT FILUSDT HNTUSDT TAOUSDT FETUSDT AKTUSDT THETAUSDT IOTAUSDT ARUSDT
Commodities
XAUUSDT (Gold) XAGUSDT (Silver) OILUSDT (Crude Oil) NGUSDT (Natural Gas)
Forex
EURUSDT GBPUSDT JPYUSDT AUDUSDT CHFUSDT
Proof of Alpha
KAIROS implements a cryptographic proof chain that makes retroactive signal fabrication computationally infeasible. Every approved signal is SHA-256 chain-hashed into an append-only ledger before broadcast. This means:
- No signal can be retroactively added, removed, or modified without breaking the chain
- Anyone can verify the chain's integrity by recomputing hashes
- The 128D state embedding is locked at signal time — you can audit exactly what the model saw
6.1 — Immutable Signal Ledger
The immutable ledger is stored in ClickHouse's signal_proof_ledger table using the MergeTree engine, ordered by timestamp. The table is operated strictly append-only: once written, ledger rows are never modified, so any tampering surfaces as a broken hash chain.
Each entry contains:
- Signal data — Asset, direction, sigma, combined score, sector, regime
- 128D state embedding — The full market state snapshot used by the ML model
- prev_hash — SHA-256 hash of the previous entry
- proof_hash — SHA-256 hash of this entry's data plus prev_hash
- Grade fields — Filled by the Trade Autopsy system after the fact
6.2 — Hash Chain Construction
The hash is computed as follows:
payload = "{prev_hash}|{timestamp}|{asset}|{direction}|{sigma:.6f}|{combined_score:.6f}|{state_hash}"
proof_hash = SHA-256(payload.encode('utf-8')).hexdigest()
The first entry in the chain uses "GENESIS" as its prev_hash.
Verification Algorithm
import hashlib

def verify_chain(signals):
    expected_prev = "GENESIS"
    verified = broken = 0
    for s in signals:
        if s['prev_hash'] != expected_prev:
            broken += 1
            continue
        payload = (f"{expected_prev}|{s['timestamp']}|{s['asset']}|"
                   f"{s['direction']}|{s['sigma']:.6f}|"
                   f"{s['combined_score']:.6f}|{s['state_hash']}")
        computed = hashlib.sha256(payload.encode()).hexdigest()
        if computed == s['proof_hash']:
            verified += 1
        else:
            broken += 1
        expected_prev = s['proof_hash']
    return {"verified": verified, "broken": broken,
            "status": "INTACT" if broken == 0 else "BROKEN"}
6.3 — How to Verify Signals
- Fetch recent signals: curl https://kairossignal.com/api/v1/proof/recent-signals
- Extract proof_hash and prev_hash for each signal
- For each consecutive pair, verify that signal N+1's prev_hash equals signal N's proof_hash
- Recompute the hash using the formula above and verify it matches
The live proof chain feed on kairossignal.com performs this verification automatically, showing a green "✓ LINKED" for verified entries.
6.4 — Trade Autopsy System
The Trade Autopsy is an automated grading system that runs daily. It checks every signal against actual market prices at 1-hour, 4-hour, and 24-hour intervals to determine if the predicted direction was correct.
Autopsy results are written to the ledger's grade_result, grade_return_1h, grade_return_4h, and grade_return_24h fields. These feed the holdout validation statistics.
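The grading rule as described reduces to a sign check on the realized return at each horizon. A sketch (the example prices are invented):

```python
def grade_signal(direction, entry_price, price_at_horizon):
    """WIN if the realized return has the predicted sign, else LOSS."""
    ret = price_at_horizon / entry_price - 1
    if direction in ("BUY", "LONG"):
        return ("WIN" if ret > 0 else "LOSS"), ret
    return ("WIN" if ret < 0 else "LOSS"), ret

grade_1h, ret_1h = grade_signal("LONG", 67432.50, 67910.00)  # price rose: WIN
```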
6.5 — Trial Parameters
The current validation trial runs from March 6, 2026 to June 4, 2026 (90 days). The target is 1,000 graded signals with a 60%+ win rate. At the time of writing, the XGBoost holdout validation (on 5.17M embeddings) shows an 81.1% win rate at the p>0.70 threshold.
Chapter 7 — Integration Guide
This chapter provides working code examples for integrating KAIROS Signal into your trading infrastructure across Python, JavaScript, Go, and webhook-based systems.
7.1 — Python Integration
REST API Client
import requests

BASE = "https://kairossignal.com"

class KairosClient:
    def __init__(self, base_url=BASE):
        self.base = base_url
        self.session = requests.Session()

    def get_latest_ticks(self, symbols=None):
        url = f"{self.base}/api/v1/latest-data"
        params = {'symbols': ','.join(symbols)} if symbols else {}
        return self.session.get(url, params=params).json()

    def get_proof_stats(self):
        return self.session.get(f"{self.base}/api/v1/proof/ledger-stats").json()

    def get_recent_signals(self):
        return self.session.get(f"{self.base}/api/v1/proof/recent-signals").json()

# Usage
client = KairosClient()
stats = client.get_proof_stats()
print(f"Total signals: {stats.get('total_signals')}")
print(f"Win rate (24h): {stats.get('win_rate_24h', 0):.1%}")

signals = client.get_recent_signals()
for s in signals['signals'][:5]:
    chain = '✓' if s['chain_valid'] else '✗'
    print(f"  {s['asset']:12} {s['direction']:5} σ={s['sigma']:.2f} "
          f"hash={s['proof_hash'][:16]}… {chain}")
WebSocket Real-Time Feed
import websocket, zlib, json, threading

class KairosWebSocket:
    def __init__(self, symbols=None, on_signal=None):
        self.symbols = symbols or ['BTCUSDT', 'ETHUSDT']
        self.on_signal = on_signal or self._default
        self.ws = None

    def _default(self, signal):
        print(f"[SIGNAL] {signal.get('asset')} {signal.get('direction')} "
              f"conf={signal.get('confidence', 0):.3f}")

    def _on_message(self, ws, message):
        try:
            data = json.loads(zlib.decompress(message))
            if data.get('type') == 'SIGNAL':
                self.on_signal(data)
        except (zlib.error, json.JSONDecodeError, UnicodeDecodeError):
            pass  # ignore frames that are not zlib-compressed signal JSON

    def connect(self):
        sym_param = ','.join(self.symbols)
        url = f"ws://kairossignal.com:8090/ws?symbols={sym_param}"
        self.ws = websocket.WebSocketApp(url, on_message=self._on_message)
        threading.Thread(target=self.ws.run_forever, daemon=True).start()
        return self

# Usage
def handler(s):
    print(f"🚨 {s['asset']} {s['direction']} @ ${s['price']:,.2f}")

ws = KairosWebSocket(symbols=['BTCUSDT', 'ETHUSDT', 'SOLUSDT'], on_signal=handler)
ws.connect()
7.2 — JavaScript / Node.js Integration
REST Client (Browser / Node)
async function getRecentSignals() {
  const res = await fetch('https://kairossignal.com/api/v1/proof/recent-signals');
  const data = await res.json();
  console.log(`Total signals: ${data.count}`);
  data.signals.forEach(s => {
    console.log(`${s.asset.padEnd(12)} ${s.direction} σ=${s.sigma.toFixed(2)} ` +
                `hash=${s.proof_hash.substring(0, 16)}… ` +
                `chain=${s.chain_valid ? '✓' : '✗'}`);
  });
  return data;
}

getRecentSignals();
WebSocket Client (Node.js)
const WebSocket = require('ws');
const zlib = require('zlib');

function connect() {
  const ws = new WebSocket('ws://kairossignal.com:8090/ws?symbols=BTCUSDT,ETHUSDT');
  ws.on('message', (data) => {
    try {
      const decompressed = zlib.inflateSync(data);
      const signal = JSON.parse(decompressed.toString());
      if (signal.type === 'SIGNAL') {
        console.log(`🚨 ${signal.asset} ${signal.direction} ` +
                    `@ $${signal.price.toLocaleString()}`);
      }
    } catch (e) { /* binary tick data */ }
  });
  ws.on('open', () => console.log('Connected'));
  // A closed socket cannot reconnect itself; create a fresh one after 5s.
  ws.on('close', () => setTimeout(connect, 5000));
}

connect();
7.3 — Go Integration
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type Signal struct {
	Asset      string  `json:"asset"`
	Direction  string  `json:"direction"`
	Sigma      float64 `json:"sigma"`
	ProofHash  string  `json:"proof_hash"`
	ChainValid bool    `json:"chain_valid"`
}

type ProofResponse struct {
	Count   int      `json:"count"`
	Signals []Signal `json:"signals"`
}

func main() {
	resp, err := http.Get("https://kairossignal.com/api/v1/proof/recent-signals")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	var data ProofResponse
	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
		fmt.Println("decode failed:", err)
		return
	}

	// Print at most five signals; slicing blindly would panic on short responses.
	n := len(data.Signals)
	if n > 5 {
		n = 5
	}
	for _, s := range data.Signals[:n] {
		fmt.Printf("%-12s %s σ=%.2f hash=%s…\n",
			s.Asset, s.Direction, s.Sigma, s.ProofHash[:16])
	}
}
7.4 — Webhook / Polling Integration
import requests, time

POLL_INTERVAL = 60  # seconds; matches the 60s server-side cache TTL
SEEN = set()        # proof hashes already processed (unbounded; prune in production)

while True:
    try:
        data = requests.get(
            "https://kairossignal.com/api/v1/proof/recent-signals", timeout=10
        ).json()
        for s in data.get('signals', []):
            h = s['proof_hash']
            if h not in SEEN:
                SEEN.add(h)
                print(f"NEW: {s['asset']} {s['direction']} σ={s['sigma']:.2f}")
                # Forward to your webhook:
                # requests.post("https://your-server.com/webhook", json=s)
    except Exception as e:
        print(f"Error: {e}")
    time.sleep(POLL_INTERVAL)
7.5 — Best Practices for Integration
- Always verify the proof chain before acting on signals. Recompute hashes client-side.
- Implement reconnection logic for WebSocket clients. The server may restart during deployments.
- Respect the Lyapunov expiry. If the signal's chaos horizon has passed, the prediction is stale.
- Use the regime field to adjust your position sizing. CRISIS regime signals are rare but powerful.
- Monitor the chain_valid field. If you see false, report it — it may indicate data corruption.
- Cache responses. REST endpoints return cached data (60s TTL). Polling faster than once per minute is wasteful.
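The first practice above, recomputing hashes client-side, can be sketched in Python. The canonical preimage layout (field order, separator, which fields are hashed) is not specified in this chapter, so the serialization below is an assumption for illustration; consult the proof-chain spec for the canonical format.

```python
import hashlib, json

def recompute_hash(signal: dict, prev_hash: str) -> str:
    """Recompute a signal's chain hash: SHA-256(prev_hash + canonical payload).
    NOTE: the sorted-JSON serialization here is illustrative, not the
    platform's documented preimage format."""
    payload = json.dumps(
        {k: signal[k] for k in sorted(signal) if k not in ('proof_hash', 'chain_valid')},
        sort_keys=True, separators=(',', ':')
    )
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def verify_chain(signals: list) -> bool:
    """Walk signals oldest-first; each entry's proof_hash must match the
    recomputed hash over its payload and the previous hash."""
    prev = "GENESIS"  # initial prev_hash, per the glossary
    for s in signals:
        if recompute_hash(s, prev) != s['proof_hash']:
            return False
        prev = s['proof_hash']
    return True
```

Any tampering with a graded field (say, flipping sigma after the fact) breaks every subsequent hash in the chain.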
Chapter 8 — DePIN Intelligence
KAIROS has the deepest DePIN (Decentralized Physical Infrastructure Networks) data coverage of any trading platform. The DePIN Sensor queries ClickHouse tables to generate real-time physical health scores for 500+ projects, then uses these to boost or veto trading signals.
8.1 — Project Coverage
| Sector | Projects | Key Metrics |
|---|---|---|
| Wireless | Helium (HNT), Helium Mobile, Helium IoT | Hotspot count, coverage, data transfer |
| Storage | Filecoin (FIL), Arweave (AR), Storj | Storage capacity, deal count, retrieval speed |
| Compute | Render (RNDR), Akash (AKT), Golem (GLM) | GPU/CPU utilization, job queue, FLOP throughput |
| AI | Bittensor (TAO), Fetch.ai (FET), Ocean | Inference requests, model accuracy, subnet activity |
| CDN | Theta (THETA), Livepeer (LPT) | Stream count, bandwidth, transcoding jobs |
| Mapping | Hivemapper, DIMO | Coverage area, data freshness, contributor count |
| Sensor | WeatherXM, IoTeX (IOTX) | Sensor uptime, data quality, geographic coverage |
| Energy | Power Ledger (POWR) | Energy traded, grid capacity |
Additional tokens tracked: IOTA, SC, GRT, ANKR, FLUX, LPT, POL, and 30+ others.
8.2 — Physical Health Scoring
Each project receives a health score from 0.0 to 1.0 based on sector-specific weight topologies:
| Sector | Revenue | Nodes | Uptime | Latency | Sentiment |
|---|---|---|---|---|---|
| Compute | 0.40 | 0.30 | 0.30 | — | — |
| Storage | 0.35 | 0.30 | 0.20 | 0.15 | — |
| Wireless | 0.20 | 0.40 | 0.25 | — | 0.15 |
| AI | 0.35 | 0.25 | 0.25 | — | 0.15 |
| CDN | 0.20 | 0.25 | 0.25 | 0.20 | 0.10 |
Score interpretation:
- 0.0–0.3 (BEARISH) — Network degraded. Veto long positions.
- 0.3–0.5 (CAUTION) — Mixed signals. No boost or penalty.
- 0.5–0.7 (NEUTRAL) — No strong physical signal.
- 0.7–1.0 (BULLISH) — Network healthy. Confirm long positions.
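A minimal sketch of the sector-weighted scoring, assuming each metric input has already been normalized to [0, 1] (the normalization step, and the remaining sectors' weights, are omitted for brevity):

```python
# Sector weight topologies from the table above (three sectors shown).
WEIGHTS = {
    'compute':  {'revenue': 0.40, 'nodes': 0.30, 'uptime': 0.30},
    'storage':  {'revenue': 0.35, 'nodes': 0.30, 'uptime': 0.20, 'latency': 0.15},
    'wireless': {'revenue': 0.20, 'nodes': 0.40, 'uptime': 0.25, 'sentiment': 0.15},
}

def health_score(sector: str, metrics: dict) -> float:
    """Weighted sum of normalized metrics; missing metrics contribute 0."""
    w = WEIGHTS[sector]
    return sum(w[m] * metrics.get(m, 0.0) for m in w)

def interpret(score: float) -> str:
    """Map a 0.0-1.0 health score to the bands defined above."""
    if score < 0.3:
        return 'BEARISH'
    if score < 0.5:
        return 'CAUTION'
    if score < 0.7:
        return 'NEUTRAL'
    return 'BULLISH'
```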
8.3 — Reality Gap Scanner
The Multi-Timeframe Reality Gap Scanner compares current physical network metrics against 24-hour and 7-day baselines to detect divergences between physical reality and token price.
When divergence confidence exceeds 60%, it applies a ±15% confidence modifier. For structural 7-day gaps with >75% confidence, it lowers the final gate by 10% — giving KAIROS "permission" to buy when infrastructure grows but the token hasn't caught up.
Example: Render Network's node count increases 20% over 7 days while RNDRUSDT price drops 8%. The Reality Gap Scanner detects this divergence, boosts signal confidence by 15%, and lowers the final gate threshold by 10%.
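The adjustment logic can be sketched as follows. The divergence-confidence computation itself is internal to the scanner, so it is taken as an input here, and the sign convention for the ±15% modifier is an assumption:

```python
def reality_gap_adjust(confidence: float, gate: float,
                       node_growth_7d: float, price_change_7d: float,
                       divergence_conf: float):
    """Sketch of the Reality Gap modifiers described above.
    Assumption: infrastructure growing while price lags earns the +15%
    boost; the reverse divergence earns the -15% penalty."""
    if divergence_conf > 0.60:
        sign = 1 if (node_growth_7d > 0 and price_change_7d < 0) else -1
        confidence *= 1 + sign * 0.15
    if divergence_conf > 0.75:
        gate *= 0.90  # structural 7-day gap: lower the final gate by 10%
    return confidence, gate
```

With the Render example above (nodes +20%, price -8%, 80% divergence confidence), a 0.80-confidence signal becomes 0.92 against a gate lowered from 0.70 to 0.63.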
Enrichment layers:
- Deep Intel — 7-day growth rates, churn rates, utilization, capacity, geographic coverage
- AI Intel — LLM-generated conviction scores, competitive moat assessment, sustainability ratings
- Dev Activity — GitHub commit frequency, PR velocity, contributor counts, TVL data
Chapter 9 — Risk Management & Governance
9.1 — Capital Governor
The Capital Governor manages risk at the portfolio level:
- Position Sizing — 2% risk per trade: position_size = (capital × risk%) / stop_distance%
- Ultra-High Conviction Boost — Signals with confidence ≥ 0.97 get position size multiplied by the ultra-confidence multiplier
- Daily Trade Cap — Maximum 5 trades per day prevents overtrading
- Symbol Cooldown — No repeat signals on the same asset within 15 minutes
- Dedup Window — Prevents duplicate signals during restart overlaps
- Minimum Notional — Positions below minimum dollar value are vetoed post-slippage
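The sizing rule and ultra-conviction boost can be sketched as below. The multiplier value (2.0) is a placeholder, since the manual does not state the actual figure:

```python
def position_size(capital: float, risk_pct: float, stop_distance_pct: float,
                  confidence: float, ultra_multiplier: float = 2.0) -> float:
    """position_size = (capital * risk%) / stop_distance%, with the
    ultra-confidence boost applied at confidence >= 0.97.
    ultra_multiplier=2.0 is an illustrative placeholder."""
    size = capital * risk_pct / stop_distance_pct
    if confidence >= 0.97:
        size *= ultra_multiplier
    return size
```

For example, $100,000 capital at 2% risk with a 5% stop distance yields a $40,000 position, doubled to $80,000 at ultra-high conviction (under the placeholder multiplier).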
9.2 — Signal Gates (Full Veto Chain)
The complete veto chain in execution order:
- Sieve Filter — Data quality, price sanity, symbol normalization
- Buffer Minimum — At least 50 ticks in the price buffer
- Allowed Assets — Only 80+ pre-approved symbols
- Fisher-Rao Veto — Information geometry constraint
- Plasma Repulsion — Electromagnetic wall detected (Lorentz force)
- Probability Gate — Regime-specific Monte Carlo threshold
- Consent Calculus — Minimum agreeing layers (k) based on volatility
- Knapsack / Opportunity Cost — Expected value must exceed costs
- DePIN Physical Reality — Network health veto for DePIN tokens
- Meta-Adaptation Veto — Edge decay has not exceeded threshold
- Score Entropy Check — Low-differentiation signals penalized 30%
- Dead Hour Veto — UTC hours 00, 03, 20, 21, 23 blocked (0% historical WR)
- Asset Blocklist — AVAX, BNB, LTC, TRX permanently blocked (0% WR)
- Direction Consensus — BUY needs 4/4 agreement; SELL needs 2/3
- Active Layer Count — ≥ 5 engines must register scores > 0.1
- Average Score — Mean layer score must exceed 0.55
- Stochastic Win Prob ≥ 70% — The Monte Carlo probability gate
- Causal Hard Veto — Market too efficient for alpha (causal_boost < 1.0)
- Regime Confidence Floor — e.g., 0.55 in BULL_TRENDING, 0.95 in CRISIS
- Daily Trade Cap — Maximum 5 per day
- 15-Minute Cooldown — Per-symbol rate limit
- Slippage Gate — Position too small after slippage adjustment
- Humanities Cortex — Capitulation imminent → veto BUY
- XGBoost ML Veto — p < 0.70 → killed
- Macro Calendar Blackout — No trading during FOMC, NFP, CPI events
9.3 — Macro Event Blackouts
KAIROS automatically suspends all signal generation during high-impact macroeconomic events:
| Event | Blackout Window | Tier |
|---|---|---|
| FOMC Decision | ±3 hours | NUCLEAR |
| Non-Farm Payrolls (NFP) | ±2 hours | NUCLEAR |
| CPI Release | ±2 hours | NUCLEAR |
| ECB Decision | ±2 hours | NUCLEAR |
| GDP Report | ±1 hour | HIGH |
| Unemployment Claims | ±1 hour | MEDIUM |
The macro calendar is updated automatically. During blackout windows, no signals are generated regardless of conviction level.
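A client-side blackout check against the table above might look like this; the event-name keys and the (name, time) calendar format are illustrative, not the platform's actual schema:

```python
from datetime import datetime, timedelta

# Blackout half-windows in hours, from the table above.
BLACKOUTS = {'FOMC': 3, 'NFP': 2, 'CPI': 2, 'ECB': 2, 'GDP': 1, 'CLAIMS': 1}

def in_blackout(now: datetime, events: list) -> bool:
    """events: list of (event_name, event_time) tuples from a macro calendar.
    Returns True when `now` falls inside any event's blackout window."""
    for name, t in events:
        window = timedelta(hours=BLACKOUTS.get(name, 0))
        if abs(now - t) <= window:
            return True
    return False
```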
Chapter 10 — ClickHouse Schema Reference
10.1 — Core Tables
| Table | Engine | Retention | Description |
|---|---|---|---|
| market_ticks | MergeTree | 30 days | Raw tick data from all exchanges |
| candles_1m | AggregatingMergeTree | 90 days | 1-minute OHLCV candles |
| candles_5m | AggregatingMergeTree | 90 days | 5-minute OHLCV candles |
| candles_1h | AggregatingMergeTree | 365 days | 1-hour OHLCV candles |
| signals_queue | MergeTree | 30 days | Pending signals for trader |
| signals | MergeTree | 30 days | Signal log for METIS analytics |
| signal_proof_ledger | MergeTree | Indefinite | Immutable SHA-256 proof chain |
| trade_log | MergeTree | Indefinite | Executed trade history |
| open_positions | ReplacingMergeTree | N/A | Current open positions |
| trading_journal | MergeTree | Indefinite | Decision audit trail |
| depin_stats | MergeTree | 90 days | DePIN project metrics |
| depin_deep_intel | MergeTree | 90 days | 7-day growth, churn, utilization |
| depin_health_scores | MergeTree | 90 days | Physical health scores |
| sensor_readings | MergeTree | 30 days | 100k+ environmental sensors |
| solana_blocks | MergeTree | 30 days | Solana block velocity |
| solana_whale_activity | MergeTree | 30 days | Large wallet movements |
| regime_live | ReplacingMergeTree | N/A | Current market regime |
| autopsy_daily | ReplacingMergeTree | Indefinite | Daily signal grading rollup |
| brain_embeddings | MergeTree | 90 days | ML state embeddings |
| training_embeddings | MergeTree | Indefinite | 5.17M backfill embeddings |
10.2 — Example Queries
Recent Signals with Win Rates
SELECT
asset,
direction,
sigma,
grade_result,
grade_return_24h
FROM signal_proof_ledger
WHERE graded = true
ORDER BY timestamp DESC
LIMIT 50
Daily Win Rate Summary
SELECT
toDate(timestamp) AS day,
count() AS total,
countIf(grade_result = 'WIN') AS wins,
round(wins / total * 100, 1) AS win_rate_pct,
round(avg(sigma), 2) AS avg_sigma
FROM signal_proof_ledger
WHERE graded = true
GROUP BY day
ORDER BY day DESC
LIMIT 30
DePIN Health Snapshot
SELECT
project_name,
active_nodes,
total_nodes,
revenue_24h,
round(earnings_per_node, 4) AS epn,
uptime_pct
FROM depin_stats
WHERE timestamp > now() - INTERVAL 1 HOUR
ORDER BY revenue_24h DESC
LIMIT 20
Exchange Tick Throughput
SELECT
source,
count() AS ticks,
round(count() / 3600, 1) AS tps,
min(price) AS min_price,
max(price) AS max_price
FROM market_ticks
WHERE timestamp > now() - INTERVAL 1 HOUR
GROUP BY source
ORDER BY ticks DESC
Chapter 11 — Troubleshooting
No Signals Being Generated
- Check Hunter V7 is running: systemctl status hunter-v4
- Check market data flow: journalctl -u dataslut -n 20
- Check dead hours: UTC hours 00, 03, 20, 21, 23 produce no signals by design
- Check regime: RANGING_CHOP or CRISIS regimes have near-impossible gate thresholds
- Check macro blackout: FOMC/NFP/CPI events suspend all signal generation
- Check daily trade cap: Maximum 5 trades per day
Proof Chain Shows "BROKEN"
- Check for service restarts: The genesis hash resets on first signal after a cold start
- Check timestamp ordering: ClickHouse MergeTree may reorder during merges
- Wait for background merge: run OPTIMIZE TABLE signal_proof_ledger FINAL
WebSocket Connection Drops
- Implement auto-reconnect with exponential backoff (5s → 10s → 20s → 60s max)
- Check Nginx logs for 502 errors: tail -f /var/log/nginx/error.log
- Verify the API gateway is running: systemctl status metis-api
ClickHouse OOM / Slow Queries
- Check table sizes: SELECT table, formatReadableSize(total_bytes) FROM system.tables WHERE database = 'default' ORDER BY total_bytes DESC
- Run TTL cleanup: OPTIMIZE TABLE market_ticks FINAL
- Check active queries: SELECT * FROM system.processes
- Kill long queries: KILL QUERY WHERE query_id = '...'
XGBoost Model Not Loading
- Check the model file exists: ls -la /root/kairos-signal-main/ml/models/xgb_kernel_latest.json
- Check file permissions and file size (should be ~10-50 MB)
- Check the XGBoost Python package: pip3 show xgboost
- Check Hunter V7 logs: journalctl -u hunter-v4 | grep "XGB"
DePIN Sensor Returns Stale Data
- Check the DePIN collector services are running: systemctl status deep-depin-intel depin-deep-intel-v2
- Query ClickHouse directly: SELECT max(timestamp) FROM depin_stats
- The sensor has a 5-minute cache TTL — data older than 30 minutes triggers a warning
Glossary
| Term | Definition |
|---|---|
| ATR | Average True Range — volatility measure used for dynamic stop/target placement |
| Bivector Energy | Product of Clifford algebra components measuring directional force in the 64D multivector space |
| Causal Boost | Multiplier from Granger causality analysis — values ≥1.0 indicate exploitable market inefficiency |
| Combined Score | Weighted composite of all active math layer outputs |
| Consent Calculus | Minimum number of agreeing layers required, dynamically calculated from volatility |
| DAG | Directed Acyclic Graph — the multi-phase computation pipeline |
| Dataslut | Custom Go-based HFT engine for market data ingestion |
| Dead Hours | UTC hours with 0% historical win rate (00, 03, 20, 21, 23) |
| DePIN | Decentralized Physical Infrastructure Networks |
| Do-Calculus | Pearl's causal inference framework — identifies spurious vs structural signals |
| Fisher-Rao Veto | Information geometry constraint that vetoes signals in curved statistical manifolds |
| GENESIS | The initial prev_hash value for the first entry in the proof chain |
| Hunter V7 | The core prediction engine — multi-layer DAG of mathematical engines |
| Knapsack Gate | Opportunity cost evaluator — rejects trades where expected value doesn't exceed costs |
| KSIG | 17-Dimensional Co-Evolution state space |
| Lyapunov Exponent | Measure of prediction horizon — how long until the trajectory becomes chaotic |
| METIS | The sovereign AI agent that manages dashboards, briefings, and system health |
| Metamorphic Weighter | Deep combinatorial router that dynamically weights math layers by regime and performance |
| Monte Carlo | 1,000 simulation paths to estimate win probability |
| Multivector | 64-dimensional Clifford algebra element representing the quantum market state |
| Plasma Repulsion | Lorentz force magnitude from the electromagnetic field model — high values indicate limit order walls |
| Proof Hash | SHA-256 hash of signal data + previous hash, creating the immutable chain |
| Reality Gap | Divergence between a DePIN network's physical health and its token price |
| Regime | Market state classification (BULL_TRENDING, BEAR_TRENDING, RANGING_CHOP, HIGH_VOLATILITY, CRISIS, UNKNOWN) |
| Schrödinger Field | Wave function model that predicts price as a probability distribution |
| Sigma (σ) | Signal strength in standard deviations from mean noise level |
| Sieve | First DAG layer — data quality filter and symbol normalization |
| Slippage Gate | Order book impact estimation that reduces position size or vetoes thin-market trades |
| Soros Reflexivity | George Soros's theory of self-reinforcing price-narrative loops — detected by Humanities Cortex |
| State 128D | 128-dimensional market state embedding stored with each signal for ML training and auditability |
| Swarm Intelligence | Five sub-agents (momentum, reversal, breakout, volume, trend) that vote on direction |
| Synapse | Hebbian learning layer — updates synaptic weights based on real P&L feedback |
| TDA / Betti Voids | Topological Data Analysis using Betti numbers to find liquidity voids in the order book |
| Veto Chain | The 25-step sequence of hard gates a signal must survive before broadcast |
| XGBoost Veto | Gradient-boosted classifier trained on 5.17M embeddings — final ML gate requiring p≥0.70 |
Chapter 14 — Legal & Compliance
14.1 — Risk Disclaimer
KAIROS Signal is a data analytics and signal generation platform. It is NOT a financial advisor, broker-dealer, or investment manager. All signals are generated by algorithmic models and should be treated as informational data points, not investment recommendations.
14.2 — Data Accuracy
While KAIROS employs extensive data quality measures (Sieve filter, data integrity vetoes, multi-source validation), no data pipeline is 100% error-free. Users should always cross-reference signals with their own analysis before making trading decisions.
14.3 — Proof Chain Guarantees
The SHA-256 proof chain guarantees that signals were generated at the stated time and have not been retroactively modified. It does NOT guarantee that the signals will be profitable. The chain proves when a prediction was made and what the model predicted, not that the prediction was correct.
14.4 — Regulatory Status
KAIROS Signal operates as a data service provider. Users are responsible for ensuring their use of the platform complies with their local securities and financial regulations. KAIROS does not execute trades on behalf of users — it provides signals that users may choose to act upon through their own brokerage accounts.
14.5 — Privacy & Data Handling
- No personal trading data is collected from API users
- Waitlist submissions (name, email, company) are stored in ClickHouse and used only for communications
- All market data is sourced from public exchanges and APIs
- Physical sensor data is sourced from open-source, public-access networks
- The proof ledger is publicly queryable by design — signal data is not confidential
Appendix A — Deep Dive: The Prediction Engine Architecture
A.1 — Understanding the DAG Computation Model
The Directed Acyclic Graph (DAG) architecture at the heart of KAIROS is fundamentally different from traditional trading systems. Most quantitative trading platforms process market data through a linear pipeline: data enters, indicators are calculated, a model makes a prediction, and a trade is executed. This approach has served the industry for decades but suffers from a critical flaw — it collapses all market information into a single low-dimensional representation before making a decision.
KAIROS's DAG architecture preserves the high-dimensional structure of market information throughout the entire computation. Each layer in the DAG is a specialized mathematical engine that processes the market state from a unique perspective, and the outputs of all layers are synthesized only at the final decision point. This means the system can capture cross-domain correlations that no single model could detect.
Consider a practical example: Bitcoin's price drops 3% in an hour. A traditional system might see this as a sell signal. But KAIROS simultaneously observes that Render Network's GPU utilization jumped 15% (DePIN Sensor), the Lyapunov exponent indicates this is a mean-reverting trajectory rather than a trend change (Chaos Engine), and the Nash Equilibrium Solver detects institutional accumulation patterns in the order book. The DAG preserves all of these independent observations and passes them to the Metamorphic Weighter, which may combine them into a high-conviction BUY signal that no linear system would have generated.
This is the core philosophy of KAIROS: the market is too complex for any single analytical framework. By running 12+ independent mathematical engines simultaneously and requiring supermajority consensus before acting, the system achieves a robustness that far exceeds the sum of its parts. Each engine acts as a check on every other engine, preventing the kind of catastrophic single-model failures that have plagued algorithmic trading throughout its history.
Information Flow Through the DAG
The DAG processes data in three major phases:
Phase 1: Primary Analysis (Layers 1-6)
The first six layers each consume raw market data independently. The Sieve Filter normalizes and validates the data, removing corrupted ticks, normalizing symbol names, and calculating basic statistics like returns and volatility. The Quantum Architect evolves the 64-dimensional Clifford multivector to predict price as a probability distribution rather than a point estimate. The Nash Equilibrium Solver models strategic interactions between three types of market participants — institutional investors, retail traders, and market makers — to detect predatory patterns. The Stochastic Engine runs 1,000 Monte Carlo simulations to estimate the raw win probability of a trade. The Harmonic Substrate decomposes price into Fourier components to identify reinforcing market cycles. The Swarm Intelligence deploys five specialized sub-agents (momentum, reversal, breakout, volume, trend) that each independently vote on market direction.
Each layer produces its own confidence score and directional vote. Critically, these layers do not communicate with each other during computation — they see only the raw market state and their own internal models. This independence ensures that their votes are not contaminated by each other's biases. When you see a signal with high consensus (5/5 layers agreeing), you can be confident that five genuinely independent analytical frameworks have reached the same conclusion.
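As a toy illustration of this independence, the sketch below aggregates per-layer (direction, confidence) votes and requires a minimum number of agreeing layers. The threshold and averaging here are simplified stand-ins; in the real system the required agreement count is set dynamically by the Consent Calculus:

```python
def consensus(votes, min_agree=4):
    """votes: list of (direction, confidence) pairs, one per independent layer.
    Returns (direction, mean confidence of agreeing layers) when at least
    `min_agree` layers vote the same way, else None."""
    for direction in ('BUY', 'SELL'):
        agreeing = [c for d, c in votes if d == direction]
        if len(agreeing) >= min_agree:
            return direction, sum(agreeing) / len(agreeing)
    return None
```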
Phase 2: Cross-Domain Synthesis (Layers 7-12)
The intermediate layers operate on the outputs of Phase 1 layers, finding patterns that emerge only when multiple perspectives are combined. The Co-Evolution Framework (KSIG) tracks 17-dimensional state trajectories that capture how the relationships between layers evolve over time. For example, the KSIG layer might detect that when the Quantum volume and Nash payoff both increase simultaneously for three consecutive ticks, a breakout follows within 15 minutes — a pattern that neither layer would detect in isolation. The Causal Inference engine uses Pearl's Do-Calculus to distinguish correlation from causation in the layer outputs. This is crucial because many apparent patterns in financial data are spurious correlations rather than exploitable causal relationships. The Meta-Adaptation layer monitors the system's overall edge — tracking whether the statistical advantage detected by the other layers is stable, growing, or decaying. When the detected edge is below a critical threshold, this layer vetoes all signals regardless of individual layer confidence.
The most important layer in Phase 2 is the Topology Void layer (TDA). This layer uses Topological Data Analysis — specifically Betti number computation — to find "voids" in the market's liquidity landscape. Imagine the order book as a 3D terrain: mountains of limit orders at key support/resistance levels, valleys where liquidity is thin. The TDA layer maps this terrain and identifies tunnels and voids — price levels where there is virtually no resistance. When the TDA layer detects a liquidity void in the direction of the predicted move, it significantly boosts the signal confidence, because price tends to move rapidly through areas of thin liquidity.
Phase 3: Execution Decision (Layers 13+)
The final phase synthesizes all information into an actionable signal. The Metamorphic Weighter applies regime-specific weights to combine all layer outputs into a single confidence score. The weight topology is not fixed — it shifts based on the current market regime and the recent performance of each layer (via the Hebbian synaptic feedback loop). The Humanities Cortex applies behavioral finance models to detect sentiment extremes. The Electromagnetic Field model checks for Lorentz force barriers created by limit order walls. And the XGBoost ML Veto makes the final kill/approve decision based on a gradient-boosted classifier trained on 5.17 million historical state embeddings.
A.2 — The Schrödinger Field in Practice
The Quantum Field layer is perhaps the most unconventional component of the KAIROS system, so it deserves deeper explanation. Traditional technical analysis treats price as a single number — the "last traded price" — and builds indicators on top of this one-dimensional series. This is a massive simplification of reality. In truth, at any given moment, there are thousands of limit orders at different price levels creating a complex probability landscape for where the price might go next.
KAIROS models this reality by treating price as a quantum wave function ψ(x,t) in 64-dimensional Clifford algebra space. The term "quantum" here is used in the mathematical sense — KAIROS does not claim to use quantum computers. Instead, it applies the mathematical framework of quantum mechanics (wave functions, probability distributions, Hamiltonians) to model the inherent uncertainty of financial markets. The squared magnitude |ψ(x)|² at any price level x gives the probability density of the price being at that level.
The wave function evolves according to a stochastic differential equation (SDE) that combines two primary forces:
- Mean reversion (α) — Prices tend to revert to their moving average. For crypto assets, α=0.30 (strong reversion); for traditional assets like gold, α=0.15 (weaker reversion). This parameter determines how quickly the wave function collapses back toward equilibrium. Higher alpha means the system expects faster mean reversion, which makes range-bound trading strategies more viable.
- Diffusion (σ) — Price can randomly deviate from the mean. For crypto, σ=0.25 (high volatility); for traditional assets, σ=0.08 (lower volatility). This parameter determines the width of the probability distribution. Higher sigma means wider distributions and more uncertainty, which usually makes the system more cautious about generating signals.
The SDE is evolved forward for 50 time steps using the Euler-Maruyama method. At each step, a random perturbation is added (sampling from a standard normal distribution), scaled by the diffusion parameter. This is repeated multiple times to create an ensemble of possible trajectories, from which the final probability distribution is constructed.
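The evolution described above can be sketched with a standard mean-reverting SDE under Euler-Maruyama. The exact drift and diffusion terms KAIROS uses are not published; this uses the documented crypto parameters (α = 0.30, σ = 0.25), 50 time steps, and an illustrative ensemble size:

```python
import random

def evolve_ensemble(price, mean, alpha=0.30, sigma=0.25,
                    steps=50, dt=1/50, n_paths=500, seed=42):
    """Euler-Maruyama evolution of a mean-reverting SDE:
        dX = alpha*(mean - X)*dt + sigma*X*sqrt(dt)*Z,   Z ~ N(0, 1)
    Returns the terminal prices of all paths. The drift/diffusion form
    and n_paths are illustrative assumptions, not the platform's spec."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        x = price
        for _ in range(steps):
            z = rng.gauss(0, 1)  # random perturbation, standard normal
            x += alpha * (mean - x) * dt + sigma * x * (dt ** 0.5) * z
        finals.append(x)
    return finals
```

From `finals`, the expected price is the ensemble mean, the uncertainty Δx is the spread of the distribution, and the quantum volume is the fraction of terminal prices above (or below) the current price.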
From this distribution, the system extracts several key observables that feed into the signal synthesis:
- The expected price — the mean of the distribution, representing the model's best estimate of where price is heading
- The uncertainty Δx — the width of the 2σ confidence interval, indicating how much the model trusts its own prediction
- The Hamiltonian energy ⟨H⟩ — how far the current price is from the ground state of the system. High energy means the price is in an unstable configuration and is likely to move significantly. Low energy means the price is near equilibrium and a big move is unlikely.
- The momentum ⟨p⟩ — the directional force derived from bivector products in the 64D Clifford algebra space. This represents the "mass-times-velocity" of the price movement.
- The quantum volume — the integral of the probability density on the bullish or bearish side of the current price. If 70% of the probability mass is above the current price, the quantum volume is 0.70 bullish. This is the layer's primary directional vote.
Why go to all this trouble? Because the wave function captures information that no single indicator can. A tight, high-energy distribution means the price is coiled like a spring — a breakout is imminent, and the quantum volume tells you which direction. A wide, low-energy distribution means the market is in equilibrium and no trade should be taken. A bimodal distribution (two peaks) suggests a binary outcome is coming — like a major announcement — and the system should step aside entirely.
A.3 — Nash Equilibrium and Predatory Detection
Financial markets are not physics experiments — they are populated by intelligent agents with competing interests. The Nash Equilibrium Solver models this reality by treating the market as a multiplayer game with three types of players:
- Institutional investors — Large players who move slowly but have enormous capital. They create lasting trends but are constrained by execution impact — a fund trying to buy $100 million of Bitcoin can't do so without moving the price. This constraint creates predictable execution patterns that KAIROS can detect.
- Retail traders — Small players who react quickly to news and technical levels. They create short-term noise and are the primary victims of liquidity traps. Their collective behavior is predictable through sentiment analysis and behavioral models.
- Market makers — Players who profit from the bid-ask spread and have privileged information about order flow. They create the liquidity landscape that constrains other players. Their behavior is the most predictable because it is driven by the simple imperative of managing inventory risk.
The Nash Equilibrium Solver calculates the payoff matrix for each player type given the current market state — considering current price, recent volume, order book depth, recent large trades, and sentiment indicators. It then finds the Nash equilibrium — the set of strategies where no player can profitably deviate. When the market deviates significantly from this equilibrium, there is a predictable reversion as players rationally adjust their strategies to capture the mispricing.
The most powerful feature of this layer is predatory strategy detection. When the solver detects a pattern consistent with a SQUEEZE (a player is deliberately creating a short squeeze by absorbing all selling pressure at key levels, then pulling their bids to force a rapid price increase) or a DUMP (a player is distributing a large position by creating artificial buying interest, then rapidly selling into the generated demand), it generates a high-conviction signal.
The Nash Combinatorial signal (confidence ≥ 85% + classic liquidity trap pattern) is one of the few signal types that can bypass the Monte Carlo probability gate entirely. This is by design: predatory moves are game-theory events, not statistical events, so statistical gates are inappropriate. A liquidity trap creates a binary outcome — either the trap succeeds (generating 5-15% moves in minutes) or it fails (generating minimal loss as the trapped player is forced to unwind). The asymmetric payoff means that even a 50% success rate is highly profitable.
A.4 — The Stochastic Engine and Win Probability
The Monte Carlo component of the KAIROS system answers a deceptively simple question: "If I enter this trade right now, what is the probability that it will be profitable?"
To answer this, the Stochastic Engine runs 1,000 independent simulations of the price path from the current price. Each simulation uses the calibrated drift (based on recent momentum) and volatility parameters (based on recent ATR), plus a random perturbation drawn from a standard normal distribution. The simulation evolves the price tick by tick for a prediction horizon determined by the Lyapunov exponent.
At the end of each simulation, the engine checks whether the simulated price hit the take-profit level before hitting the stop-loss level. The fraction of simulations that hit the target first is the raw win probability. This is a mathematically rigorous way to estimate the probability of success given the current market conditions, and it naturally accounts for volatility, momentum, and the asymmetry between stop and target distances.
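A minimal version of this raw win-probability estimate: simulate per-tick returns with calibrated drift and volatility, and count the fraction of paths that touch the target before the stop. The horizon value and the convention that paths resolving neither level count as losses are assumptions for illustration:

```python
import random

def raw_win_probability(price, stop, target, drift, vol,
                        horizon=100, n_sims=1000, seed=7):
    """Fraction of simulated paths (long trade: target above, stop below)
    that hit `target` before `stop`. Per-tick return = drift + vol*Z with
    Z ~ N(0, 1). Horizon and unresolved-path handling are illustrative;
    in KAIROS the horizon comes from the Lyapunov exponent."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        p = price
        for _ in range(horizon):
            p *= 1 + drift + vol * rng.gauss(0, 1)
            if p >= target:
                wins += 1
                break
            if p <= stop:
                break
    return wins / n_sims
```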
This probability is then adjusted for several real-world factors:
- Slippage — The Slippage Model estimates the actual fill price based on order book depth and position size, then adjusts the distances to stop and target accordingly. A large position in a thin market gets a significant probability penalty.
- Drift bias — If the recent price trend is strong, the drift parameter is increased, biasing the simulations in the trend direction. This makes trend-following signals more likely to pass and counter-trend signals less likely.
- Time decay — Longer prediction horizons increase the probability of random noise overwhelming the signal. The adjustment follows a square-root-of-time scaling, consistent with geometric Brownian motion.
The final win probability must exceed the regime-specific gate to proceed. In a BULL_TRENDING regime, the gate is a relatively permissive 0.45 — the system gives the benefit of the doubt to trades in the trend direction. In a CRISIS regime, the gate is 0.70 — the Monte Carlo engine must show that 70% of simulated paths hit the target before the stop. This is an extraordinarily high bar and ensures that CRISIS-regime signals represent only the most extreme statistical edges.
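The simulate-then-gate logic above can be sketched in Python. Everything here is illustrative: the function names, the per-step return model, and the default gate are assumptions; only the 1,000-path count and the 0.45/0.70 regime gates come from the text.

```python
import numpy as np

# Regime gates quoted in the text; entries for other regimes are assumptions.
REGIME_GATES = {"BULL_TRENDING": 0.45, "CRISIS": 0.70}

def win_probability(price, target, stop, drift, vol, n_steps, n_sims=1000, seed=42):
    """Fraction of simulated paths that touch the take-profit before the stop."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_sims):
        p = price
        for _ in range(n_steps):
            # Calibrated drift plus a standard-normal perturbation, per the text.
            p *= 1.0 + drift + vol * rng.standard_normal()
            if p >= target:
                wins += 1
                break
            if p <= stop:
                break
    return wins / n_sims

def passes_gate(prob, regime):
    """Compare the adjusted probability against the regime-specific gate."""
    return prob >= REGIME_GATES.get(regime, 0.60)  # 0.60 default is an assumption

prob = win_probability(100.0, 102.0, 99.0, drift=0.0005, vol=0.004, n_steps=240)
signal_passes = passes_gate(prob, "BULL_TRENDING")
```

Note how the asymmetry between stop and target distances falls out naturally: a closer stop is simply hit first in more paths, with no explicit correction term needed.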
A.5 — Harmonic Analysis and Market Cycles
The Harmonic Substrate applies Fast Fourier Transform (FFT) to the price series to decompose it into its constituent frequencies. This is the same mathematical technique used in audio processing to separate a complex sound into its component notes — a chord can be decomposed into its individual pitches, and similarly, a complex price series can be decomposed into its underlying cycles.
In market terms, the "notes" are the various cycles that drive price action: the 4-year Bitcoin halving cycle, quarterly earnings cycles, daily volatility patterns (the Asian session tends to be lower volatility than the London-New York overlap), and micro-cycles driven by algorithmic trading programs that operate on fixed time intervals. By decomposing the price into these frequencies, the Harmonic Substrate can identify whether the current price movement is supported by multiple reinforcing cycles (strong signal) or is just noise on a single frequency (weak signal).
The layer calculates a harmonic power score from the dominant frequencies. A high harmonic power score means multiple cycles are aligned — like a "perfect storm" of technical factors all pointing in the same direction. A low score means the price movement is driven by a single, potentially transient, cause. In practice, the highest-confidence KAIROS signals tend to occur when the harmonic power score is high, indicating that the predicted move is supported by cycles operating on multiple timeframes simultaneously.
The FFT also detects the dominant period of the current market cycle, which feeds into the trade duration estimation. If the dominant cycle has a period of 4 hours, a trade with a 2-hour target is reasonable; a trade with a 12-hour target would be fighting the cycle. This information is combined with the Lyapunov exponent to determine the optimal holding period for each signal.
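A minimal sketch of the frequency decomposition, assuming a uniformly sampled price series; the "top three frequencies" definition of harmonic power used here is an illustrative choice, not the production formula:

```python
import numpy as np

def harmonic_profile(prices, dt_minutes=1.0):
    """Decompose a price series into cycles via FFT (illustrative sketch).

    Returns the dominant cycle period (in minutes) and a 'harmonic power'
    score: the share of spectral power held by the top three frequencies.
    """
    x = np.asarray(prices, dtype=float)
    x = x - x.mean()                       # remove the zero-frequency level
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt_minutes)
    spectrum[0] = 0.0                      # ignore any residual DC power
    top = np.argsort(spectrum)[::-1][:3]
    power = spectrum[top].sum() / spectrum.sum()
    dominant_period = 1.0 / freqs[top[0]]  # minutes per cycle
    return dominant_period, power

# A clean 60-minute cycle over 8 hours: the dominant period should be ~60 min.
t = np.arange(480)
prices = 100 + np.sin(2 * np.pi * t / 60)
period, power = harmonic_profile(prices)
```

A pure single-frequency input like this scores near 1.0 on harmonic power; real price series spread power across many bins, which is exactly what the score is designed to penalize.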
A.6 — Swarm Intelligence: Collective Sub-Agent Voting
The Swarm Intelligence layer deploys five specialized sub-agents, each implementing a different trading philosophy. This is inspired by the biological observation that swarms of simple agents (bees, ants, fish) can make collective decisions that are more intelligent than any individual agent. In KAIROS, the "simple agents" are five trading strategies, and their collective vote is more reliable than any individual strategy.
- Momentum Agent — Follows the trend. Buys when price is moving up with increasing volume, sells when it's moving down. This is the classic trend-following strategy that has generated consistent returns since the 1980s. The momentum agent's main weakness is whipsaws — false trend signals that reverse quickly. Its strength is that it captures the bulk of any sustained move.
- Reversal Agent — Bets against extremes. Buys when the price has dropped too far too fast (oversold conditions), sells when it has risen too far too fast. This capitalizes on the statistical tendency of prices to mean-revert after extreme moves. The reversal agent uses momentum indicators (RSI, stochastic oscillators) to detect oversold/overbought conditions. Its weakness is that it can fight strong trends; its strength is precision timing of bounce trades.
- Breakout Agent — Watches for price to break through key support/resistance levels. When a consolidation pattern resolves with a decisive move through a well-established technical level, this agent votes for continuation in the breakout direction. It uses a combination of horizontal support/resistance levels and Bollinger Band breakouts. Its weakness is false breakouts; its strength is capturing the explosive moves that follow genuine breakouts.
- Volume Agent — Focuses exclusively on volume patterns. Rising volume confirms the current trend; divergent volume (price rising but volume falling) suggests weakness and an impending reversal. This agent's vote acts as a reality check on the other agents — it doesn't predict direction so much as validate (or invalidate) the other agents' predictions.
- Trend Agent — Takes a longer-term view using moving average crossovers and higher-timeframe trend analysis. While the momentum agent focuses on the immediate trend (last few minutes), the trend agent considers the broader context (last few hours). This prevents the swarm from over-focusing on short-term noise and missing the forest for the trees.
Each sub-agent independently votes BUY, SELL, or NEUTRAL with a confidence score. The Swarm Intelligence layer then aggregates these votes using a weighted majority rule, where the weights are dynamically adjusted based on each agent's recent performance (via the Hebbian synaptic feedback loop). An agent that has been consistently right in the current market regime gets more weight; an agent that has been consistently wrong gets less weight. This means the swarm automatically adapts to market conditions — in a trending market, the momentum and trend agents dominate; in a ranging market, the reversal and volume agents get more influence.
The consensus requirement varies by direction: BUY signals require 4 out of 4 non-neutral votes to agree, while SELL signals require only 2 out of 3. This asymmetry reflects the empirical observation that sell signals tend to be more reliable than buy signals in volatile crypto markets — it's easier to predict when a parabolic rally will end than when the next one will begin. The asymmetric gate ensures that the system has a natural bias toward caution on long entries while being more willing to generate short signals.
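The weighted aggregation and the asymmetric consensus gate might be sketched as follows. The agent names and confidence handling are assumptions; the 4-vote BUY and 2-vote SELL requirements follow the text, treated here as simple count thresholds over the five agents:

```python
def swarm_vote(votes, weights):
    """Aggregate sub-agent votes with performance-based weights (sketch).

    `votes` maps agent name -> ("BUY" | "SELL" | "NEUTRAL", confidence).
    `weights` is assumed to be maintained elsewhere by the Hebbian loop;
    unknown agents default to weight 1.0.
    """
    score = {"BUY": 0.0, "SELL": 0.0}
    counts = {"BUY": 0, "SELL": 0}
    for agent, (direction, conf) in votes.items():
        if direction == "NEUTRAL":
            continue
        score[direction] += weights.get(agent, 1.0) * conf
        counts[direction] += 1
    # Asymmetric gate from the text: BUY needs 4 agreeing votes, SELL needs 2.
    if counts["BUY"] >= 4 and score["BUY"] > score["SELL"]:
        return "BUY"
    if counts["SELL"] >= 2 and score["SELL"] > score["BUY"]:
        return "SELL"
    return "NEUTRAL"
```

The weighted score breaks ties in contested markets, while the raw vote counts enforce the directional asymmetry regardless of how the weights have drifted.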
A.7 — The Humanities Cortex: Behavioral Finance in Practice
The Humanities Cortex is unique in quantitative trading systems. While every quant fund uses mathematical models, very few incorporate behavioral finance theory as a formal trading layer. The Humanities Cortex implements three influential theories from behavioral economics, each of which explains specific market phenomena that pure mathematical models miss:
George Soros's Reflexivity Theory
Reflexivity theory, articulated in Soros's book "The Alchemy of Finance," states that there exists a two-way feedback loop between market participants' perceptions and the fundamental reality of the market itself. When enough people believe a price is going up and act on that belief by buying, their purchases actually push the price up, which appears to confirm their belief, which causes more buying. This creates self-reinforcing bubbles that can drive prices far beyond any level justified by fundamentals.
The KAIROS implementation detects reflexivity bubbles by measuring the gap between the rate of price change (how fast the price is moving) and the rate of social sentiment change (how fast the narrative is shifting). When both are accelerating in the same direction with a tight correlation, the system detects "positive reflexivity" — a self-reinforcing loop. When this is detected, the system adjusts its behavior: stops are widened by 3x and take-profit targets are extended by 4x, acknowledging that in a reflexivity bubble, normal technical levels are irrelevant because the price is being driven by a narrative feedback loop rather than by fundamentals.
Conversely, when the reflexivity measurement shows divergence (sentiment is turning while price continues), the system detects "reflexivity exhaustion" — the loop is about to break. This is often the signal for a sharp reversal, and the system may generate a counter-trend signal with high conviction.
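A hedged sketch of how a reflexivity detector of this shape could be structured. The correlation threshold, the acceleration test, and the state names are illustrative assumptions, not the production logic:

```python
import numpy as np

def reflexivity_state(price_roc, sentiment_roc, corr_threshold=0.8):
    """Classify the reflexivity regime from two rate-of-change series (sketch).

    price_roc / sentiment_roc: recent rate-of-change samples for price and
    social sentiment over the same window. The 0.8 threshold is an assumption.
    """
    p = np.asarray(price_roc, float)
    s = np.asarray(sentiment_roc, float)
    corr = np.corrcoef(p, s)[0, 1]
    same_direction = np.sign(p[-1]) == np.sign(s[-1])
    accelerating = abs(p[-1]) > abs(p[0]) and abs(s[-1]) > abs(s[0])
    if corr > corr_threshold and same_direction and accelerating:
        # Self-reinforcing loop: the text says stops widen 3x, targets 4x here.
        return "POSITIVE_REFLEXIVITY"
    if corr < 0 and not same_direction:
        # Sentiment turning against price: the loop is about to break.
        return "REFLEXIVITY_EXHAUSTION"
    return "NONE"
```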
Daniel Kahneman's Prospect Theory
Prospect Theory, which earned Kahneman the Nobel Prize in Economics, explains a fundamental asymmetry in human decision-making: people feel losses roughly 2.5x more intensely than gains of the same magnitude. A $100 loss causes more psychological pain than a $100 gain causes pleasure. This leads to two systematic biases that KAIROS exploits:
- Loss aversion — Traders hold losing positions too long, hoping the price will recover and they won't have to realize the loss. This creates "trapped traders" who will eventually be forced to sell (by margin calls, stop losses, or exhaustion), adding to selling pressure at predictable levels.
- Premature gain-taking — Traders cut winning positions too short, taking profits early because they fear losing the unrealized gain. This means that trending moves are initially weaker than they should be (because early buyers are selling), but then accelerate as the latecomers pile in.
KAIROS measures these biases through funding rate data (high positive funding = overleveraged longs who are vulnerable to loss aversion), unrealized P&L distribution (how many traders are underwater), and the ratio of limit orders to market orders (loss-averse traders tend to use limit orders to exit, creating predictable resistance/support levels).
Hyman Minsky's Financial Instability Hypothesis
Minsky argued that financial stability naturally generates instability through a three-phase cycle: Hedge Finance (borrowers can cover interest and principal from cash flow), Speculative Finance (borrowers can cover interest but need to roll over principal), and Ponzi Finance (borrowers can only cover obligations by selling assets or borrowing more). Prolonged bull markets push the system from Hedge to Speculative to Ponzi, eventually reaching a "Minsky Moment" where the system collapses under its own leverage.
The Humanities Cortex measures the "Minsky Index" by tracking leverage ratios across major crypto derivatives exchanges, the proportion of open interest in leveraged products, funding rate extremes, and the ratio of speculative vs hedging activity. When the Minsky Index exceeds its critical threshold, the Cortex vetoes all BUY signals and may generate ultra-high-conviction SELL signals. This has historically been one of the most valuable alpha sources during the leverage-driven cascading liquidation events that characterize crypto markets — events like the May 2021 crash, the FTX collapse in November 2022, and the April 2024 deleveraging event.
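A composite index of this form could be sketched as below; the component weights and the 0.75 critical threshold are illustrative assumptions (only the four input categories come from the text):

```python
def minsky_index(leverage_ratio, leveraged_oi_share, funding_extreme,
                 spec_hedge_ratio, weights=(0.35, 0.25, 0.20, 0.20)):
    """Composite Minsky Index sketch; all inputs normalized to [0, 1].

    Components follow the text: exchange leverage ratios, leveraged
    open-interest share, funding-rate extremity, speculative-vs-hedging
    activity. Weights and the 0.75 veto threshold are assumptions.
    """
    components = (leverage_ratio, leveraged_oi_share,
                  funding_extreme, spec_hedge_ratio)
    index = sum(w * c for w, c in zip(weights, components))
    return index, index > 0.75  # (value, veto all BUY signals?)
```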
Appendix B — Deep Dive: Data Infrastructure
B.1 — The Dataslut HFT Engine
The market data ingestion layer is a custom-built Go binary called Dataslut — designed for extreme throughput. It maintains persistent WebSocket connections to 10+ cryptocurrency exchanges simultaneously and processes over 200,000 ticks per second under peak load.
The engineering decisions behind Dataslut reflect deep expertise in systems programming:
Lock-Free Architecture
Traditional Go programs use mutexes to protect shared data structures. Under high contention (many goroutines competing for the same lock), this creates performance bottlenecks. Dataslut eliminates this by using a lock-free architecture based on Go's atomic operations. The tick counter uses atomic.AddUint64 instead of a mutex-protected counter, allowing multiple goroutines to increment simultaneously without blocking. Under benchmark conditions, the atomic approach handles 10x more operations per second than the mutex approach.
Zero-Allocation JSON Parsing
Standard JSON parsing in Go (encoding/json) allocates memory for every parsed field, creating garbage collection pressure that causes periodic latency spikes. Dataslut uses the gjson library for zero-allocation JSON parsing — extracting only needed fields (price, volume, timestamp) directly from the raw byte buffer without memory allocation. This reduces GC pause times from 5-10ms to sub-millisecond values. For a system processing real-time market data, even a 10ms pause can mean missed price movements.
Connection Management
Each exchange connection runs in its own goroutine with independent error handling and auto-reconnect logic. When a connection drops, the goroutine automatically reconnects with exponential backoff. A central coordinator monitors all connections and publishes heartbeat data to Redis. Connections producing no data for 60 seconds are forcefully restarted, ensuring graceful handling of exchange outages.
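The production reconnect logic is written in Go, but the exponential-backoff pattern it describes can be sketched in a few lines of Python. The base delay, cap, jitter range, and attempt limit here are assumptions:

```python
import random
import time

def reconnect_with_backoff(connect, base=1.0, cap=60.0, max_attempts=8):
    """Retry a connection with capped exponential backoff and jitter (sketch).

    `connect` is any zero-argument callable that raises on failure and
    returns a connection on success; the production Dataslut logic runs
    this loop per exchange, in its own goroutine.
    """
    for attempt in range(max_attempts):
        try:
            return connect()
        except Exception:
            delay = min(cap, base * 2 ** attempt)   # 1s, 2s, 4s, ... capped
            time.sleep(delay * random.uniform(0.5, 1.0))  # jitter spreads retries
    raise ConnectionError("gave up after %d attempts" % max_attempts)
```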
B.2 — ClickHouse: The Data Foundation
KAIROS uses ClickHouse as its primary data store — a columnar database designed for real-time analytics on massive datasets. The choice was driven by: write throughput (1M rows/sec/server), compression (10-20x reduction via LZ4), extreme query performance for time-series analysis, and append-only semantics that make the proof ledger tamper-resistant.
The deployment runs 90+ tables in several functional groups:
- Market Data — Raw ticks, 1m/5m/1h candles, order book snapshots, liquidations, funding rates (30-90 day TTL)
- Signal & Trade — Signals queue, proof ledger (indefinite), trade log (indefinite), open positions, trading journal
- Intelligence — DePIN metrics, sensor readings, Solana chain data, macro indicators, news sentiment, GitHub activity
- ML — Brain embeddings, training embeddings (5.17M rows), model performance metrics
B.3 — Sensor Network Architecture
The Mega Sensor Collector aggregates data from 100,000+ sensors worldwide across six categories:
Environmental (OpenAQ) — 30,000+ air quality stations measuring PM2.5, PM10, ozone, NO₂, SO₂, and CO. Extreme pollution events correlate with reduced economic activity, impacting commodity prices.
Weather (NOAA ISD) — 15,000+ stations reporting temperature, wind speed, visibility, and pressure. Weather data feeds commodity models (natural gas vs temperature) and supply chain disruption warnings.
Seismic (USGS) — Real-time earthquake and volcanic activity. Major events disrupt supply chains, damage infrastructure, and trigger insurance payouts with market implications.
Aviation (OpenSky) — Global aircraft tracking for 5,000+ flights. Private jet movements to financial centers signal institutional activity. Cargo patterns correlate with trade volumes.
Maritime (AIS) — Ship tracking for tankers and container ships. Shipping congestion at Suez/Malacca/Panama directly impacts commodity prices. Empty container movements lead economic activity.
Space Weather (NOAA SWPC) — Solar flare data, geomagnetic indices, solar wind. Published research suggests geomagnetic storms correlate with elevated market volatility.
Physical sensor data provides an additional information layer that, when combined with financial data through the DAG, can occasionally tip the balance of the veto chain. The edge is small per instance but compounds over thousands of trades. Physical-world events often lead financial events by hours or days, creating a predictive edge pure market-data systems cannot replicate.
B.4 — Solana Chain Intelligence
A dedicated Helius Firehose connection processes up to 100 RPS of Solana block-level data, providing visibility into block velocity, DEX volume (Raydium, Orca, Jupiter), whale movements (large wallet transfers above configurable thresholds), and program executions for DePIN-related programs. Supplemented by Birdeye price feeds for Solana-native tokens.
Appendix C — Deep Dive: Machine Learning Pipeline
C.1 — The 5.17 Million Embedding Training Set
The XGBoost veto gate is trained on 5.17 million historical market state embeddings, each a 128-dimensional vector capturing the complete state of all math layers when a signal was generated (or could have been). Each embedding captures: layer outputs (quantum volume, Nash payoff, Monte Carlo probability, harmonic power), current regime classification, recent price statistics, cross-asset correlations, DePIN health scores, and the outcome label (directional correctness within measurement window).
Training Methodology
Embeddings are generated by replaying historical market data through the DAG and capturing state vectors at every tick. The process uses a memory-safe memmap approach: streaming CLI export from ClickHouse in chunks, NumPy memory-mapped storage, and temporal holdout splitting (most recent 20%). XGBoost hyperparameters: max_depth=6, learning_rate=0.1, n_estimators=500, binary:logistic objective, logloss eval metric.
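A minimal Python sketch of the memmap streaming and temporal holdout described above. The file layout and helper names are assumptions; the actual XGBoost fit, with the hyperparameters listed in the text, would consume these arrays afterwards:

```python
import numpy as np

DIM = 128  # embedding dimensionality from the text

def temporal_split(n_rows, holdout_frac=0.20):
    """Holdout split by time: the most recent 20% of rows, never shuffled."""
    cut = int(n_rows * (1.0 - holdout_frac))
    return np.arange(cut), np.arange(cut, n_rows)

def write_chunks(path, chunks):
    """Append streamed export chunks into one memory-mapped float32 array,
    so the full 5.17M x 128 matrix never has to sit in RAM at once."""
    n_rows = sum(len(c) for c in chunks)
    mm = np.memmap(path, dtype=np.float32, mode="w+", shape=(n_rows, DIM))
    row = 0
    for chunk in chunks:
        mm[row:row + len(chunk)] = chunk
        row += len(chunk)
    mm.flush()
    return n_rows

# Training would then consume the memmap with the parameters from the text,
# e.g. xgboost.train({"max_depth": 6, "learning_rate": 0.1,
#                     "objective": "binary:logistic", "eval_metric": "logloss"}, ...)
```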
Results
On holdout data (~1M embeddings): 81.1% accuracy at p≥0.70, 62% overall precision, well-calibrated probabilities (predicted 0.70 corresponds to actual ~72-75% win rate). These were confirmed through the Alpha Audit process, ruling out data snooping and look-ahead bias.
C.2 — The Label Inversion Incident
During development, a critical bug was discovered: two sequential transformations in the label pipeline left the training labels inverted relative to ground truth. The XGBoost model compensated by learning the inverse relationship. The bug was found through A/B testing that showed a negative correlation between features and output. Post-fix retraining improved precision from ~60% to ~62%. The incident is documented as a cautionary tale about pipeline validation.
C.3 — Synaptic Learning (Hebbian Feedback)
The Hebbian learning loop continuously adjusts layer weights based on actual P&L feedback. When a trade closes, contributing layers are credit-assigned: winning layers get weight increases, losing layers get decreases. A learning rate of 0.01 prevents overfitting to recent performance. Over time, the system naturally emphasizes the layers that are reliable in the current environment — momentum agents dominate in trends, the Nash solver dominates in ranges. Weights are logged via log_stats for monitoring.
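A minimal sketch of the credit-assignment step, assuming a simple per-layer weight dictionary; the normalization scheme is an assumption, while the 0.01 learning rate follows the text:

```python
def hebbian_update(weights, contributors, pnl, lr=0.01):
    """Credit-assign a closed trade's P&L to its contributing layers (sketch).

    Winning trades nudge contributor weights up, losing trades nudge them
    down; weights are floored at zero and renormalized to stay comparable.
    """
    direction = 1.0 if pnl > 0 else -1.0
    for layer in contributors:
        weights[layer] = max(0.0, weights.get(layer, 1.0) + lr * direction)
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}
```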
Appendix D — Deep Dive: DePIN as Alpha Source
D.1 — Why Physical Data Matters
Financial markets do not exist in a vacuum. Physical events — weather, earthquakes, pollution, shipping congestion, air traffic — have measurable impacts on asset prices. Natural gas correlates with NOAA temperature data (5°F deviation from forecast moves futures 3-5%). Shipping congestion at key chokepoints impacts commodity prices. Private aviation movements to financial centers signal institutional activity. DePIN metrics are leading indicators of token price — when Render's GPU utilization drops 20%, it takes days for the market to fully price this in.
D.2 — The Reality Gap as Alpha
The most profitable application of DePIN data is the Reality Gap — the divergence between a network's physical health and its token price. When a network is growing (more nodes, higher utilization, increasing revenue) but the token price is falling, there is a statistical edge in buying. Conversely, when a network is degrading but the token is pumping on narrative alone, there is an edge in selling. The Reality Gap Scanner quantifies this divergence and can adjust signal confidence by ±15% and gate thresholds by ±10%, creating exploitable trades that purely price-based systems would never see.
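The divergence scoring might be sketched as follows; the linear scaling and clipping are assumptions, while the ±15% confidence bound follows the text:

```python
def reality_gap(health_change, price_change, max_conf_adj=0.15):
    """Score the divergence between network health and token price (sketch).

    Inputs are fractional changes over the same window. A positive gap
    (health improving while price falls) supports a buy edge; a negative
    gap (health degrading while price pumps) supports a sell edge.
    """
    gap = health_change - price_change
    # Map the gap into a confidence adjustment, clipped to the ±15% bound.
    conf_adj = max(-max_conf_adj, min(max_conf_adj, gap * max_conf_adj))
    return gap, conf_adj
```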
Appendix E — Frequently Asked Questions
Q: What is the expected signal frequency?
A: KAIROS generates 2-5 signals per day on average. The 25-step veto chain aggressively filters low-conviction signals. During ranging or crisis markets, it may go days without generating a signal. The daily cap of 5 prevents overtrading.
Q: What timeframe do signals target?
A: Short-to-medium term, 1 to 24 hours. The Lyapunov expiry field indicates each signal's specific prediction horizon. The Trade Autopsy grades at 1h, 4h, and 24h marks. KAIROS is not designed for long-term investing or sub-second HFT.
Q: How does KAIROS handle flash crashes?
A: Multiple gates protect against flash crashes: Sieve Filter removes anomalies, HIGH_VOLATILITY/CRISIS regimes activate extreme thresholds (0.85/0.95), Dead Hour Veto blocks unreliable periods, Macro Blackout suspends trading during major events, and ATR-based stops limit drawdown on active positions.
Q: Can signals be used for automated trading?
A: Yes, but KAIROS provides the signal — you provide the execution. Your bot handles: order placement, position sizing, slippage, and risk limits. We strongly recommend human oversight.
Q: What minimum capital is needed?
A: Most exchanges have $10-$100 minimums, but with 2% risk-per-trade, you need at least $5,000 for meaningful positions. Institutional clients typically allocate $100,000+.
Q: How do I verify the proof chain hasn't been tampered with?
A: Fetch signals from the public API and recompute SHA-256 hashes (see Chapter 6). ClickHouse MergeTree is append-only. The 128D state embedding stored with each signal enables verification that the model state was consistent with market conditions. Retroactive modification creates detectable chain breaks.
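The verification loop can be sketched as below. The field names, genesis value, and JSON canonicalization here are assumptions — Chapter 6 defines the actual encoding — but the recompute-and-compare structure is the point:

```python
import hashlib
import json

def verify_chain(signals):
    """Recompute a SHA-256 proof chain over an ordered signal list (sketch).

    Each signal is assumed to carry a `payload` dict and a `proof_hash`
    hex string; each hash is assumed to cover the previous hash plus a
    canonical JSON serialization of the payload.
    """
    prev_hash = "0" * 64  # assumed genesis value
    for i, sig in enumerate(signals):
        body = json.dumps(sig["payload"], sort_keys=True, separators=(",", ":"))
        digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if digest != sig["proof_hash"]:
            return False, i   # chain break detected at this index
        prev_hash = digest
    return True, len(signals)
```

Because each hash covers its predecessor, editing any historical payload invalidates every subsequent link, which is what makes retroactive tampering detectable.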
Q: Why are certain hours blocked?
A: UTC hours 00, 03, 20, 21, 23 showed 0% historical win rate — low-liquidity transition periods between trading sessions dominated by algorithmic market makers. KAIROS blocks trading entirely rather than generating losing signals.
Q: Why are AVAX, BNB, LTC, TRX permanently blocked?
A: These assets showed consistent 0% win rate across all regimes in backtesting. Likely due to unique microstructure factors (centralized exchange manipulation for BNB, for example) that KAIROS's math layers cannot effectively model.
Q: How is the subscription priced?
A: $40,000/month institutional data license. Includes full API access (REST + WebSocket), complete proof chain, real-time DePIN intelligence, and direct support. Pricing reflects 350+ data feeds, 25-layer architecture computation costs, and institutional-grade data quality.
Q: What happens if the system goes down?
A: Two production nodes with separate responsibilities (data/API vs compute). Failures on one node don't affect the other. Systemd auto-restart on all services. WebSocket heartbeat monitoring. Proof chain gap detection. Durable ClickHouse storage ensures clean resume from last state without data loss.
Q: Can KAIROS predict black swan events?
A: No system can predict truly unprecedented events. KAIROS detects preconditions through DePIN monitoring, macro calendar, and the Minsky Financial Instability model. The system protects against black swans through its aggressive veto chain, regime-based gates, and macro blackouts — rather than predicting them directly.
Q: How often is the XGBoost model retrained?
A: Periodically as new embedding data accumulates. Between retraining cycles, the Hebbian synaptic feedback provides real-time tactical adaptation. XGBoost provides structural learning (months of data); synaptic weights provide tactical learning (trade-by-trade).
Q: What makes KAIROS different from other signal providers?
A: Three key differentiators: (1) Cryptographic proof — every signal is SHA-256 chain-hashed before broadcast, so any retroactive fabrication or alteration produces a detectable chain break. (2) Physical reality integration — 100,000+ sensors and 500+ DePIN projects provide alpha that purely market-data systems cannot access. (3) Multi-layer consensus — 25 independent mathematical pipelines must reach supermajority agreement before any signal is generated, providing robustness far beyond single-model systems.
Q: Can I see the historical performance of KAIROS?
A: Yes. The proof ledger at kairossignal.com shows every signal ever generated with its SHA-256 proof hash, chain validation status, and autopsy grade (WIN/LOSS). The holdout validation statistics are calculated from this publicly verifiable data. The current 90-day trial runs from March 6 to June 4, 2026, targeting 1,000 graded signals with a 60%+ win rate.
Q: Is there a free tier or trial available?
A: The proof chain endpoints (/api/v1/proof/ledger-stats and /api/v1/proof/recent-signals) are publicly accessible without authentication. You can verify signal quality and chain integrity before committing to the full subscription. Contact the KAIROS team through the waitlist form for trial access to the full API.
Appendix F — Advanced Trading Strategies with KAIROS
F.1 — Multi-Asset Portfolio Construction
While KAIROS generates individual asset signals, sophisticated users can construct portfolio-level strategies by combining signals across multiple asset classes. The key insight is that KAIROS covers crypto, commodities, and forex — three asset classes with distinct risk profiles and correlation structures. By maintaining exposure across all three, users can build more robust portfolios than by trading crypto alone.
Cross-Asset Diversification
KAIROS's 80+ tradeable instruments span four broad categories, each with different characteristics. Crypto assets (BTC, ETH, SOL, etc.) offer the highest volatility and the largest potential returns, but also the highest drawdown risk. Crypto signals tend to cluster — when the system generates a BTC BUY signal, it often generates correlated signals for ETH and SOL simultaneously, because crypto assets are highly correlated during major moves. Users should be aware of this correlation and avoid over-allocating to correlated crypto signals.
Commodity signals (gold, silver, oil, natural gas) provide lower-correlation exposure. Gold (XAUUSDT) often moves independently of crypto, making it an excellent diversifier. During periods of crypto winter, gold signals may provide the majority of profitable opportunities. Oil signals (OILUSDT) are driven by supply-demand dynamics that are largely independent of crypto markets, though both can be affected by macroeconomic events like FOMC decisions.
Forex signals (EUR/USD, GBP/USD, etc.) provide the lowest volatility but the most liquid markets. The ATR-based stops for forex are naturally tighter, resulting in smaller position sizes and lower risk per trade. Forex signals are especially useful during crypto market closures and low-liquidity periods.
Regime-Based Allocation
Advanced users should adjust their allocation based on the current market regime. The regime field in each signal provides this information. During BULL_TRENDING regimes, crypto signals are most reliable and should receive higher allocation. During CRISIS regimes, the system generates very few signals (due to the 0.95 confidence gate), and those that pass are typically gold or forex signals — safe-haven assets that benefit from crisis conditions.
A sample regime-based allocation framework:
| Regime | Crypto % | Commodities % | Forex % | Cash % |
|---|---|---|---|---|
| BULL_TRENDING | 60% | 20% | 10% | 10% |
| BEAR_TRENDING | 30% | 30% | 20% | 20% |
| RANGING_CHOP | 15% | 25% | 20% | 40% |
| HIGH_VOLATILITY | 20% | 30% | 15% | 35% |
| CRISIS | 10% | 40% | 10% | 40% |
F.2 — Signal Quality Assessment
Not all KAIROS signals are created equal. While every signal has passed the 25-step veto chain, some signals have characteristics that suggest higher conviction than others. Users can improve their results by weighting their position sizes based on signal quality indicators.
High-Confidence Indicators:
- Sigma > 3.0 — A sigma above 3.0 indicates that the signal is more than 3 standard deviations above noise level. These are rare (perhaps 1 in 20 signals) but historically have significantly higher win rates.
- XGB Win Probability > 0.85 — Signals where the XGBoost model predicts >85% win probability are in the highest-conviction tier. The model's calibration means these actually win approximately 87% of the time.
- Multiple Layer Consensus — When the signal notes mention multiple contributing layers (e.g., "NASH_SQUEEZE + QUANTUM_BREAKOUT + DePIN_BULLISH"), the consensus is stronger than a signal driven by a single layer.
- Favorable Regime — Signals generated in a BULL_TRENDING regime have the highest base win rate: trend-direction trades are reliable enough that the regime gate can afford to be permissive, while stricter regimes pass only the most extreme signals. Conversely, a signal that passes the CRISIS regime gate (0.95 confidence) is extraordinarily rare and typically exhibits very high subsequent returns.
- Long Lyapunov Expiry — A longer prediction horizon means the system has higher confidence in the trajectory's stability. If the Lyapunov expiry is 4+ hours, the prediction is relatively robust against noise.
Lower-Confidence Indicators:
- Sigma 1.5-2.0 — While these signals have passed all veto gates, they are closer to the noise floor and may be less reliable.
- Short Lyapunov Expiry (< 30 minutes) — The prediction has a limited shelf life. Price noise may overwhelm the signal before it plays out.
- Single Layer Driver — Signals driven primarily by swarm consensus without support from other mathematical layers may be less robust.
F.3 — Risk Management Advanced Techniques
Beyond the Capital Governor's built-in risk controls, users can implement additional risk management techniques:
Correlation-Adjusted Position Sizing
When KAIROS generates multiple signals simultaneously (e.g., BUY signals for both BTC and ETH), the effective risk is higher than the sum of individual positions because these assets are correlated. A sophisticated approach is to calculate the portfolio's total Value-at-Risk (VaR) and ensure it stays below a maximum threshold (e.g., 5% of capital). During high-correlation periods, this means reducing individual position sizes to keep portfolio VaR constant.
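A variance-covariance sketch of the idea, assuming per-position risk amounts can be treated as one-sigma loss estimates (a simplification) and using the standard parametric VaR formula:

```python
import numpy as np

def portfolio_var(position_risks, corr, z=1.65):
    """One-sided parametric portfolio VaR from per-position risks (sketch).

    position_risks: risk amount per position as a fraction of capital
    (e.g. 0.02 each); corr: correlation matrix between the positions.
    z=1.65 corresponds to a ~95% one-sided confidence level.
    """
    r = np.asarray(position_risks, float)
    return z * np.sqrt(r @ np.asarray(corr, float) @ r)

def scale_to_var_cap(position_risks, corr, cap):
    """Uniformly shrink positions so portfolio VaR stays under `cap`."""
    var = portfolio_var(position_risks, corr)
    scale = min(1.0, cap / var) if var > 0 else 1.0
    return [p * scale for p in position_risks]
```

Because VaR is linear in a uniform scale factor, shrinking all positions by `cap / var` lands the portfolio exactly on the cap when correlations push it over.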
Time-of-Day Adjustment
While KAIROS blocks the absolute worst hours (dead hours), there is a gradient of quality throughout the day. Signals generated during the London-New York overlap (13:00-17:00 UTC) tend to have the highest win rates because liquidity is deepest. Signals generated during the Asian session (01:00-08:00 UTC) tend to have slightly lower win rates, particularly for non-Asian assets. Users can adjust position sizes based on this gradient — full size during peak hours, reduced size during off-peak hours.
Drawdown Scaling
If your account experiences a drawdown of more than 10% from its equity peak, consider reducing position sizes by 50% until you recover to within 5% of the peak. This is a standard risk management practice in professional trading that prevents a losing streak from compounding into a catastrophic loss. KAIROS's daily trade cap (5 per day) already provides some protection, but drawdown scaling adds a second layer of defense.
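The hysteresis in this rule (halve size below a 10% drawdown, restore full size only once back within 5% of the peak) needs a small amount of state; a sketch:

```python
class DrawdownScaler:
    """Stateful risk scaler for the drawdown rule described above (sketch).

    Halves per-trade risk after a >10% drawdown from the equity peak and
    restores it only once equity recovers to within 5% of that peak. The
    2% full-risk default follows the risk-per-trade figure in the text.
    """

    def __init__(self, full_risk=0.02):
        self.full_risk = full_risk
        self.peak = 0.0
        self.reduced = False

    def risk_for(self, equity):
        self.peak = max(self.peak, equity)
        drawdown = 1.0 - equity / self.peak
        if drawdown > 0.10:
            self.reduced = True        # enter reduced-size mode
        elif drawdown <= 0.05:
            self.reduced = False       # recovered: restore full size
        return self.full_risk * (0.5 if self.reduced else 1.0)
```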
Maximum Concurrent Exposure
Limit your total number of open positions. Even though each position is sized at 2% risk, having 10 concurrent positions means you have 20% of capital at risk simultaneously. A reasonable limit is 3-5 concurrent positions for accounts under $100k, and 5-8 positions for larger accounts. The Capital Governor limits daily new trades to 5, but doesn't constrain carry-over positions from previous days.
F.4 — Using DePIN Signals for Thematic Investing
Beyond individual trade signals, the DePIN intelligence layer provides data that supports thematic investment strategies. By monitoring the health scores and reality gap assessments across DePIN sectors, users can identify which sectors are in growth phases and which are in decline.
For example, if the Compute sector (Render, Akash, Golem) is consistently showing high health scores (>0.7) while Storage sector tokens (Filecoin, Arweave) are showing declining health (<0.4), a thematic investor might overweight compute-related positions and underweight storage. The DePIN data effectively provides a real-time "fundamental analysis" overlay that is unavailable from any other source.
The seven-day trend data is particularly valuable for this purpose. A sector that has been improving for seven consecutive days is likely in a genuine growth phase rather than a temporary spike. Conversely, a sector that has been degrading for seven days may indicate a structural problem (declining node profitability, increasing competition, or technical issues) that will continue to pressure token prices.
Appendix G — System Architecture Diagrams
G.1 — Data Flow Architecture
The complete KAIROS data pipeline follows this flow:
┌──────────────────────────────────────────────────────────┐
│ DATA SOURCES │
│ │
│ ┌─────────┐ ┌──────────┐ ┌────────┐ ┌───────────────┐ │
│ │Exchanges│ │ Solana │ │ DePIN │ │Physical World │ │
│ │ 10+ │ │ Helius │ │ 500+ │ │ 100k+ sensors │ │
│ └────┬────┘ └─────┬────┘ └───┬────┘ └──────┬────────┘ │
│ │ │ │ │ │
└───────┼────────────┼──────────┼──────────────┼────────────┘
│ │ │ │
▼ ▼ ▼ ▼
┌──────────────────────────────────────────────────────────┐
│ NODE 1: DATA & API │
│ │
│ ┌──────────┐ ┌──────────────┐ ┌───────────────────┐ │
│ │ Dataslut │ │ 36 Collector │ │ Redis Cache │ │
│ │ Go HFT │ │ Services │ │ 5-min TTL │ │
│ └─────┬────┘ └──────┬───────┘ └──────┬────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ ClickHouse (90+ tables) │ │
│ │ market_ticks | candles | depin_stats | signals │ │
│ │ sensor_readings | solana_blocks | proof_ledger │ │
│ └──────────────────────┬───────────────────────────┘ │
│ │ │
│ ┌──────────────────────┴───────────────────────────┐ │
│ │ API Gateway (Go) │ │
│ │ REST: /api/v1/latest-data, /proof/* │ │
│ │ WebSocket: ws://port:8090/ws │ │
│ └──────────────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────┐
│ NODE 2: COMPUTE │
│ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Hunter V7 (Python) │ │
│ │ │ │
│ │ Layer 1: Sieve → Data validation │ │
│ │ Layer 2: Quantum → 64D Clifford multivector │ │
│ │ Layer 3: Nash → Game theory equilibrium │ │
│ │ Layer 4: Stochastic → 1000x Monte Carlo │ │
│ │ Layer 5: Harmonic → FFT frequency analysis │ │
│ │ Layer 6: Swarm → 5-agent voting system │ │
│ │ Layer 7: KSIG → 17D co-evolution │ │
│ │ Layer 8: Causal → Pearl do-calculus │ │
│ │ Layer 9: Meta-Adapt → Edge decay detection │ │
│ │ Layer 10: Slippage → Order book impact model │ │
│ │ Layer 11: Topology → TDA / Betti liquidity void │ │
│ │ Layer 12: Cortex → Behavioral finance │ │
│ │ Veto: XGBoost → ML gate (p ≥ 0.70) │ │
│ │ │ │
│ │ ┌─────────────┐ ┌──────────────────┐ │ │
│ │ │ Signal Queue │──▶│ Immutable Ledger │ │ │
│ │ │ (ClickHouse) │ │ SHA-256 Chain │ │ │
│ │ └─────────────┘ └──────────────────┘ │ │
│ └──────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────┐ ┌───────────────┐ ┌─────────────┐ │
│ │ Trader V2 │ │ Brain V1 │ │ METIS Agent │ │
│ │ Exchange │ │ Intelligence │ │ Autonomous │ │
│ │ Execution │ │ & Accounting │ │ Operations │ │
│ └──────────────┘ └───────────────┘ └─────────────┘ │
└──────────────────────────────────────────────────────────┘
G.2 — Signal Lifecycle
Every KAIROS signal follows this exact lifecycle from market data to execution:
Market Tick Arrives
│
▼
┌─── SIEVE FILTER ──── Data quality, price sanity, symbol normalization
│ │ (PASS)
│ ▼
│── QUANTUM FIELD ──── 64D Clifford SDE evolution → quantum volume vote
│ │
│── NASH SOLVER ────── Game theory payoff matrix → predatory detection
│ │
│── STOCHASTIC ─────── 1000 Monte Carlo paths → win probability
│ │
│── HARMONIC FFT ───── Fourier decomposition → cycle alignment
│ │
│── SWARM INTEL ────── 5 sub-agents vote → consensus requirement
│ │
│ ▼
├─── KSIG/CAUSAL ───── Cross-layer synthesis → edge detection
│ │
│ ▼
├─── META-ADAPT ────── Edge decay check → veto if edge exhausted
│ │
│ ▼
├─── METAMORPHIC ───── Regime-weighted combination → confidence score
│ │
│ ▼
├─── VETO CHAIN ────── 25 gates (see Ch9 §9.2 for full list)
│ │ (ALL PASS)
│ ▼
├─── XGBOOST ──────── ML gate: p ≥ 0.70 required
│ │ (PASS)
│ ▼
├─── IMMUTABLE LEDGER ── SHA-256(prev_hash|data) → proof_hash
│ │
│ ▼
├─── SIGNAL QUEUE ──── Written to ClickHouse + API gateway
│ │
│ ▼
└─── PUBLISHED ─────── REST + WebSocket broadcast to subscribers
│ (after 1h, 4h, 24h)
▼
┌─── TRADE AUTOPSY ── Grade against actual price: WIN / LOSS
└──────────────────── Update proof ledger grade fields
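The immutable-ledger step above can be verified locally. The sketch below assumes `proof_hash = SHA-256(prev_hash|data)` with a literal `|` separator and an all-zeros genesis hash; the production serialization of the signal payload may differ.

```python
import hashlib

def proof_hash(prev_hash, data):
    # SHA-256 over the previous hash concatenated with the serialized
    # signal, separated by '|' (the exact serialization is an assumption).
    return hashlib.sha256(f"{prev_hash}|{data}".encode()).hexdigest()

def verify_chain(entries, genesis="0" * 64):
    """entries: list of (data, claimed_hash) tuples in ledger order.

    Recomputes each link and returns True only if every claimed hash
    matches SHA-256(prev_hash|data); any tampering breaks the chain.
    """
    prev = genesis
    for data, claimed in entries:
        if proof_hash(prev, data) != claimed:
            return False
        prev = claimed
    return True
```

Because each hash commits to its predecessor, altering any historical entry invalidates every subsequent link.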
G.3 — Collector Architecture
The data collection infrastructure runs 36 active services, each responsible for a specific data domain. All collectors follow a common pattern: connect to source, normalize data, write to ClickHouse, publish to Redis for real-time consumers.
| Category | Services | Data Rate | ClickHouse Tables |
|---|---|---|---|
| Exchange Market Data | 12 | ~200k ticks/sec | market_ticks, candles_* |
| Solana Chain | 3 | ~100 RPS | solana_blocks, solana_whale_* |
| DePIN Intelligence | 8 | ~60 req/min | depin_stats, depin_deep_intel |
| Physical Sensors | 6 | ~30 req/min | sensor_readings |
| Macro & TradFi | 4 | ~10 req/min | fred_macro, funding_rates |
| Social & Sentiment | 3 | ~20 req/min | news_sentiment, github_activity |
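The common collector pattern (connect, normalize, write, publish) can be sketched as below. The raw field mappings and the writer/publisher callables are illustrative stand-ins for the ClickHouse batch insert and Redis publish used in production.

```python
def normalize_tick(raw):
    """Normalize an exchange tick into the common schema.

    Output keys mirror the market_ticks table; the input keys
    ("s", "p", "q", "t") are a hypothetical exchange payload.
    """
    return {
        "symbol": raw["s"].upper().replace("-", ""),
        "price": float(raw["p"]),
        "qty": float(raw["q"]),
        "ts": int(raw["t"]),
    }

def collect(ticks, ch_writer, redis_pub):
    """One iteration of the generic collector loop.

    ch_writer(batch)      -- stand-in for the ClickHouse batch insert
    redis_pub(key, row)   -- stand-in for the Redis publish
    """
    batch = [normalize_tick(t) for t in ticks]
    ch_writer(batch)                       # persist the whole batch
    for row in batch:
        redis_pub(row["symbol"], row)      # fan out to real-time consumers
    return batch
```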
Appendix H — Performance Benchmarks & Expectations
H.1 — Historical Backtest Results
The 5.17 million embedding backtest was conducted over a multi-month period covering diverse market conditions. Key results include:
| Metric | Result | Notes |
|---|---|---|
| Total embeddings | 5,170,000 | 128D state vectors |
| Training set | ~4,136,000 | 80% temporal split |
| Holdout set | ~1,034,000 | 20% most recent |
| Overall precision | 62.0% | Across all probability thresholds |
| Precision at p≥0.70 | 81.1% | Production threshold |
| Calibration error | < 3% | Predicted vs actual win rate |
| Average sigma | 2.34 | Signal strength |
| Average daily signals | 3.2 | After all veto gates |
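Given a set of graded signals, the precision and calibration figures above can be reproduced in a few lines. The `(predicted_p, won)` input format is an assumption for illustration; the production pipeline reads grades from the proof ledger.

```python
def precision_at(signals, threshold):
    """Precision among graded signals with predicted p >= threshold.

    signals: iterable of (predicted_p, won) pairs, won being a bool.
    Returns None if no signal clears the threshold.
    """
    hits = [won for p, won in signals if p >= threshold]
    return sum(hits) / len(hits) if hits else None

def calibration_error(signals, threshold):
    """|mean predicted p - actual win rate| over the selected bucket."""
    sel = [(p, won) for p, won in signals if p >= threshold]
    if not sel:
        return None
    mean_p = sum(p for p, _ in sel) / len(sel)
    win_rate = sum(w for _, w in sel) / len(sel)
    return abs(mean_p - win_rate)
```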
H.2 — Expected Signal Characteristics
Based on historical analysis, users should expect the following signal characteristics in production:
| Characteristic | Expected Range | Description |
|---|---|---|
| Daily signal count | 0-5 | Average 2-3; some days zero |
| Sigma range | 1.5-4.0 | Typical; extremes can reach 5.0+ |
| Win probability | 0.70-0.95 | Must be ≥0.70 to pass XGBoost gate |
| Hold duration | 15min-12h | Determined by Lyapunov expiry |
| Risk:reward ratio | 1:2.08 | Based on ATR stop/target formula |
| Crypto % of signals | ~65% | Varies by regime |
| Commodity % of signals | ~20% | Gold, silver, oil, natgas |
| Forex % of signals | ~15% | Major pairs |
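The 1:2.08 ratio falls out of the ATR stop/target multipliers. The sketch below uses a hypothetical 1.2x/2.5x pair, chosen only because 2.5 / 1.2 ≈ 2.08; the actual production multipliers are not published here.

```python
def atr_levels(entry, atr, direction, stop_mult=1.2, target_mult=2.5):
    """Compute stop, target, and risk:reward from ATR multipliers.

    The 1.2 / 2.5 defaults are assumptions that reproduce the ~1:2.08
    risk:reward quoted in the manual; direction is "BUY" or "SELL".
    """
    sign = 1 if direction == "BUY" else -1
    stop = entry - sign * stop_mult * atr      # risk leg
    target = entry + sign * target_mult * atr  # reward leg
    rr = target_mult / stop_mult               # reward per unit of risk
    return stop, target, round(rr, 2)
```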
H.3 — Latency and Throughput
| Component | Latency | Throughput |
|---|---|---|
| Exchange → Dataslut | < 50ms | 200k+ ticks/sec |
| Dataslut → ClickHouse | < 10ms (batch) | 1M+ rows/sec |
| Hunter V7 full DAG | 50-200ms per tick | ~5-20 ticks/sec |
| Signal → Proof Ledger | < 5ms | N/A (event-driven) |
| Signal → WebSocket | < 100ms | All subscribers |
| REST API response | < 50ms (cached) | 1000+ req/sec |
| Proof chain verification | < 200ms (25 signals) | N/A |
H.4 — Storage Requirements
| Data Category | Daily Growth | Total Size (est.) | Retention |
|---|---|---|---|
| Market ticks | ~2 GB | ~60 GB | 30 days |
| Candles (all intervals) | ~200 MB | ~18 GB | 90 days |
| DePIN metrics | ~100 MB | ~9 GB | 90 days |
| Sensor readings | ~50 MB | ~1.5 GB | 30 days |
| Signal proof ledger | ~1 MB | Growing | Indefinite |
| ML embeddings | ~50 MB | ~4.5 GB | 90 days |
| Training embeddings | 0 (static) | ~10 GB | Indefinite |
Total ClickHouse storage: approximately 100-120 GB with current retention policies. LZ4 compression reduces this by 10-20x from raw data size.
Appendix I — The Veto Chain: Complete Signal Gate Reference
I.1 — Understanding Why Signals Are Killed
The 25-step veto chain is the most aggressive quality filter in the KAIROS system. For every signal that is published, approximately 50-100 candidate signals are generated by the DAG's mathematical layers and then killed by one or more veto gates. Understanding why signals are killed is essential for users who want to evaluate the system's performance and for operators who are troubleshooting signal flow.
Each veto gate operates independently — a failure at any single gate kills the signal immediately without proceeding to subsequent gates. The gates are ordered from cheapest to most expensive computationally, so that the maximum number of weak signals is eliminated before expensive operations (like Monte Carlo simulations or XGBoost inference) are invoked. This ordering is important for performance: running 1,000 Monte Carlo simulations for every market tick would be computationally prohibitive, but running them only for the 5-10% of ticks that survive the early gates is manageable.
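The cheapest-first, short-circuit structure can be sketched as an ordered gate list. The gates shown are a toy subset of the 25, and the three-symbol allowlist is a stand-in for the real ~80-instrument universe; only the blocked symbols, dead hours, daily cap, and sigma floor are taken from the text.

```python
def run_veto_chain(candidate, gates):
    """Apply gates in order; the first failure kills the signal.

    Each gate is a (name, predicate) pair.  Cheap checks run first so
    expensive ones only execute on survivors.  Returns (passed, killer).
    """
    for name, passes in gates:
        if not passes(candidate):
            return False, name       # killed: report which gate fired
    return True, None

ALLOWED = {"BTCUSDT", "ETHUSDT", "SOLUSDT"}   # toy subset of the ~80
BLOCKED = {"AVAXUSDT", "BNBUSDT", "LTCUSDT", "TRXUSDT"}
DEAD_HOURS = {0, 3, 20, 21, 23}

GATES = [
    ("symbol_filter", lambda c: c["symbol"] in ALLOWED
                                and c["symbol"] not in BLOCKED),
    ("dead_hour",     lambda c: c["utc_hour"] not in DEAD_HOURS),
    ("daily_cap",     lambda c: c["trades_today"] < 5),
    ("min_sigma",     lambda c: c["sigma"] >= 1.5),
]
```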
Gate 1: Symbol Filter
The first gate checks whether the trading symbol is in the allowed universe. The system maintains a curated allowlist of approximately 80 instruments that have been validated for data quality, liquidity, and historical model performance. Symbols not on this list are immediately rejected. Additionally, four symbols (AVAXUSDT, BNBUSDT, LTCUSDT, TRXUSDT) are permanently blocked due to consistent 0% historical win rates.
This gate eliminates the vast majority of potential signals, since the system receives market data for hundreds of symbols but only trades approximately 80 of them. The allowlist is maintained by the system operators and can be adjusted based on ongoing performance analysis.
Gate 2: Dead Hour Veto
The system refuses to generate signals during five specific UTC hours: 00, 03, 20, 21, and 23. These hours correspond to low-liquidity transition periods between major trading sessions — the gap between the New York close and the Asian open, the gap between the Asian close and the European open, and similar transition windows.
Historical analysis showed that signals generated during these hours had a 0% win rate. The cause is likely that during these periods, the market is dominated by algorithmic market makers operating with minimal volume. Price movements during these hours are often idiosyncratic (driven by a single large order in a thin market) rather than systematic (driven by collective market forces that the DAG can model). Rather than attempt to model these anomalous conditions, KAIROS simply steps aside.
Gate 3: Daily Trade Cap
No more than 5 trades per calendar day (UTC). This gate prevents overtrading, which is one of the most common causes of performance degradation in algorithmic systems. The insight is that alpha opportunities are finite — there are only so many exploitable market inefficiencies in a given day. Generating more than 5 signals per day typically means the system is detecting noise rather than signal. The cap forces the system to be maximally selective, focusing its limited daily quota on only the highest-conviction opportunities.
Gate 4: Trade Cooldown
A minimum of 5 minutes must elapse between consecutive signals for the same symbol. This prevents rapid-fire signal generation during volatile periods, where the DAG might generate a new signal every few seconds as the market oscillates. The cooldown ensures that each signal has time to develop before a new signal potentially contradicts it.
Gate 5: Duplicate Detection
If a signal for the same symbol in the same direction already exists in the signal queue, the new signal is rejected as a duplicate. This prevents signal pile-up and ensures that each published signal represents a genuinely new trading opportunity rather than a reiteration of an existing signal.
Gate 6: Regime Classification
The system classifies the current market into one of six regimes: BULL_TRENDING, BEAR_TRENDING, RANGING_CHOP, HIGH_VOLATILITY, CRISIS, or UNKNOWN. Each regime has a different confidence threshold that subsequent gates must meet. The regime classification is based on a combination of trend indicators (moving average direction and strength), volatility measures (ATR relative to historical norms), and macro context (event calendar, funding rates).
The RANGING_CHOP and CRISIS regimes have near-impossible gate thresholds (0.90 and 0.95 respectively), effectively shutting down signal generation during conditions where the model's edge is minimal. This is a feature, not a bug — the system preserves capital during unfavorable conditions to maximize returns during favorable ones.
Gate 7: Data Integrity Veto
This gate checks for phantom price entries — situations where the price data contains values that are physically implausible given recent market conditions. For example, a price of $0.00 or a price that has moved 50% in a single tick. These are typically caused by exchange API errors, connectivity issues, or data pipeline bugs. The data integrity veto prevents the DAG from generating signals based on corrupted data.
Gate 8: Minimum Sigma
The signal strength (sigma) must exceed a minimum threshold, typically 1.5. Sigma measures how many standard deviations the combined DAG output is above the noise floor. A sigma of 1.5 means the signal is 1.5 standard deviations above noise — reasonably distinguishable from random fluctuation. Lower sigma values suggest the DAG layers haven't reached a strong enough consensus to justify action.
Gate 9: Minimum Confidence
The regime-adjusted confidence score must exceed the gate specific to the current regime. This is the primary mechanism by which the system adjusts its aggressiveness based on market conditions. The confidence thresholds range from 0.30 (BULL_TRENDING, very permissive — trust the trend) to 0.95 (CRISIS, extremely restrictive — only the most extreme statistical edges pass).
Gate 10: Monte Carlo Probability
The win probability from 1,000 Monte Carlo simulations must exceed the regime-specific probability gate. This is independent of the confidence gate — a signal might have high layer confidence but low Monte Carlo probability if the stop/target distances are unfavorable given current volatility. This gate ensures that even high-conviction signals have acceptable risk/reward profiles.
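A stripped-down version of this barrier simulation is shown below for a BUY signal (target above entry, stop below). A plain Gaussian random walk stands in for the calibrated drift and volatility model used in production, which is an assumption for illustration only.

```python
import random

def mc_win_probability(entry, stop, target, vol_per_step, drift=0.0,
                       n_paths=1000, n_steps=500, seed=42):
    """Fraction of simulated paths that touch the target before the stop.

    Paths that hit neither barrier within n_steps count as losses,
    mirroring a signal that expires without reaching its target.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_paths):
        price = entry
        for _ in range(n_steps):
            price += drift + rng.gauss(0, vol_per_step)
            if price >= target:        # take-profit touched first
                wins += 1
                break
            if price <= stop:          # stop-loss touched first
                break
    return wins / n_paths
```

With symmetric barriers and zero drift the probability sits near 0.5; favorable stop/target distances or positive drift push it toward the regime gate.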
Gate 11: Layer Consensus
The Consent Calculus dynamically calculates how many layers must agree for a signal to pass. In low-volatility environments, fewer layers need to agree (because the market is easier to read). In high-volatility environments, more layers must agree (because individual layer readings are less reliable). The consensus requirement prevents the system from acting on a single layer's strong conviction without corroboration from other analytical frameworks.
Gate 12: Swarm Agreement
The five swarm sub-agents must reach the required consensus level: 4/4 for BUY signals, 2/3 for SELL signals. This asymmetric requirement reflects empirical evidence that sell signals are more reliable than buy signals in crypto markets. The swarm acts as a fast, intuitive check on the more complex mathematical layers.
Gate 13: DePIN Health (for DePIN tokens)
For tokens associated with DePIN projects, the relevant network's health score must be consistent with the signal direction. A BUY signal for RNDRUSDT is vetoed if Render Network's health score is below 0.3 (indicating network degradation). Conversely, a SELL signal is vetoed if the health score is above 0.8 (indicating strong network growth that may overcome any price-based bearish signals).
Gate 14: Macro Event Blackout
Trading is suspended for 30-60 minutes around major macro events: Federal Reserve FOMC announcements, Non-Farm Payroll releases, Consumer Price Index releases, and other scheduled economic data events that could cause unpredictable market reactions. The event calendar is maintained automatically from the FRED macro data feed. No amount of mathematical modeling can predict the specific number that will be announced, so the system steps aside entirely during these events.
Gate 15: Fisher-Rao Information Geometry
This advanced gate checks the curvature of the statistical manifold in the Fisher-Rao metric space. When the manifold is highly curved, it means the statistical properties of the market are changing rapidly — the distribution of returns is non-stationary. In such conditions, any model calibrated on recent data may be unreliable because the data-generating process itself is shifting. High curvature triggers a veto, forcing the system to wait for the statistical regime to stabilize before making predictions.
Gate 16: Knapsack Optimization
The Knapsack Gate evaluates the opportunity cost of the trade. Given the daily trade cap of 5, each trade slot is valuable. The knapsack optimizer estimates whether the current signal offers better expected value than the statistical average of signals that might appear later in the day. If the current signal's expected value is below the threshold (indicating that better opportunities are likely to appear later), the signal is deferred. This gate ensures that the system uses its limited daily quota on the best available opportunities rather than the first ones that appear.
Gate 17: Slippage Model
The system estimates the execution impact of the trade based on order book depth and the intended position size. If the estimated slippage would reduce the risk/reward ratio below 1:1.5, the signal is vetoed. Alternatively, the position size may be reduced to achieve acceptable slippage. This gate is particularly important for micro-cap DePIN tokens where the order book may be thin enough that even a moderate-sized order would move the price significantly.
Gate 18: Lyapunov Stability
The Lyapunov exponent measures the prediction horizon — how long the predicted trajectory remains meaningful before chaos (random noise) overwhelms the signal. If the Lyapunov exponent indicates an expiry of less than 15 minutes, the signal is vetoed because there isn't enough time for the predicted move to play out after accounting for order placement and execution latency.
Gate 19: Electromagnetic Resistance
The electromagnetic field model calculates the Lorentz force at the predicted target price. High Lorentz force indicates strong limit order walls (dense clusters of resting limit orders) that would resist the predicted price movement. If the plasma repulsion exceeds the critical threshold, the signal is vetoed because the price is unlikely to break through the order-book barrier even if the directional prediction is correct.
Gate 20: Topology Void Check
The TDA (Topological Data Analysis) layer checks for liquidity voids in the direction of the predicted move. A liquidity void is a price range with minimal resting orders, through which price can move rapidly. If the TDA layer detects no voids in the predicted direction (meaning the path to the target is dense with resting orders), the signal's confidence is reduced. If reduced below the gate threshold, the signal is vetoed.
Gate 21: Causal Validity
Pearl's Do-Calculus check ensures that the detected correlation between the DAG layers' outputs and the predicted price movement is causal rather than spurious. If the causal boost is below 1.0, the correlation is treated as spurious and the signal is vetoed. This gate is critical for preventing the system from acting on coincidental patterns that appear in busy markets.
Gate 22: Meta-Edge Decay
The Meta-Adaptation layer continuously monitors the system's overall edge — the difference between its predicted win rate and the actual win rate over a rolling window. If the edge has been decaying consistently (indicating that the market has adapted to the system's strategy), this gate vetoes all signals until the edge recovers. This prevents the system from trading during periods when its alpha is temporarily exhausted.
Gate 23: Humanities Sentiment Check
The Humanities Cortex checks for extreme sentiment conditions that might override the mathematical layers' predictions. If the Minsky Index indicates dangerous leverage levels, BUY signals are vetoed. If the Soros Reflexivity measure indicates a self-reinforcing bubble, targets and stops are adjusted (but the signal is not vetoed, because reflexivity creates strong directional moves).
Gate 24: Capital Governor
The Capital Governor checks overall portfolio risk: maximum positions, maximum daily loss, maximum correlation concentration. If adding a new position would violate any portfolio-level risk constraint, the signal is vetoed regardless of its individual quality. This is the last risk management check before the final ML gate.
Gate 25: XGBoost ML Veto
The final gate. The complete 128-dimensional market state is fed into the XGBoost gradient-boosted classifier, which returns a win probability. The probability must exceed 0.70 (70%) for the signal to survive. This gate catches any signals that passed the earlier gates through edge cases or unusual combinations of marginally passing scores. The XGBoost model was trained on 5.17 million historical state vectors and has been validated to produce calibrated probabilities — when it says 0.70, the actual win rate is approximately 72-75%.
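The gate itself reduces to a threshold on the model's output. In the sketch below, `predict_proba` is any callable mapping a 128-dimensional state vector to P(win); wiring in the actual trained XGBoost classifier is left abstract.

```python
def xgb_gate(features, predict_proba, threshold=0.70):
    """Final ML veto: pass only if the model's win probability meets
    the 0.70 production threshold.

    features      : 128-d market state vector
    predict_proba : callable returning P(win) for that vector; in
                    production this wraps the trained XGBoost model.
    Returns (passed, probability) so callers can log the score.
    """
    p = predict_proba(features)
    return (p >= threshold), p
```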
Only signals that survive all 25 gates are published. This extreme selectivity is the primary driver of KAIROS's precision: by ruthlessly killing every signal that doesn't meet every criterion simultaneously, the system ensures that published signals represent the highest-conviction opportunities across multiple independent analytical frameworks.
Appendix J — Webhook Integration Examples
J.1 — Discord Webhook
Post KAIROS signals directly to a Discord channel for team notifications:
```python
import websocket
import json
import zlib
import requests

DISCORD_WEBHOOK = "https://discord.com/api/webhooks/YOUR_WEBHOOK_ID"
WS_URL = "ws://kairossignal.com:8090/ws?symbols=BTCUSDT,ETHUSDT,SOLUSDT"

def discord_notify(signal):
    direction = "🟢 LONG" if signal["direction"] == "BUY" else "🔴 SHORT"
    embed = {
        "title": f"{direction} {signal['symbol']}",
        "color": 0x00FF00 if signal["direction"] == "BUY" else 0xFF0000,
        "fields": [
            {"name": "Entry", "value": f"${signal['entry_price']:.4f}", "inline": True},
            {"name": "Target", "value": f"${signal['take_profit']:.4f}", "inline": True},
            {"name": "Stop", "value": f"${signal['stop_loss']:.4f}", "inline": True},
            {"name": "Confidence", "value": f"{signal['confidence']:.1%}", "inline": True},
            {"name": "Win Prob", "value": f"{signal['xgb_win_probability']:.1%}", "inline": True},
            {"name": "Sigma", "value": f"{signal['sigma']:.2f}σ", "inline": True},
            {"name": "Regime", "value": signal["regime"], "inline": True},
            {"name": "Expiry", "value": signal["lyapunov_expiry"], "inline": True},
        ],
        "footer": {"text": f"Proof: {signal['proof_hash'][:16]}..."}
    }
    requests.post(DISCORD_WEBHOOK, json={"embeds": [embed]})

def on_message(ws, raw):
    data = json.loads(zlib.decompress(raw))
    if data.get("type") == "signal":
        discord_notify(data["data"])

ws = websocket.WebSocketApp(WS_URL, on_message=on_message)
ws.run_forever()
```
J.2 — Telegram Bot
```python
import websocket
import json
import zlib
import requests

BOT_TOKEN = "YOUR_BOT_TOKEN"
CHAT_ID = "YOUR_CHAT_ID"
TELEGRAM_API = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
WS_URL = "ws://kairossignal.com:8090/ws?symbols=BTCUSDT,ETHUSDT,SOLUSDT"

def telegram_notify(signal):
    direction = "🟢 LONG" if signal["direction"] == "BUY" else "🔴 SHORT"
    text = f"""
{direction} *{signal['symbol']}*
📊 Entry: `${signal['entry_price']:.4f}`
🎯 Target: `${signal['take_profit']:.4f}`
🛡 Stop: `${signal['stop_loss']:.4f}`
📈 Confidence: `{signal['confidence']:.1%}`
🧠 XGB Win: `{signal['xgb_win_probability']:.1%}`
⚡ Sigma: `{signal['sigma']:.2f}σ`
🏷 Regime: `{signal['regime']}`
⏰ Expiry: `{signal['lyapunov_expiry']}`
🔐 Proof: `{signal['proof_hash'][:24]}...`
"""
    requests.post(TELEGRAM_API, json={
        "chat_id": CHAT_ID,
        "text": text,
        "parse_mode": "Markdown"
    })

def on_message(ws, raw):
    data = json.loads(zlib.decompress(raw))
    if data.get("type") == "signal":
        telegram_notify(data["data"])

ws = websocket.WebSocketApp(WS_URL, on_message=on_message)
ws.run_forever()
```
J.3 — Spreadsheet Logger (Google Sheets)
```python
import websocket
import json
import zlib
import gspread
from oauth2client.service_account import ServiceAccountCredentials

# Authenticate with Google Sheets API
scope = ['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('credentials.json', scope)
client = gspread.authorize(creds)
sheet = client.open("KAIROS Signals Log").sheet1

WS_URL = "ws://kairossignal.com:8090/ws?symbols=BTCUSDT,ETHUSDT,SOLUSDT"

def log_to_sheets(signal):
    row = [
        signal["timestamp"],
        signal["symbol"],
        signal["direction"],
        signal["entry_price"],
        signal["take_profit"],
        signal["stop_loss"],
        signal["confidence"],
        signal["xgb_win_probability"],
        signal["sigma"],
        signal["regime"],
        signal["lyapunov_expiry"],
        signal["proof_hash"]
    ]
    sheet.append_row(row)

def on_message(ws, raw):
    data = json.loads(zlib.decompress(raw))
    if data.get("type") == "signal":
        log_to_sheets(data["data"])

ws = websocket.WebSocketApp(WS_URL, on_message=on_message)
ws.run_forever()
```
J.4 — TradingView Alert Forwarder
"""
Set up a Flask server to receive KAIROS signals via webhook
and forward them as TradingView-compatible alerts.
"""
from flask import Flask, request, jsonify
import requests
import json
app = Flask(__name__)
TRADINGVIEW_WEBHOOK = "https://your-tradingview-webhook-url.com"
@app.route('/kairos-webhook', methods=['POST'])
def receive_signal():
signal = request.json
# Transform to TradingView alert format
tv_alert = {
"ticker": signal["symbol"],
"action": "buy" if signal["direction"] == "BUY" else "sell",
"contracts": 1, # Adjust based on your sizing
"price": signal["entry_price"],
"takeProfit": signal["take_profit"],
"stopLoss": signal["stop_loss"],
"comment": f"KAIROS σ={signal['sigma']:.2f} p={signal['xgb_win_probability']:.2f}"
}
# Forward to TradingView webhook
requests.post(TRADINGVIEW_WEBHOOK, json=tv_alert)
return jsonify({"status": "forwarded"}), 200
if __name__ == '__main__':
app.run(port=5000)
Appendix K — Configuration Reference
K.1 — Regime Gate Thresholds
| Regime | Confidence Gate Behavior | Monte Carlo Behavior | Position Sizing Override |
|---|---|---|---|
| BULL_TRENDING | Permissive | Baseline | 1.0x (Standard) |
| BEAR_TRENDING | Moderate | Baseline | 0.8x (Reduced) |
| RANGING_CHOP | Highly Restrictive | Elevated | 0.5x (Defensive) |
| HIGH_VOLATILITY | Restrictive | Elevated | 0.6x (Defensive) |
| CRISIS | Maximum Restriction | Peak Threshold | 0.3x (Minimum Exposure) |
| UNKNOWN | Restrictive | Elevated | 0.5x (Defensive) |
K.2 — Risk Parameters
| Parameter | Description |
|---|---|
| Risk Per Trade | Maximum capital allowed at risk per generated signal. |
| Max Daily Drawdown | Total portfolio-level loss tolerance before circuit breaker activation. |
| Asset Correlation Limit | Maximum allowable exposure to statistically correlated asset clusters. |
| ATR Stop Multiplier | Stop-loss distance expressed as a multiple of current ATR. |
| ATR Target Multiplier | Take-profit distance expressed as a multiple of current ATR. |
| Minimum XGBoost Threshold | Minimum win probability the final ML gate requires for signal validation (0.70). |
K.3 — Dead Hours and Blocked Symbols
| Dead Hours (UTC) | Blocked Symbols | Macro Blackout Events |
|---|---|---|
| 00, 03, 20, 21, 23 | AVAXUSDT, BNBUSDT, LTCUSDT, TRXUSDT | FOMC, NFP, CPI, GDP, ECB Rate Decision, BOJ Rate Decision |
K.4 — External Collector Protocols
| Data Source | Ingestion Style | Rate Limit | Access Tier | Refresh Interval |
|---|---|---|---|---|
| CoinGecko | REST Polling | — | — | Continuous |
| Helius (Solana) | RPC Firehose | — | — | Real-time |
| Birdeye | REST Polling | — | — | Continuous |
| DePINscan | REST Batch | — | — | Asynchronous |
| FRED (Macro) | REST Polling | — | — | Scheduled Event |
| OpenAQ | — | 10 req/min | Free | 15 min |
| NOAA ISD | — | Unlimited | Public | 1 hr |
| USGS Earthquake | — | Unlimited | Public | 5 min |
| OpenSky (Aviation) | — | 10 req/min | Free | 10 s |
| GDELT (News) | — | Unlimited | Public | 15 min |
| Exchange WebSockets | — | N/A | Public streaming | Real-time |
K.5 — API Endpoint Summary
| Method | Endpoint | Auth | Description |
|---|---|---|---|
| GET | /api/v1/latest-data | None | Latest market data and KPIs |
| GET | /api/v1/signals/latest | API Key | Recent signals with full metadata |
| GET | /api/v1/proof/ledger-stats | None | Public proof chain statistics |
| GET | /api/v1/proof/recent-signals | None | Public recent signals with hashes |
| GET | /api/v1/proof/verify-chain | None | Verify chain integrity |
| GET | /api/v1/depin/sectors | API Key | DePIN sector health scores |
| GET | /api/v1/depin/projects | API Key | Individual project metrics |
| GET | /api/v1/regime | API Key | Current market regime |
| WS | /ws?symbols=... | API Key | Real-time signal stream |
| GET | /health | None | System health check |
Appendix L — Case Studies: Signal Analysis
L.1 — Case Study: Multi-Layer BTC Breakout Signal
On February 12, 2026, at 14:23 UTC, KAIROS generated a BUY signal for BTCUSDT with the following characteristics: sigma 3.47, combined confidence 0.89, XGBoost win probability 0.83, regime BULL_TRENDING. This signal is notable because it demonstrates how multiple DAG layers can converge to produce a high-conviction signal that would be invisible to any single analytical framework.
What the DAG saw:
The Quantum Field layer evolved its 64-dimensional Clifford multivector and observed a tight, high-energy distribution — the wave function was concentrated around a price about 2.3% above the current level, with low uncertainty. This indicated a coiled spring: the price was in an unstable configuration likely to resolve upward. The quantum volume was 0.78 bullish, indicating strong directional conviction.
The Nash Equilibrium Solver detected institutional accumulation patterns in the order book. Large limit buy orders were being placed just below the current price and pulled milliseconds before they could be filled — a classic iceberg order pattern used by institutional buyers to accumulate without moving the market. The solver classified this as a SQUEEZE setup with 87% confidence.
The Stochastic Engine ran 1,000 simulations with the calibrated drift and volatility. 780 out of 1,000 paths hit the take-profit level before the stop-loss, yielding a Monte Carlo win probability of 0.78. This comfortably exceeded the BULL_TRENDING regime gate of 0.45.
The Harmonic Substrate detected alignment between three dominant cycles: the 4-hour momentum cycle, the daily mean-reversion cycle, and a 15-minute micro-cycle. All three were in a bullish phase simultaneously, indicating that the predicted move was supported by multiple timeframe contexts.
The Swarm Intelligence had unanimous agreement: all five sub-agents (momentum, reversal, breakout, volume, trend) voted BUY. The reversal agent's agreement was particularly notable — even the contrarian agent was bullish, suggesting that the price was not overbought despite its recent advance.
The DePIN Intelligence layer detected a 12% increase in Bitcoin hash rate over the previous 7 days, sourced from mining pool APIs. The Reality Gap Scanner identified this as a "Physical Strength / Price Lag" — the network was growing but the price hadn't yet reflected this fundamental improvement.
What the Veto Chain did:
This signal passed all 25 veto gates with considerable margin. The most interesting gate interactions:
- Slippage Model — Bitcoin's deep order book meant estimated slippage was only 0.02%, well within acceptable limits
- Lyapunov Stability — The exponent indicated a prediction horizon of 6.4 hours, giving ample time for the move to develop
- Electromagnetic Resistance — The Lorentz force calculation showed minimal limit order resistance at the target price, confirmed by the TDA layer detecting a liquidity void between $67,800 and $68,500
- Causal Inference — Pearl's Do-Calculus validated the multi-layer consensus as causal rather than spurious, with a causal boost of 1.34
- XGBoost Veto — The final ML gate returned a 0.83 win probability, well above the 0.70 threshold
Outcome: Bitcoin broke out from $67,200 and reached $68,650 within 4.2 hours, hitting the take-profit target. The Trade Autopsy system graded this as a WIN at all three measurement intervals (1h, 4h, 24h). The proof hash was committed to the immutable ledger 3.2 seconds before the signal was broadcast via WebSocket.
L.2 — Case Study: DePIN Reality Gap Signal
On February 18, 2026, at 09:15 UTC, KAIROS generated a SELL signal for a DePIN token (RNDRUSDT — Render Network) based primarily on the Reality Gap Scanner detecting a significant divergence between network health and token price.
The Reality Gap:
Over the previous 7 days, Render Network's physical health metrics had been declining: GPU active node count dropped 18%, average GPU utilization fell from 73% to 54%, and revenue per node decreased 22%. Meanwhile, the RNDR token price had increased 11% during the same period, driven by a narrative catalyst (an influencer endorsement, not a fundamental improvement).
The Reality Gap Scanner calculated a divergence score of -0.67 (negative indicating bearish physical divergence), which exceeded the -0.40 threshold for signal generation. This reduced the confidence threshold by 10% (making it easier for a bearish signal to pass) and boosted the sigma by 15%.
Supporting DAG layers:
The Nash Equilibrium Solver detected retail-driven FOMO buying (many small market orders against institutional limit sells), classified as a DUMP distribution pattern. The Humanities Cortex flagged positive reflexivity exhaustion — sentiment was still positive but price momentum was decelerating, indicating the self-reinforcing loop was about to break.
The Quantum Field layer showed a biased distribution — 65% of the probability mass was below the current price, indicating bearish quantum volume. The Stochastic Engine estimated 72% probability of hitting the downside target before the upside stop.
Outcome: RNDR declined 8.3% over the next 18 hours as the market corrected to reflect the network's actual health. The signal hit its take-profit target within 12 hours. The Trade Autopsy graded this as a WIN. This is the quintessential DePIN alpha trade — a signal that would be impossible without physical-world sensor data, because no price-based indicator would have predicted the decline. The divergence between physical reality and narrative-driven price was the alpha source.
L.3 — Case Study: Veto Chain Correctly Killing a Signal
Not every story is a win. On March 1, 2026, at 21:45 UTC, the DAG generated what appeared to be a strong BUY signal for ETHUSDT — but the veto chain killed it. Understanding why is as important as understanding why winning signals pass.
The mathematical layers had calculated a sigma of 2.8 and combined confidence of 0.72. The Swarm Intelligence had 4/5 agreement (the volume agent dissented, noting declining volume on the rally). The Monte Carlo probability was 0.58.
The signal was killed by Gate 2: Dead Hour Veto. The 21:00 UTC hour falls in a dead-hour window — the period between the US close and the Asian pre-open. Even though the mathematical layers saw potential, the system refused to generate a signal during this window because historical analysis showed a 0% win rate during dead hours.
Was the veto chain right? ETH did rally 1.2% in the next hour — but then reversed sharply and fell 3.5% overnight as Asian session sellers entered. The take-profit target of +2.8% was never reached. If the signal had been generated, it would have been a loss. The dead hour veto saved capital.
This case study illustrates a key principle: the veto chain is not about maximizing the number of winning signals. It's about maximizing the win rate of generated signals. Killing a potentially winning signal is an acceptable cost if it also kills many losing signals. The veto chain's aggressive filtering means users can trade published signals with higher confidence, knowing that each one has survived 25 independent quality checks.
Appendix M
Security & Network Architecture
M.1 — Production Network Topology
KAIROS runs on two dedicated production nodes behind Nginx reverse proxies with TLS/SSL termination. The network architecture separates data-plane and compute-plane traffic to prevent resource contention.
Node 1 (Data & API — [REDACTED])
- Nginx reverse proxy with Let's Encrypt TLS certificates (auto-renewal)
- Go API gateway listening on ports 8080 (HTTP) and 8090 (WebSocket)
- ClickHouse instance with 90+ tables
- Redis cache for real-time data and service health
- 36 data collector services
- UFW firewall: only ports 80, 443, 8080, 8090 open to the internet
Node 2 (Compute)
- Hunter V7 prediction engine (Python, high CPU/memory)
- Trader V2 execution engine
- Brain V1 intelligence and accounting
- METIS sovereign agent
- Cortex swarm coordinator
- ClickHouse client connections to Node 1
- UFW firewall: SSH only from known IPs
M.2 — Data Security
All external API traffic is encrypted with TLS 1.2/1.3 via Nginx. Internal node-to-node communication uses SSH tunnels for ClickHouse queries. API keys are rotated quarterly and stored as environment variables (not in code). The proof ledger is append-only by design — ClickHouse MergeTree engine does not support in-place updates, providing physical tamper resistance at the storage layer.
M.3 — Backup and Disaster Recovery
ClickHouse data is backed up daily to external storage. The proof ledger receives real-time replication to an off-site backup. System configuration is maintained in Git with deployment scripts for rapid reprovisioning. Recovery time objective (RTO): 2 hours for full system restoration from backups. Recovery point objective (RPO): 24 hours for market data; 0 for proof ledger (synchronous replication).
Appendix N
The KAIROS Philosophy
N.1 — Why Physical Reality Leads Price
The central thesis of KAIROS — that physical reality leads financial price — is not a novel observation, but its systematic implementation is unprecedented. The relationship between physical-world data and financial markets has been exploited informally for centuries. Commodity traders have always watched weather reports. Shipping companies have always monitored port congestion. Intelligence agencies have tracked military supply movements to predict geopolitical events.
What KAIROS does differently is automate and systematize this process across hundreds of physical data sources simultaneously, integrating the information into a unified mathematical framework that can generate actionable trading signals in real time. No human trader can monitor 100,000 sensors, 500 DePIN projects, 10 exchange order books, and the Federal Reserve's economic calendar simultaneously. But a 25-layer computational pipeline can.
The DePIN revolution has made this approach vastly more powerful. Before DePIN, physical-world sensor data was siloed in proprietary networks controlled by governments and corporations. Accessing real-time weather data, air quality measurements, or infrastructure health metrics required expensive institutional subscriptions. DePIN networks have democratized access to this data by incentivizing individuals to deploy sensors and share their data publicly.
KAIROS is one of the first platforms to recognize that this democratized physical data has direct financial applications. When a Helium hotspot goes offline in a major city, it's a data point about the network's health that directly impacts the HNT token price. When Render Network's GPU utilization spikes, it means developers are actively using the network — a bullish fundamental signal. These are not abstract indicators; they are measurements of the actual economic activity that drives token value.
N.2 — The Multi-Model Imperative
Modern quantitative finance is dominated by single-model approaches. A typical quant fund might use one primary model — whether it's machine learning, statistical arbitrage, or factor investing — and optimize that model relentlessly. The problem with this approach is model risk: every model has blind spots, and when the market enters a regime that the model doesn't handle well, performance collapses catastrophically.
KAIROS takes the opposite approach. Instead of building one perfect model, it builds 25 imperfect models that each see the market from a different mathematical perspective. The key insight is that these models' blind spots don't overlap. The Quantum Field layer might fail during high-noise environments, but the Nash Equilibrium Solver excels in exactly those conditions. The Harmonic Substrate might miss regime transitions, but the Meta-Adaptation layer is specifically designed to detect them.
The requirement for supermajority consensus across independent mathematical frameworks creates a system that is far more robust than the sum of its parts. A signal that passes through geometric algebra, game theory, stochastic calculus, Fourier analysis, swarm intelligence, causal inference, topological analysis, and gradient-boosted machine learning simultaneously represents a rare convergence of analytical certainty that no single framework could achieve alone.
This philosophy has a cost: selectivity. The multi-model consensus requirement means that KAIROS generates far fewer signals than a single-model system would: on average 2-5 per day, compared with the hundreds a less selective system might produce. But each signal carries the institutional-grade conviction that comes from multi-framework validation.
N.3 — Transparency Through Cryptography
The trading signal industry has a trust problem. Signal providers routinely cherry-pick their best trades, fabricate histories, and present backtested results as live performance. Users have no way to verify whether a provider's claimed win rate is genuine or fabricated.
KAIROS solves this with the SHA-256 immutable proof chain. Every signal is cryptographically committed — hashed with its data and the previous signal's hash — before it is published. This creates a chain of evidence that is mathematically impossible to fake retroactively. You cannot insert a winning signal into the historical record without changing all subsequent hashes, which would immediately be detected by the chain verification endpoint.
The 128-dimensional state embedding stored alongside each proof hash adds another layer of verification. Even if someone could somehow fabricate a valid hash chain (which is computationally infeasible), the state embeddings would reveal the fabrication because they encode the complete market state at signal time. If the embedding doesn't match the actual market conditions that existed at the claimed timestamp, the signal is demonstrably fraudulent.
This level of transparency is rare in the trading industry and is a deliberate design choice. KAIROS believes that the best way to build trust in a market intelligence platform is to make verification trivially easy. Every signal, every proof hash, every chain link is publicly queryable. Users don't have to trust KAIROS's claims — they can verify them independently using the publicly available API endpoints and the SHA-256 verification algorithm documented in Chapter 6.
N.4 — The Future of Physical-Financial Convergence
KAIROS represents an early implementation of a broader trend: the convergence of physical-world data with financial market intelligence. As DePIN networks continue to grow (500+ projects today, with hundreds more launching every quarter), the volume and variety of physical-world data available for financial analysis will explode. Future versions of the platform may incorporate additional data sources: satellite imagery for crop yield estimation, IoT data from industrial networks, autonomous vehicle traffic patterns for retail activity estimation, and carbon credit market data from sustainability networks.
The mathematical framework — a multi-layer DAG of independent analytical engines with consensus-based signal generation and cryptographic proof of integrity — is designed to absorb these new data sources as they become available. Adding a new data source doesn't require rebuilding the architecture; it requires adding a new collector service and potentially a new DAG layer that specializes in extracting signal from that domain.
The vision is a platform that sees the world as it actually is — not through the narrow lens of a price chart, but through the vast network of physical sensors, blockchain transactions, economic indicators, and social signals that collectively determine where financial value flows. KAIROS doesn't predict prices in isolation; it models the physical and informational forces that create price movements, and then translates those forces into actionable intelligence.
This is what institutional-grade market intelligence looks like in 2026: not faster trading, not more data, but deeper understanding of the physical reality that underlies every financial market. KAIROS is the platform built to deliver that understanding.
Appendix O
The METIS Agent: Autonomous Operations
O.1 — Sovereign Agent Architecture
METIS is not a simple chatbot or monitoring tool. It is a Sovereign Agent — an autonomous AI system that manages the operational intelligence layer of the KAIROS platform. The term "sovereign" reflects METIS's design philosophy: it operates independently, makes decisions based on its own observations, and takes action without requiring human intervention for routine operations.
METIS's capabilities can be grouped into five operational domains:
System Health Monitoring
METIS continuously monitors all 36+ production services across both nodes. For each service, it tracks: process status (running, stopped, crashed), CPU and memory usage, last output timestamp, error rate, and restart count. When a service crashes or becomes unresponsive, METIS can automatically restart it through systemd commands. If a restart fails three times, METIS escalates by logging a critical alert and suspending dependent services to prevent cascading failures.
The monitoring extends to ClickHouse health: table sizes, query latencies, background merge progress, and disk utilization. When a table grows beyond its expected size (indicating a TTL failure or data pipeline issue), METIS triggers a cleanup operation. When query latency exceeds the threshold, METIS identifies and kills the offending query.
Intelligence Briefings
Every hour, METIS generates a structured intelligence briefing that synthesizes data from across the platform. The briefing follows a two-stage pipeline:
Stage 1 uses the Kimi AI model to process raw data: recent signals and their outcomes, current market regime, DePIN network health changes, sensor anomalies, system performance metrics, and any errors or warnings from the service logs. Kimi generates a semi-structured analysis that covers the key developments in the past hour.
Stage 2 uses the DeepSeek model to synthesize the Kimi output into a strategic narrative. DeepSeek adds historical context (comparing the current situation to similar past conditions), strategic recommendations (suggesting regime-appropriate adjustments), and risk assessments (identifying potential problems before they materialize).
The result is a briefing that reads like an intelligence report from a financial operations center: concise, data-driven, and actionable. These briefings are permanently stored in ClickHouse for historical analysis and can be queried through the METIS chat interface.
Chat Journaling and RAG
All METIS interactions — both automated briefings and operator chat sessions — are permanently journaled to ClickHouse. This creates a comprehensive operational history that can be queried using natural language. The journaling uses a dual storage architecture: ClickHouse for structured queries (timestamps, categories, metrics) and ChromaDB for vector-based semantic search (RAG).
This means an operator can ask METIS: "What happened to the Helius firehose service last Tuesday?" and METIS will retrieve the relevant journal entries, service logs, and any associated briefings from that time period. The ChromaDB vector store enables semantic matching even if the exact terminology differs from the stored data.
Schema Healing
ClickHouse schema drift is a persistent operational challenge. When a collector service is updated to produce new fields, the ClickHouse table might not have the corresponding columns, causing insert failures that silently drop data. METIS addresses this through the "Tin Man Actual" schema healing protocol.
When METIS detects insert failures caused by schema mismatches, it: (1) identifies the missing columns by comparing the insert statement's fields against the table schema, (2) generates ALTER TABLE statements to add the missing columns with appropriate data types, (3) tests the ALTER statements in a sandbox, (4) applies the changes to production, and (5) verifies that subsequent inserts succeed. This entire process is automated and typically completes in under 60 seconds.
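The diffing step (1)-(2) of this protocol can be sketched as a simple set comparison. The table name, field names, and ClickHouse column types below are illustrative assumptions, not the production schema; sandbox testing and the production apply step (3)-(5) are omitted:

```python
def plan_schema_heal(insert_fields, table_columns, table="signals"):
    """Compare the fields an insert supplies against the live table's
    columns and emit ALTER TABLE statements for any that are missing.
    insert_fields maps field name -> ClickHouse type; table_columns is
    the set of column names currently on the table."""
    missing = {name: ch_type for name, ch_type in insert_fields.items()
               if name not in table_columns}
    return [
        f"ALTER TABLE {table} ADD COLUMN IF NOT EXISTS {name} {ch_type}"
        for name, ch_type in sorted(missing.items())
    ]
```

Running the planner against a table that lacks, say, a `regime` column would yield a single `ALTER TABLE signals ADD COLUMN IF NOT EXISTS regime String` statement to test and apply.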
Dashboard Management
METIS powers the operator dashboard at kairossignal.com with real-time data feeds. It aggregates KPI data from ClickHouse, calculates trend indicators, and formats the data for the frontend. The dashboard includes: overall trade performance (P&L, win rate, Sharpe ratio), DePIN project rankings, sensor network health, API usage metrics, and proof chain integrity status.
O.2 — Operator Interaction Patterns
The METIS chat interface supports several interaction patterns:
System Status Queries
Operator: "What's the current system health?"
METIS: "All 36 services running. ClickHouse: 92GB used, query latency 12ms avg.
Hunter V7: 3 signals generated today (2 WIN, 1 pending).
DePIN sensors: 98,450 active (99.2% coverage).
Proof chain: intact, 14,230 entries total."
Historical Analysis
Operator: "Why did we generate no signals yesterday?"
METIS: "Yesterday's regime was RANGING_CHOP from 00:00 to 18:00 UTC,
then HIGH_VOLATILITY from 18:00 to 24:00 UTC. The RANGING_CHOP
regime has a confidence gate of 0.90, which no candidate signal
exceeded. The HIGH_VOLATILITY period had two candidates that
failed the Fisher-Rao information geometry gate due to rapid
statistical manifold curvature changes."
Diagnostic Commands
Operator: "Check DePIN collector health"
METIS: "deep-depin-intel: Running, last output 2m ago, 0 errors
depin-deep-intel-v2: Running, last output 45s ago, 0 errors
birdeye-collector: Running, last output 30s ago, 0 errors
depinscan-collector: WARNING - last output 15m ago, 3 errors
Investigating depinscan-collector...
Error: HTTP 429 Too Many Requests from DePINscan API.
Rate limit exceeded. Backing off for 5 minutes."
Appendix P
Mathematical Layer Formulas
P.1 — Quantum Field Evolution (Schrödinger SDE)
dψ = -α(ψ - μ)dt + σ·dW
Where:
ψ = 64-dimensional Clifford multivector
α = mean reversion strength (0.30 crypto, 0.15 TradFi)
μ = current moving average (expected value)
σ = diffusion parameter (0.25 crypto, 0.08 TradFi)
dW = Wiener process increment ~ N(0, dt)
dt = time step
Evolved for 50 steps using the Euler-Maruyama method.
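A minimal NumPy sketch of this evolution, treating the multivector as a plain 64-component vector (the Clifford algebra structure is out of scope here); parameter defaults follow the crypto values above, and the time step `dt` is an illustrative assumption:

```python
import numpy as np

def evolve_field(psi0, mu, alpha=0.30, sigma=0.25, dt=0.01, steps=50, seed=0):
    """Euler-Maruyama integration of the mean-reverting SDE
    dpsi = -alpha*(psi - mu)*dt + sigma*dW, applied componentwise
    to a 64-dimensional state vector."""
    rng = np.random.default_rng(seed)
    psi = np.asarray(psi0, dtype=float).copy()
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=psi.shape)  # Wiener increment ~ N(0, dt)
        psi += -alpha * (psi - mu) * dt + sigma * dW
    return psi
```

Each step pulls the field toward the moving average μ at rate α while the diffusion term injects noise; over 50 steps the mean decays toward μ by roughly a factor of (1 − α·dt)⁵⁰.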
P.2 — Proof Hash Computation
proof_hash = SHA-256(
prev_hash
+ timestamp (ISO 8601)
+ symbol
+ direction
+ entry_price (8 decimal precision)
+ take_profit (8 decimal precision)
+ stop_loss (8 decimal precision)
+ confidence (6 decimal precision)
+ sigma (6 decimal precision)
+ regime
)
Genesis: prev_hash = "GENESIS" for first entry
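The computation translates to a few lines of Python. The delimiter-free UTF-8 concatenation and the fixed-point formatting below are assumptions consistent with the formula above; the authoritative serialization is whatever the verification endpoint documented in Chapter 6 specifies:

```python
import hashlib

def compute_proof_hash(prev_hash, signal):
    """Recompute a signal's proof hash per Appendix P.2: SHA-256 over
    the previous hash concatenated with the signal's fields at the
    stated precisions (8 decimals for prices, 6 for confidence/sigma)."""
    payload = (
        prev_hash
        + signal["timestamp"]            # ISO 8601
        + signal["symbol"]
        + signal["direction"]
        + f'{signal["entry_price"]:.8f}'
        + f'{signal["take_profit"]:.8f}'
        + f'{signal["stop_loss"]:.8f}'
        + f'{signal["confidence"]:.6f}'
        + f'{signal["sigma"]:.6f}'
        + signal["regime"]
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

Because each hash folds in `prev_hash`, altering any historical field changes every subsequent hash in the chain, which is what makes retroactive tampering detectable.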
P.3 — ATR-Based Stop/Target
ATR_14 = 14-period Average True Range
stop_loss_distance = ATR_14 × 2.0
take_profit_distance = ATR_14 × 4.16 (ratio = 2.08:1)
For BUY:
stop_loss = entry_price - stop_loss_distance
take_profit = entry_price + take_profit_distance
For SELL:
stop_loss = entry_price + stop_loss_distance
take_profit = entry_price - take_profit_distance
P.4 — Sigma Calculation
sigma = (combined_score - noise_mean) / noise_std
Where:
combined_score = Σ(layer_weight × layer_confidence) across all active layers
noise_mean = rolling mean of combined_score over calibration window
noise_std = rolling std of combined_score over calibration window
Signal threshold: σ ≥ 1.50
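A sketch of the calculation; the layer weights, confidences, and calibration history below are illustrative inputs, not production values:

```python
import numpy as np

def compute_sigma(layer_weights, layer_confidences, score_history):
    """Sigma per P.4: z-score of the weighted combined layer score
    against the noise distribution from the calibration window."""
    combined = float(np.dot(layer_weights, layer_confidences))
    noise_mean = float(np.mean(score_history))
    noise_std = float(np.std(score_history))
    return (combined - noise_mean) / noise_std
```

A sigma of 1.50 therefore means the current combined score sits 1.5 standard deviations above what the calibration window treats as noise.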
P.5 — Monte Carlo Win Probability
For n = 1 to 1000:
price = entry_price
For t = 1 to T:
price += drift × dt + volatility × √dt × Z
Where Z ~ N(0, 1)
If BUY: wins[n] = (max_price_path ≥ take_profit before min_price_path ≤ stop_loss)
If SELL: wins[n] = (min_price_path ≤ take_profit before max_price_path ≥ stop_loss)
win_probability = sum(wins) / 1000
Adjustments:
- Slippage: reduce favorable distance, increase unfavorable distance
- Drift bias: adjust drift parameter based on recent momentum
- Time decay: scale by √(T/T_ref) per Brownian motion scaling
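A stripped-down sketch of the simulation loop, using the additive price steps of the pseudocode above; the slippage, drift-bias, and time-decay adjustments are omitted, and all parameters are illustrative:

```python
import numpy as np

def mc_win_probability(entry, take_profit, stop_loss, direction, vol,
                       drift=0.0, n_paths=1000, n_steps=100, dt=1/24, seed=0):
    """First-passage Monte Carlo per P.5: simulate paths and count
    how often take-profit is hit before stop-loss."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_paths):
        price = entry
        for _ in range(n_steps):
            price += drift * dt + vol * np.sqrt(dt) * rng.normal()
            if direction == "BUY":
                if price >= take_profit:   # target reached first
                    wins += 1
                    break
                if price <= stop_loss:     # stopped out
                    break
            else:  # SELL: target below entry, stop above
                if price <= take_profit:
                    wins += 1
                    break
                if price >= stop_loss:
                    break
    return wins / n_paths
```

With a driftless walk, the win probability is governed by the relative barrier distances: a target twice as close as the stop yields roughly a two-thirds hit rate, which is why the 2.08:1 reward/risk geometry of P.3 demands genuine directional edge to clear the 0.55 Monte Carlo gate.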
P.6 — Lyapunov Exponent
λ = lim(t→∞) [1/t × ln(|δ(t)/δ(0)|)]
Where:
δ(t) = separation between nearby trajectories at time t
δ(0) = initial separation (infinitesimal)
Estimated via embedding delay:
- Reconstruct phase space from price series using delay embedding
- Track divergence of nearby phase-space trajectories
- Positive λ = chaotic (prediction horizon limited)
- Near-zero λ = stable (prediction horizon extended)
Prediction horizon ∝ 1/λ
Minimum horizon for signal: 15 minutes
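The delay-embedding estimation can be sketched Rosenstein-style: embed the series, find each point's nearest phase-space neighbour (excluding temporally close points), and fit the slope of the average log divergence. The embedding dimension, delay, and exclusion window below are assumed parameters, not the production configuration:

```python
import numpy as np

def lyapunov_exponent(prices, dim=5, tau=2, horizon=20, theiler=10):
    """Largest-Lyapunov-exponent estimate via delay embedding."""
    x = np.log(np.asarray(prices, dtype=float))
    n = len(x) - (dim - 1) * tau
    # Reconstruct phase space: rows are delay vectors
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    usable = n - horizon
    log_div = np.zeros(horizon)
    counts = np.zeros(horizon)
    for i in range(usable):
        d = np.linalg.norm(emb[:usable] - emb[i], axis=1)
        d[max(0, i - theiler) : i + theiler + 1] = np.inf  # Theiler exclusion
        j = int(np.argmin(d))                              # nearest neighbour
        for k in range(1, horizon):
            sep = np.linalg.norm(emb[i + k] - emb[j + k])
            if sep > 0:
                log_div[k] += np.log(sep)
                counts[k] += 1
    mean_div = log_div[1:] / np.maximum(counts[1:], 1)
    # Slope of mean log-divergence vs. time ~ largest Lyapunov exponent
    return float(np.polyfit(np.arange(1, horizon), mean_div, 1)[0])
```

A clearly positive slope means neighbouring trajectories diverge exponentially (chaos, short horizon); a near-zero slope supports the extended prediction horizon described above.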
P.7 — Consensus Calculus
required_consensus = floor(total_active_layers × consensus_fraction)
consensus_fraction = base_fraction + volatility_adjustment
Where:
base_fraction = 0.60 (60% of layers must agree)
volatility_adjustment = 0.10 × (current_vol / historical_vol - 1.0)
(clamped to [-0.10, +0.20])
High volatility → up to 80% consensus required
Low volatility → down to 50% consensus required
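The rule is a one-liner with a clamp:

```python
import math

def required_consensus(total_active_layers, current_vol, historical_vol,
                       base_fraction=0.60):
    """Number of agreeing layers required per P.7. The volatility
    adjustment raises the bar in turbulent markets and relaxes it in
    quiet ones, clamped to [-0.10, +0.20]."""
    adjustment = 0.10 * (current_vol / historical_vol - 1.0)
    adjustment = max(-0.10, min(0.20, adjustment))
    return math.floor(total_active_layers * (base_fraction + adjustment))
```

With 25 active layers at baseline volatility the gate requires 15 agreeing layers; a volatility collapse relaxes it toward 12, while a sustained spike pushes the fraction toward the 80% ceiling.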
P.8 — Reality Gap Score
reality_gap = physical_health_z - price_momentum_z
Where:
physical_health_z = z-score of DePIN health composite
health_composite = w₁(node_count_Δ7d) + w₂(utilization_Δ7d) + w₃(revenue_Δ7d)
Weights: w₁=0.35, w₂=0.40, w₃=0.25
price_momentum_z = z-score of 7-day price return
Interpretation:
reality_gap > +0.40 → Physical undervalued (bullish divergence)
reality_gap < -0.40 → Physical overvalued (bearish divergence)
|reality_gap| < 0.40 → No significant divergence
Effects:
- Confidence threshold adjusted by ±10%
- Sigma boosted by ±15%
- Regime gate adjusted by ±10%
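A sketch of the score itself; the calibration means and standard deviations are passed in here (in production they come from the rolling calibration window), and the example values in the usage note are illustrative, loosely based on the RNDR case study:

```python
def reality_gap(node_delta_7d, util_delta_7d, rev_delta_7d,
                price_return_7d, health_mean, health_std,
                price_mean, price_std):
    """Reality Gap score per P.8: z-scored physical-health composite
    minus z-scored 7-day price momentum."""
    composite = (0.35 * node_delta_7d      # w1: node count change
                 + 0.40 * util_delta_7d    # w2: utilization change
                 + 0.25 * rev_delta_7d)    # w3: revenue change
    physical_z = (composite - health_mean) / health_std
    momentum_z = (price_return_7d - price_mean) / price_std
    return physical_z - momentum_z
```

Feeding in declining physical deltas against a positive 7-day price return (as in the RNDR case) drives the score well below the -0.40 bearish-divergence threshold.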
Appendix Q
Exchange Coverage & Instrument Guide
Q.1 — Supported Exchanges
KAIROS ingests real-time market data from the following exchanges through the Dataslut HFT engine:
| Exchange | Connection Type | Data Types | Instruments |
|---|---|---|---|
| Binance | WebSocket + REST | Trades, order book, klines, liquidations, funding | 40+ USDT pairs |
| Coinbase Pro | WebSocket | Trades, order book L2 | 20+ USD pairs |
| Kraken | WebSocket | Trades, OHLCV, spread | 15+ pairs |
| OKX | WebSocket | Trades, order book, funding rates | 25+ USDT pairs |
| Bybit | WebSocket | Trades, order book, liquidations | 30+ USDT pairs |
| Bitfinex | WebSocket | Trades, order book | 10+ USD pairs |
| KuCoin | WebSocket | Trades, order book L2 | 15+ USDT pairs |
| Gate.io | WebSocket | Trades, USDT pairs | 20+ pairs |
| MEXC | WebSocket | Trades, order book | 10+ USDT pairs |
| HTX (Huobi) | WebSocket | Trades, order book | 10+ USDT pairs |
Multi-exchange data ingestion serves two purposes: (1) price redundancy — if one exchange goes offline, the system continues operating on data from the remaining exchanges, and (2) cross-exchange arbitrage detection — price discrepancies between exchanges can indicate whale activity, liquidity events, or data quality issues.
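The second purpose, cross-exchange discrepancy detection, reduces to comparing the same instrument's price across venues. A minimal sketch with illustrative prices (thresholds for flagging whale activity versus bad feeds are a separate policy question):

```python
def cross_exchange_discrepancy(last_prices):
    """Given last-trade prices keyed by exchange, return the widest
    relative spread and the exchange pair producing it. A large spread
    can indicate whale activity, a liquidity event, or a bad feed on
    one venue."""
    ranked = sorted(last_prices.items(), key=lambda kv: kv[1])
    (low_ex, low_px), (high_ex, high_px) = ranked[0], ranked[-1]
    return (high_px - low_px) / low_px, low_ex, high_ex
```

A spread of a few basis points is normal arbitrage noise; a spread of several percent on a liquid pair usually means one feed is stale and should be quarantined rather than traded on.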
Q.2 — Instrument Categories
The 80+ tradeable instruments are organized into categories, each with distinct risk characteristics and data quality profiles:
Large-Cap Crypto (20 instruments)
Assets with market capitalization above $5 billion. These have the deepest order books, lowest slippage, and most reliable data quality. They include: BTC, ETH, SOL, XRP, ADA, DOT, LINK, AVAX (blocked), UNI, AAVE, MATIC, ATOM, NEAR, APT, ARB, OP, DOGE, SHIB, and others. These assets produce the highest-frequency signals because the deep liquidity means the slippage gate rarely vetoes them.
DePIN Tokens (30+ instruments)
Tokens associated with Decentralized Physical Infrastructure Networks. These include: HNT (Helium), RNDR (Render), FIL (Filecoin), AR (Arweave), THETA (Theta), AKT (Akash), MOBILE (Helium Mobile), IOT (Helium IoT), HIVEMAP (Hivemapper), and many others. These are the assets where KAIROS has the strongest edge because the DePIN intelligence layer provides fundamental data unavailable to other signal providers. However, many DePIN tokens have thin liquidity, meaning the slippage model may reduce position sizes or veto signals entirely.
Commodities (8 instruments)
Traditional commodities traded as crypto-denominated pairs or CFDs: XAUUSDT (gold), XAGUSDT (silver), OILUSDT (crude oil), NGUSDT (natural gas), and agricultural baskets. These assets are particularly interesting for KAIROS because the physical sensor network provides direct data relevance — weather sensors predict energy demand, shipping sensors predict supply disruptions, and seismic sensors predict production risks.
Forex (10 instruments)
Major and minor forex pairs: EURUSDT, GBPUSDT, JPYUSDT, CHFUSDT, AUDUSDT, and others. These offer the lowest volatility but the deepest liquidity of the instrument categories. Forex signals are driven primarily by the macro data layer (FRED economic indicators, central bank rate decisions) and the Harmonic Substrate (which excels at detecting the cyclic patterns characteristic of forex markets).
Micro-Cap DePIN (15+ instruments)
Smaller DePIN tokens with market caps below $100 million. These offer the highest potential returns but also the highest risk. The slippage model is critical for these assets — a $10,000 order on a micro-cap DePIN token might move the price 2-3%, making the signal unprofitable before it even executes. KAIROS automatically reduces position sizes for thin-market assets and vetoes signals where estimated slippage exceeds the risk/reward threshold.
Q.3 — Instrument Selection Process
Not every available trading pair is included in the KAIROS allowed universe. Instruments must meet several criteria to be included:
- Data quality — The exchange must provide reliable, low-latency WebSocket data for the pair. Pairs with frequent data gaps, stale quotes, or known manipulation issues are excluded.
- Minimum liquidity — The pair must have sufficient 24-hour volume to support a minimum position size of $1,000 without exceeding 0.5% slippage. This threshold eliminates most micro-cap tokens but retains larger DePIN projects.
- Historical performance — The KAIROS backtesting engine must show a positive expected value for signals on the pair across multiple market regimes. Pairs with consistent negative expectancy (like AVAX, BNB, LTC, TRX) are permanently blocked.
- Data source coverage — For DePIN tokens, the relevant network must have sufficient data coverage in the KAIROS sensor network to enable meaningful Reality Gap analysis. A DePIN token with no available network health data has no DePIN intelligence edge and is treated as a regular crypto asset.
- Regulatory status — Assets classified as securities, or tokens under active regulatory action, are excluded to avoid compliance risk for users.
The instrument universe is reviewed monthly and adjusted based on ongoing performance analysis, liquidity changes, and new DePIN project launches.
Appendix R
Changelog & Version History
Version 7.5 (March 2026) — Current
- Added Electromagnetic Field layer (Lorentz force limit order wall detection)
- Expanded DePIN coverage to 500+ projects
- XGBoost model retrained on 5.17M embeddings with improved label pipeline
- Added 128D state embedding storage for complete auditability
- Implemented Humanities Cortex (Soros reflexivity, Kahneman prospect theory, Minsky instability)
- Added Fisher-Rao information geometry veto gate
- Integrated Topological Data Analysis (TDA) Betti void detection
- Expanded sensor network to 100,000+ global sensors
- Added real-time SHA-256 proof chain with public verification API
- Launched 90-day live trial (March 6 - June 4, 2026)
Version 7.0 (January 2026)
- Hunter V7 Architecture — 25 mathematical layers across 13 execution nodes
- Added Nash Equilibrium Solver with predatory detection
- Added Quantum Field (Schrödinger wave function) layer
- Implemented Lyapunov exponent for dynamic prediction horizons
- Added Co-Evolution Framework (KSIG 17D state tracking)
- Expanded to 80+ tradeable instruments
Version 6.0 (November 2025)
- Go-based API gateway (replacing Python Flask)
- Dataslut HFT engine (replacing Python WebSocket client)
- ClickHouse migration (from PostgreSQL)
- Added Hebbian synaptic learning feedback loop
- DePIN intelligence integration (first version, 200 projects)
Version 5.0 (August 2025)
- Multi-layer prediction engine (8 layers)
- Monte Carlo win probability estimation
- ATR-based stop/target placement
- Regime classification system
- Dead hour veto chain
Version 4.0 (May 2025)
- Initial Hunter engine with 4 mathematical layers
- Basic signal generation for crypto majors
- Manual proof tracking (replaced by SHA-256 chain in v7.5)
Appendix S
Comprehensive API Response Examples
S.1 — Latest Data Response
A complete response from GET /api/v1/latest-data:
{
"status": "ok",
"timestamp": "2026-03-09T14:23:47Z",
"data": {
"market_overview": {
"btc_price": 67234.50,
"btc_24h_change": 2.34,
"eth_price": 3456.78,
"eth_24h_change": 1.89,
"sol_price": 187.42,
"sol_24h_change": 4.12,
"total_crypto_market_cap": "2.45T",
"btc_dominance": 52.3,
"fear_greed_index": 68,
"market_regime": "BULL_TRENDING"
},
"signal_stats": {
"total_signals_24h": 4,
"wins_24h": 3,
"losses_24h": 0,
"pending_24h": 1,
"win_rate_7d": 0.72,
"win_rate_30d": 0.68,
"avg_sigma_24h": 2.87,
"avg_confidence_24h": 0.82,
"total_proof_entries": 14230,
"chain_status": "INTACT"
},
"depin_health": {
"overall_score": 0.71,
"sector_scores": {
"compute": 0.82,
"storage": 0.64,
"wireless": 0.59,
"mapping": 0.73,
"energy": 0.68,
"ai": 0.77,
"sensors": 0.65
},
"top_movers": [
{"project": "Render", "score_delta_7d": +0.12},
{"project": "Akash", "score_delta_7d": +0.08},
{"project": "Filecoin", "score_delta_7d": -0.11}
]
},
"system_health": {
"services_running": 36,
"services_total": 36,
"clickhouse_size_gb": 92.4,
"clickhouse_query_latency_ms": 12,
"sensors_active": 98450,
"api_requests_1h": 2847,
"last_signal_time": "2026-03-09T13:45:12Z",
"uptime_hours": 720
}
}
}
S.2 — Signal Response (via REST)
A complete signal object from GET /api/v1/signals/latest:
{
"id": "sig_20260309_142347_BTCUSDT",
"timestamp": "2026-03-09T14:23:47.234Z",
"symbol": "BTCUSDT",
"direction": "BUY",
"entry_price": 67234.50,
"take_profit": 68890.12,
"stop_loss": 66445.30,
"confidence": 0.8923,
"sigma": 3.47,
"xgb_win_probability": 0.8312,
"regime": "BULL_TRENDING",
"lyapunov_expiry": "2026-03-09T20:40:00Z",
"proof_hash": "a7c2f1e8d3b4a5c6e7f8091a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f90a1b",
"prev_hash": "f8e7d6c5b4a3928170f6e5d4c3b2a19087f6e5d4c3b2a19087f6e5d4c3b2a190",
"chain_valid": true,
"contributing_layers": [
"QUANTUM_BREAKOUT",
"NASH_ACCUMULATION",
"HARMONIC_TRIPLE_ALIGN",
"SWARM_UNANIMOUS",
"DEPIN_BULLISH"
],
"risk_reward_ratio": 2.08,
"atr_14": 789.32,
"position_size_pct": 2.0,
"notes": "Multi-layer convergence: Quantum high-energy breakout + Nash institutional accumulation + harmonic triple-cycle alignment. DePIN BTC hash rate +12% supporting.",
"state_embedding_128d": [0.42, -0.18, 0.73, 0.11, ...],
"autopsy": {
"grade_1h": "WIN",
"grade_4h": "WIN",
"grade_24h": "PENDING",
"price_at_1h": 67890.45,
"price_at_4h": 68450.22,
"max_favorable_excursion": 1812.50,
"max_adverse_excursion": 234.10
}
}
S.3 — WebSocket Message Types
The WebSocket stream at ws://kairossignal.com:8090/ws?symbols=BTCUSDT,ETHUSDT produces four types of messages, all zlib-compressed JSON:
Type: "heartbeat" — Sent every 30 seconds to maintain the connection:
{
"type": "heartbeat",
"timestamp": "2026-03-09T14:23:47Z",
"server_time_utc": 1773066227,
"active_symbols": ["BTCUSDT", "ETHUSDT"],
"system_status": "operational"
}
Type: "tick" — Real-time price updates (high frequency, multiple per second):
{
"type": "tick",
"symbol": "BTCUSDT",
"price": 67234.50,
"volume_24h": 1234567890.50,
"bid": 67234.20,
"ask": 67234.80,
"timestamp": "2026-03-09T14:23:47.234Z"
}
Type: "signal" — New trading signal generated (rare, 2-5 per day):
{
"type": "signal",
"data": {
"symbol": "BTCUSDT",
"direction": "BUY",
"entry_price": 67234.50,
"take_profit": 68890.12,
"stop_loss": 66445.30,
"confidence": 0.8923,
"sigma": 3.47,
"xgb_win_probability": 0.8312,
"regime": "BULL_TRENDING",
"lyapunov_expiry": "2026-03-09T20:40:00Z",
"proof_hash": "a7c2f1e8d3b4a5c6...",
"notes": "Multi-layer convergence..."
}
}
Type: "regime_change" — Market regime transition detected:
{
"type": "regime_change",
"from": "BULL_TRENDING",
"to": "HIGH_VOLATILITY",
"timestamp": "2026-03-09T14:23:47Z",
"reason": "ATR expansion exceeds 2.5x 20-day mean + VIX equivalent spike",
"new_confidence_gate": 0.85,
"new_mc_gate": 0.60
}
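Since every frame is zlib-compressed JSON, a client needs only a small decode-and-dispatch step before acting on any of the four message types. A minimal sketch (the transport layer that actually connects to the endpoint is omitted):

```python
import json
import zlib

def decode_ws_message(raw_frame):
    """Decompress and parse one KAIROS WebSocket frame, then dispatch
    on its 'type' field. Signal frames carry their payload under a
    nested 'data' key (see S.3); other types are returned whole.
    Unknown types pass through so new message types don't break clients."""
    msg = json.loads(zlib.decompress(raw_frame).decode("utf-8"))
    msg_type = msg.get("type", "unknown")
    if msg_type == "signal":
        return msg_type, msg["data"]
    return msg_type, msg
```

For testing without a live connection, a frame can be simulated with `zlib.compress(json.dumps(payload).encode())`.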
S.4 — Proof Chain Verification Response
{
"status": "ok",
"chain_integrity": "INTACT",
"total_entries": 14230,
"first_entry": "2026-01-15T00:12:34Z",
"last_entry": "2026-03-09T13:45:12Z",
"genesis_hash": "GENESIS",
"latest_hash": "a7c2f1e8d3b4a5c6e7f8091a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f90a1b",
"breaks_detected": 0,
"verification": {
"entries_checked": 25,
"all_valid": true,
"verification_time_ms": 187,
"sample_verifications": [
{
"entry_id": 14230,
"computed_hash": "a7c2f1e8d3b4...",
"stored_hash": "a7c2f1e8d3b4...",
"match": true
},
{
"entry_id": 14229,
"computed_hash": "f8e7d6c5b4a3...",
"stored_hash": "f8e7d6c5b4a3...",
"match": true
}
]
},
"autopsy_summary": {
"total_graded": 13850,
"wins": 9450,
"losses": 4400,
"win_rate": 0.682,
"avg_sigma_winners": 2.89,
"avg_sigma_losers": 2.12
}
}
Appendix T
Getting Started: Your First Week with KAIROS
T.1 — Day 1: API Access and Verification
Your first step after receiving API credentials should be to verify that you can connect to the platform and that the proof chain demonstrates genuine performance. Start by querying the public proof endpoints — these require no authentication and provide immediate visibility into the system's track record.
Open your terminal and run the following curl command to fetch the latest proof chain statistics:
curl -s https://kairossignal.com/api/v1/proof/ledger-stats | python3 -m json.tool
The response will show you the total number of signals ever generated, the chain integrity status, and the autopsy summary with win/loss counts and overall win rate. This is your first trust verification: these numbers are cryptographically committed before broadcast, so they cannot be retroactively altered without breaking the chain.
Next, fetch the most recent signals to see the data format:
curl -s https://kairossignal.com/api/v1/proof/recent-signals | python3 -m json.tool
Review the signal fields. Pay attention to the confidence, sigma, and xgb_win_probability fields — these are the key quality indicators you will use to assess signal strength. Note the proof_hash and prev_hash fields — these form the cryptographic chain that guarantees integrity.
Finally, verify the chain yourself by picking two consecutive signals and manually computing the SHA-256 hash of the second signal's data concatenated with the first signal's hash. If your computed hash matches the stored hash, you have independently verified the chain's integrity. The exact algorithm is documented in Chapter 6, Section 6.2.
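A minimal sketch of that manual check in Python follows. The exact serialization is documented in Chapter 6, Section 6.2; here we assume canonical JSON (sorted keys, compact separators) of the signal data concatenated with the previous entry's hash, so treat the helper as illustrative rather than authoritative.

```python
import hashlib
import json

def compute_chain_hash(signal_data: dict, prev_hash: str) -> str:
    # ASSUMPTION: canonical JSON of the signal data + previous hash.
    # Chapter 6, Section 6.2 defines the exact serialization.
    payload = json.dumps(signal_data, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((payload + prev_hash).encode()).hexdigest()

def verify_link(entry: dict, prev_entry: dict) -> bool:
    """Recompute the later entry's hash and compare it to the stored value."""
    return compute_chain_hash(entry["data"], prev_entry["proof_hash"]) == entry["proof_hash"]

# Synthetic two-entry chain for illustration:
first = {"data": {"symbol": "BTCUSDT", "direction": "BUY"}}
first["proof_hash"] = compute_chain_hash(first["data"], "GENESIS")
second = {"data": {"symbol": "ETHUSDT", "direction": "SELL"}}
second["proof_hash"] = compute_chain_hash(second["data"], first["proof_hash"])

print(verify_link(second, first))  # True
```

If your recomputed hash disagrees with the stored one for real ledger entries, first confirm you are using the serialization from Chapter 6 before suspecting a chain break.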
T.2 — Day 2: WebSocket Integration
On your second day, set up a persistent WebSocket connection to receive real-time signals. The WebSocket endpoint provides the lowest-latency access to new signals — typically within 100ms of generation. This is important for time-sensitive trading because the first few seconds after a signal is published are often when the market begins to move.
Start with a simple Python script using the websocket-client library. Connect to the WebSocket endpoint with your desired symbols and implement a message handler that logs each received signal to a file. Run this overnight to get a feel for signal frequency and timing. Expect heartbeat messages every 30 seconds confirming the connection is alive; tick updates multiple times per second with real-time prices; signal messages 0-5 times per day carrying the actual trading signals; and occasional regime_change notifications when the market state transitions.
Implement auto-reconnect logic from the start. WebSocket connections will drop — due to network issues, server restarts, or Nginx timeouts. Your client should detect disconnection via the heartbeat timer and automatically reconnect with exponential backoff: 5 seconds, then 10 seconds, then 20 seconds, capping at 60 seconds maximum. This ensures you never miss a signal due to a temporary connection issue.
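The heartbeat watchdog and backoff schedule can be expressed in a few lines of stdlib Python. This is a sketch: wiring it into websocket-client's run_forever loop is left to your client, and the grace multiplier is our own assumption, not a platform requirement.

```python
HEARTBEAT_INTERVAL = 30.0   # seconds, per the heartbeat message spec
GRACE_MULTIPLIER = 2.5      # ASSUMPTION: declare dead after 2.5 missed intervals

def connection_stale(last_heartbeat: float, now: float) -> bool:
    """True once no heartbeat has arrived within the grace window."""
    return (now - last_heartbeat) > HEARTBEAT_INTERVAL * GRACE_MULTIPLIER

def backoff_schedule(cap: float = 60.0):
    """Yield reconnect delays: 5s, 10s, 20s, ..., capped at `cap` seconds."""
    delay = 5.0
    while True:
        yield delay
        delay = min(delay * 2, cap)

delays = backoff_schedule()
print([next(delays) for _ in range(5)])  # [5.0, 10.0, 20.0, 40.0, 60.0]
```

Reset the schedule to its initial 5-second delay after any connection that survives long enough to receive a heartbeat, so a brief blip does not leave you stuck at the 60-second cap.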
T.3 — Days 3-4: Signal Analysis Framework
Before trading any signals, spend two days analyzing them. Build a spreadsheet or database that captures every signal with its key fields: symbol, direction, entry price, target, stop, confidence, sigma, win probability, regime, and Lyapunov expiry. Then track the actual price at 1-hour, 4-hour, and 24-hour intervals to grade each signal independently of the system's own Trade Autopsy.
This exercise serves two purposes. First, it gives you firsthand experience with the signal quality — you will see which sigma ranges, confidence levels, and regimes produce the best results for your specific trading style. Second, it builds your own independent performance record, which you can compare against the system's autopsy grades to verify that the automated grading is accurate.
During this analysis period, pay special attention to:
- Time-to-target — how quickly winning signals reach their take-profit level, which tells you how long you need to hold positions
- Maximum adverse excursion (MAE) — how far price moves against the signal before eventually reaching the target, i.e., the unrealized drawdown to expect on winning trades
- Failed signal patterns — whether losing signals share common characteristics such as lower sigma, specific regimes, or particular times of day
- Correlation between signals — when KAIROS generates a BTC BUY signal, does it also generate correlated signals for ETH and SOL? Trading all of them simultaneously would concentrate risk
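To make these metrics concrete, here is one way to grade a signal and extract MFE/MAE from an observed price path. This is a simplified Python sketch of our own: it assumes whichever of target or stop the path touches first decides the grade, and it ignores intrabar ordering, fees, and slippage.

```python
def grade_signal(direction, entry, take_profit, stop_loss, price_path):
    """Grade one signal against observed prices; return the grade plus
    max favorable / max adverse excursion in price units."""
    sign = 1 if direction == "BUY" else -1
    mfe = mae = 0.0
    grade = "PENDING"  # neither level touched within the observed path
    for p in price_path:
        move = sign * (p - entry)
        mfe = max(mfe, move)
        mae = min(mae, move)
        if sign * (p - take_profit) >= 0:
            grade = "WIN"
            break
        if sign * (p - stop_loss) <= 0:
            grade = "LOSS"
            break
    return {"grade": grade, "mfe": mfe, "mae": -mae}

# Using the BTC example levels from Appendix S:
result = grade_signal("BUY", 67234.50, 68890.12, 66445.30,
                      [67500.0, 67000.0, 68900.0])
print(result)  # {'grade': 'WIN', 'mfe': 1665.5, 'mae': 234.5}
```

Comparing these independently computed grades against the autopsy fields (grade_1h, max_adverse_excursion, and so on) is exactly the cross-check this exercise is for.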
T.4 — Days 5-7: Paper Trading and Position Sizing
With several days of signal analysis complete, begin paper trading: execute simulated trades based on KAIROS signals using your intended position sizing methodology. Use the 2% risk-per-trade guideline as a starting point, and calculate position sizes using the ATR-based stop distance provided in each signal.
The position sizing formula is straightforward: divide your risk amount (2% of account balance) by the stop distance as a fraction of the entry price. For example, with a $50,000 account and a stop distance of 1.17%, the position size is ($50,000 × 0.02) ÷ 0.0117 ≈ $85,470. You would therefore open a Bitcoin position worth $85,470, either as a leveraged derivatives position or a spot position, depending on your exchange and preferences.
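The arithmetic above reduces to a one-line helper. A sketch: the 2% figure is the guideline from this section, the entry and stop prices come from the Appendix S BTC example, and the function name is ours.

```python
def position_notional(balance: float, risk_pct: float, stop_distance_pct: float) -> float:
    """Notional position size such that a stop-out loses exactly risk_pct
    of the account (fees and slippage ignored)."""
    return balance * risk_pct / stop_distance_pct

# Stop distance from the Appendix S BTC signal: (entry - stop) / entry
entry, stop = 67234.50, 66445.30
stop_distance = (entry - stop) / entry          # ≈ 0.0117, i.e. 1.17%

# Worked example from the text: $50,000 account, 2% risk per trade
print(round(position_notional(50_000, 0.02, 0.0117)))  # 85470
```

Note that the result is the notional exposure, not the margin requirement; with 5x leverage the same $85,470 position would tie up roughly $17,094 of margin.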
Paper trading validates that your execution infrastructure works correctly: your WebSocket client receives signals promptly, your position sizing calculations produce reasonable results, and your proposed risk management rules (maximum concurrent positions, correlation limits, drawdown scaling) keep portfolio risk within acceptable bounds.
After a full week of paper trading with validated results, you are ready to deploy capital to live trading. Start with reduced position sizes (50% of target) for the first two weeks, then scale up to full size once you are confident in the end-to-end pipeline from signal reception to order execution.
T.5 — Ongoing: Performance Monitoring
Once live, maintain a daily review process. Check the proof chain verification endpoint each morning to confirm chain integrity. Review overnight signals and their outcomes. Track your personal win rate against the system's published win rate: any significant divergence may indicate execution issues such as excessive slippage, delayed fills, or missed signals, and should be investigated promptly.
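The morning chain check can stay very small. The sketch below validates a ledger-stats payload offline; uncomment the urllib call to hit the live endpoint. The field names follow the S.4 example, while the pass/fail rule is our own choice.

```python
import json
import urllib.request

LEDGER_STATS_URL = "https://kairossignal.com/api/v1/proof/ledger-stats"

def chain_healthy(stats: dict) -> bool:
    """Chain must report INTACT with zero breaks detected."""
    return stats.get("chain_integrity") == "INTACT" and stats.get("breaks_detected") == 0

# Live fetch (commented out so the sketch runs offline):
# with urllib.request.urlopen(LEDGER_STATS_URL, timeout=10) as resp:
#     stats = json.load(resp)
stats = {"chain_integrity": "INTACT", "breaks_detected": 0,
         "autopsy_summary": {"win_rate": 0.682}}
print("OK" if chain_healthy(stats) else "ALERT: chain integrity failure")  # OK
```

Wire the ALERT branch into whatever paging or messaging channel your desk already monitors, since a failed integrity check should halt live trading until resolved.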
Monthly, run a comprehensive review: aggregate performance by regime, sigma tier, time of day, and asset class. Identify which signal characteristics have been most profitable for your specific execution approach and consider adjusting your position sizing to overweight high-performing categories. The KAIROS system continuously evolves: new DePIN projects are added, the XGBoost model is periodically retrained, and new mathematical layers may be introduced. Monitor the version history in Appendix R and the changelog for updates that may affect signal characteristics, and adjust your trading approach accordingly.
KAIROS Signal Technical Manual — Version 7.5
Last updated: March 2026
© 2026 KAIROS Intelligence. All rights reserved.
This document was generated from the live KAIROS codebase and reflects the production system architecture.
For the latest version, visit kairossignal.com/manual.html