I stress-tested Pyth Oracle's confidence intervals and built a provably fair game seeded by live oracle attestations


DE LIGHT, via Dev.to

Every Pyth price feed comes with a published confidence interval (CI): the oracle's way of saying "I'm 68% confident the true price is within ±X of my reported price." But does that 68% claim hold up against real historical data? I built Pyth Insight to find out.

Testing Pyth's CI empirically

The methodology:

- Fetch price snapshots at 60-minute intervals over 3 days (72+ snapshots) using the Hermes historical API: https://hermes.pyth.network/v2/updates/price/{timestamp}
- For each consecutive pair, check whether the actual price move landed inside the CI band, scaled by sqrt(actualInterval / horizon) to adjust for the measurement period
- Compute coverage at multiple sigma levels (0.5σ–3σ) and compare against theoretical normal-distribution coverage
- Score the feed: 80–100 = Well-Calibrated, 60–79 = Acceptable, 40–59 = Poor, 0–39 = Unreliable

What the live data shows for BTC/USD:

- At ±1σ, the CI should capture 68.3% of moves; it actually captures ~45%
- At ±2σ, it should capture 95.4%; actually cap
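The coverage check described above can be sketched roughly as follows. This is a minimal illustration, not the Pyth Insight implementation: the snapshot tuples, constant names, and the mapping from a 0–100 score to a label are assumptions based on the description, and the synthetic data stands in for real Hermes responses.

```python
import math

# Assumed sampling setup: snapshots every 60 minutes, and a CI that
# nominally covers a one-hour horizon (so the sqrt scale factor is 1 here;
# it would differ if the sampling interval and horizon diverged).
SNAPSHOT_INTERVAL_S = 3600
HORIZON_S = 3600

# Two-sided coverage of a normal distribution at each sigma multiple.
THEORETICAL = {0.5: 0.383, 1.0: 0.683, 2.0: 0.954, 3.0: 0.997}

def ci_coverage(snapshots, sigmas=(0.5, 1.0, 2.0, 3.0)):
    """Empirical CI coverage: for each consecutive snapshot pair, check
    whether the realised price move fell inside k * CI, with the CI
    scaled by sqrt(interval / horizon). Snapshots are (price, ci) tuples."""
    scale = math.sqrt(SNAPSHOT_INTERVAL_S / HORIZON_S)
    hits = {k: 0 for k in sigmas}
    n = 0
    for (p0, ci0), (p1, _ci1) in zip(snapshots, snapshots[1:]):
        move = abs(p1 - p0)
        band = ci0 * scale  # CI of the earlier snapshot, horizon-adjusted
        n += 1
        for k in sigmas:
            if move <= k * band:
                hits[k] += 1
    return {k: hits[k] / n for k in sigmas}

def calibration_label(score):
    """Bucket a 0-100 calibration score using the article's bands."""
    if score >= 80:
        return "Well-Calibrated"
    if score >= 60:
        return "Acceptable"
    if score >= 40:
        return "Poor"
    return "Unreliable"

# Synthetic snapshots (price, ci) in place of real BTC/USD data:
# moves of 1, 3, and 5 against a constant CI of 2.
snapshots = [(100.0, 2.0), (101.0, 2.0), (104.0, 2.0), (99.0, 2.0)]
coverage = ci_coverage(snapshots)
# coverage[1.0] is 1/3 here: only the 1-point move fits inside ±1 CI,
# which is the kind of under-coverage the article reports for BTC/USD.
```

Comparing each `coverage[k]` against `THEORETICAL[k]` is what reveals whether the published CI is honest: a well-calibrated feed lands near 68.3% at 1σ, while the ~45% figure reported for BTC/USD means the CI bands are too narrow for the moves that actually occur.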

Continue reading on Dev.to


