What our infrastructure actually does, what the theoretical floor is, and how to verify any of it yourself in under five minutes.
[1] Pinnacle's market data publish cadence, widely documented across the sportsbook engineering community and verifiable by tapping their public market endpoint directly.
[2] Measured against our /v1/sports/{sport}/odds endpoint under stale-while-revalidate cache, sustained over a 60s window. Cold-cache requests can hit ~5s on first miss; SWR keeps subsequent calls sub-second.
[3] Industry baseline for typical odds APIs polling sportsbook data at fixed intervals rather than book-native cadence. Range reflects common Pro/Enterprise tier behavior across vendors.
[4] the-odds-api.com docs, "30 second update interval" on the free tier. Their paid tiers may be faster; verify on their pricing page.
If you have benchmarked another vendor and want their number on this chart, submit below or email support@parlay-api.com with methodology + timestamps. We'll add it (with attribution).
On hot-cycle sources (Pinnacle, FanDuel game lines), a price change moves through our pipeline and out to a connected client in 2-3 seconds. Most of that is the book's own publish cadence. Our infrastructure adds under 100ms on top.
Real-time odds latency is floor-limited by the bookmaker's own publish rate. Nobody can be faster than what the book has already pushed. The honest question is how much overhead the data provider adds on top, and how reliably they hit the book's native cadence under load. Both are numbers we publish and you can measure.
Each bookmaker has a native price-update rate. We pull at the book's actual cadence rather than a fixed interval, so a slow-moving book isn't hammering bandwidth and a fast-moving book isn't undersampled.
| Source | Native poll rate | Notes |
|---|---|---|
| Pinnacle (game lines, hot leagues) | ~2s | Pinnacle's own market data refreshes at this cadence; we match it directly. |
| FanDuel (game lines) | ~2-3s | Per-event subscription, no fixed-interval polling delay. |
| DraftKings, BetMGM, Caesars | ~3-5s | Each book's native rate, parallel pollers per league. |
| Player props (all books) | ~5-15s | Prop tables are larger; refresh prioritized by upcoming start time. |
| Historical closing archive | Immutable | 1.39M game-line rows + 26.8M prop closing rows. Point-in-time, latency-irrelevant. |
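A cadence-matched poller like the one described above can be sketched as a priority queue keyed by each source's native interval. This is an illustrative sketch, not our production scheduler; the source names and intervals are taken from the table, but the `schedule` function and its shape are assumptions for demonstration.

```python
import heapq

# Illustrative native cadences in seconds (see the table above).
CADENCE = {"pinnacle": 2.0, "fanduel": 2.5, "draftkings": 4.0, "props": 10.0}

def schedule(now: float, horizon: float) -> list[tuple[float, str]]:
    """Return (poll_time, source) pairs up to `horizon`, with each
    source firing at its own native cadence rather than one fixed
    interval for everything."""
    heap = [(now + dt, src) for src, dt in CADENCE.items()]
    heapq.heapify(heap)
    out = []
    while heap and heap[0][0] <= horizon:
        t, src = heapq.heappop(heap)
        out.append((t, src))
        heapq.heappush(heap, (t + CADENCE[src], src))  # re-arm at native rate
    return out

polls = schedule(0.0, 10.0)
```

Over a 10-second horizon, Pinnacle gets polled five times while the prop tables get polled once: fast books are never undersampled and slow ones never hammered.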
Every price change is pushed to connected clients within ~50ms of our backend writing it. No client-side polling, no retry-storm risk during quiet markets, no missed updates during burst windows.
Postgres LISTEN/NOTIFY → in-memory fan-out → WebSocket send. ~50ms internal latency from row write to broadcast.
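The in-memory fan-out step can be sketched as a tiny hub that relays each notification payload to every subscriber queue. This is a minimal sketch under assumed names (`FanoutHub`, `demo`); the Postgres LISTEN side and the actual WebSocket sends are omitted.

```python
import asyncio

class FanoutHub:
    """In-memory fan-out: one writer, many WebSocket sender queues."""
    def __init__(self) -> None:
        self._subscribers: set[asyncio.Queue] = set()

    def subscribe(self) -> asyncio.Queue:
        q: asyncio.Queue = asyncio.Queue()
        self._subscribers.add(q)
        return q

    def unsubscribe(self, q: asyncio.Queue) -> None:
        self._subscribers.discard(q)

    def publish(self, payload: str) -> None:
        # Called from the LISTEN/NOTIFY callback: every connected
        # client drains its own queue into a WebSocket send().
        for q in self._subscribers:
            q.put_nowait(payload)

async def demo() -> list[str]:
    hub = FanoutHub()
    a, b = hub.subscribe(), hub.subscribe()
    hub.publish('{"type":"odds_change","book":"pinnacle"}')
    return [a.get_nowait(), b.get_nowait()]
```

Because `publish` only enqueues, a slow client backs up its own queue rather than stalling the broadcast path, which is what keeps the row-write-to-broadcast hop in the tens of milliseconds.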
On active leagues, change events arrive every 1.5-3s, matching the book's own publish rate. Heartbeats every 30s when markets are quiet.
Full snapshot on connect, then change diffs only. Your client maintains state from the snapshot forward without re-fetching the whole world.
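Client-side, snapshot-then-diff state keeping reduces to a dict replace-or-merge. The message shapes below (`"snapshot"`, `"odds_change"`, an `"odds"` map) are assumptions for illustration, not the documented wire format.

```python
def apply_message(state: dict, msg: dict) -> dict:
    """Maintain local odds state: replace on snapshot, merge on diff."""
    if msg["type"] == "snapshot":
        return dict(msg["odds"])       # full world state on connect
    if msg["type"] == "odds_change":
        state.update(msg["odds"])      # only the prices that moved
    return state

state: dict = {}
state = apply_message(state, {"type": "snapshot",
                              "odds": {"LAL@BOS": -110, "NYK@MIA": +120}})
state = apply_message(state, {"type": "odds_change",
                              "odds": {"LAL@BOS": -115}})
```

After the diff, the client holds the moved price without having re-fetched the unchanged one.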
API key in URL, X-API-Key header, or Sec-WebSocket-Protocol subprotocol. Whichever your client supports.
Minimal client. Swap the URL to point at any WebSocket-capable odds API and compare frame cadence.
```python
import asyncio, json, time, websockets

API_KEY = "your_key"
URL = f"wss://parlay-api.com/ws/odds/basketball_nba?apiKey={API_KEY}"

async def main():
    async with websockets.connect(URL) as ws:
        t0 = time.time()
        while True:
            msg = json.loads(await ws.recv())
            dt = time.time() - t0
            print(f"[{dt:6.2f}s] type={msg.get('type')} count={msg.get('count', '-')}")

asyncio.run(main())
```
You'll see frame cadence of 1.5-3s on active leagues, which is the book's native rate. Run the same script against any competitor's WebSocket and compare frame timestamps over 60 seconds.
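To turn that 60-second run into a comparable number, record frame arrival times and look at the inter-frame gaps. A minimal sketch (the function names and example timestamps are illustrative):

```python
from statistics import median

def frame_gaps(timestamps: list[float]) -> list[float]:
    """Inter-frame intervals (seconds) from a list of arrival times."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def cadence_summary(timestamps: list[float]) -> dict:
    gaps = frame_gaps(timestamps)
    return {"median_gap": median(gaps),
            "max_gap": max(gaps),
            "frames": len(timestamps)}

# Example: frame times logged by the client above over ~10 seconds.
summary = cadence_summary([0.0, 2.1, 4.0, 6.2, 8.1, 10.0])
```

Median gap is the cadence claim to compare across vendors; max gap catches missed update windows that an average would hide.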
A common failure mode in odds APIs is reporting last_update on a price that hasn't actually been re-verified in minutes. Every one of our responses carries pulse-stamped signals so you can distinguish a real price change from a verification heartbeat.
Timestamp we last polled this book and confirmed the same price. The heartbeat signal.
Timestamp the price actually moved. The real price-change signal.
Boolean. true if verified within the last 5 seconds.
Response header. The newest pulse across all bookmakers in the response, so you can gate freshness at the request level.
Add ?include=verification to any odds endpoint to receive these fields.
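With those fields you can gate on freshness client-side. A sketch, assuming ISO-8601 timestamps; `verified_at` is the per-book field named elsewhere on this page, while the `changed_at` field name and the `classify` helper are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(seconds=5)  # matches the 5s freshness window

def classify(row: dict, now: datetime) -> str:
    """'changed' if the price moved within the window, 'verified' if it
    was merely re-confirmed, 'stale' if neither."""
    verified = datetime.fromisoformat(row["verified_at"])
    changed = datetime.fromisoformat(row["changed_at"])
    if now - changed <= STALE_AFTER:
        return "changed"
    if now - verified <= STALE_AFTER:
        return "verified"
    return "stale"

now = datetime(2025, 1, 1, 12, 0, 10, tzinfo=timezone.utc)
row = {"verified_at": "2025-01-01T12:00:08+00:00",
       "changed_at": "2025-01-01T11:59:00+00:00"}
```

Here the price last moved 70 seconds ago but was re-confirmed 2 seconds ago: a heartbeat, not a move, which is exactly the distinction last_update alone cannot make.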
Match your use case to the right endpoint shape. Latency expectations follow.
- WebSocket on hot leagues
- WebSocket + `?include=verification`
- WebSocket + `/live` period markets
- `/v1/historical/.../closing-odds`
- Historical archive bulk export

Hard numbers without naming competitors. You can verify these against any vendor by running the same probes against their API.
| Metric | ParlayAPI | What to ask other vendors |
|---|---|---|
| WebSocket native access | Business tier ($40/mo) | Their tier required and base price |
| API access self-serve | Yes, no sales call | Whether they gate API behind "contact us" |
| Per-bookmaker pulse stamp | verified_at per book | Whether stale rows are detectable |
| Hot-cycle pollers on Pinnacle | 2s native cadence | Their actual cadence, with proof |
| Historical archive depth | 28.2M rows total | Row count, not just date range |
| Bulk historical export | Flat-rate per call | Whether export is billed per date or per call |
We are competitive on raw latency because we hit each book at its native cadence and add minimal overhead, not because we have a magical pipeline. The headroom we genuinely lead on is access friction (no sales calls), price for self-serve WebSocket, the depth and queryability of the historical archive, and the fact that we publish numbers you can verify rather than asking you to trust marketing. If a vendor claims sub-second end-to-end latency from Pinnacle, ask them to define the measurement boundary; Pinnacle itself publishes at ~2s.
Free tier covers 1,000 calls/month with no card required. The benchmark script above runs against your trial key in 30 seconds.