Cookbook

Drop-in prompts you can paste into Claude, GPT, Cursor, or any agent runtime, plus copy-paste code recipes for the use cases people actually ship: AI prop-pick models, CLV trackers, arbitrage scanners, Discord alert bots. Skip the boilerplate, ship in an evening.

Drop-in quickstart prompt

Paste this into Claude / GPT / Cursor / Continue / Devin and it will have everything it needs to integrate ParlayAPI cleanly. Replace YOUR_API_KEY with the value from /signup.

You are integrating the ParlayAPI sports betting odds feed into a project.

API root:           https://parlay-api.com
Auth:               header `X-API-Key: YOUR_API_KEY`  (or query param `apiKey=YOUR_API_KEY`)
Free tier:          1,000 requests/month, no card required, signup at https://parlay-api.com/signup
Reference:          https://parlay-api.com/docs            (full endpoint docs)
LLM-friendly schema: https://parlay-api.com/llms.txt       (read this first)
OpenAPI spec:       https://parlay-api.com/openapi.json    (auto-generate clients)
Cookbook:           https://parlay-api.com/cookbook        (this page)

Core endpoints (all GET, JSON responses):
  /v1/sports                                  list every supported sport_key
  /v1/sports/{sport_key}/events               upcoming games
  /v1/sports/{sport_key}/odds                 current moneyline / spread / total across 26+ books
  /v1/sports/{sport_key}/props                player props (points, rebounds, hits, etc.)
  /v1/sports/{sport_key}/live                 in-play markets only
  /v1/sports/{sport_key}/compare              side-by-side line comparison + best-line highlight
  /v1/sports/{sport_key}/arbitrage            pre-computed cross-book arb opportunities
  /v1/sports/{sport_key}/ev                   pre-computed +EV bets vs no-vig consensus
  /v1/sports/{sport_key}/line-movement        time-series price moves
  /v1/historical/sports/{sport_key}/odds      historical odds (backtest)
  /v1/historical/sports/{sport_key}/closing-odds   closing lines (CLV)
  /v1/bookmakers                              list of supported books with status
  /ws/odds/{sport_key}                        WebSocket push (Pro tier)

Standard response shape (odds endpoint):
[
  {
    "id": "",
    "sport_key": "baseball_mlb",
    "commence_time": "2026-05-04T19:00:00Z",
    "home_team": "Boston Red Sox",
    "away_team": "Houston Astros",
    "bookmakers": [
      {
        "key": "pinnacle", "title": "Pinnacle", "last_update": "...",
        "markets": [
          { "key": "h2h", "outcomes": [
              {"name": "Boston Red Sox", "price": -135},
              {"name": "Houston Astros", "price": +120}
          ]}
        ]
      },
      ... 24 more books ...
    ]
  }
]

Conventions to respect:
- All odds are real captures from each book's real endpoint. Never derive an Under from an
  Over or interpolate; if a price is missing, it was missing at the source.
- Decimal odds via `oddsFormat=decimal` query param. American is the default.
- Filter to specific books with `bookmakers=pinnacle,fanduel,draftkings`.
- For props use market_key like `player_points`, `player_hits`, `player_pass_yds` (TOA canonical).
- Historical endpoints take a `date=YYYY-MM-DDTHH:MM:SSZ` query param.

When you write code, default to:
- requests/httpx for Python, fetch/axios for JS.
- Cache responses for 30 seconds; the upstream collector polls at 30-60s, so finer caching
  wastes credits without adding freshness.
- Surface the Pinnacle line as the sharp baseline when computing fair odds.
- Use /v1/historical/sports/{sport_key}/closing-odds for CLV (closing line value), not the
  live odds endpoint.
- For agent/AI applications, prefer /v1/sports/{key}/compare or /ev or /arbitrage rather
  than reimplementing those calculations yourself.

Do not assume any specific bookmaker; query /v1/bookmakers to see what is currently active.
The list grows; the schema is stable.
Tip: if you are building inside Cursor or Continue, save the above as a .cursorrules or .continuerules file at your project root and the agent picks it up automatically every session.

First request

Sanity-check your key against a free endpoint:

curl 'https://parlay-api.com/v1/sports' -H 'X-API-Key: YOUR_API_KEY'

Returns a JSON array of supported sport_keys. If you see a 401, double-check the header. 403 means you blew through your tier quota for the month; the dashboard at /dashboard shows credits remaining.
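Per the 30-second caching convention in the quickstart prompt, a tiny cache wrapper might look like this. A sketch only: `fetch_json` stands in for whatever HTTP call you already use, and is not part of the API.

```python
import time

CACHE_TTL = 30  # seconds; matches the upstream 30-60s poll interval
_cache = {}     # url -> (fetched_at, payload)

def cached_get(url, fetch_json, now=time.monotonic):
    """Return a cached payload if it is younger than CACHE_TTL,
    otherwise call fetch_json(url) and store the result."""
    hit = _cache.get(url)
    if hit and now() - hit[0] < CACHE_TTL:
        return hit[1]
    payload = fetch_json(url)
    _cache[url] = (now(), payload)
    return payload
```

Polling tighter than this burns credits without buying freshness, so the wrapper makes the right behavior the default.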

AI prop-pick model (most requested)

Cross-reference our prop lines with player season metrics, generate edge predictions, and rank picks for an AI agent or model. The pattern works for NBA points, NFL passing yards, MLB total bases, NHL shots, and all our prop markets.

Prompt for an LLM

You are an AI sports analyst. Your job: rank the top 10 player prop bets for tonight's NBA
slate by expected value, given the historical performance of each player and the current
sportsbook lines.

Step 1: GET https://parlay-api.com/v1/sports/basketball_nba/props?markets=player_points,player_rebounds,player_assists
Step 2: For each player, fetch the last 10 games of stats from a reliable source
        (balldontlie.io, NBA stats API, etc.) and compute their season mean and std for
        the relevant stat.
Step 3: Compute the implied probability the prop hits given the player's distribution.
Step 4: Compare to the implied probability from the over_price and under_price. The edge is
        (your_prob - implied_book_prob).
Step 5: Filter to props with edge > 4% AND volume across at least 3 books.
Step 6: Return the top 10 ranked by edge with: player, market, line, your_pick (over/under),
        edge_percent, books_offering, sample_size, confidence_level.

Be conservative: if a player has fewer than 5 games of data this season, exclude. If the
prop is from only one book, flag it as low-confidence.

Python skeleton

import math, os, statistics, requests

KEY = os.environ["PARLAYAPI_KEY"]
H = {"X-API-Key": KEY}

# 1. get current NBA player-points props
props = requests.get(
    "https://parlay-api.com/v1/sports/basketball_nba/props",
    params={"markets": "player_points"}, headers=H, timeout=10,
).json()

# 2. for each player, pull recent stats and compute distribution mu/sigma
def player_distribution(player_name):
    # TODO: replace with your stats source (balldontlie, nba_api, etc.)
    games = fetch_recent_games(player_name, n=15)
    return statistics.mean(games), statistics.stdev(games)

# 3. compute edge
def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def implied_prob(odds_american):
    return 100 / (odds_american + 100) if odds_american > 0 else -odds_american / (-odds_american + 100)

picks = []
for p in props:
    mu, sigma = player_distribution(p["player"])
    p_over = 1 - normal_cdf(p["line"], mu, sigma)
    book_p = implied_prob(p["over_price"])
    edge = p_over - book_p
    if edge > 0.04:
        picks.append({**p, "model_p": p_over, "edge": edge})

picks.sort(key=lambda x: -x["edge"])
for pk in picks[:10]:
    print(f"{pk['player']:<22} O{pk['line']} {pk['edge']:+.1%} edge")

CLV (closing line value) tracker

Compare the price you got at bet placement to the closing line. CLV is the single best public-data signal of whether you are beating the market over time. ParlayAPI's /historical/.../closing-odds endpoint gives the closing line for any past event.

curl 'https://parlay-api.com/v1/historical/sports/baseball_mlb/closing-odds?date=2026-05-03T22:00:00Z&regions=us&markets=h2h' \
  -H 'X-API-Key: YOUR_API_KEY'

Workflow for an automated CLV journal:

  1. When a bet is placed (or even when you "almost place" one) call /v1/sports/{sport_key}/odds and store the snapshot row: (timestamp, event_id, market, side, price, book).
  2. At game start, call /v1/historical/sports/{sport_key}/closing-odds?date=... for the same event.
  3. CLV = your_decimal_price − closing_decimal_price (positive = you beat the close).
  4. Plot it cumulatively. Anyone with a positive trend over 200+ bets is a sharp.
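Step 3 of the workflow in code, a minimal sketch; the American-to-decimal conversion is the standard one, and the function names are just illustrative:

```python
def american_to_decimal(odds):
    """Convert American odds to decimal odds."""
    return 1 + odds / 100 if odds > 0 else 1 + 100 / -odds

def clv(bet_price_american, closing_price_american):
    """Closing line value in decimal-odds terms.
    Positive means you beat the close."""
    return american_to_decimal(bet_price_american) - american_to_decimal(closing_price_american)
```

Example: betting a side at +120 that closes at +100 gives a CLV of 2.20 − 2.00 = +0.20, i.e. you beat the close.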

+EV scanner (no math required)

We pre-compute +EV bets server-side using Pinnacle's no-vig price as the fair-value baseline. Just hit the endpoint.

curl 'https://parlay-api.com/v1/sports/basketball_nba/ev?min_edge=3' \
  -H 'X-API-Key: YOUR_API_KEY' | jq '.[] | {event, book, market, price, ev_percent}'

Returns every +EV opportunity vs the no-vig consensus, ranked by edge. min_edge=3 filters out anything under 3% (recommended for soft books).
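If you want to sanity-check the server-side numbers, the no-vig math is short. A sketch assuming a two-way market quoted in American odds (e.g. Pinnacle's two sides); `ev_percent` is an illustrative helper, not an API field:

```python
def implied_prob(american):
    """Implied probability of a single American price, vig included."""
    return 100 / (american + 100) if american > 0 else -american / (-american + 100)

def no_vig_probs(side_a, side_b):
    """Strip the vig from a two-way market by normalising the two
    implied probabilities so they sum to 1."""
    pa, pb = implied_prob(side_a), implied_prob(side_b)
    total = pa + pb  # > 1 because of the book's margin
    return pa / total, pb / total

def ev_percent(your_price_american, fair_prob):
    """Expected value (%) of a bet at your_price given a fair win probability."""
    dec = 1 + your_price_american / 100 if your_price_american > 0 else 1 + 100 / -your_price_american
    return (fair_prob * dec - 1) * 100
```

For instance, a fair 50% outcome priced at +110 at a soft book is worth about +5% EV.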

Arbitrage detector

An arbitrage exists when two books price opposite sides of the same market such that the implied probabilities sum to less than 100%. Server-side computed, refreshed every 30 seconds.

curl 'https://parlay-api.com/v1/sports/soccer_epl/arbitrage?min_profit=1' \
  -H 'X-API-Key: YOUR_API_KEY'

Each result includes the recommended stake split between the two books to lock in the profit, plus the time-to-game so you can prioritize fast-closing windows.
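To verify a result's recommended split, the two-way arb math is a few lines. A sketch assuming decimal prices on opposite sides:

```python
def arb_stakes(dec_a, dec_b, bankroll=100.0):
    """Split a bankroll across two decimal prices on opposite sides so the
    payout is identical either way. Returns (stake_a, stake_b, profit);
    profit is negative when no arbitrage exists."""
    inv_a, inv_b = 1 / dec_a, 1 / dec_b
    total = inv_a + inv_b            # < 1 means a guaranteed profit
    stake_a = bankroll * inv_a / total
    stake_b = bankroll * inv_b / total
    payout = stake_a * dec_a         # equal to stake_b * dec_b by construction
    return stake_a, stake_b, payout - bankroll
```

Two books at 2.10 on opposite sides of the same total, for example, lock in roughly 5% with an even split.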

Line-movement watcher

Detect steam moves: a sharp book's line moves 2+ cents in 60 seconds and then the soft books follow. Profitable when you are first to react.

curl 'https://parlay-api.com/v1/sports/baseball_mlb/line-movement?event_id=...&window_minutes=10' \
  -H 'X-API-Key: YOUR_API_KEY'

Pair with the WebSocket stream wss://parlay-api.com/ws/odds/baseball_mlb (Pro tier) for sub-second push alerts when a price diff exceeds your threshold. Subscribe to a single event with {"type":"subscribe","event_id":"..."}.
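A minimal in-memory steam detector over a (timestamp, price) series, assuming you have already flattened the line-movement response into ticks; the 2-cent / 60-second thresholds are illustrative defaults, not API parameters:

```python
def detect_steam(ticks, cents=2, window_s=60):
    """Flag steam moves in a price time series.
    ticks: list of (unix_ts, american_price) sorted by time.
    Returns (t_start, t_end, delta) tuples wherever the price moved
    `cents` or more within `window_s` seconds."""
    moves = []
    for i, (t0, p0) in enumerate(ticks):
        for t1, p1 in ticks[i + 1:]:
            if t1 - t0 > window_s:
                break  # past the window; later ticks are even further out
            if abs(p1 - p0) >= cents:
                moves.append((t0, t1, p1 - p0))
                break  # record the first qualifying move from this start tick
    return moves
```

Run it on the sharp book's series first; a flagged move there that the soft books have not yet followed is your window.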

Discord alert bot (~50 lines)

Post +EV bets to a Discord channel as they appear. Free Discord webhook, free ParlayAPI tier, runs on a 1-minute cron.

import os, requests, time

PARLAY = os.environ["PARLAYAPI_KEY"]
WEBHOOK = os.environ["DISCORD_WEBHOOK"]    # https://discord.com/api/webhooks/...
SEEN = set()

def scan():
    r = requests.get(
        "https://parlay-api.com/v1/sports/basketball_nba/ev",
        params={"min_edge": 4}, headers={"X-API-Key": PARLAY}, timeout=10,
    )
    for bet in r.json():
        bet_id = f'{bet["event_id"]}-{bet["book"]}-{bet["market"]}-{bet["side"]}'
        if bet_id in SEEN: continue
        SEEN.add(bet_id)
        msg = (f'**{bet["matchup"]}** | {bet["book"]} | {bet["market"]} '
               f'{bet["side"]} {bet["price"]} | edge **{bet["edge_percent"]}%**')
        requests.post(WEBHOOK, json={"content": msg}, timeout=5)

while True:
    try: scan()
    except Exception as e: print("err:", e)
    time.sleep(60)

Model backtest harness

Walk a strategy across the historical archive: 10 years of soccer, a growing US-sports archive, and full prop closing-line history. Use this to validate edge before you risk capital.

import os, requests
from datetime import datetime, timedelta

KEY = os.environ["PARLAYAPI_KEY"]

def historical_at(sport_key, date_iso):
    return requests.get(
        f"https://parlay-api.com/v1/historical/sports/{sport_key}/odds",
        params={"date": date_iso, "regions": "us", "markets": "h2h"},
        headers={"X-API-Key": KEY}, timeout=10,
    ).json()

# replay every game day in the last 90 days, run your strategy, record P/L
start = datetime(2026, 2, 1)
for d in range(90):
    day = start + timedelta(days=d)
    games = historical_at("baseball_mlb", day.strftime("%Y-%m-%dT19:00:00Z"))
    for g in games:
        pick = your_strategy(g)            # returns {side, price} or None
        if not pick: continue
        result = settled_result(g)         # use /v1/sports/{key}/scores or your own data
        record_pl(pick, result)
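One way to fill in the settlement placeholders, assuming flat one-unit stakes and that you grade wins from your own results source:

```python
def settle_flat(price_american, won, stake=1.0):
    """P/L of a flat-stake bet at an American price."""
    if not won:
        return -stake
    return stake * (price_american / 100) if price_american > 0 else stake * (100 / -price_american)

def run_summary(results):
    """results: list of (price_american, won) tuples.
    Returns (units_pl, roi_percent) across the run."""
    pl = sum(settle_flat(p, w) for p, w in results)
    return pl, 100 * pl / len(results)
```

Flat staking keeps the backtest honest; switch to fractional Kelly only after the flat-stake ROI is positive over a meaningful sample.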

Claude / GPT agent quickstart

For tool-using agents (Claude, GPT, Cursor, Devin, Continue), the cleanest path is to register the OpenAPI spec as a tool definition. The model then calls the right endpoint without you hand-coding wrappers.

// Anthropic Claude SDK with tool use
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();

const tools = [{
    name: "parlayapi_get",
    description: "Fetch sports odds from ParlayAPI. Pass any path from /v1/...",
    input_schema: {
        type: "object",
        properties: {
            path:   { type: "string", description: "endpoint path, e.g. /v1/sports/baseball_mlb/odds" },
            params: { type: "object", description: "query params" }
        },
        required: ["path"]
    }
}];

const message = await client.messages.create({
    model: "claude-sonnet-4-6",
    max_tokens: 1024,
    tools,
    messages: [{ role: "user", content: "Find the +EV NBA bet with the largest edge tonight." }],
});

// when message.content has a tool_use block, fetch:
async function call_parlayapi(path, params) {
    const url = `https://parlay-api.com${path}?${new URLSearchParams(params)}`;
    const r = await fetch(url, { headers: { "X-API-Key": process.env.PARLAYAPI_KEY }});
    return await r.json();
}

For MCP servers: the OpenAPI spec at /openapi.json can be auto-converted to an MCP server with tools like openapi-mcp-generator. Drop it into your Claude Desktop config and the model gets every endpoint as a callable tool.

Contribute a recipe

Building something interesting on top of the API? Email [email protected] with a short writeup and we will add it here with attribution. Things we are looking for: model architectures (Bayesian, gradient-boosted, neural), exotic market strategies (futures laddering, prop-stacking, live-betting reactions), and integrations with notebook tools (Hex, Deepnote, Jupyter) or BI dashboards.