Most TCG data — sets, series, creatures, the bulk of card details — changes infrequently. The API is built around that fact: every successful read response is cached at Cloudflare’s edge, and we encourage you to layer your own cache on top.

What we cache

Endpoint shape                         Default TTL
List endpoints (/v1/cards, /v1/sets)   300 seconds
Resource endpoints (/v1/cards/{id})    3600 seconds
Anything with ?updated_since=          60 seconds
The shortened TTL for ?updated_since= is deliberate: that filter exists to catch recent changes, and caching it for an hour would defeat the purpose. The cache key is composed of:
  • The path
  • The query parameters (sorted, so ?a=1&b=2 and ?b=2&a=1 hit the same entry)
  • The API tier of the requesting key (Free and Partner use separate cache namespaces)
  • The Accept header
That last one matters: if you switch a client from Accept: application/json to no Accept header, you’ll see a fresh fetch on the first request after the switch.
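The exact internal key format isn't public, but the normalization can be sketched with a hypothetical cacheKey helper (the function name and the | separator are illustrative, not the real edge implementation):

```javascript
// Sketch of a cache key built from the four components above:
// path, sorted query parameters, API tier, and Accept header.
function cacheKey(path, query, tier, accept) {
  const sorted = Object.keys(query)
    .sort()
    .map((k) => `${k}=${encodeURIComponent(query[k])}`)
    .join("&");
  return [path, sorted, tier, accept ?? "(none)"].join("|");
}

// ?a=1&b=2 and ?b=2&a=1 normalize to the same key:
cacheKey("/v1/cards", { b: "2", a: "1" }, "free", "application/json");
// → "/v1/cards|a=1&b=2|free|application/json"
```

Note how a missing Accept header produces a different key than an explicit one, which is exactly why switching a client between the two forces a fresh fetch.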

Reading cache state

Every response carries meta.cached:
{
  "data": ["..."],
  "pagination": { "...": "..." },
  "meta": { "request_id": "req_01H...", "cached": true }
}
true means we served from edge cache; false means the Worker hit the origin database. During a steady-state production workload, you should see cached: true on 80–95% of responses. Sustained deviation from that range is a signal worth investigating.
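If you want to watch that ratio from your side, a small tracker over meta.cached is enough (makeHitRateTracker is a hypothetical helper, not part of any SDK):

```javascript
// Track the edge hit rate across responses by reading meta.cached.
// In steady state, rate() should land around 0.8–0.95.
function makeHitRateTracker() {
  let hits = 0;
  let total = 0;
  return {
    record(body) {
      total += 1;
      if (body.meta && body.meta.cached) hits += 1;
    },
    rate() {
      return total === 0 ? 0 : hits / total;
    },
  };
}
```

Feed every parsed response body through record() and alert when rate() drifts low.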

What busts the cache

Two things invalidate cached entries:
  1. TTL expiry. Whichever of the values above applies.
  2. Set publish. When a new set goes live in the Elestrals admin system, an internal webhook purges everything tagged with cards, sets, printings, and series. You’ll see fresh data within seconds of a release.
There is no public cache-purge endpoint — partners on a tight refresh window should rely on ?updated_since=<ISO timestamp> instead.
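A polling loop on that filter can be sketched like this; updatedSinceUrl and pollUpdates are illustrative names, and the response shape is assumed to match the envelope shown above:

```javascript
// Build the polling URL for a given ISO timestamp.
function updatedSinceUrl(sinceIso) {
  return `https://api.elestrals.com/v1/cards?updated_since=${encodeURIComponent(sinceIso)}`;
}

// Ask only for cards changed since the last successful poll.
async function pollUpdates(key, sinceIso) {
  const res = await fetch(updatedSinceUrl(sinceIso), {
    headers: { Authorization: `Bearer ${key}` },
  });
  if (!res.ok) throw new Error(`poll failed: ${res.status}`);
  const body = await res.json();
  // Advance the cursor only after a successful poll, so a failed
  // request never skips a window of changes.
  return { changed: body.data, nextSince: new Date().toISOString() };
}
```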

Caching on your end

Stacking your own cache on top of ours is encouraged. A few rules of thumb:
  • By resource ID. Cache /v1/cards/{id} responses keyed by the card ID. These rarely change after a card has been live for 30 days; long TTLs are safe.
  • By query. Cache list responses keyed by the full query string. Use a short TTL (1–5 minutes) and rely on the same cache key whether the response was a hit or a miss on our edge.
  • Honor the response, not the request. Cache by what you got back, not by what you asked for. The meta.request_id is unique per request and shouldn’t make it into your cache key.
For example, a minimal in-process cache keyed by card ID, with a TTL matching the edge's one-hour resource TTL:

const CARD_TTL_MS = 60 * 60 * 1000; // match the edge's 3600-second resource TTL
const cardCache = new Map();

async function getCard(id, key) {
  const hit = cardCache.get(id);
  if (hit) {
    if (hit.expiresAt > Date.now()) return hit.value;
    cardCache.delete(id); // evict expired entries so the map doesn't grow unbounded
  }

  const res = await fetch(`https://api.elestrals.com/v1/cards/${id}`, {
    headers: { Authorization: `Bearer ${key}` },
  });
  if (!res.ok) throw new Error(`GET /v1/cards/${id} failed: ${res.status}`);

  const body = await res.json();
  cardCache.set(id, { value: body.data, expiresAt: Date.now() + CARD_TTL_MS });
  return body.data;
}
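The same pattern extends to list endpoints if you key by a normalized query string with a short TTL; normalizeQuery and listCards below are hypothetical names, sketched under the same assumptions as the card example:

```javascript
// Normalize a query object to a sorted query string so equivalent
// queries (?a=1&b=2 vs ?b=2&a=1) share one cache entry.
function normalizeQuery(query) {
  return new URLSearchParams(
    Object.entries(query).sort(([a], [b]) => a.localeCompare(b))
  ).toString();
}

const LIST_TTL_MS = 5 * 60 * 1000; // short TTL, mirroring the edge's 300-second list TTL
const listCache = new Map();

async function listCards(query, key) {
  const qs = normalizeQuery(query);
  const hit = listCache.get(qs);
  if (hit && hit.expiresAt > Date.now()) return hit.value;

  const res = await fetch(`https://api.elestrals.com/v1/cards?${qs}`, {
    headers: { Authorization: `Bearer ${key}` },
  });
  if (!res.ok) throw new Error(`list fetch failed: ${res.status}`);

  const body = await res.json();
  listCache.set(qs, { value: body.data, expiresAt: Date.now() + LIST_TTL_MS });
  return body.data;
}
```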

When not to cache

A handful of patterns aren’t worth caching:
  • Polled ?updated_since= queries. You’re already polling for change; caching the response defeats the loop.
  • Health checks. /v1/health is uncached on our end and shouldn’t be cached on yours either.
  • One-shot scripts. A migration job that runs once and dies has no cache to populate.
For everything else: cache aggressively. Every hit you serve yourself is quota you don't spend.