API Protection

Rate Limiting

Fair usage policies ensure optimal performance for all users. Learn how rate limits work and implement graceful handling in your applications.

Sliding Window

Rate limits use a sliding window algorithm for smooth, predictable throttling without sudden resets.

Burst Allowance

Handle traffic spikes with burst limits that allow short-term usage above your base rate.

Per API Key

Limits are applied per API key, allowing you to distribute load across multiple keys if needed.
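The three behaviors above (sliding window, burst allowance, per-key tracking) can be sketched together in a small model. This is purely illustrative, not the service's actual implementation: the class name `SlidingWindowLimiter` and the 1-second burst sub-window are assumptions.

```javascript
// Illustrative sliding-window limiter: each API key keeps a list of
// request timestamps. A request is allowed only if fewer than `limit`
// timestamps fall inside the main window AND fewer than `burstLimit`
// fall inside a short 1-second burst sub-window (assumed here).
class SlidingWindowLimiter {
  constructor(limit, windowMs, burstLimit) {
    this.limit = limit;          // e.g. 1000 requests
    this.windowMs = windowMs;    // e.g. 3600000 ms (one hour)
    this.burstLimit = burstLimit;
    this.hits = new Map();       // apiKey -> array of timestamps
  }

  allow(apiKey, now = Date.now()) {
    // Drop timestamps that have slid out of the window.
    const stamps = (this.hits.get(apiKey) || []).filter(
      t => now - t < this.windowMs
    );
    const recent = stamps.filter(t => now - t < 1000); // burst sub-window
    if (stamps.length >= this.limit || recent.length >= this.burstLimit) {
      this.hits.set(apiKey, stamps);
      return false; // would be a 429 at the server
    }
    stamps.push(now);
    this.hits.set(apiKey, stamps);
    return true;
  }
}
```

Because counts are recomputed against a continuously moving window, usage decays smoothly instead of resetting all at once at a fixed boundary.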

Rate Limits by Plan

Choose a plan that matches your application's needs. Upgrade anytime as your usage grows.

Free

1,000 requests per hour
Burst Limit: 50
  • Basic API access
  • Community support
  • Standard endpoints

Starter

10,000 requests per hour
Burst Limit: 200
  • Priority API access
  • Email support
  • All endpoints
Professional (Most Popular)

100,000 requests per hour
Burst Limit: 1,000
  • Dedicated pool
  • Phone support
  • Custom webhooks

Enterprise

Custom (negotiable)
Burst Limit: Unlimited
  • Dedicated infrastructure
  • SLA guarantee
  • Custom limits

Rate Limit Headers

Every API response includes headers to help you monitor and manage your usage in real-time.

| Header | Description | Example |
| --- | --- | --- |
| X-RateLimit-Limit | Maximum requests allowed in the current time window | 1000 |
| X-RateLimit-Remaining | Requests remaining in the current time window | 999 |
| X-RateLimit-Reset | Unix timestamp when the rate limit window resets | 1640995200 |
| Retry-After | Seconds to wait before retrying (only on 429 responses) | 3600 |
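Reading these headers from a response might look like the following sketch; `parseRateLimit` is a hypothetical helper name, but the header names match the table above.

```javascript
// Parse rate-limit headers from a fetch Response's Headers object.
// Missing headers (e.g. Retry-After on non-429 responses) come back null.
function parseRateLimit(headers) {
  const num = v => (v === null ? null : parseInt(v, 10));
  return {
    limit: num(headers.get('X-RateLimit-Limit')),
    remaining: num(headers.get('X-RateLimit-Remaining')),
    reset: num(headers.get('X-RateLimit-Reset')),   // Unix timestamp
    retryAfter: num(headers.get('Retry-After')),    // seconds, 429 only
  };
}
```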

Handling Rate Limits

429 Too Many Requests

When you exceed your rate limit, the API returns a 429 status code. Implement retry logic with exponential backoff for graceful handling.

Error Response

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1640995200
Retry-After: 3600

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Too many requests",
    "retry_after": 3600
  }
}

Retry Logic Example

async function fetchWithRetry(url, options, retries = 3) {
  for (let i = 0; i <= retries; i++) {
    const res = await fetch(url, options);

    if (res.status === 429 && i < retries) {
      // Prefer the server's Retry-After hint; fall back to exponential backoff.
      const wait = res.headers.get('Retry-After');
      const delay = wait
        ? parseInt(wait, 10) * 1000
        : Math.pow(2, i) * 1000;
      await new Promise(r => setTimeout(r, delay));
      continue;
    }

    return res;
  }
}

Best Practices

Monitor Rate Limit Headers

Track X-RateLimit-Remaining in every response to proactively manage your usage before hitting limits.

Implement Exponential Backoff

When rate limited, wait progressively longer between retries to avoid overwhelming the API.

Cache Responses

Store frequently accessed data locally to minimize redundant API calls and conserve your quota.
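A minimal in-memory cache with a time-to-live is one way to do this; `cachedFetch` below is a hypothetical wrapper, not part of any official SDK, and a production app would likely want size limits and invalidation as well.

```javascript
// Simple TTL cache sketch: serve a stored response until it expires,
// so repeated reads of the same URL cost only one API request.
const cache = new Map(); // url -> { expires, data }

async function cachedFetch(url, ttlMs, fetcher = fetch) {
  const hit = cache.get(url);
  if (hit && hit.expires > Date.now()) {
    return hit.data; // cache hit: no API call, no quota spent
  }
  const res = await fetcher(url);
  const data = await res.json();
  cache.set(url, { expires: Date.now() + ttlMs, data });
  return data;
}
```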

Use Batch Endpoints

Combine multiple operations into single requests using our batch APIs when available.

Pro Tip

Use webhooks instead of polling when possible. This eliminates unnecessary API calls and provides real-time updates without consuming your rate limit.

Need Higher Limits?

Enterprise customers get custom rate limits tailored to their specific requirements, dedicated infrastructure, and priority support.