Rate Limiting

Understand how rate limits work and how to handle them gracefully in your applications.

How Rate Limiting Works

Rate limits are applied per API key and are calculated over a sliding window: requests are counted against the trailing hour rather than a fixed clock-aligned interval, which prevents bursts at window boundaries and keeps usage fair for all users.
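To illustrate the idea only (this is not the service's actual server-side implementation), a minimal sliding-window counter for a per-key limit could look like this:

class SlidingWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;           // e.g. 1000 requests for the Free plan
    this.windowMs = windowMs;     // e.g. 60 * 60 * 1000 for a one-hour window
    this.requests = new Map();    // apiKey -> array of request timestamps
  }

  allow(apiKey, now = Date.now()) {
    const cutoff = now - this.windowMs;
    // Keep only the requests that fall inside the trailing window.
    const recent = (this.requests.get(apiKey) || []).filter(t => t > cutoff);
    if (recent.length >= this.limit) {
      this.requests.set(apiKey, recent);
      return false; // over the limit: this request would receive a 429
    }
    recent.push(now);
    this.requests.set(apiKey, recent);
    return true;
  }
}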

Rate Limits by Plan

Plan            Requests per hour      Burst limit
Free            1,000                  50
Starter         10,000                 200
Professional    100,000                1,000
Enterprise      Custom (negotiable)    Custom

Rate Limit Headers

Every API response includes headers that provide information about your current rate limit status:

Header                   Description                                                                    Example
X-RateLimit-Limit        The maximum number of requests allowed in the current time window             1000
X-RateLimit-Remaining    The number of requests remaining in the current time window                   999
X-RateLimit-Reset        The time when the current rate limit window resets (Unix timestamp)           1640995200
Retry-After              Seconds to wait before making another request (sent only when rate limited)   3600
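For example, a client can read these headers off any response to track its remaining quota. This is a minimal sketch; the endpoint URL and API key are placeholders:

async function logRateLimitStatus() {
  // 'https://api.example.com/v1/resources' and YOUR_API_KEY are placeholders.
  const response = await fetch('https://api.example.com/v1/resources', {
    headers: { 'Authorization': 'Bearer YOUR_API_KEY' }
  });

  const limit = Number(response.headers.get('X-RateLimit-Limit'));
  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
  const resetAt = new Date(Number(response.headers.get('X-RateLimit-Reset')) * 1000);

  console.log(`Used ${limit - remaining} of ${limit} requests; window resets at ${resetAt.toISOString()}`);
}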

Handling Rate Limits

429 Too Many Requests

When you exceed your rate limit, the API will return a 429 status code. Your application should handle this gracefully by implementing retry logic.

Example Response

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1640995200
Retry-After: 3600

{
  "error": {
    "code": "INTERNAL_TOO_MANY_REQUESTS",
    "message": "Rate limit exceeded",
    "description": "You have exceeded the rate limit for requests. Please try again later."
  }
}

Retry Logic Example

async function makeAPIRequest(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Prefer the server-provided Retry-After value; otherwise fall back
      // to exponential backoff (1s, 2s, 4s, ...).
      const retryAfter = response.headers.get('Retry-After');
      const delay = retryAfter ? parseInt(retryAfter, 10) * 1000 : Math.pow(2, attempt) * 1000;

      if (attempt < maxRetries) {
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
    }

    // Success, a non-429 error, or retries exhausted: return the response as-is.
    return response;
  }
}

Best Practices

Monitor Rate Limit Headers

Always check the rate limit headers in responses to understand your current usage.
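For instance, a wrapper can pause proactively when the remaining quota runs low instead of waiting for a 429. This is a sketch; the threshold of 10 remaining requests is an arbitrary assumption:

async function throttledFetch(url, options) {
  const response = await fetch(url, options);

  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
  const reset = Number(response.headers.get('X-RateLimit-Reset'));

  // Illustrative threshold: if fewer than 10 requests remain in the window,
  // wait until the window resets before issuing the next call.
  if (!Number.isNaN(remaining) && remaining < 10) {
    const waitMs = Math.max(0, reset * 1000 - Date.now());
    await new Promise(resolve => setTimeout(resolve, waitMs));
  }

  return response;
}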

Implement Exponential Backoff

Use exponential backoff when retrying failed requests to avoid overwhelming the API.
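A common refinement, not required by the API but widely used, is to add random jitter so that many clients retrying at the same moment do not synchronize their retries:

// "Full jitter" backoff: the delay is a random value between 0 and the
// exponential cap, which spreads retries from many clients over time.
function backoffDelay(attempt, baseMs = 1000, maxMs = 60000) {
  const cap = Math.min(maxMs, baseMs * Math.pow(2, attempt));
  return Math.random() * cap;
}

// Usage inside a retry loop:
//   await new Promise(resolve => setTimeout(resolve, backoffDelay(attempt)));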

Cache Responses

Cache API responses when possible to reduce the number of requests needed.
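As a sketch, a small in-memory cache with a time-to-live avoids repeat calls for data that changes slowly (the 60-second TTL is an illustrative assumption):

const cache = new Map(); // url -> { expires, body }

async function cachedGet(url, options, ttlMs = 60000) {
  const hit = cache.get(url);
  if (hit && hit.expires > Date.now()) {
    return hit.body; // served from cache; no API request consumed
  }

  const response = await fetch(url, options);
  const body = await response.json();
  cache.set(url, { expires: Date.now() + ttlMs, body });
  return body;
}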

Use Batch Endpoints

When available, use batch endpoints to perform multiple operations in a single request.
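For example, several lookups could be combined into one call along the following lines; the /v1/batch path and payload shape are purely hypothetical, so consult the API reference for the actual batch endpoints available on your plan:

async function batchLookup() {
  // The /v1/batch endpoint and payload shape below are hypothetical.
  const response = await fetch('https://api.example.com/v1/batch', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      operations: [
        { method: 'GET', path: '/v1/users/123' },
        { method: 'GET', path: '/v1/users/456' }
      ]
    })
  });
  return response.json();
}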

Need Higher Limits?

If your application requires higher rate limits, contact our sales team to discuss enterprise options.