Overview
The GameBoost API implements rate limiting to ensure fair usage and maintain service stability for all partners. Rate limits are applied per API key and tracked separately for read and write operations.

Rate limit information is included in every API response via headers, allowing you to implement proactive rate limit handling.
Rate Limit Tiers
Rate limits vary based on your operation type and partner tier:

Standard Rate Limits
All partners have access to the following rate limits:

| Operation Type | Limit | Description |
|---|---|---|
| Read Operations (GET) | 1000 requests per minute | All GET requests that retrieve data |
| Write Operations (Global) | 500 requests per minute | All POST, PUT, PATCH, DELETE requests combined |
| Write Operations (Per Resource) | 5 per minute, 10 per hour | Writes to the same resource (e.g., updating the same order) |
The per-resource limit prevents excessive updates to individual resources.
Need higher limits for your use case? Contact our team on Discord to discuss your requirements.
Rate Limit Headers
Every API response includes headers that provide real-time information about your current rate limit status:

| Header | Description | Example Value |
|---|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed per window | `1000` |
| `X-RateLimit-Remaining` | Requests remaining in current window | `987` |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets | `1728649200` |
| `Retry-After` | Seconds to wait before retrying (only when rate limited) | `42` |
Example Response Headers
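A successful response carries the headers from the table above; the values below are illustrative:

```http
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 987
X-RateLimit-Reset: 1728649200
```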
Monitor the `X-RateLimit-Remaining` header to implement proactive throttling before hitting the rate limit.

Handling Rate Limits
Rate Limit Response
When you exceed the rate limit, the API returns a `429 Too Many Requests` status code:
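The response includes a `Retry-After` header; the exact error body isn't specified here, so the JSON below is illustrative only:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 42
X-RateLimit-Remaining: 0
Content-Type: application/json

{
  "error": "rate_limit_exceeded",
  "message": "Too many requests. Retry after 42 seconds."
}
```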
Never ignore `429` responses. Implement proper retry logic with exponential backoff to avoid being temporarily blocked.

Implementing Exponential Backoff
Exponential backoff is a strategy where you progressively increase the wait time between retries. This helps prevent overwhelming the API when rate limits are hit.

Basic Exponential Backoff
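A minimal sketch in Python. The `send_request` callable and `request_with_backoff` name are hypothetical stand-ins for your HTTP client; the status code and headers follow the tables above:

```python
import random
import time


def request_with_backoff(send_request, max_retries=5, base_delay=1.0, max_delay=60.0):
    """Retry a request with exponential backoff on 429 responses.

    `send_request` is any zero-argument callable (a stand-in for your HTTP
    client) returning an object with `status_code` and `headers` attributes.
    """
    delay = base_delay
    for _attempt in range(max_retries + 1):
        response = send_request()
        if response.status_code != 429:
            return response
        # Respect Retry-After when the API provides it.
        retry_after = response.headers.get("Retry-After")
        wait = float(retry_after) if retry_after else delay
        # Add jitter so concurrent clients don't retry in lockstep.
        time.sleep(min(wait, max_delay) + random.uniform(0, 0.1))
        delay = min(delay * 2, max_delay)  # double each retry, capped at 60s
    raise RuntimeError("rate limited: retries exhausted")
```

The jitter is optional but helps when several workers share one API key, since it spreads their retries out instead of having them all fire at the same instant.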
Advanced: Proactive Rate Limiting
Implement proactive rate limiting by monitoring the `X-RateLimit-Remaining` header:
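One way to do this is to pause before the limit is actually hit. The `throttle_if_needed` helper and the `floor` threshold below are illustrative choices, not part of the API:

```python
import time


def throttle_if_needed(response, floor=50):
    """Pause until the window resets when remaining requests run low.

    Reads the rate-limit headers documented above. `floor` is an arbitrary
    safety margin at which we start waiting, not an API-defined value.
    """
    remaining = int(response.headers.get("X-RateLimit-Remaining", floor + 1))
    if remaining <= floor:
        reset_at = int(response.headers.get("X-RateLimit-Reset", 0))
        # Sleep until the Unix timestamp in X-RateLimit-Reset has passed.
        wait = max(0, reset_at - int(time.time()))
        time.sleep(wait)
```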
Best Practices
Monitor rate limit headers
Always check the `X-RateLimit-Remaining` and `X-RateLimit-Reset` headers in your responses. Use this information to implement intelligent throttling.

Implement exponential backoff
Always use exponential backoff when retrying rate-limited requests. This prevents overwhelming the API and helps distribute load.
- Start with a 1-second delay
- Double the delay with each retry
- Cap the maximum delay at 60 seconds
- Respect the `Retry-After` header when provided
Cache responses when appropriate
Cache API responses on your end to reduce the number of requests you need to make.
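A tiny sketch of client-side caching; the `TTLCache` class and the 30-second TTL are illustrative, and the right TTL depends on how fresh your data needs to be:

```python
import time


class TTLCache:
    """Tiny response cache: serve repeated GETs locally until entries expire."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self._store = {}  # url -> (expires_at, response)

    def get(self, url, fetch):
        """Return a cached response for `url`, calling `fetch` only on a miss."""
        entry = self._store.get(url)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        response = fetch(url)
        self._store[url] = (time.monotonic() + self.ttl, response)
        return response
```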
Distribute load across time
If you need to process large batches of data, spread the requests over time rather than making them all at once.
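For example, pacing a batch to the global write limit from the table above (the `process_batch` helper and `handle` callback are hypothetical names):

```python
import time


def process_batch(items, handle, requests_per_minute=500):
    """Space requests evenly instead of bursting the whole batch at once.

    `handle` performs one API call per item; the default pacing target is
    the global write limit of 500 requests per minute.
    """
    interval = 60.0 / requests_per_minute
    for item in items:
        started = time.monotonic()
        handle(item)
        # Sleep off whatever is left of this item's time slot.
        time.sleep(max(0.0, interval - (time.monotonic() - started)))
```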
Use webhooks instead of polling
Instead of repeatedly polling for updates, configure webhooks to receive real-time notifications. This dramatically reduces your API usage. See our Webhooks Guide for setup instructions.
Monitoring Rate Limit Usage
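The headers documented above give you everything needed for tracking; a minimal sketch that turns them into a loggable snapshot (the `rate_limit_snapshot` helper is a hypothetical name):

```python
def rate_limit_snapshot(headers):
    """Extract the rate-limit headers into a dict suitable for logging/metrics."""
    snapshot = {
        "limit": int(headers.get("X-RateLimit-Limit", 0)),
        "remaining": int(headers.get("X-RateLimit-Remaining", 0)),
        "reset_at": int(headers.get("X-RateLimit-Reset", 0)),
    }
    # Percentage of the current window already consumed.
    snapshot["used_pct"] = (
        100.0 * (snapshot["limit"] - snapshot["remaining"]) / snapshot["limit"]
        if snapshot["limit"] else 0.0
    )
    return snapshot
```

Feeding these snapshots into your metrics system after each response makes it easy to alert before you hit the limit rather than after.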
Track your rate limit usage to optimize your integration.

Troubleshooting
Getting rate limited frequently?
Check your request patterns
Review your API usage to identify patterns:
- Are you making requests in tight loops?
- Are you caching responses appropriately?
Implement request throttling
Add delays between requests or use a request queue to stay within limits.
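One common queue-style approach is a client-side token bucket, which allows short bursts while keeping the long-run rate within the limit. The `TokenBucket` class and its defaults are illustrative:

```python
import time


class TokenBucket:
    """Client-side throttle: allow bursts up to `capacity`, refill steadily."""

    def __init__(self, rate_per_minute=1000, capacity=50):
        self.rate = rate_per_minute / 60.0  # tokens added per second
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            # Sleep just long enough for the next token to accrue.
            time.sleep((1 - self.tokens) / self.rate)
```

Call `acquire()` immediately before each API request; requests beyond the burst capacity block until the bucket refills.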
Use webhooks instead of polling
Polling for updates can quickly consume your rate limit. Use webhooks for real-time notifications instead.