Overview
The GameBoost API implements rate limiting to ensure fair usage and maintain service stability for all partners. Rate limits are applied per API key and tracked separately for read and write operations.
Rate limit information is included in every API response via headers, allowing you to implement proactive rate limit handling.
Rate Limit Tiers
Rate limits vary based on your operation type and partner tier:
Standard Rate Limits
All partners have access to the following rate limits:
| Operation Type | Limit | Description |
| --- | --- | --- |
| Read Operations (GET) | 1000 requests per minute | All GET requests that retrieve data |
| Write Operations (Global) | 500 requests per minute | All POST, PUT, PATCH, DELETE requests combined |
| Write Operations (Per Resource) | 5 per minute, 10 per hour | Writes to the same resource (e.g., updating the same order) |
The per-resource limit prevents excessive updates to individual resources.
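To stay under the per-resource limit, you can track your own writes per resource before sending them. The sketch below is a hypothetical client-side tracker (not an official SDK utility); the class name and injectable clock are illustrative choices.

```javascript
// Hypothetical client-side tracker for the per-resource write limits
// (5 writes per minute, 10 per hour). Illustrative only.
class PerResourceWriteTracker {
  constructor(now = () => Date.now()) {
    this.now = now;          // injectable clock, useful for testing
    this.writes = new Map(); // resourceId -> array of write timestamps (ms)
  }

  // Returns true if a write to `resourceId` would stay within both windows.
  canWrite(resourceId) {
    const ts = this.now();
    const lastHour = (this.writes.get(resourceId) || [])
      .filter(t => ts - t < 60 * 60 * 1000); // keep the last hour only
    const lastMinute = lastHour.filter(t => ts - t < 60 * 1000);
    return lastMinute.length < 5 && lastHour.length < 10;
  }

  recordWrite(resourceId) {
    const history = this.writes.get(resourceId) || [];
    history.push(this.now());
    this.writes.set(resourceId, history);
  }
}
```

Call `canWrite()` before each PATCH/PUT to the same order, and `recordWrite()` after it succeeds; if `canWrite()` returns false, defer the update until the window clears.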
Need higher limits for your use case? Contact our team on Discord to discuss your requirements.
Rate Limit Headers
Every API response includes headers that provide real-time information about your current rate limit status:
| Header | Description | Example Value |
| --- | --- | --- |
| X-RateLimit-Limit | Maximum requests allowed per window | 1000 |
| X-RateLimit-Remaining | Requests remaining in current window | 987 |
| X-Ratelimit-Reset | Unix timestamp when the limit resets | 1728649200 |
| Retry-After | Seconds to wait before retrying (only when rate limited) | 42 |
```http
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 987
X-Ratelimit-Reset: 1728649200
X-GameBoost-Request-Id: req_abc123
```
Monitor the X-RateLimit-Remaining header to implement proactive throttling before hitting the rate limit.
Handling Rate Limits
Rate Limit Response
When you exceed the rate limit, the API returns a 429 Too Many Requests status code:
```json
{
  "message": "Too many requests",
  "request_id": "0e8400-e29b..."
}
```
The response headers also provide rate limit information:
```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-Ratelimit-Reset: 1728649200
Retry-After: 42
```
Never ignore 429 responses. Implement proper retry logic with exponential backoff to avoid being temporarily blocked.
Implementing Exponential Backoff
Exponential backoff is a strategy where you progressively increase the wait time between retries. This helps prevent overwhelming the API when rate limits are hit.
Basic Exponential Backoff
```javascript
async function makeRequestWithBackoff(url, options, maxRetries = 5) {
  let retries = 0;

  while (retries < maxRetries) {
    const response = await fetch(url, options);

    // Success
    if (response.ok) {
      return await response.json();
    }

    // Rate limited: wait, then retry
    if (response.status === 429) {
      const retryAfter = response.headers.get('Retry-After');

      // Use the Retry-After header if available, otherwise exponential backoff
      const waitTime = retryAfter
        ? parseInt(retryAfter, 10) * 1000
        : Math.min(1000 * Math.pow(2, retries), 60000);

      console.log(`Rate limited. Retrying after ${waitTime}ms...`);
      await sleep(waitTime);
      retries++;
      continue;
    }

    // Other errors (4xx/5xx) are not retried
    const error = await response.json();
    throw new Error(error.message);
  }

  throw new Error('Max retries exceeded');
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

// Usage
const result = await makeRequestWithBackoff(
  'https://api.gameboost.com/v2/orders',
  {
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    }
  }
);
```
Advanced: Proactive Rate Limiting
Implement proactive rate limiting by monitoring the X-RateLimit-Remaining header:
```javascript
class RateLimitedAPIClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.baseUrl = 'https://api.gameboost.com/v2';
    this.remaining = null;
    this.resetTime = null;
  }

  async request(endpoint, options = {}) {
    // Proactively wait if we're close to the limit
    await this.checkRateLimit();

    const url = `${this.baseUrl}${endpoint}`;
    const headers = {
      'Authorization': `Bearer ${this.apiKey}`,
      'Content-Type': 'application/json',
      ...options.headers
    };

    const response = await fetch(url, { ...options, headers });

    // Update rate limit info from headers
    this.updateRateLimitInfo(response.headers);

    if (response.status === 429) {
      // Fall back to a 1-second wait if Retry-After is missing
      const retryAfter = parseInt(response.headers.get('Retry-After') ?? '1', 10);
      await this.sleep(retryAfter * 1000);
      return this.request(endpoint, options); // Retry
    }

    if (!response.ok) {
      const error = await response.json();
      throw new Error(error.message);
    }

    return await response.json();
  }

  updateRateLimitInfo(headers) {
    this.remaining = parseInt(headers.get('X-RateLimit-Remaining'), 10);
    this.resetTime = parseInt(headers.get('X-Ratelimit-Reset'), 10);
  }

  async checkRateLimit() {
    // If fewer than 10 requests remain, wait until the window resets
    if (this.remaining !== null && this.remaining < 10) {
      const now = Math.floor(Date.now() / 1000);
      const waitTime = Math.max(0, this.resetTime - now);
      if (waitTime > 0) {
        console.log(`Proactively waiting ${waitTime}s to avoid rate limit...`);
        await this.sleep(waitTime * 1000);
      }
    }
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

// Usage
const client = new RateLimitedAPIClient(apiKey);
const orders = await client.request('/orders');
```
Best Practices
Monitor rate limit headers
Implement exponential backoff
Always use exponential backoff when retrying rate-limited requests. This prevents overwhelming the API and helps distribute load.
- Start with a 1-second delay
- Double the delay with each retry
- Cap the maximum delay at 60 seconds
- Respect the Retry-After header when provided
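The schedule above can be condensed into a small helper. This is a sketch, not part of any official SDK; the function name is illustrative.

```javascript
// Compute the wait (ms) for a given retry attempt: start at 1 s, double
// each retry, cap at 60 s, and prefer the Retry-After header when present.
function backoffDelay(attempt, retryAfterSeconds = null) {
  if (retryAfterSeconds !== null) {
    return retryAfterSeconds * 1000; // the server's hint wins
  }
  return Math.min(1000 * 2 ** attempt, 60000);
}
```

For attempts 0, 1, 2, ... this yields 1 s, 2 s, 4 s, and so on up to the 60-second cap.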
Cache responses when appropriate
Cache API responses on your end to reduce the number of requests you need to make.

```javascript
class CachedAPIClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.cache = new Map();
    this.cacheDuration = 60000; // 1 minute
  }

  async get(endpoint) {
    const cached = this.cache.get(endpoint);
    if (cached && Date.now() - cached.timestamp < this.cacheDuration) {
      return cached.data;
    }

    // fetchFromAPI is your underlying request method (not shown here)
    const data = await this.fetchFromAPI(endpoint);
    this.cache.set(endpoint, { data, timestamp: Date.now() });
    return data;
  }
}
```
Distribute load across time
If you need to process large batches of data, spread the requests over time rather than making them all at once.

```javascript
async function processLargeDataset(items, delayMs = 1000) {
  const results = [];

  for (const item of items) {
    results.push(await processItem(item));

    // Wait between requests to avoid rate limits
    await sleep(delayMs);
  }

  return results;
}
```
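If strictly sequential processing is too slow, a middle ground is to process items in small concurrent chunks with a pause between chunks. The sketch below is illustrative; `processItem`, the chunk size, and the delay are placeholders you would tune against your own limits.

```javascript
// Process items in small chunks: each chunk runs concurrently,
// with a pause between chunks to spread load over time.
async function processInChunks(items, processItem, chunkSize = 5, delayMs = 1000) {
  const results = [];

  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    results.push(...await Promise.all(chunk.map(processItem)));

    // Pause before the next chunk (skip after the last one)
    if (i + chunkSize < items.length) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }

  return results;
}
```

Keep the chunk size well below your per-minute limits so a burst of concurrent requests cannot exhaust the window on its own.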
Use webhooks instead of polling
Instead of repeatedly polling for updates, configure webhooks to receive real-time notifications. This dramatically reduces your API usage. See our Webhooks Guide for setup instructions.
Monitoring Rate Limit Usage
Track your rate limit usage to optimize your integration:
```javascript
class RateLimitMonitor {
  constructor() {
    this.stats = {
      requests: 0,
      rateLimited: 0,
      retries: 0
    };
  }

  recordRequest(response) {
    this.stats.requests++;

    if (response.status === 429) {
      this.stats.rateLimited++;
    }

    // Log rate limit info
    console.log({
      timestamp: new Date().toISOString(),
      remaining: response.headers.get('X-RateLimit-Remaining'),
      reset: new Date(response.headers.get('X-Ratelimit-Reset') * 1000).toISOString()
    });
  }

  recordRetry() {
    this.stats.retries++;
  }

  getStats() {
    return {
      ...this.stats,
      retryRate: (this.stats.retries / this.stats.requests * 100).toFixed(2) + '%',
      rateLimitRate: (this.stats.rateLimited / this.stats.requests * 100).toFixed(2) + '%'
    };
  }
}
```
Troubleshooting
Getting rate limited frequently?
Check your request patterns
Review your API usage to identify patterns:
Are you making requests in tight loops?
Are you caching responses appropriately?
Implement request throttling
Add delays between requests or use a request queue to stay within limits.
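One way to throttle is a minimal queue that serializes requests and spaces them a fixed interval apart. This is a sketch only; a production queue would also handle retries, priorities, and error reporting, and the class name is a placeholder.

```javascript
// Minimal request queue that runs tasks one at a time, spaced at least
// `intervalMs` apart. Tasks are functions that return a promise.
class ThrottledQueue {
  constructor(intervalMs = 100) {
    this.intervalMs = intervalMs;
    this.chain = Promise.resolve(); // serializes queued tasks
  }

  // Enqueue a task; resolves (or rejects) with that task's result.
  enqueue(task) {
    const result = this.chain.then(task);
    // The next task waits for this one plus the spacing interval.
    this.chain = result
      .catch(() => {}) // a failed task must not stall the queue
      .then(() => new Promise(resolve => setTimeout(resolve, this.intervalMs)));
    return result;
  }
}
```

Route every API call through `queue.enqueue(() => fetch(...))` so bursts from different parts of your code are automatically spread out.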
Use webhooks instead of polling
Polling for updates can quickly consume your rate limit. Use webhooks for real-time notifications instead.
Next Steps