Rate limiting controls how many requests your app can make to JTL’s APIs within a given time window. It ensures fair usage across all developers and prevents any single app from overloading the system. Exceeding the limit returns an HTTP 429 “Too Many Requests” error. Some endpoints have lower limits than others to protect resource-intensive operations.

How Rate Limiting Works

Every API response includes headers that tell you your current quota status. Monitor these headers to stay within limits proactively rather than reacting to 429 errors.

Rate Limit Response Headers

Every response from JTL’s APIs includes these headers:
| Header | Description | Example |
| --- | --- | --- |
| X-RateLimit-Limit | Maximum requests allowed in the current interval | 10 |
| X-RateLimit-Interval-Length-Seconds | Length of the rate limit window in seconds | 45 |
| X-RateLimit-Remaining | Requests remaining in the current interval | 5 |
Read these headers after every request. They’re your real-time view of how much quota you have left. When X-RateLimit-Remaining approaches zero, slow down or pause until the interval resets.
const response = await fetch('https://api.jtl-cloud.com/erp/v2/products', {
  headers: {
    'Authorization': `Bearer ${accessToken}`,
    'X-Tenant-ID': tenantId,
  },
});
 
// Check rate limit status
const limit = response.headers.get('X-RateLimit-Limit');
const remaining = response.headers.get('X-RateLimit-Remaining');
const interval = response.headers.get('X-RateLimit-Interval-Length-Seconds');
 
console.log(`Rate limit: ${remaining}/${limit} remaining (resets every ${interval}s)`);

Rate Limits by API

Cloud-ERP API

The Cloud-ERP API applies rate limits to all endpoints. Use the X-RateLimit-* response headers to determine the limits for each endpoint in your environment.
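Since every endpoint returns the same three headers, a small parsing helper keeps the checks in one place. This is a sketch: the `parseRateLimitHeaders` name and `RateLimitStatus` type are our own, not part of any JTL SDK.

```typescript
// Collect the X-RateLimit-* headers into one typed object.
interface RateLimitStatus {
  limit: number;
  remaining: number;
  intervalSeconds: number;
}

// Works with any header container that exposes a get() method,
// such as the Headers object on a fetch() response.
function parseRateLimitHeaders(headers: {
  get(name: string): string | null | undefined;
}): RateLimitStatus {
  const toInt = (value: string | null | undefined, fallback: number): number => {
    const n = parseInt(value ?? '', 10);
    return Number.isNaN(n) ? fallback : n;
  };
  return {
    // Fall back to permissive values when a header is missing,
    // so absent headers never block requests.
    limit: toInt(headers.get('X-RateLimit-Limit'), Infinity),
    remaining: toInt(headers.get('X-RateLimit-Remaining'), Infinity),
    intervalSeconds: toInt(headers.get('X-RateLimit-Interval-Length-Seconds'), 60),
  };
}
```

Call it once per response and branch on `remaining` rather than scattering header reads through your code.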

SCX Channel API

The SCX Channel API has published rate limits per route pattern:
| Route pattern | Requests | Interval | Effective rate |
| --- | --- | --- | --- |
| Default (all other routes) | 10 | 60 seconds | ~0.17/sec |
| /v[version]/channel* | 60 | 60 seconds | 1/sec |
| /v[version]/channel/order* | 600 | 60 seconds | 10/sec |
| /v[version]/channel/event* | 240 | 60 seconds | 4/sec |
| /v[version]/channel/offer* | 1,500 | 60 seconds | 25/sec |
| /v[version]/channel/attribute/category* | 86,400 | 86,400 seconds | 1/sec (daily window) |
| /v[version]/channel/attribute/global* | 10 | 3,600 seconds | ~0.003/sec (hourly window) |
| /v[version]/channel/categories* | 10 | 3,600 seconds | ~0.003/sec (hourly window) |
Sandbox rate limits are different from production. Don’t assume your sandbox testing reflects production capacity. Always check the X-RateLimit-* headers in each environment.

Handling Rate Limit Errors

When you exceed the limit, the API returns HTTP 429 Too Many Requests. Here’s how to handle it properly.

1. Monitor Headers Proactively

Don’t wait for a 429; prevent it. Track X-RateLimit-Remaining and pause before you hit zero:
async function fetchWithRateLimit(
  url: string,
  options: RequestInit
): Promise<Response> {
  const response = await fetch(url, options);
 
  const remaining = parseInt(
    response.headers.get('X-RateLimit-Remaining') || '999',
    10
  );
  const interval = parseInt(
    response.headers.get('X-RateLimit-Interval-Length-Seconds') || '60',
    10
  );
 
  // If running low, pause before the next request
  if (remaining <= 2) {
    console.warn(`Rate limit nearly exhausted (${remaining} remaining). Pausing...`);
    await new Promise(resolve => setTimeout(resolve, interval * 1000));
  }
 
  return response;
}

2. Retry with Exponential Backoff

When you get a 429, don’t retry immediately. Wait at increasing intervals before retrying:
async function fetchWithRetry(
  url: string,
  options: RequestInit,
  maxRetries = 3
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
 
    if (response.status === 429 && attempt < maxRetries) {
      const waitTime = Math.pow(2, attempt) * 1000 + Math.random() * 500;
      console.warn(`Rate limited. Retrying in ${Math.round(waitTime)}ms...`);
      await new Promise(resolve => setTimeout(resolve, waitTime));
      continue;
    }
 
    return response;
  }
 
  throw new Error('Max retries exceeded');
}
Backoff Schedule (with Jitter):
| Attempt | Base wait | With jitter |
| --- | --- | --- |
| 1st retry | 1 second | 1.0–1.5 seconds |
| 2nd retry | 2 seconds | 2.0–2.5 seconds |
| 3rd retry | 4 seconds | 4.0–4.5 seconds |

3. Queue and Spread Requests

Use a queue to enforce a fixed request rate and avoid bursts.
class RequestQueue {
  private queue: (() => Promise<void>)[] = [];
  private processing = false;
  private delayMs: number;
 
  constructor(requestsPerMinute: number) {
    this.delayMs = Math.ceil(60000 / requestsPerMinute);
  }
 
  async add<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push(async () => {
        try {
          resolve(await fn());
        } catch (err) {
          reject(err);
        }
      });
      this.process();
    });
  }
 
  private async process() {
    if (this.processing) return;
    this.processing = true;
 
    while (this.queue.length > 0) {
      const task = this.queue.shift()!;
      await task();
      await new Promise(resolve => setTimeout(resolve, this.delayMs));
    }
 
    this.processing = false;
  }
}
 
// Usage: limit to 50 requests per minute
const queue = new RequestQueue(50);
 
const result = await queue.add(() =>
  fetch('https://api.jtl-cloud.com/erp/v2/products', { headers })
);

4. Cache Responses

Cache data based on how often it changes:
  • Rarely changes → cache longer (categories, attributes)
  • Frequently changes → cache briefly or not at all (orders)
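A minimal in-memory TTL cache is enough to implement this tiering. The `TTLCache` class below is our own sketch, not part of any JTL SDK; pick the TTL per data type when you call `set`.

```typescript
// Minimal in-memory cache with per-entry time-to-live.
class TTLCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  // ttlMs: how long the entry stays valid, chosen per data type
  // (long for categories/attributes, short or zero for orders).
  set(key: string, value: T, ttlMs: number): void {
    this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  // Returns undefined on a miss or when the entry has expired.
  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) {
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }
}

// Usage: cache categories for an hour; a cache hit costs no quota.
const cache = new TTLCache<unknown>();
cache.set('categories', [{ id: 1, name: 'Shoes' }], 60 * 60 * 1000);
```

Every cache hit is a request you didn’t spend against your rate limit, which matters most on the low-limit reference-data routes.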

Best Practices

| Practice | Why |
| --- | --- |
| Read rate limit headers on every response | Know your remaining quota before it runs out |
| Spread requests evenly | Bursts exhaust your quota early and waste the rest of the window |
| Add jitter to retries | Prevents multiple clients from retrying at the exact same time |
| Cache aggressively for reference data | Categories, attributes, and settings don’t change often |
| Use webhooks instead of polling | Instead of checking “did anything change?” every minute, let JTL tell you. See Webhooks. |
| Max 3 retries on 429 | If you’re still rate limited after 3 retries, something is fundamentally wrong with your request pattern |
| Log rate limit events | Track when you’re being throttled to identify patterns and optimise your request timing |
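Logging throttling events can be as simple as keeping timestamps and counting how many fall in a recent window. The `RateLimitLog` class below is a sketch of our own, not a JTL API.

```typescript
// Record when 429s happen and report how often they cluster.
class RateLimitLog {
  private timestamps: number[] = [];

  // Call this every time a request comes back with HTTP 429.
  record(now: number = Date.now()): void {
    this.timestamps.push(now);
  }

  // How many throttling events occurred within the last windowMs.
  countInWindow(windowMs: number, now: number = Date.now()): number {
    return this.timestamps.filter(t => now - t <= windowMs).length;
  }
}

// Usage: record every 429; alert when throttling clusters.
const log = new RateLimitLog();
// if (response.status === 429) log.record();
// if (log.countInWindow(5 * 60 * 1000) > 10) { /* revisit request pattern */ }
```

Even this much is enough to spot patterns, such as a nightly sync job that bursts past the limit, so you can reschedule or queue it.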

Quick Reference

| Question | Answer |
| --- | --- |
| What status code means rate limited? | 429 Too Many Requests |
| How do I check my remaining quota? | Read the X-RateLimit-Remaining response header |
| How long until the limit resets? | Check X-RateLimit-Interval-Length-Seconds |
| What’s the max requests per interval? | Check X-RateLimit-Limit (varies by endpoint) |
| Should I retry on 429? | Yes, with exponential backoff + jitter (max 3 retries) |
| Are sandbox limits the same as production? | No. Always check the response headers per environment |

Next steps

Webhooks

Reduce polling and API calls by reacting to events in real time.

Pagination

How to page through large result sets without hitting rate limits.

Error Handling

Handle 429 errors alongside other error types.

API Reference

Browse endpoints and their specific rate limits.