
Rate Limits

Understanding rate limits and pagination helps you build robust integrations.

Rate Limits

Default Limits

Endpoint Type        Limit   Window
Standard requests    100     per minute
Upload requests      10      concurrent
Export requests      5       concurrent

Rate Limit Headers

All responses include rate limit headers:

X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1705312260
Header                   Description
X-RateLimit-Limit        Max requests allowed
X-RateLimit-Remaining    Requests remaining
X-RateLimit-Reset        Unix timestamp when the limit resets
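
For example, a client can check these headers before sending further requests and pause until the window resets. This is a minimal sketch using the requests library; url and headers stand in for whatever request you are making:

import time
import requests

response = requests.get(url, headers=headers)  # url and headers are placeholders
remaining = int(response.headers.get("X-RateLimit-Remaining", "1"))
reset_at = int(response.headers.get("X-RateLimit-Reset", "0"))

if remaining == 0:
    # X-RateLimit-Reset is a Unix timestamp, so sleep until the window rolls over
    time.sleep(max(0, reset_at - time.time()))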

Handling Rate Limits

When rate limited, the API returns 429 Too Many Requests:

{ "detail": "Request was throttled. Expected available in 30 seconds." }

Implement exponential backoff:

import time
import requests

def fetch_with_retry(url, headers, max_retries=3):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 429:
            # Back off exponentially: 1s, 2s, 4s, ...
            wait_time = 2 ** attempt
            time.sleep(wait_time)
            continue
        return response
    raise Exception("Max retries exceeded")
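
As a usage example, using the base URL and API-key header shown in the pagination example below (API_KEY is a placeholder for your key):

BASE_URL = "https://server.avala.ai/api/v1"
headers = {"X-Avala-Api-Key": API_KEY}  # API_KEY is a placeholder

response = fetch_with_retry(f"{BASE_URL}/datasets/", headers)
print(response.status_code)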

Pagination

List endpoints use cursor-based pagination.

Query Parameters

Parameter   Type     Description
cursor      string   Pagination cursor from previous response
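
For illustration, the cursor can be passed as a query parameter, although in practice the next URL in each response already includes it. This sketch assumes the BASE_URL and headers defined in the example below; next_cursor is a placeholder for a value taken from a previous response:

params = {"cursor": next_cursor}  # placeholder: cursor taken from a previous page
response = requests.get(f"{BASE_URL}/datasets/", headers=headers, params=params)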

Response Format

{ "count": 150, "next": "https://server.avala.ai/api/v1/datasets/?cursor=cD0yMDI0...", "previous": null, "results": [...] }

Example

import requests

BASE_URL = "https://server.avala.ai/api/v1"
headers = {"X-Avala-Api-Key": API_KEY}

def fetch_all_datasets():
    all_datasets = []
    url = f"{BASE_URL}/datasets/"
    while url:
        response = requests.get(url, headers=headers)
        data = response.json()
        all_datasets.extend(data["results"])
        # Follow the "next" URL until the last page, where it is null
        url = data.get("next")
    return all_datasets

Best Practices

Respect Rate Limits

  • Check X-RateLimit-Remaining before making requests
  • Implement backoff when limits are reached
  • Spread requests over time for large operations (see the pacing sketch below)
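
One simple way to spread requests is to pause briefly between calls so a long batch stays under the 100-requests-per-minute default. This is a sketch, not part of the API; paced_get is a hypothetical helper:

import time
import requests

MIN_INTERVAL = 60 / 100  # seconds between calls to stay under 100 requests per minute

def paced_get(url, headers):
    # Hypothetical helper: sleep before each request so a large batch
    # is spread across the rate-limit window
    time.sleep(MIN_INTERVAL)
    return requests.get(url, headers=headers)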

Efficient Pagination

  • Use the next URL directly instead of constructing cursors
  • Process results as you paginate rather than loading everything into memory (see the generator sketch below)
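
A generator version of the earlier example follows this advice by yielding results page by page instead of accumulating them. It is a sketch assuming the same BASE_URL and headers as above:

def iter_datasets():
    # Yield datasets one page at a time instead of building a single list
    url = f"{BASE_URL}/datasets/"
    while url:
        data = requests.get(url, headers=headers).json()
        yield from data["results"]
        url = data.get("next")

for dataset in iter_datasets():
    ...  # process each dataset without holding the full list in memory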

Caching

Cache responses when appropriate:

from functools import lru_cache

import requests

@lru_cache(maxsize=100)
def get_dataset(owner, slug):
    # Uses the BASE_URL and headers defined above
    response = requests.get(
        f"{BASE_URL}/datasets/{owner}/{slug}/", headers=headers
    )
    return response.json()
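
Repeated calls with the same arguments are then served from the in-memory cache instead of issuing a new request (the owner and slug values here are placeholders):

first = get_dataset("acme", "traffic-signs")   # hits the API
second = get_dataset("acme", "traffic-signs")  # returned from the lru_cache, no request made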