labelf.ai
API Documentation
Base URL: https://api.labelf.ai/v2 | REST + JSON | Bearer auth

Errors & Rate Limits

Standard HTTP status codes, predictable error responses, and transparent rate limits. Every error returns JSON with enough context to debug without guessing.

Error Response Format

All error responses follow the same JSON structure. The status field mirrors the HTTP status code, and message describes what went wrong in plain English.

{
  "status": 400,
  "message": "texts field is required",
  "error_code": "MISSING_REQUIRED_FIELD",
  "request_id": "req_7f3a9b2c"
}

The request_id is useful for debugging — include it when contacting support. The error_code is a machine-readable identifier you can match on in your error handling logic.
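When branching on errors in client code, match on error_code rather than parsing the message text, which may change over time. A minimal Python sketch of that pattern (field names follow the example above; the helper itself is illustrative, not part of an official SDK):

```python
import json

def parse_error(body: str) -> dict:
    """Parse a Labelf error response into its documented fields.

    Assumes the JSON shape shown above; missing keys fall back
    to sensible defaults rather than raising.
    """
    payload = json.loads(body)
    return {
        "status": payload.get("status"),
        "error_code": payload.get("error_code", "UNKNOWN"),
        "message": payload.get("message", ""),
        "request_id": payload.get("request_id"),  # quote this when contacting support
    }

body = ('{"status": 400, "message": "texts field is required", '
        '"error_code": "MISSING_REQUIRED_FIELD", "request_id": "req_7f3a9b2c"}')
err = parse_error(body)
if err["error_code"] == "MISSING_REQUIRED_FIELD":
    # machine-readable match, independent of message wording
    pass
```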

Status Codes

Status | Meaning               | Description
200    | Success               | The request completed successfully.
400    | Bad Request           | Invalid parameters, missing required fields, or malformed JSON.
401    | Unauthorized          | Missing or invalid API key. Check your Authorization header.
403    | Forbidden             | Valid API key but insufficient permissions or quota exceeded.
404    | Not Found             | The requested resource (model, dataset, etc.) does not exist.
409    | Conflict              | Resource state conflict — e.g. uploading to a dataset that is still processing.
429    | Too Many Requests     | Rate limit exceeded. Back off and retry using the Retry-After header.
500    | Internal Server Error | Something went wrong on our end. Retry with exponential backoff.

Common Error Scenarios

These are the errors you will hit most often during integration, with specific debugging steps for each.

400 Model is archived

The model you are calling has been archived. Archived models cannot serve inference requests.

Fix: Redeploy the model from the dashboard, or switch to a different active model. Check available models via GET /v2/models.

400 texts field is required

The request body is missing the texts array, or it was sent as a different type.

Fix: Ensure your request body contains a "texts" key with an array of strings. Even for a single text, wrap it in an array.
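A small guard in client code prevents this class of error entirely. An illustrative Python helper (not from an official SDK) that always sends "texts" as an array of strings:

```python
import json

def build_inference_body(texts) -> str:
    """Build a JSON body for the inference endpoint.

    Accepts a single string or a list of strings; a lone string
    is wrapped in a list, since "texts" must always be an array.
    """
    if isinstance(texts, str):
        texts = [texts]
    if not isinstance(texts, list) or not all(isinstance(t, str) for t in texts):
        raise TypeError('"texts" must be a string or a list of strings')
    return json.dumps({"texts": texts})
```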

403 Quota exceeded

Your workspace has used all classification requests for the current billing period.

Fix: Check your plan limits in Workspace Settings. Upgrade your plan or wait for the next billing cycle to reset your quota.

404 No deployed model found

The model ID exists but the model has not been deployed yet. Only deployed models can serve inference.

Fix: Deploy the model from the Labelf dashboard first. Once deployed, the model endpoint becomes active within seconds.

400 Empty texts array

The texts array was provided but contains no elements, or all elements are empty strings.

Fix: The texts array must contain at least one non-empty string. Remove any empty strings before sending.
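One way to avoid this is to filter the array client-side before sending. An illustrative Python helper:

```python
def clean_texts(texts):
    """Drop empty and whitespace-only strings before sending.

    Raises if nothing survives, since the API requires at least
    one non-empty string in the "texts" array.
    """
    cleaned = [t for t in texts if t and t.strip()]
    if not cleaned:
        raise ValueError('"texts" must contain at least one non-empty string')
    return cleaned
```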

429 Rate limit exceeded

You have sent too many requests in a short window.

Fix: Read the Retry-After header for how many seconds to wait. Implement exponential backoff. Consider batching texts (up to 8 per request) to reduce call volume.

Rate Limits

Rate limits protect service stability and are set per workspace. If you exceed a limit, the API returns 429 Too Many Requests with headers telling you when to retry.

Limit                            | Value        | Note
Max texts per inference request  | 8            | Batch up to 8 texts in a single call to minimize request count
Max texts per similarity request | 200          | Larger batches for embedding-based comparison
Requests per minute — Starter    | 60           | Suitable for low-volume integrations and testing
Requests per minute — Growth     | 300          | Production workloads with moderate volume
Requests per minute — Enterprise | Custom       | Dedicated capacity, no shared rate limits
Monthly classification quota     | Plan-based   | Resets on billing cycle date
Burst capacity                   | 2x sustained | Short bursts up to 2x your per-minute limit for 10 seconds
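To stay under the per-request text limits, split large workloads client-side. A sketch of such a batching helper (the limits come from the table above; the function itself is illustrative):

```python
def batch_texts(texts, max_per_request=8):
    """Split texts into chunks no larger than the per-request limit.

    Use max_per_request=8 for inference and 200 for similarity,
    per the limits documented above.
    """
    return [texts[i:i + max_per_request]
            for i in range(0, len(texts), max_per_request)]
```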

Rate Limit Headers

Every API response includes rate limit headers so you can monitor usage proactively, not just react to 429s.

Header                | Description
Retry-After           | Seconds to wait before retrying. Present on 429 responses.
X-RateLimit-Limit     | Your per-minute request limit for the current plan.
X-RateLimit-Remaining | Requests remaining in the current window.
X-RateLimit-Reset     | Unix timestamp when the current rate limit window resets.
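These headers make proactive throttling possible: when X-RateLimit-Remaining reaches zero, pause until X-RateLimit-Reset instead of provoking a 429. An illustrative Python sketch using the header names above:

```python
import time

def seconds_until_reset(headers, now=None):
    """Return how long to pause before the next request.

    Returns 0 while requests remain in the window; otherwise the
    seconds until the window resets, per the headers above.
    """
    now = time.time() if now is None else now
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    reset = int(headers.get("X-RateLimit-Reset", 0))
    if remaining > 0:
        return 0.0
    return max(0.0, reset - now)
```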

Retry Strategy

Not all errors should be retried. Follow these rules to build a resilient integration.

Retry: 5xx errors

Server errors are usually transient. Use exponential backoff: 1s, 2s, 4s. Max 3 retries. If the error persists after retries, the issue is on our side — contact support with the request_id.

Retry: 429 rate limits

Wait for the duration specified in the Retry-After header, then retry. Consider batching texts (up to 8 per request) to reduce call volume.

Do not retry: 4xx errors

Client errors (400, 401, 403, 404, 409) indicate a problem with your request. Retrying the same request will produce the same error. Fix the request parameters, check your credentials, or verify the resource exists.
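The three rules above can be combined into a single retry wrapper. A Python sketch, where `send` stands in for whatever HTTP call your client makes (it is assumed to return a status code, a header dict, and a body):

```python
import time

def request_with_retries(send, max_retries=3, sleep=time.sleep):
    """Retry wrapper implementing the rules above.

    5xx: exponential backoff 1s, 2s, 4s, up to 3 retries.
    429: wait for the Retry-After header, then retry.
    Other 4xx: fail immediately — retrying cannot help.
    """
    backoff = 1.0
    for attempt in range(max_retries + 1):
        status, headers, body = send()
        if status < 400:
            return status, headers, body
        if status == 429:
            if attempt == max_retries:
                break
            sleep(float(headers.get("Retry-After", backoff)))
        elif status >= 500:
            if attempt == max_retries:
                break
            sleep(backoff)
            backoff *= 2
        else:
            break  # other 4xx: fix the request, do not retry
    return status, headers, body
```

Injecting `sleep` keeps the wrapper testable; in production, the default time.sleep applies.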

Idempotency

Classification and similarity requests are inherently idempotent — the same input always produces the same output. Safe to retry without side effects.

Example: Retry Logic

A recommended retry pattern for production integrations:

# Attempt 1: normal request
curl -X POST https://api.labelf.ai/v2/models/42/inference \
  -H "Authorization: Bearer $LABELF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"texts": ["I want to cancel my subscription"]}'

# If 429 → read Retry-After header, wait, retry
# If 5xx → wait 1s, retry. Then 2s, then 4s. Max 3 retries.
# If 4xx (not 429) → do not retry, fix the request

# Response headers on every request:
# X-RateLimit-Limit: 300
# X-RateLimit-Remaining: 247
# X-RateLimit-Reset: 1700000060

Pagination

List endpoints (models, datasets, records) support offset-based pagination. Use the offset and limit query parameters to page through results.

GET /v2/models?offset=20&limit=10

# Response includes pagination metadata
{
  "data": [...],
  "total": 47,
  "offset": 20,
  "limit": 10,
  "has_more": true
}
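A loop that follows has_more until the list is exhausted might look like this in Python (`fetch_page` stands in for your HTTP call to the list endpoint and is assumed to return a dict shaped like the response above):

```python
def list_all(fetch_page, limit=10):
    """Collect every item from an offset-paginated list endpoint.

    fetch_page(offset, limit) must return a dict with "data" and
    "has_more" keys, matching the response format shown above.
    """
    items, offset = [], 0
    while True:
        page = fetch_page(offset, limit)
        items.extend(page["data"])
        if not page.get("has_more"):
            return items
        offset += limit
```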
© 2026 Labelf. All rights reserved.