
Error Response Format

All errors are returned in a consistent JSON format:
{
  "error": {
    "message": "Human-readable error description",
    "type": "error_type",
    "code": "error_code",
    "param": "parameter_name"  // Optional, for validation errors
  }
}
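When calling the API without an SDK, this envelope can be unpacked directly. The helper below is a hypothetical sketch (not part of the API or any SDK) that turns the envelope into a log-friendly string:

```python
def describe_error(body: dict) -> str:
    """Build a log-friendly string from the error envelope shown above."""
    err = body.get("error", {})
    parts = [err.get("type", "unknown"), err.get("message", "")]
    if err.get("param"):  # present only for validation errors
        parts.append(f"(param: {err['param']})")
    return " ".join(p for p in parts if p)

# Example with the envelope shape shown above:
body = {"error": {"message": "Invalid value",
                  "type": "invalid_request_error",
                  "param": "model"}}
print(describe_error(body))  # invalid_request_error Invalid value (param: model)
```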

HTTP Status Codes

| Code | Description |
|------|-------------|
| 400 | Bad Request - Invalid parameters |
| 401 | Unauthorized - Invalid or missing API key |
| 402 | Payment Required - Insufficient balance |
| 404 | Not Found - Model or resource not found |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Internal Server Error |
| 502 | Bad Gateway - Upstream provider error |
| 503 | Service Unavailable - All channels failed |
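A common follow-on decision is which of these codes are worth retrying. The classification below is a reasonable assumption (transient conditions vs. client mistakes), not something the API mandates:

```python
# Status codes from the table above that usually indicate a transient condition.
RETRYABLE = {429, 500, 502, 503}

def is_retryable(status_code: int) -> bool:
    """Return True if a request with this status code may succeed on retry."""
    return status_code in RETRYABLE

print(is_retryable(429))  # True: back off and retry
print(is_retryable(401))  # False: fix the API key instead
```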

Error Types

Authentication Errors (401)

| Type | Description |
|------|-------------|
| unauthorized | No API key provided |
| invalid_api_key | API key format is incorrect |
| expired_api_key | API key has been revoked |
from openai import OpenAI, AuthenticationError

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.lemondata.cc/v1"
)

try:
    response = client.chat.completions.create(...)
except AuthenticationError as e:
    print(f"Authentication failed: {e.message}")

Payment Errors (402)

| Type | Description |
|------|-------------|
| insufficient_balance | Account balance is too low |
| quota_exceeded | API key usage limit reached |
from openai import OpenAI, APIStatusError

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.lemondata.cc/v1"
)

try:
    response = client.chat.completions.create(...)
except APIStatusError as e:
    if e.status_code == 402:
        print("Please top up your account balance")

Validation Errors (400)

| Type | Description |
|------|-------------|
| invalid_request_error | Request parameters are invalid |
| context_length_exceeded | Input too long for model |
| model_not_found | Requested model doesn't exist |
{
  "error": {
    "message": "model: 'gpt-999' is not a valid model",
    "type": "invalid_request_error",
    "param": "model"
  }
}
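The optional `param` field makes validation failures actionable, since it names the offending parameter. `validation_hint` below is a hypothetical helper illustrating this, not an SDK function:

```python
def validation_hint(error: dict) -> str:
    """Turn a 400 error envelope into a user-facing hint (hypothetical helper)."""
    if error.get("type") != "invalid_request_error":
        return error.get("message", "Unknown error")
    param = error.get("param")
    msg = error.get("message", "")
    # Point the caller at the specific parameter when the API names one.
    return f"Check the '{param}' parameter: {msg}" if param else msg

err = {"message": "model: 'gpt-999' is not a valid model",
       "type": "invalid_request_error",
       "param": "model"}
print(validation_hint(err))
```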

Rate Limit Errors (429)

When you exceed rate limits:
{
  "error": {
    "message": "Rate limit exceeded. Please slow down.",
    "type": "rate_limit_exceeded"
  }
}
The response also includes these headers:
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1234567890
Retry-After: 60
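These headers tell you exactly how long to wait. A minimal sketch of turning them into a sleep duration, assuming `Retry-After` is in seconds and `X-RateLimit-Reset` is a Unix timestamp as shown above:

```python
import time

def wait_seconds(headers: dict) -> float:
    """Compute how long to sleep from the rate-limit headers above.
    Prefers Retry-After; falls back to X-RateLimit-Reset."""
    if "Retry-After" in headers:
        return float(headers["Retry-After"])
    reset = headers.get("X-RateLimit-Reset")
    if reset is not None:
        return max(0.0, float(reset) - time.time())
    return 1.0  # conservative default when no header is present

print(wait_seconds({"Retry-After": "60"}))  # 60.0
```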

Upstream Errors (502, 503)

| Type | Description |
|------|-------------|
| upstream_error | Provider returned an error |
| all_channels_failed | No available providers |
| timeout_error | Request timed out |
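Because 502/503 often resolve once another channel becomes available, a short retry loop helps. This generic sketch works with any client whose exceptions expose a `status_code` attribute (as the OpenAI SDK's `APIStatusError` does); the function name and backoff values are illustrative assumptions:

```python
import time

def retry_on_upstream(fn, retries=2, retryable=(502, 503)):
    """Call fn(); retry with exponential backoff when it raises an
    exception carrying one of the retryable upstream status codes."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception as e:
            code = getattr(e, "status_code", None)
            if code in retryable and attempt < retries:
                time.sleep(2 ** attempt * 0.01)  # scaled down; use whole seconds in production
                continue
            raise

# Example: succeed after one simulated transient 503.
class UpstreamError(Exception):
    def __init__(self, code):
        self.status_code = code

attempts = []
def flaky_call():
    attempts.append(1)
    if len(attempts) == 1:
        raise UpstreamError(503)
    return "ok"

print(retry_on_upstream(flaky_call))  # ok
```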

Handling Errors in Python

import time

from openai import OpenAI, APIError, RateLimitError, APIConnectionError

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.lemondata.cc/v1"
)

def chat_with_retry(messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-4o",
                messages=messages
            )
        except RateLimitError:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # Exponential backoff: 1s, 2s, 4s...
                continue
            raise
        except APIConnectionError as e:
            print(f"Connection error: {e}")
            raise
        except APIError as e:
            print(f"API error: {e.status_code} - {e.message}")
            raise

Handling Errors in JavaScript

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-your-api-key',
  baseURL: 'https://api.lemondata.cc/v1'
});

async function chatWithRetry(messages, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await client.chat.completions.create({
        model: 'gpt-4o',
        messages
      });
    } catch (error) {
      if (error instanceof OpenAI.RateLimitError) {
        if (attempt < maxRetries - 1) {
          await new Promise(r => setTimeout(r, 2 ** attempt * 1000));
          continue;
        }
      }
      throw error;
    }
  }
}

Best Practices

When rate limited, wait progressively longer between retries (exponential backoff):
wait_time = 2 ** attempt  # 1s, 2s, 4s, 8s...

Always set a reasonable timeout to avoid requests hanging indefinitely:
client = OpenAI(timeout=60.0)  # 60-second timeout

Log the full error response, including the request ID, when contacting support:
try:
    response = client.chat.completions.create(...)
except APIError as e:
    logger.error(f"API Error: {e.status_code} - {e.message}")

Some models have specific requirements (e.g., max tokens, image formats). Validate inputs before making requests.