Model Selection

Choosing the right model can significantly impact cost and quality.

Task-Based Recommendations

| Task              | Recommended Models                                   | Reasoning                            |
| ----------------- | ---------------------------------------------------- | ------------------------------------ |
| Simple Q&A        | gpt-4.1-mini, gemini-2.0-flash                       | Fast, cheap, good enough             |
| Complex reasoning | o3, claude-sonnet-4-20250514, deepseek-r1            | Better logic and planning            |
| Coding            | claude-sonnet-4-20250514, gpt-4o, deepseek-v3        | Optimized for code                   |
| Creative writing  | claude-3-5-sonnet-20241022, gpt-4o                   | Better prose quality                 |
| Vision/Images     | gpt-4o, claude-3-5-sonnet-20241022, gemini-2.0-flash | Native vision support                |
| Long context      | gemini-2.5-pro, claude-3-5-sonnet-20241022           | Largest context windows (up to 1M+)  |
| Cost-sensitive    | gpt-4.1-mini, gemini-2.0-flash, deepseek-v3          | Best value                           |
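In code, these recommendations can be captured as a simple task-to-model map. A minimal sketch; the task labels are illustrative strings for this example, not API identifiers:

# Illustrative task-to-model routing based on the table above.
TASK_MODELS = {
    "qa": "gpt-4.1-mini",
    "reasoning": "o3",
    "coding": "claude-sonnet-4-20250514",
    "creative": "claude-3-5-sonnet-20241022",
    "vision": "gpt-4o",
    "long_context": "gemini-2.5-pro",
}

def pick_model(task: str) -> str:
    """Return the recommended model for a task, defaulting to a budget model."""
    return TASK_MODELS.get(task, "gpt-4.1-mini")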

Cost Tiers

$$$$ Premium:  o3, claude-opus-4-20250805
$$$  Standard: gpt-4o, claude-3-5-sonnet-20241022
$$   Budget:   gpt-4.1-mini, gemini-2.0-flash
$    Economy:  deepseek-v3, deepseek-r1

Cost Optimization

1. Use Smaller Models First

def smart_query(question: str, complexity: str = "auto"):
    """Use cheaper models for simple tasks."""

    if complexity == "simple":
        model = "gpt-4.1-mini"
    elif complexity == "complex":
        model = "gpt-4o"
    else:
        # "auto": start with the cheaper model; escalate to a stronger
        # model if the answer falls short (see the sketch below)
        model = "gpt-4.1-mini"

    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}]
    )
    return response
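One way to escalate is to retry with a stronger model when the cheap answer is not good enough. A minimal sketch; is_good_enough is a placeholder for whatever quality check fits your application:

def answer_with_escalation(question: str) -> str:
    """Try the budget model first; retry with a stronger model if needed."""
    cheap = smart_query(question, complexity="simple")
    answer = cheap.choices[0].message.content

    # Placeholder check -- replace with your own heuristic or validator.
    if is_good_enough(answer):
        return answer

    strong = smart_query(question, complexity="complex")
    return strong.choices[0].message.content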

2. Set max_tokens

Always set a reasonable max_tokens limit:
# ❌ Bad: No limit, could generate thousands of tokens
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this article"}]
)

# ✅ Good: Limit response length
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this article"}],
    max_tokens=500  # Reasonable limit for a summary
)

3. Optimize Prompts

# ❌ Verbose prompt (more input tokens)
prompt = """
I would like you to please help me by analyzing the following text
and providing a comprehensive summary of the main points. Please be
thorough but also concise in your response. The text is as follows:
{text}
"""

# ✅ Concise prompt (fewer tokens)
prompt = "Summarize the key points:\n{text}"

4. Enable Caching

Take advantage of semantic caching:
# For repeated similar queries, caching provides major savings
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is machine learning?"}],
    temperature=0  # Deterministic = better cache hits
)

5. Batch Similar Requests

# ❌ Many small requests
for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}]
    )

# ✅ Fewer larger requests
combined_prompt = "\n".join([f"{i+1}. {q}" for i, q in enumerate(questions)])
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"Answer each question:\n{combined_prompt}"}]
)

Performance Optimization

1. Use Streaming for UX

Streaming improves perceived performance:
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a long essay"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

2. Choose Fast Models for Interactive Use

| Use Case              | Recommended                        | Latency            |
| --------------------- | ---------------------------------- | ------------------ |
| Chat UI               | gpt-4.1-mini, gemini-2.0-flash     | ~200ms first token |
| Tab completion        | claude-3-5-haiku-20241022          | ~150ms first token |
| Background processing | gpt-4o, claude-3-5-sonnet-20241022 | ~500ms first token |
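To check whether a model meets your latency budget, you can measure time to first token with a streaming request. A small sketch, assuming a configured client as in the other examples:

import time

def time_to_first_token(model: str, prompt: str) -> float:
    """Return seconds until the first content chunk arrives from a streaming call."""
    start = time.perf_counter()
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
        max_tokens=50,  # keep the test cheap
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            return time.perf_counter() - start
    return time.perf_counter() - start

print(time_to_first_token("gpt-4.1-mini", "Say hello"))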

3. Set Timeouts

from openai import OpenAI

client = OpenAI(
    api_key="sk-your-key",
    base_url="https://api.lemondata.cc/v1",
    timeout=60.0  # 60-second timeout per request
)
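The openai Python SDK also supports overriding the timeout for a single request via with_options, which is useful when most calls are fast but a few are expected to be slow:

# Per-request override: this call gets a shorter timeout than the client default.
response = client.with_options(timeout=10.0).chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Quick question"}],
)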

Reliability

1. Implement Retries

import time
from openai import RateLimitError, APIError

def chat_with_retry(messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-4o",
                messages=messages
            )
        except RateLimitError:
            wait = 2 ** attempt
            print(f"Rate limited, waiting {wait}s...")
            time.sleep(wait)
        except APIError as e:
            if attempt == max_retries - 1:
                raise
            time.sleep(1)
    raise Exception("Max retries exceeded")

2. Handle Errors Gracefully

from openai import APIStatusError, AuthenticationError, RateLimitError

try:
    response = client.chat.completions.create(...)
except AuthenticationError:
    # Check API key
    notify_admin("Invalid API key")
except RateLimitError:
    # Queue for later or use backup
    add_to_queue(request)
except APIStatusError as e:
    if e.status_code == 402:
        notify_admin("Balance low")
    elif e.status_code >= 500:
        # Server error, retry later
        schedule_retry(request)

3. Use Fallback Models

from openai import APIError

FALLBACK_CHAIN = ["gpt-4o", "claude-3-5-sonnet-20241022", "gemini-2.0-flash"]

def chat_with_fallback(messages):
    for model in FALLBACK_CHAIN:
        try:
            return client.chat.completions.create(
                model=model,
                messages=messages
            )
        except APIError:
            continue
    raise Exception("All models failed")

Security

1. Protect API Keys

# ❌ Never hardcode keys
client = OpenAI(api_key="sk-abc123...")

# ✅ Use environment variables
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["LEMONDATA_API_KEY"])

2. Validate User Input

def validate_message(content: str) -> bool:
    """Validate user input before sending to API."""
    if len(content) > 100000:
        raise ValueError("Message too long")
    # Add other validation as needed
    return True
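For example, run the validation before each call so invalid input is rejected without spending any tokens. A minimal sketch:

def safe_chat(content: str):
    """Validate user input, then forward it to the API."""
    try:
        validate_message(content)
    except ValueError as e:
        # Return the validation error instead of calling the API.
        return {"error": str(e)}

    return client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "user", "content": content}],
    )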

3. Set API Key Limits

Create separate API keys with spending limits for:
  • Development/testing
  • Production
  • Different applications
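One way to keep those keys separate in code is to select the key from the environment. The APP_ENV and per-environment variable names below are hypothetical; adapt them to your own setup:

import os
from openai import OpenAI

# Hypothetical environment variable names -- adjust to your own conventions.
env = os.environ.get("APP_ENV", "development")
key_var = "LEMONDATA_API_KEY_PROD" if env == "production" else "LEMONDATA_API_KEY_DEV"

client = OpenAI(
    api_key=os.environ[key_var],
    base_url="https://api.lemondata.cc/v1",
)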

Monitoring

1. Track Usage

Check your dashboard regularly for:
  • Token usage by model
  • Cost breakdown
  • Cache hit rates
  • Error rates

2. Log Important Metrics

import logging

response = client.chat.completions.create(...)

logging.info({
    "model": response.model,
    "prompt_tokens": response.usage.prompt_tokens,
    "completion_tokens": response.usage.completion_tokens,
    "total_tokens": response.usage.total_tokens,
})
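To capture latency alongside token counts, you can wrap calls in a small helper. The logger name and field names here are just one possible layout:

import logging
import time

logger = logging.getLogger("llm_usage")

def logged_chat(**kwargs):
    """Call the chat API and log model, token usage, and wall-clock latency."""
    start = time.perf_counter()
    response = client.chat.completions.create(**kwargs)
    logger.info({
        "model": response.model,
        "total_tokens": response.usage.total_tokens,
        "latency_s": round(time.perf_counter() - start, 3),
    })
    return response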

3. Set Up Alerts

Configure low balance alerts in your dashboard to avoid service interruption.

Checklist

  • Appropriate model selected for each task
  • max_tokens limits set
  • Prompts kept concise
  • Caching enabled where appropriate
  • Similar requests batched
  • Streaming used for interactive UX
  • Fast models chosen for real-time use
  • Timeouts configured
  • Retry logic implemented
  • Error handling in place
  • Fallback models configured
  • API keys stored in environment variables
  • User input validated
  • Separate keys for dev/prod
  • Spending limits set