
Documentation Index

Fetch the complete documentation index at: https://docs.lemondata.cc/llms.txt

Use this file to discover all available pages before exploring further.

Overview

LemonData works with the official OpenAI SDKs by pointing the client to https://api.lemondata.cc/v1. For most new projects, prefer Chat Completions as the default OpenAI-compatible path. Use Responses API only when you explicitly need Responses-specific behavior. Responses-specific fields are not guaranteed to behave identically across every model and routed path.
Python, JavaScript, and Go have official OpenAI SDKs. PHP works well with OpenAI-compatible community clients, but there is no official OpenAI SDK for PHP.
  • Type: Native SDK
  • Primary Path: OpenAI-compatible / Chat Completions
  • Support Confidence: Supported core path

Installation

pip install openai
Use POST /v1/responses only when you explicitly need Responses-specific behavior. Some Responses-only fields can still depend on the selected model and routed path.

Configure the Client

from openai import OpenAI

client = OpenAI(
    api_key="sk-your-lemondata-key",
    base_url="https://api.lemondata.cc/v1",
)
response = client.chat.completions.create(
    model="gpt-5.4",
    messages=[{"role": "user", "content": "Explain what LemonData does in one sentence."}]
)

print(response.choices[0].message.content)

Advanced: Responses API

Use this path only when your tool or workflow explicitly depends on OpenAI Responses semantics.

Streaming with Responses

stream = client.responses.create(
    model="gpt-5.4",
    input="Write a short poem about coding.",
    stream=True,
)

for event in stream:
    if event.type == "response.output_text.delta":
        print(event.delta, end="")

Tools / Function Calling

response = client.responses.create(
    model="gpt-5.4",
    input="What's the weather in Tokyo?",
    tools=[{
        "type": "function",
        "name": "get_weather",
        "description": "Get weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            },
            "required": ["location"]
        }
    }]
)

for item in response.output:
    if item.type == "function_call":
        print(item.name)
        print(item.arguments)

Vision with Responses

response = client.responses.create(
    model="gpt-4o",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What's in this image?"},
            {"type": "input_image", "image_url": "https://example.com/image.jpg"}
        ]
    }]
)

print(response.output_text)

Embeddings

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Hello world"
)

print(response.data[0].embedding[:5])
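Embedding vectors are typically compared with cosine similarity. A small stdlib-only helper; the commented usage assumes two embeddings fetched with the call above:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# With two embeddings from the call above:
# v1 = client.embeddings.create(model="text-embedding-3-small", input="Hello world").data[0].embedding
# v2 = client.embeddings.create(model="text-embedding-3-small", input="Hi there").data[0].embedding
# print(cosine_similarity(v1, v2))
```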

Chat Completions

Chat Completions is the default OpenAI-compatible path for LemonData:

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

Troubleshooting

  • Verify the base URL is exactly https://api.lemondata.cc/v1
  • Check for proxy interference or custom HTTP client overrides
  • Make sure your SDK version is current before debugging provider behavior
  • Check that your API key starts with sk-
  • Verify the key is active in LemonData dashboard
  • Confirm the SDK is sending Authorization: Bearer ...
  • responses.create(...) sends requests to /v1/responses
  • chat.completions.create(...) sends requests to /v1/chat/completions
  • Use Chat Completions by default unless you explicitly need Responses-specific behavior