Overview
LemonData exposes multiple API formats so common coding tools, SDKs, and frameworks can integrate with minimal glue code.
This page is intentionally narrower than a marketing matrix:

- **Supported** means we document a concrete setup path and LemonData exposes the protocol shape that path expects.
- **Strong native path** means the repo also has direct adapter or request-format evidence for that protocol family.
- **Best-effort** means the integration can work, but the upstream client does not treat this custom gateway workflow as a stable contract.
Unsupported fields are not handled uniformly. On compatibility routes, some fields are ignored or normalized. On /v1/responses, unsupported fields can return explicit 400 or 503 errors when that route cannot guarantee the requested behavior.
| Endpoint | Format | Use Case |
|---|---|---|
| `/v1/chat/completions` | OpenAI Chat | Universal compatibility |
| `/v1/responses` | OpenAI Responses | Stateful conversations |
| `/v1/messages` | Anthropic Messages | Claude native features |
| `/v1beta/models/:model:generateContent` | Google Gemini | Gemini native features |
IDE & CLI Compatibility
| Tool | Support Level | Format | Notes |
|---|---|---|---|
| Cursor | Supported with limits | OpenAI-compatible | Works for BYOK standard chat/editor flows; not a replacement for Cursor-managed features like Tab Completion |
| Claude Code CLI | Strong native path | Anthropic | Native `/v1/messages` route with adapter coverage for thinking and `tool_choice` |
| Codex CLI | Supported with model/path limits | OpenAI Responses | Treat `/v1/responses` as an advanced path for Codex-specific workflows; some Responses-only fields are not guaranteed across every model and routed path |
| Gemini CLI | Best-effort / experimental | Gemini | Custom LemonData base URL flow is not a stable upstream contract |
| OpenCode | Supported | OpenAI-compatible | Use an OpenAI-compatible provider by default; move to a Responses-based provider only when you explicitly need it |
Other OpenAI-compatible editors and agent tools often work with the same base URL pattern, but this repo does not currently maintain tool-specific regression coverage for Windsurf, Aider, Continue.dev, Cline/Roo Code, GitHub Copilot, and similar clients.
Configuration Examples
Cursor

Base URL: `https://api.lemondata.cc/v1`
API Key: `sk-your-lemondata-key`

Cursor uses the Anthropic-style tool format internally. LemonData supports both:

- OpenAI format: `{ type: "function", function: { name, parameters } }`
- Anthropic format: `{ name, input_schema }` (no `type` field)

Claude Code

```bash
export ANTHROPIC_BASE_URL="https://api.lemondata.cc"
export ANTHROPIC_API_KEY="sk-your-lemondata-key"
```

OpenCode

```bash
export OPENAI_API_KEY="sk-your-lemondata-key"
export LOCAL_ENDPOINT="https://api.lemondata.cc/v1"
```

Aider

```bash
export OPENAI_API_KEY="sk-your-lemondata-key"
export OPENAI_BASE_URL="https://api.lemondata.cc/v1"
aider --model gpt-5.4
```
SDK Compatibility
Documented SDK & Framework Paths
| SDK / Framework | Language | Support Level | Notes |
|---|---|---|---|
| OpenAI SDK | Python/JS/Go | Supported core path | Chat Completions and Embeddings are the default documented path; some Responses-only fields are not guaranteed across every model and routed path |
| Anthropic SDK | Python/JS | Strong native path | Native Messages route with direct evidence for tools, thinking, and prompt caching |
| Vercel AI SDK | TypeScript | Recommended integration pattern | Prefer `@ai-sdk/openai-compatible`; use `@ai-sdk/openai` only when you explicitly want Responses-native behavior |
| LangChain | Python/JS | Supported standard surfaces | `ChatOpenAI` and `OpenAIEmbeddings` are the intended scope; vendor-native extras are out of scope |
| LlamaIndex | Python | Supported via OpenAILike | Use `OpenAILike`, not the built-in OpenAI classes, for third-party gateways such as LemonData |
| Dify | - | Supported with scope limits | OpenAI provider and chat-completions-oriented flows are the intended path; not a fit for Codex-specific Responses or WebSocket behavior |
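For most SDKs in the table, pointing at LemonData only changes the base URL and API key. The sketch below uses the standard library to build (not send) the underlying HTTP request so the shape is visible; with the official OpenAI SDK you would instead pass `base_url="https://api.lemondata.cc/v1"` and your key to the client constructor. The key value is a placeholder.

```python
import json
import urllib.request

# Base URL and key placeholder from the configuration examples above.
BASE_URL = "https://api.lemondata.cc/v1"
API_KEY = "sk-your-lemondata-key"

payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Build the request object an OpenAI-compatible client would send.
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

print(req.full_url)  # https://api.lemondata.cc/v1/chat/completions
```

This is the same wire shape regardless of which SDK produces it, which is why the OpenAI-compatible path is the default documented route.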
Chat Completions Parameters
Core Parameters
| Parameter | Type | Description |
|---|---|---|
| `model` | string | Model identifier (required) |
| `messages` | array | Conversation messages (required) |
| `max_tokens` | integer | Maximum output tokens |
| `temperature` | number | Sampling temperature (0-2) |
| `top_p` | number | Nucleus sampling (0-1) |
| `stream` | boolean | Enable streaming |
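The ranges above can be checked client-side before a request is sent. This is a purely illustrative sketch, not part of any LemonData SDK; the limits mirror the documented ones.

```python
# Illustrative client-side validation of the core Chat Completions
# parameters: required fields plus the documented numeric ranges.
def validate_chat_request(payload: dict) -> list:
    errors = []
    for field in ("model", "messages"):
        if field not in payload:
            errors.append(f"missing required field: {field}")
    if not 0 <= payload.get("temperature", 1) <= 2:
        errors.append("temperature must be in [0, 2]")
    if not 0 <= payload.get("top_p", 1) <= 1:
        errors.append("top_p must be in [0, 1]")
    return errors

print(validate_chat_request({"model": "gpt-4o", "messages": [], "temperature": 3}))
# → ['temperature must be in [0, 2]']
```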
Tool calling uses the OpenAI function format, including `strict` mode:

```json
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": { "type": "string" }
          }
        },
        "strict": true
      }
    }
  ],
  "tool_choice": "auto",
  "parallel_tool_calls": true
}
```
`tool_choice` accepts three formats:

| Format | Example | Description |
|---|---|---|
| String | `"auto"`, `"none"`, `"required"` | Simple selection |
| OpenAI object | `{ "type": "function", "function": { "name": "fn" } }` | Force a specific function |
| Anthropic object | `{ "type": "tool", "name": "fn", "disable_parallel_tool_use": true }` | Anthropic native format |
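A client that accepts both conventions can tell the three documented `tool_choice` shapes apart before sending a request. A minimal, illustrative sketch:

```python
# Classify a tool_choice value into the three documented formats.
def classify_tool_choice(tc) -> str:
    if isinstance(tc, str):
        return "string"  # "auto", "none", "required"
    if isinstance(tc, dict):
        if tc.get("type") == "function":
            return "openai-object"
        if tc.get("type") == "tool":
            return "anthropic-object"
    return "unknown"

print(classify_tool_choice({"type": "tool", "name": "fn"}))  # anthropic-object
```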
Advanced Parameters
| Parameter | Type | Description |
|---|---|---|
| `stream_options` | object | `{ include_usage: true }` for token counts |
| `reasoning_effort` | string | `"low"`, `"medium"`, `"high"` for reasoning-enabled GPT-5 models |
| `service_tier` | string | `"auto"` or `"default"` |
| `seed` | integer | Deterministic outputs |
| `logprobs` | boolean | Return log probabilities |
| `top_logprobs` | integer | Number of top logprobs (0-20) |
| `logit_bias` | object | Token bias map (-100 to 100) |
| `frequency_penalty` | number | Repetition penalty (-2 to 2) |
| `presence_penalty` | number | Topic penalty (-2 to 2) |
| `stop` | string/array | Stop sequences |
| `n` | integer | Number of completions (1-128) |
| `user` | string | User identifier for tracking |
OpenAI Advanced Features
| Parameter | Type | Description |
|---|---|---|
| `modalities` | array | `["text", "audio"]` for multimodal |
| `audio` | object | Audio output config (voice, format) |
| `prediction` | object | Predicted output for faster completion |
| `metadata` | object | Key-value pairs for tracking |
| `store` | boolean | Store for later retrieval |
Provider-Specific Options
```json
{
  "anthropic_options": {
    "thinking": {
      "type": "enabled",
      "budget_tokens": 10000
    },
    "prompt_caching": true
  },
  "google_options": {
    "safety_settings": [ ... ],
    "google_search": true,
    "code_execution": true
  }
}
```
Anthropic Messages Parameters
Core Parameters
| Parameter | Type | Description |
|---|---|---|
| `model` | string | Model identifier |
| `messages` | array | Conversation messages |
| `max_tokens` | integer | Maximum output (up to 128000) |
| `system` | string/array | System prompt |
| `stream` | boolean | Enable streaming |
Tools use Anthropic's native `input_schema` shape:

```json
{
  "tools": [
    {
      "name": "get_weather",
      "description": "Get weather",
      "input_schema": {
        "type": "object",
        "properties": {
          "location": { "type": "string" }
        }
      }
    }
  ],
  "tool_choice": {
    "type": "auto",
    "disable_parallel_tool_use": false
  }
}
```
Extended Thinking
```json
{
  "model": "claude-opus-4-6",
  "thinking": {
    "type": "enabled",
    "budget_tokens": 10000
  }
}
```
Responses API Parameters
Core Parameters
| Parameter | Type | Description |
|---|---|---|
| `model` | string | Model identifier |
| `input` | string/array | Input content |
| `instructions` | string | System instructions |
| `max_output_tokens` | integer | Maximum output tokens |
| `previous_response_id` | string | Continue conversation |
Advanced Parameters
| Parameter | Type | Description |
|---|---|---|
| `truncation_strategy` | string | `"auto"` or `"disabled"` |
| `include` | array | `["reasoning.encrypted_content"]` |
| `reasoning_effort` | string | For reasoning models |
| `service_tier` | string | Priority tier |
The Responses route supports both OpenAI and Anthropic tool formats:

```json
// OpenAI format
{ "type": "function", "name": "fn", "parameters": { ... } }

// Anthropic format (Cursor compatibility)
{ "name": "fn", "input_schema": { ... } }
```
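Because both shapes are accepted, a client that only speaks one convention can normalize the other before sending. A hedged sketch; the field names come from the two examples above:

```python
# Convert an Anthropic-shaped tool to the OpenAI Responses shape.
# An OpenAI-shaped tool is passed through unchanged.
def to_openai_responses_tool(tool: dict) -> dict:
    if "input_schema" in tool and "type" not in tool:
        return {
            "type": "function",
            "name": tool["name"],
            "parameters": tool["input_schema"],
        }
    return tool

anthropic_tool = {"name": "fn", "input_schema": {"type": "object", "properties": {}}}
print(to_openai_responses_tool(anthropic_tool))
```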
Gemini API Parameters
Core Parameters
| Parameter | Type | Description |
|---|---|---|
| `contents` | array | Conversation content |
| `systemInstruction` | object | System prompt |
| `generationConfig` | object | Generation settings |
Tool declarations follow Gemini's `functionDeclarations` format, alongside built-in tools:

```json
{
  "tools": [{
    "functionDeclarations": [{
      "name": "search",
      "description": "Search the web",
      "parameters": { ... }
    }],
    "codeExecution": {},
    "googleSearch": {}
  }],
  "toolConfig": {
    "functionCallingConfig": {
      "mode": "AUTO"
    }
  }
}
```
Safety Settings
```json
{
  "safetySettings": [
    {
      "category": "HARM_CATEGORY_HARASSMENT",
      "threshold": "BLOCK_MEDIUM_AND_ABOVE"
    }
  ]
}
```
Additional Parameters
| Parameter | Type | Description |
|---|---|---|
| `cachedContent` | string | Cached content reference |
| `responseMimeType` | string | `"text/plain"` or `"application/json"` |
| `responseSchema` | object | JSON schema for structured output |
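Structured JSON output combines `responseMimeType` and `responseSchema` inside `generationConfig`. A sketch of the request shape; the schema (an array of strings) and prompt are made-up examples:

```python
import json

# Hypothetical Gemini-route request asking for structured JSON output.
request = {
    "contents": [{"role": "user", "parts": [{"text": "List two colors"}]}],
    "generationConfig": {
        "responseMimeType": "application/json",
        "responseSchema": {"type": "ARRAY", "items": {"type": "STRING"}},
    },
}
print(json.dumps(request, indent=2))
```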
Streaming
All endpoints support Server-Sent Events (SSE) streaming:
```bash
# Chat Completions
curl https://api.lemondata.cc/v1/chat/completions \
  -H "Authorization: Bearer sk-xxx" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [...], "stream": true}'

# With usage tracking, add stream_options to the same request body:
#   "stream_options": {"include_usage": true}
```
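On the consuming side, each SSE event arrives as a `data: {...}` line, terminated by `data: [DONE]`. The parser below is a minimal sketch over synthetic lines that follow the OpenAI streaming shape (with `include_usage` enabled, a final chunk carries the `usage` object):

```python
import json

# Synthetic sample of an SSE Chat Completions stream.
sample_stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: {"choices": [], "usage": {"total_tokens": 12}}',
    "data: [DONE]",
]

def collect(lines):
    """Accumulate streamed text deltas and capture the usage chunk."""
    text, usage = "", None
    for line in lines:
        if not line.startswith("data: "):
            continue  # ignore comments / blank keep-alive lines
        body = line[len("data: "):]
        if body == "[DONE]":
            break
        chunk = json.loads(body)
        for choice in chunk.get("choices", []):
            text += choice.get("delta", {}).get("content", "")
        usage = chunk.get("usage") or usage
    return text, usage

print(collect(sample_stream))  # ('Hello', {'total_tokens': 12})
```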
Error Handling
LemonData returns OpenAI-compatible error responses:
```json
{
  "error": {
    "message": "Invalid API key",
    "type": "invalid_api_key",
    "code": "invalid_api_key"
  }
}
```
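Clients typically branch on `error.type` to decide whether a retry makes sense. An illustrative sketch; the retryable type names here are assumptions, not a documented LemonData list:

```python
# Assumed retryable error types -- check the Error Handling Guide for
# the authoritative list.
RETRYABLE = {"rate_limit_exceeded", "server_error"}

def should_retry(error_body: dict) -> bool:
    err = error_body.get("error", {})
    return err.get("type") in RETRYABLE

print(should_retry({"error": {"type": "invalid_api_key", "code": "invalid_api_key"}}))  # False
```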
See Error Handling Guide for details.
Best Practices
**Use passthrough for unknown parameters**
All schemas use `.passthrough()`; unknown parameters are forwarded to upstream providers.

**Prefer stream_options for accurate billing**
Enable `stream_options.include_usage` for accurate token counts in streaming responses.

**Use the appropriate tool_choice format**
Match your SDK's expected format; LemonData accepts both OpenAI and Anthropic formats.