Overview
LemonData exposes multiple API formats so that common coding tools, SDKs, and frameworks can integrate with minimal bridge code.
This page is intentionally narrower than a marketing matrix:
Supported means we document a concrete configuration path and LemonData exposes the protocol shape that path expects.
Strong native path means the repository also has direct adapters or request-format evidence for that protocol family.
Best-effort means the integration may work, but the upstream client does not treat this custom gateway flow as a stable contract.
Unsupported fields are not handled uniformly. On compatibility routes, some fields are ignored or normalized. On /v1/responses, unsupported fields may return explicit 400 or 503 errors when that route cannot guarantee the requested behavior.
| Endpoint | Format | Use Case |
|---|---|---|
| /v1/chat/completions | OpenAI Chat | Universal compatibility |
| /v1/responses | OpenAI Responses | Stateful conversations |
| /v1/messages | Anthropic Messages | Claude native features |
| /v1beta/models/:model:generateContent | Google Gemini | Gemini native features |
IDE & CLI Compatibility
| Tool | Support Level | Format | Notes |
|---|---|---|---|
| Cursor | Supported with limits | OpenAI-compatible | Works for BYOK standard chat/editor flows, not as a replacement for Cursor-managed features like Tab Completion |
| Claude Code CLI | Strong native path | Anthropic | Native /v1/messages route with adapter coverage for thinking and tool_choice |
| Codex CLI | Supported with model/path limits | OpenAI Responses | Treat /v1/responses as an advanced path for Codex-specific workflows; some Responses-only fields are not guaranteed across every model and routed path |
| Gemini CLI | Best-effort / experimental | Gemini | Custom LemonData base URL flow is not a stable upstream contract |
| OpenCode | Supported | OpenAI-compatible | Use an OpenAI-compatible provider by default; move to a Responses-based provider only when you explicitly need it |
Other OpenAI-compatible editors and agent tools usually work with the same base URL pattern, but this repository does not currently maintain tool-specific regression coverage for Windsurf, Aider, Continue.dev, Cline/Roo Code, GitHub Copilot, and similar clients.
Configuration Examples
Cursor

Base URL: https://api.lemondata.cc/v1
API Key: sk-your-lemondata-key

Cursor internally uses the Anthropic-style tool format. LemonData supports both:

OpenAI format: { type: "function", function: { name, parameters } }
Anthropic format: { name, input_schema } (no type field)

Claude Code

export ANTHROPIC_BASE_URL="https://api.lemondata.cc"
export ANTHROPIC_API_KEY="sk-your-lemondata-key"

OpenCode

export OPENAI_API_KEY="sk-your-lemondata-key"
export LOCAL_ENDPOINT="https://api.lemondata.cc/v1"

Aider

export OPENAI_API_KEY="sk-your-lemondata-key"
export OPENAI_BASE_URL="https://api.lemondata.cc/v1"
aider --model gpt-5.4
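Because Cursor can send Anthropic-style tool definitions over the OpenAI-compatible route, a gateway or proxy may need to normalize them. A minimal sketch of that normalization (the helper name `to_openai_tool` is ours, not part of any SDK):

```python
def to_openai_tool(tool: dict) -> dict:
    """Normalize a tool definition into OpenAI's function-tool shape.

    Anthropic-style tools look like {name, input_schema} with no "type"
    field; OpenAI-style tools are already wrapped in {"type": "function"}.
    """
    if tool.get("type") == "function":
        return tool  # already in OpenAI format, pass through unchanged
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # Anthropic's input_schema maps directly to OpenAI's parameters
            "parameters": tool.get("input_schema", {}),
        },
    }

anthropic_tool = {"name": "get_weather", "input_schema": {"type": "object"}}
openai_tool = to_openai_tool(anthropic_tool)
```

The absence of a `type` field is what distinguishes the Anthropic shape, so checking for `"type": "function"` is enough to pass OpenAI-format tools through untouched.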
SDK Compatibility
Documented SDK & Framework Paths
| SDK / Framework | Language | Support Level | Notes |
|---|---|---|---|
| OpenAI SDK | Python/JS/Go | Supported core path | Chat Completions and Embeddings are the default documented path; some Responses-only fields are not guaranteed across every model and routed path |
| Anthropic SDK | Python/JS | Strong native path | Native Messages route with direct evidence for tools, thinking, and prompt caching |
| Vercel AI SDK | TypeScript | Recommended integration pattern | Prefer @ai-sdk/openai-compatible; use @ai-sdk/openai only when you explicitly want Responses-native behavior |
| LangChain | Python/JS | Supported standard surfaces | ChatOpenAI and OpenAIEmbeddings are the intended scope; vendor-native extras are out of scope |
| LlamaIndex | Python | Supported via OpenAILike | Use OpenAILike, not the built-in OpenAI classes, for third-party gateways such as LemonData |
| Dify | - | Supported with scope limits | OpenAI provider and chat-completions-oriented flows are the intended path; not a fit for Codex-specific Responses or WebSocket behavior |
Chat Completions Parameters
Core Parameters
| Parameter | Type | Description |
|---|---|---|
| model | string | Model identifier (required) |
| messages | array | Conversation messages (required) |
| max_tokens | integer | Maximum output tokens |
| temperature | number | Sampling temperature (0-2) |
| top_p | number | Nucleus sampling (0-1) |
| stream | boolean | Enable streaming |
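A minimal Chat Completions request body built from the core parameters above (the model name is illustrative):

```python
import json

payload = {
    "model": "gpt-4o",        # required: model identifier
    "messages": [             # required: conversation history
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "max_tokens": 256,
    "temperature": 0.7,       # sampling temperature, 0-2
    "top_p": 0.9,             # nucleus sampling, 0-1
    "stream": False,
}
body = json.dumps(payload)    # wire format sent to /v1/chat/completions
```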
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": { "type": "string" }
          }
        },
        "strict": true
      }
    }
  ],
  "tool_choice": "auto",
  "parallel_tool_calls": true
}
| Format | Example | Description |
|---|---|---|
| String | "auto", "none", "required" | Simple selection |
| OpenAI Object | { "type": "function", "function": { "name": "fn" } } | Force a specific function |
| Anthropic Object | { "type": "tool", "name": "fn", "disable_parallel_tool_use": true } | Anthropic native format |
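The three accepted tool_choice shapes can be built programmatically; a small sketch (the helper names are ours):

```python
def openai_tool_choice(name: str) -> dict:
    # OpenAI object form: force a specific function
    return {"type": "function", "function": {"name": name}}

def anthropic_tool_choice(name: str, parallel: bool = True) -> dict:
    # Anthropic native form, with an opt-out for parallel tool calls
    return {
        "type": "tool",
        "name": name,
        "disable_parallel_tool_use": not parallel,
    }

choices = [
    "auto",                                   # string form
    openai_tool_choice("fn"),                 # OpenAI object form
    anthropic_tool_choice("fn", parallel=False),  # Anthropic object form
]
```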
Advanced Parameters
| Parameter | Type | Description |
|---|---|---|
| stream_options | object | { include_usage: true } for token counts |
| reasoning_effort | string | "low", "medium", "high" for reasoning-enabled GPT-5 models |
| service_tier | string | "auto" or "default" |
| seed | integer | Deterministic outputs |
| logprobs | boolean | Return log probabilities |
| top_logprobs | integer | Number of top logprobs (0-20) |
| logit_bias | object | Token bias map (-100 to 100) |
| frequency_penalty | number | Repetition penalty (-2 to 2) |
| presence_penalty | number | Topic penalty (-2 to 2) |
| stop | string/array | Stop sequences |
| n | integer | Number of completions (1-128) |
| user | string | User identifier for tracking |
OpenAI Advanced Features
| Parameter | Type | Description |
|---|---|---|
| modalities | array | ["text", "audio"] for multimodal output |
| audio | object | Audio output config (voice, format) |
| prediction | object | Predicted output for faster completion |
| metadata | object | Key-value pairs for tracking |
| store | boolean | Store the completion for later retrieval |
Provider-Specific Options
{
  "anthropic_options": {
    "thinking": {
      "type": "enabled",
      "budget_tokens": 10000
    },
    "prompt_caching": true
  },
  "google_options": {
    "safety_settings": [ ... ],
    "google_search": true,
    "code_execution": true
  }
}
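Provider-specific blocks ride alongside the standard OpenAI-style fields in the same request body; a sketch of a full payload (the model name is illustrative):

```python
payload = {
    "model": "claude-opus-4-6",
    "messages": [{"role": "user", "content": "Prove it step by step."}],
    "max_tokens": 16000,
    # Anthropic-only knobs; other providers ignore this block
    "anthropic_options": {
        "thinking": {"type": "enabled", "budget_tokens": 10000},
        "prompt_caching": True,
    },
}
```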
Anthropic Messages Parameters
Core Parameters
| Parameter | Type | Description |
|---|---|---|
| model | string | Model identifier |
| messages | array | Conversation messages |
| max_tokens | integer | Maximum output (up to 128000) |
| system | string/array | System prompt |
| stream | boolean | Enable streaming |
{
  "tools": [
    {
      "name": "get_weather",
      "description": "Get weather",
      "input_schema": {
        "type": "object",
        "properties": {
          "location": { "type": "string" }
        }
      }
    }
  ],
  "tool_choice": {
    "type": "auto",
    "disable_parallel_tool_use": false
  }
}
Extended Thinking
{
  "model": "claude-opus-4-6",
  "thinking": {
    "type": "enabled",
    "budget_tokens": 10000
  }
}
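The upstream Anthropic API expects the thinking budget to stay strictly below max_tokens, so validating that relationship before sending saves a round trip. A sketch (the helper name is ours; assumes the budget/max_tokens constraint described):

```python
def thinking_request(model: str, budget: int, max_tokens: int) -> dict:
    """Build an extended-thinking Messages payload, validating the budget.

    The thinking budget is spent out of the same output allowance, so a
    budget at or above max_tokens would leave no room for the answer.
    """
    if budget >= max_tokens:
        raise ValueError("budget_tokens must be less than max_tokens")
    return {
        "model": model,
        "max_tokens": max_tokens,
        "thinking": {"type": "enabled", "budget_tokens": budget},
    }

req = thinking_request("claude-opus-4-6", budget=10000, max_tokens=16000)
```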
Responses API Parameters
Core Parameters
| Parameter | Type | Description |
|---|---|---|
| model | string | Model identifier |
| input | string/array | Input content |
| instructions | string | System instructions |
| max_output_tokens | integer | Maximum output tokens |
| previous_response_id | string | Continue a conversation |
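In the Responses API, conversation state is carried by previous_response_id rather than by replaying the full message history. A sketch of a two-turn exchange (the model name and response ID are made up):

```python
first = {
    "model": "gpt-4o",
    "instructions": "You are a terse assistant.",
    "input": "Name one prime number.",
    "max_output_tokens": 64,
}

# Suppose the server answered the first request with id "resp_abc123".
# The follow-up only needs the new input plus that id; the server
# reconstructs the prior context on its side.
follow_up = {
    "model": "gpt-4o",
    "input": "Name another one.",
    "previous_response_id": "resp_abc123",
    "max_output_tokens": 64,
}
```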
Advanced Parameters
| Parameter | Type | Description |
|---|---|---|
| truncation_strategy | string | "auto" or "disabled" |
| include | array | ["reasoning.encrypted_content"] |
| reasoning_effort | string | For reasoning models |
| service_tier | string | Priority tier |
Supports both OpenAI and Anthropic tool formats:
// OpenAI format
{ "type": "function", "name": "fn", "parameters": { ... } }

// Anthropic format (Cursor compatibility)
{ "name": "fn", "input_schema": { ... } }
Gemini API Parameters
Core Parameters
| Parameter | Type | Description |
|---|---|---|
| contents | array | Conversation content |
| systemInstruction | object | System prompt |
| generationConfig | object | Generation settings |
{
  "tools": [{
    "functionDeclarations": [{
      "name": "search",
      "description": "Search the web",
      "parameters": { ... }
    }],
    "codeExecution": {},
    "googleSearch": {}
  }],
  "toolConfig": {
    "functionCallingConfig": {
      "mode": "AUTO"
    }
  }
}
Safety Settings
{
  "safetySettings": [
    {
      "category": "HARM_CATEGORY_HARASSMENT",
      "threshold": "BLOCK_MEDIUM_AND_ABOVE"
    }
  ]
}
Additional Parameters
| Parameter | Type | Description |
|---|---|---|
| cachedContent | string | Cached content reference |
| responseMimeType | string | "text/plain" or "application/json" |
| responseSchema | object | JSON schema for structured output |
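responseMimeType and responseSchema combine to request structured JSON output. A sketch of a generateContent body using them (the schema itself is illustrative):

```python
payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "Extract the city name."}]}
    ],
    "generationConfig": {
        # Ask for JSON output constrained by the schema below
        "responseMimeType": "application/json",
        "responseSchema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}
```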
Streaming
All endpoints support streaming via Server-Sent Events (SSE):
# Chat Completions
curl https://api.lemondata.cc/v1/chat/completions \
  -H "Authorization: Bearer sk-xxx" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [...], "stream": true}'

# With usage tracking
curl https://api.lemondata.cc/v1/chat/completions \
  -H "Authorization: Bearer sk-xxx" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [...], "stream": true, "stream_options": {"include_usage": true}}'
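On the client side, each SSE event line carries `data: <json>` and OpenAI-style streams terminate with `data: [DONE]`. A minimal stdlib parser sketch (the function name is ours):

```python
import json

def parse_sse_lines(lines):
    """Yield decoded JSON chunks from SSE 'data:' lines, stopping at [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and comment lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        yield json.loads(data)

# Example raw lines as they would arrive over the wire
raw = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
text = "".join(
    c["choices"][0]["delta"]["content"] for c in parse_sse_lines(raw)
)
```

In real responses the delta may omit `content` (e.g. role-only or usage-only chunks), so production code should guard those lookups.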
Error Handling
LemonData returns OpenAI-compatible error responses:
{
  "error": {
    "message": "Invalid API key",
    "type": "invalid_api_key",
    "code": "invalid_api_key"
  }
}
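Because every endpoint keeps this same envelope, one handler can cover all routes. A sketch (the helper name is ours):

```python
import json

def extract_error(body: str):
    """Return (type, message) from an OpenAI-style error body, or None."""
    err = json.loads(body).get("error")
    if err is None:
        return None  # not an error envelope
    return err.get("type"), err.get("message")

kind, msg = extract_error(
    '{"error": {"message": "Invalid API key", '
    '"type": "invalid_api_key", "code": "invalid_api_key"}}'
)
```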
See Error Handling Guide for details.
Best Practices
Use passthrough for unknown parameters
All schemas use .passthrough() - unknown parameters are forwarded to upstream providers.
Prefer stream_options for accurate billing
Enable stream_options.include_usage for accurate token counts in streaming responses.
Use appropriate tool_choice format
Match your SDK’s expected format. LemonData accepts both OpenAI and Anthropic formats.