Overview
LemonData API is designed for drop-in compatibility with all major AI development tools. This guide documents supported parameters and verified integrations.
All parameters are validated and then passed through to upstream providers. Parameters that a specific model does not support are silently ignored, ensuring maximum compatibility.
| Endpoint | Format | Use Case |
|---|---|---|
| `/v1/chat/completions` | OpenAI Chat | Universal compatibility |
| `/v1/responses` | OpenAI Responses | Stateful conversations |
| `/v1/messages` | Anthropic Messages | Claude native features |
| `/v1beta/models/:model:generateContent` | Google Gemini | Gemini native features |
IDE & CLI Compatibility
| Tool | Status | Format | Notes |
|---|---|---|---|
| Cursor | ✅ Full | OpenAI | Anthropic tool format supported |
| Claude Code CLI | ✅ Full | Anthropic | Extended thinking, tool_choice |
| Windsurf | ✅ Full | OpenAI | Standard OpenAI format |
| Aider | ✅ Full | OpenAI | All models supported |
| Continue.dev | ✅ Full | OpenAI/Anthropic | Dual format support |
| OpenCode | ✅ Full | OpenAI | Multi-provider support |
| Cline/Roo Code | ✅ Full | OpenAI | Via OpenRouter format |
| GitHub Copilot | ✅ Full | OpenAI | Standard format |
| Codex CLI | ✅ Full | OpenAI | OpenAI Responses API |
| Gemini CLI | ✅ Full | Gemini | Native Gemini format |
Configuration Examples
Cursor

Base URL: `https://api.lemondata.cc/v1`
API Key: `sk-your-lemondata-key`

Cursor uses Anthropic-style tool format internally. LemonData supports both:

- OpenAI format: `{ type: "function", function: { name, parameters } }`
- Anthropic format: `{ name, input_schema }` (no `type` field)

Claude Code

```shell
export ANTHROPIC_BASE_URL="https://api.lemondata.cc"
export ANTHROPIC_API_KEY="sk-your-lemondata-key"
```

OpenCode

```shell
export OPENAI_API_KEY="sk-your-lemondata-key"
export LOCAL_ENDPOINT="https://api.lemondata.cc/v1"
```

Aider

```shell
export OPENAI_API_KEY="sk-your-lemondata-key"
export OPENAI_API_BASE="https://api.lemondata.cc/v1"
aider --model gpt-4o
```
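Since LemonData accepts both tool-definition shapes, tools defined once in OpenAI format can be reused with Anthropic-format clients. A minimal conversion sketch (the helper name is illustrative, not part of any SDK):

```python
def openai_tool_to_anthropic(tool: dict) -> dict:
    """Map an OpenAI-style tool definition onto Anthropic's {name, input_schema} shape."""
    # Accept both the nested Chat Completions form ({"type": "function",
    # "function": {...}}) and an already-flat definition.
    fn = tool.get("function", tool)
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        "input_schema": fn.get("parameters", {"type": "object", "properties": {}}),
    }
```

The reverse direction is symmetric: wrap `input_schema` back into `parameters` under a `function` key.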
SDK Compatibility
Verified SDKs
| SDK | Language | Status | Notes |
|---|---|---|---|
| OpenAI SDK | Python/JS/Go | ✅ Full | All parameters supported |
| Anthropic SDK | Python/JS | ✅ Full | Extended thinking, tools |
| Vercel AI SDK | TypeScript | ✅ Full | streamText, generateObject |
| LangChain | Python/JS | ✅ Full | ChatOpenAI, bind_tools |
| LlamaIndex | Python | ✅ Full | OpenAI-compatible |
| Dify | - | ✅ Full | OpenAI format |
Chat Completions Parameters
Core Parameters
| Parameter | Type | Description |
|---|---|---|
| `model` | string | Model identifier (required) |
| `messages` | array | Conversation messages (required) |
| `max_tokens` | integer | Maximum output tokens |
| `temperature` | number | Sampling temperature (0-2) |
| `top_p` | number | Nucleus sampling (0-1) |
| `stream` | boolean | Enable streaming |
```json
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": { "type": "string" }
          }
        },
        "strict": true
      }
    }
  ],
  "tool_choice": "auto",
  "parallel_tool_calls": true
}
```
| Format | Example | Description |
|---|---|---|
| String | `"auto"`, `"none"`, `"required"` | Simple selection |
| OpenAI Object | `{ "type": "function", "function": { "name": "fn" } }` | Force specific function |
| Anthropic Object | `{ "type": "tool", "name": "fn", "disable_parallel_tool_use": true }` | Anthropic native format |
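A proxy or client wrapper has to reduce these three `tool_choice` shapes to one internal meaning. A hedged sketch of that normalization (the function is illustrative; `"any"` is Anthropic's mode for forcing some tool without naming one):

```python
def parse_tool_choice(choice):
    """Normalize a tool_choice value to (mode, forced_function_name)."""
    if isinstance(choice, str):                        # "auto" | "none" | "required"
        return choice, None
    if choice.get("type") == "function":               # OpenAI: force a specific function
        return "forced", choice["function"]["name"]
    if choice.get("type") == "tool":                   # Anthropic: force a specific tool
        return "forced", choice["name"]
    if choice.get("type") in ("auto", "any", "none"):  # Anthropic mode objects
        return choice["type"], None
    raise ValueError(f"unrecognized tool_choice: {choice!r}")
```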
Advanced Parameters
| Parameter | Type | Description |
|---|---|---|
| `stream_options` | object | `{ "include_usage": true }` for token counts |
| `reasoning_effort` | string | `"low"`, `"medium"`, `"high"` for o1/o3 models |
| `service_tier` | string | `"auto"` or `"default"` |
| `seed` | integer | Deterministic outputs |
| `logprobs` | boolean | Return log probabilities |
| `top_logprobs` | integer | Number of top logprobs (0-20) |
| `logit_bias` | object | Token bias map (-100 to 100) |
| `frequency_penalty` | number | Repetition penalty (-2 to 2) |
| `presence_penalty` | number | Topic penalty (-2 to 2) |
| `stop` | string/array | Stop sequences |
| `n` | integer | Number of completions (1-128) |
| `user` | string | User identifier for tracking |
OpenAI Advanced Features
| Parameter | Type | Description |
|---|---|---|
| `modalities` | array | `["text", "audio"]` for multimodal |
| `audio` | object | Audio output config (voice, format) |
| `prediction` | object | Predicted output for faster completion |
| `metadata` | object | Key-value pairs for tracking |
| `store` | boolean | Store for later retrieval |
Provider-Specific Options
```json
{
  "anthropic_options": {
    "thinking": {
      "type": "enabled",
      "budget_tokens": 10000
    },
    "prompt_caching": true
  },
  "google_options": {
    "safety_settings": [ ... ],
    "google_search": true,
    "code_execution": true
  }
}
```
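Because unknown parameters pass through, these provider-specific blocks can simply be attached to an ordinary request payload. A minimal sketch, assuming the option keys shown above (the helper itself is illustrative, not part of any SDK):

```python
def with_provider_options(request: dict, anthropic_options=None, google_options=None) -> dict:
    """Return a copy of a request payload with provider-specific blocks attached."""
    out = dict(request)  # shallow copy; the caller's payload is left untouched
    if anthropic_options is not None:
        out["anthropic_options"] = anthropic_options
    if google_options is not None:
        out["google_options"] = google_options
    return out
```

With the official OpenAI Python SDK, extra top-level fields like these can typically be supplied through the `extra_body` argument of `chat.completions.create`.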
Anthropic Messages Parameters
Core Parameters
| Parameter | Type | Description |
|---|---|---|
| `model` | string | Model identifier |
| `messages` | array | Conversation messages |
| `max_tokens` | integer | Maximum output (up to 128000) |
| `system` | string/array | System prompt |
| `stream` | boolean | Enable streaming |
```json
{
  "tools": [
    {
      "name": "get_weather",
      "description": "Get weather",
      "input_schema": {
        "type": "object",
        "properties": {
          "location": { "type": "string" }
        }
      }
    }
  ],
  "tool_choice": {
    "type": "auto",
    "disable_parallel_tool_use": false
  }
}
```
Extended Thinking
```json
{
  "model": "claude-opus-4-5",
  "thinking": {
    "type": "enabled",
    "budget_tokens": 10000
  }
}
```
Responses API Parameters
Core Parameters
| Parameter | Type | Description |
|---|---|---|
| `model` | string | Model identifier |
| `input` | string/array | Input content |
| `instructions` | string | System instructions |
| `max_output_tokens` | integer | Maximum output tokens |
| `previous_response_id` | string | Continue conversation |
Advanced Parameters
| Parameter | Type | Description |
|---|---|---|
| `truncation_strategy` | string | `"auto"` or `"disabled"` |
| `include` | array | `["reasoning.encrypted_content"]` |
| `reasoning_effort` | string | For reasoning models |
| `service_tier` | string | Priority tier |
Supports both OpenAI and Anthropic tool formats:
```jsonc
// OpenAI format
{ "type": "function", "name": "fn", "parameters": { ... } }

// Anthropic format (Cursor compatibility)
{ "name": "fn", "input_schema": { ... } }
```
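A gateway accepting both shapes needs to tell them apart before translating. A hedged classification sketch based purely on the shapes above (the function name is illustrative):

```python
def detect_tool_format(tool: dict) -> str:
    """Classify a tool definition as "openai" or "anthropic" by its shape."""
    # OpenAI: either the nested {"type": "function", "function": {...}} form
    # or the flattened Responses-API form with a top-level "parameters" key.
    if tool.get("type") == "function" or "parameters" in tool:
        return "openai"
    if "input_schema" in tool:
        return "anthropic"
    raise ValueError(f"unrecognized tool definition: {tool!r}")
```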
Gemini API Parameters
Core Parameters
| Parameter | Type | Description |
|---|---|---|
| `contents` | array | Conversation content |
| `systemInstruction` | object | System prompt |
| `generationConfig` | object | Generation settings |
```json
{
  "tools": [{
    "functionDeclarations": [{
      "name": "search",
      "description": "Search the web",
      "parameters": { ... }
    }],
    "codeExecution": {},
    "googleSearch": {}
  }],
  "toolConfig": {
    "functionCallingConfig": {
      "mode": "AUTO"
    }
  }
}
```
Safety Settings
```json
{
  "safetySettings": [
    {
      "category": "HARM_CATEGORY_HARASSMENT",
      "threshold": "BLOCK_MEDIUM_AND_ABOVE"
    }
  ]
}
```
Additional Parameters
| Parameter | Type | Description |
|---|---|---|
| `cachedContent` | string | Cached content reference |
| `responseMimeType` | string | `"text/plain"` or `"application/json"` |
| `responseSchema` | object | JSON schema for structured output |
Streaming
All endpoints support Server-Sent Events (SSE) streaming:
```shell
# Chat Completions
curl https://api.lemondata.cc/v1/chat/completions \
  -H "Authorization: Bearer sk-xxx" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [...], "stream": true}'

# With usage tracking
curl https://api.lemondata.cc/v1/chat/completions \
  -H "Authorization: Bearer sk-xxx" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [...], "stream": true, "stream_options": {"include_usage": true}}'
```
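On the client side, an SSE stream is a sequence of `data:` lines terminated by a `[DONE]` sentinel. A minimal stdlib-only parsing sketch (SDKs handle this for you; this just illustrates the wire format):

```python
import json

def iter_sse_json(lines):
    """Yield parsed JSON payloads from the `data:` lines of an SSE stream.

    Skips blank keep-alive lines and `:` comment lines, and stops at the
    `[DONE]` sentinel used by OpenAI-style streaming endpoints.
    """
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)
```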
Error Handling
LemonData returns OpenAI-compatible error responses:
```json
{
  "error": {
    "message": "Invalid API key",
    "type": "invalid_api_key",
    "code": "invalid_api_key"
  }
}
```
See Error Handling Guide for details.
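A hedged sketch of turning this error envelope into a typed exception (the class and function names are illustrative, not part of any SDK):

```python
class LemonDataError(Exception):
    """Raised when a response body carries an OpenAI-style error envelope."""
    def __init__(self, message, error_type=None, code=None):
        super().__init__(message)
        self.type = error_type
        self.code = code

def raise_for_error(body: dict) -> None:
    """Raise LemonDataError if `body` contains an error object; otherwise do nothing."""
    err = body.get("error")
    if err is not None:
        raise LemonDataError(err.get("message", ""), err.get("type"), err.get("code"))
```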
Best Practices
**Use passthrough for unknown parameters**
All schemas use `.passthrough()`: unknown parameters are forwarded to upstream providers.

**Prefer stream_options for accurate billing**
Enable `stream_options.include_usage` for accurate token counts in streaming responses.

**Use the appropriate tool_choice format**
Match your SDK's expected format. LemonData accepts both OpenAI and Anthropic formats.