POST /v1/chat/completions
Example request:

curl -X POST "https://api.lemondata.cc/v1/chat/completions" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ],
    "temperature": 0.7,
    "max_tokens": 1000
  }'
Example response:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1706000000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 9,
    "total_tokens": 29
  }
}

Request Body

model (string, required)
ID of the model to use. See Models for available options.

messages (array, required)
A list of messages comprising the conversation so far. Each message object contains:
  • role (string): system, user, or assistant
  • content (string | array): The message content

temperature (number, default: 1)
Sampling temperature between 0 and 2. Higher values make output more random.

max_tokens (integer)
Maximum number of tokens to generate.

stream (boolean, default: false)
If true, partial message deltas are sent as server-sent events (SSE). See the streaming example after this list.

top_p (number, default: 1)
Nucleus sampling parameter. We recommend altering this or temperature, not both.

frequency_penalty (number, default: 0)
Number between -2.0 and 2.0. Positive values penalize repeated tokens.

presence_penalty (number, default: 0)
Number between -2.0 and 2.0. Positive values penalize tokens that already appear in the text.

stop (string | array)
Up to 4 sequences at which the API stops generating further tokens.

tools (array)
A list of tools the model may call (function calling). See the example after this list.
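
Tools follow the OpenAI-compatible function-calling format. Below is a minimal sketch, assuming that format is accepted unchanged; the get_weather function and its parameters are hypothetical, for illustration only:

curl -X POST "https://api.lemondata.cc/v1/chat/completions" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "What is the weather in Paris today?"}
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get the current weather for a city",
          "parameters": {
            "type": "object",
            "properties": {
              "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
          }
        }
      }
    ]
  }'

Under that format, if the model chooses to call the function, the returned choice has finish_reason set to tool_calls and the assistant message carries a tool_calls array (function name plus JSON-encoded arguments) instead of plain content.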

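A minimal streaming sketch: the same request with "stream": true, assuming the endpoint emits OpenAI-compatible SSE chunks. Each data: line then carries a chat.completion.chunk object whose choices[0].delta holds the incremental content, and the stream ends with data: [DONE]; this differs from the single chat.completion object documented under Response below. The -N flag keeps curl from buffering the output:

curl -N -X POST "https://api.lemondata.cc/v1/chat/completions" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ],
    "stream": true
  }'

Illustrative chunk (fields abridged):

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: [DONE]
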
Response

id (string)
Unique identifier for the completion.

object (string)
Always chat.completion.

created (integer)
Unix timestamp of when the completion was created.

model (string)
The model used for the completion.

choices (array)
List of completion choices. Each choice contains:
  • index (integer): Index of the choice
  • message (object): The generated message
  • finish_reason (string): Why the model stopped (stop, length, tool_calls)

usage (object)
Token usage statistics:
  • prompt_tokens (integer): Tokens in the prompt
  • completion_tokens (integer): Tokens in the completion
  • total_tokens (integer): Total tokens used
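
On the command line, the fields above can be pulled out of the response with jq (a convenience sketch, assuming jq is installed; it is not part of the API):

curl -s -X POST "https://api.lemondata.cc/v1/chat/completions" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }' | jq -r '.choices[0].message.content'

Swapping the filter for '.usage.total_tokens' prints the total token count instead, which is handy for tracking usage.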