Documentation Index

Fetch the complete documentation index at: https://docs.lemondata.cc/llms.txt

Use this file to discover all available pages before exploring further.

Overview

For coding agents, discover the current recommended image shortlist first with GET /v1/models?recommended_for=image, then send the selected model explicitly to this endpoint.

gpt-image-2 is a token-priced GPT Image model. LemonData follows OpenAI's official usage breakdown for text input, image input, reported cached input, and image output tokens; it is not billed as a fixed per-image model.

For gpt-image-2 image generation, the supported public parameters are prompt, n, size, quality, response_format, async, background, output_format, output_compression or compression, moderation, partial_images, and user. Omit size or quality to let LemonData use auto; custom size values must use the flexible WIDTHxHEIGHT contract documented below.

Compatibility note: if input_fidelity is sent with gpt-image-2, LemonData removes it before forwarding, because GPT Image 2 already handles image inputs at high fidelity.
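The public parameter list and the input_fidelity removal rule above can be mirrored client-side to catch bad payloads before a request is sent. This is a minimal sketch, not gateway code; the helper name and the idea of pre-validating locally are assumptions, while the parameter list and the input_fidelity rule come from this page:

```python
def normalize_gpt_image_2_params(params: dict) -> dict:
    """Client-side sketch of the gpt-image-2 parameter contract.

    Drops input_fidelity (the gateway removes it before forwarding anyway)
    and rejects parameters outside the documented public list.
    """
    allowed = {
        "model", "prompt", "n", "size", "quality", "response_format",
        "async", "background", "output_format", "output_compression",
        "compression", "moderation", "partial_images", "user",
    }
    # Mirror the gateway's compatibility rule: silently drop input_fidelity.
    cleaned = {k: v for k, v in params.items() if k != "input_fidelity"}
    unknown = set(cleaned) - allowed
    if unknown:
        raise ValueError(f"Unsupported gpt-image-2 parameters: {sorted(unknown)}")
    return cleaned
```

Sending only the cleaned dict avoids relying on the gateway to strip unsupported fields for you.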

Model behavior notes

Google Gemini image-family models do not share the same selector contract:
  • gemini-3-pro-image-preview and nano-banana-pro support aspect_ratio plus resolution (1k, 2k, 4k).
  • gemini-2.5-flash-image, gemini-3.1-flash-image-preview, nano-banana, and nano-banana-edit support aspect_ratio but do not expose public resolution selection.
  • gemini-2.0-flash-preview-image-generation is documented here as prompt-only text-to-image.
For Google image families, prefer aspect_ratio and only send resolution when the model explicitly supports it.
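The selector rules above can be encoded in a small request builder. A hedged sketch: the model lists below mirror the notes on this page rather than a live model registry, and the helper name is illustrative:

```python
# Which public selectors each Google image family accepts, per the notes above.
RESOLUTION_MODELS = {"gemini-3-pro-image-preview", "nano-banana-pro"}
ASPECT_ONLY_MODELS = {
    "gemini-2.5-flash-image", "gemini-3.1-flash-image-preview",
    "nano-banana", "nano-banana-edit",
}
PROMPT_ONLY_MODELS = {"gemini-2.0-flash-preview-image-generation"}


def build_google_image_request(model, prompt, aspect_ratio=None, resolution=None):
    body = {"model": model, "prompt": prompt}
    if model in PROMPT_ONLY_MODELS:
        return body  # documented as prompt-only text-to-image; selectors dropped
    if aspect_ratio is not None:
        body["aspect_ratio"] = aspect_ratio
    if resolution is not None:
        if model not in RESOLUTION_MODELS:
            raise ValueError(f"{model} does not expose public resolution selection")
        if resolution not in {"1k", "2k", "4k"}:
            raise ValueError("resolution must be one of 1k, 2k, 4k")
        body["resolution"] = resolution
    return body
```

This keeps the "prefer aspect_ratio, send resolution only when supported" rule in one place instead of scattering it across call sites.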

Request Body

Synchronous request timeout: Some routed image providers return the final image inline and wait for generation to finish. High-resolution or high-quality requests can take close to a minute or longer, so set your HTTP client timeout to at least 120s. If the create response includes status: "pending", task_id, or poll_url, follow the returned poll_url instead.
model
string
default:"dall-e-3"
Model to use (e.g., gpt-image-2, dall-e-3, flux-pro, midjourney).
prompt
string
required
Text description of the desired image.
n
integer
default:"1"
Number of images to generate (1-10, model dependent).
size
string
default:"1024x1024"
Image size. Use this for OpenAI-style image families and other models that accept exact pixel sizes.
For gpt-image-2, size accepts auto or WIDTHxHEIGHT. Custom dimensions must both be multiples of 16, the longest edge must be at most 3840px, the long/short ratio must be at most 3:1, and total pixels must be between 655,360 and 8,294,400. aspect_ratio and resolution are not part of the current LemonData public contract for gpt-image-2.
For Google Gemini image families, size is treated as a compatibility alias that maps onto the model’s public aspect_ratio and, where supported, resolution contract. Prefer sending aspect_ratio directly for those models.
aspect_ratio
string
Model-dependent aspect ratio selector. Common Google image-family values include 1:1, 16:9, 9:16, 3:2, and 2:3.
resolution
string
Model-dependent output resolution selector. Supported on gemini-3-pro-image-preview, nano-banana-pro, nano-banana-2, and similar high-resolution families. Typical values are 1k, 2k, and 4k. Do not send this parameter to Gemini Flash image families unless the model explicitly documents it.
quality
string
default:"standard"
Image quality. DALL-E models use standard or hd; GPT Image models such as gpt-image-2 use auto, low, medium, or high.
response_format
string
default:"url"
Response format: url or b64_json. The default is url.
For Azure Official or Azure-compatible gpt-image-2 routes, LemonData does not forward response_format upstream. The gateway always receives upstream image data as b64_json; for url requests it uploads every image to the CDN and returns data[].url. If CDN storage is unavailable or upload fails, the request fails instead of falling back to Base64. For b64_json, the raw Base64 is returned.
async
boolean
default:"false"
Set to true with gpt-image-2 or official FLUX/BFL image models to create a task first. Completed async image tasks return URLs regardless of the requested response_format; use synchronous requests when you need b64_json.
style
string
default:"vivid"
Style for DALL-E 3: vivid or natural.
user
string
A unique identifier for the end-user.
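The gpt-image-2 size constraints above are easy to get wrong by hand, so they can be checked locally before a request is sent. A sketch under the documented contract; the function name is illustrative:

```python
def validate_gpt_image_2_size(size: str) -> None:
    """Check a size value against the documented gpt-image-2 contract.

    'auto' is always valid; custom values must be WIDTHxHEIGHT with both
    dimensions multiples of 16, longest edge <= 3840px, long/short ratio
    <= 3:1, and total pixels in [655,360, 8,294,400].
    """
    if size == "auto":
        return
    try:
        w, h = (int(part) for part in size.lower().split("x"))
    except ValueError:
        raise ValueError("size must be 'auto' or WIDTHxHEIGHT") from None
    long_edge, short_edge = max(w, h), min(w, h)
    if w % 16 or h % 16:
        raise ValueError("both dimensions must be multiples of 16")
    if long_edge > 3840:
        raise ValueError("longest edge must be at most 3840px")
    if long_edge > 3 * short_edge:
        raise ValueError("long/short ratio must be at most 3:1")
    if not 655_360 <= w * h <= 8_294_400:
        raise ValueError("total pixels must be between 655,360 and 8,294,400")
```

For example, 1024x1024 passes all four checks, while 2048x512 fails the 3:1 ratio rule even though both edges are otherwise legal.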

Response

Inline Response

created
integer
Unix timestamp of creation.
data
array
Array of generated images. Each object contains:
  • url (string): URL of the generated image
  • b64_json (string): Base64-encoded image (if requested)
  • revised_prompt (string): The prompt used (DALL-E 3)

Async Task Response

Set async: true with gpt-image-2 or official FLUX/BFL image models to create a task instead of waiting for the final image in the create request. The response includes status: "pending", task_id, and poll_url. Poll /v1/tasks/{task_id} until the task reaches completed or failed. Async image tasks return final image URLs only. If you need raw b64_json image data, use a synchronous request. Billing may reserve the estimated amount when the task is created. Completed tasks are billed by actual usage, and failed or timed-out tasks are released or refunded.
created
integer
Unix timestamp of creation.
task_id
string
Unique task identifier for polling.
status
string
Initial status: pending.
poll_url
string
Relative URL to poll for results, for example /v1/tasks/{id}.
data
array
Empty while the task is pending. Completed image tasks return generated image URLs in data[].url.
When you receive status: "pending", use poll_url or GET /v1/tasks/{task_id} to retrieve the result.
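The polling step described above can be factored into a small helper. A minimal sketch, assuming you supply your own authenticated fetch function (for example, a requests.get against poll_url with your Authorization header); the helper name and the max_wait guard are illustrative, while the status values and URL-only result shape come from this page:

```python
import time


def wait_for_task(fetch_status, interval=3.0, max_wait=300.0):
    """Poll an async image task until it reaches completed or failed.

    fetch_status: zero-argument callable returning the parsed JSON of
    GET poll_url (or /v1/tasks/{task_id}).
    Returns the list of final image URLs; async tasks return URLs only.
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        task = fetch_status()
        if task["status"] == "completed":
            return [item["url"] for item in task["data"]]
        if task["status"] == "failed":
            raise RuntimeError(task.get("error", "Generation failed"))
        time.sleep(interval)
    raise TimeoutError("task did not finish within max_wait seconds")
```

Injecting the fetch function keeps the retry policy separate from HTTP details and makes the loop easy to test with canned responses.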
curl -X POST "https://api.lemondata.cc/v1/images/generations" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-3-pro-image-preview",
    "prompt": "A cinematic portrait of a white cat sitting on a rainy windowsill",
    "aspect_ratio": "16:9",
    "resolution": "2k",
    "n": 1
  }'
{
  "created": 1706000000,
  "data": [
    {
      "url": "https://...",
      "revised_prompt": "A fluffy white cat with bright eyes sitting peacefully on a wooden windowsill, watching raindrops stream down the glass window..."
    }
  ]
}

Available Models

Model              | Type             | Features
dall-e-3           | Usually inline   | Best quality, prompt enhancement
dall-e-2           | Usually inline   | Faster, more affordable
flux-pro           | Often task-based | Photorealistic, high quality
flux-schnell       | Usually inline   | Very fast
midjourney         | Often task-based | Artistic style
ideogram-v3        | Often task-based | Best text rendering
stable-diffusion-3 | Usually inline   | Open source, customizable
Do not hard-code a model as always synchronous or always asynchronous. If the create response returns status: "pending", follow poll_url and poll until completion.

Handling Task-Based Responses

For image models, always check whether the response contains status: "pending":
import requests
import time

API_KEY = "sk-your-api-key"
BASE_URL = "https://api.lemondata.cc"

def generate_image(prompt, model="midjourney"):
    # Create image request; allow up to 120s for providers that return inline
    response = requests.post(
        f"{BASE_URL}/v1/images/generations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "prompt": prompt},
        timeout=120,
    )
    response.raise_for_status()
    data = response.json()

    # Task-based response: poll until the task completes or fails
    if data.get("status") == "pending":
        task_id = data["task_id"]
        poll_url = data.get("poll_url") or f"/v1/tasks/{task_id}"
        print(f"Image task started: {task_id}")

        while True:
            status_resp = requests.get(
                f"{BASE_URL}{poll_url}",
                headers={"Authorization": f"Bearer {API_KEY}"},
                timeout=30,
            )
            status_resp.raise_for_status()
            status_data = status_resp.json()

            if status_data["status"] == "completed":
                return status_data["data"][0]["url"]
            if status_data["status"] == "failed":
                raise RuntimeError(status_data.get("error", "Generation failed"))

            time.sleep(3)

    # Inline response
    return data["data"][0]["url"]

# Usage
url = generate_image("a beautiful sunset over mountains", model="midjourney")
print(f"Generated image: {url}")