
Overview

LemonData provides access to 69+ video generation models from 11 providers through a single unified API. Video generation is asynchronous — you submit a request and receive a task ID, then poll for the result.
The model list is updated frequently. For the latest available models and pricing, visit the Models page or use the Models API.

Async Workflow

import requests
import time

API_KEY = "sk-your-api-key"
BASE = "https://api.lemondata.cc/v1"
headers = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: Submit generation request
resp = requests.post(
    f"{BASE}/videos/generations",
    headers=headers,
    json={
        "model": "kling-v2.6-pro",
        "prompt": "A golden retriever running on a beach at sunset, cinematic 4K",
        "duration": 5,
        "aspect_ratio": "16:9"
    },
)
resp.raise_for_status()  # fail fast on auth or validation errors
task_id = resp.json()["task_id"]

# Step 2: Poll for result
while True:
    status = requests.get(f"{BASE}/videos/generations/{task_id}", headers=headers).json()
    if status["status"] in ("completed", "succeeded"):
        print(f"Video URL: {status['video_url']}")
        break
    elif status["status"] == "failed":
        print(f"Failed: {status.get('error')}")
        break
    time.sleep(10)
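The loop above polls forever at a fixed interval. For production use you will usually want a deadline and a growing delay between polls. Here is a minimal sketch of such a helper; the `backoff` schedule and the 600-second default timeout are our own choices, not part of the API.

```python
import time
import requests

API_KEY = "sk-your-api-key"
BASE = "https://api.lemondata.cc/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def backoff(attempt, base=5.0, cap=60.0):
    """Polling delay that doubles with each attempt, capped at `cap` seconds."""
    return min(base * (2 ** attempt), cap)

def wait_for_video(task_id, timeout=600):
    """Poll the task until it finishes, fails, or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    attempt = 0
    while time.monotonic() < deadline:
        status = requests.get(f"{BASE}/videos/generations/{task_id}",
                              headers=HEADERS, timeout=30).json()
        if status["status"] in ("completed", "succeeded"):
            return status["video_url"]
        if status["status"] == "failed":
            raise RuntimeError(f"Generation failed: {status.get('error')}")
        time.sleep(backoff(attempt))
        attempt += 1
    raise TimeoutError(f"Task {task_id} did not finish within {timeout}s")
```

Usage: `url = wait_for_video(task_id)`.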

Model Capability Matrix

Different models excel at different tasks. Use this matrix to choose the right series for your use case; the modes each model supports are listed in the per-series tables below.

Series      Provider    Max Duration
Sora        OpenAI      ~20s
Kling       Kuaishou    10s
Veo         Google      8s
Seedance    ByteDance   10s
Hailuo      MiniMax     6s
Wan         Alibaba     5s
Runway      Runway      10s
Luma        Luma        5s
Vidu        Vidu        8s
Grok        xAI         ~10s
Higgsfield  Higgsfield  ~5s

Capability Definitions

  • T2V (Text-to-Video): Generate video from a text prompt
  • I2V (Image-to-Video): Animate a static image into video using image_url or image
  • Keyframe: Control start and end frames with start_image + end_image
  • Extension: Extend an existing video’s duration
  • Editing: Modify specific aspects of an existing video

Available Models by Series

Sora (OpenAI)

Model              Quality   Notes
sora-2             Standard  Default model, good balance of quality and speed
sora-2-pro         High      Higher quality, longer generation time
sora-2-characters  Standard  Character-focused generation

Kling (Kuaishou)

Model                 Capability  Notes
kling-v2.6-pro        T2V         Latest generation, professional quality
kling-v2.6-std        T2V         Latest generation, fast
kling-v2.5-turbo-pro  T2V         Turbo speed, pro quality
kling-v2.1-master     T2V/I2V     Master quality
kling-v2.1-pro        T2V/I2V     Professional quality
kling-v2.1-standard   T2V/I2V     Standard quality
kling-video           T2V/I2V     Base model
kling-video-extend    Extension   Extend existing videos
kling-video-o1-pro    T2V         O1 reasoning, pro quality
kling-video-o1-std    T2V         O1 reasoning, standard
kling-effects         Effects     Apply visual effects
kling-omni-video      T2V         Omni model
kling-motion-control  T2V         Motion-controlled generation

Veo (Google)

Model              Quality    Notes
veo3.1             Standard   Google’s latest video model
veo3.1-pro         High       Professional quality
veo3.1-4k          Ultra      4K resolution output
veo3.1-fast        Fast       Faster generation
veo3.1-fast-4k     Fast + 4K  Fast generation with 4K output
veo3.1-components  Standard   Component-based generation
veo3               Standard   Previous generation
veo3-pro           High       Previous gen, professional
veo3-fast          Fast       Previous gen, fast

Seedance (ByteDance)

Model                  Capability                          Notes
seedance-2-0           T2V/I2V/Keyframe/Extension/Editing  Latest, most capable
seedance-1-5-pro       T2V/I2V                             Previous generation, pro quality
seedance-1-0-pro       T2V/I2V                             First generation, pro
seedance-1-0-pro-fast  T2V/I2V                             First generation, fast
seedance-1-0-lite-t2v  T2V                                 Lightweight text-to-video
seedance-1-0-lite-i2v  I2V                                 Lightweight image-to-video
Seedance 2.0 supports the widest range of capabilities including multimodal-to-video, video extension, and video editing — all through the same API endpoint.

Hailuo (MiniMax)

Model                Quality   Notes
hailuo-2.3           Standard  Good quality
hailuo-2.3-pro       High      Higher quality output
hailuo-2.3-fast      Fast      Faster generation
hailuo-2.3-standard  Standard  Standard tier
video-01             Standard  MiniMax video-01
video-01-live        Standard  Live-style generation

Wan (Alibaba)

Model               Capability  Notes
wan-2.6             T2V         Latest text-to-video
wan2.6-i2v          I2V         Latest image-to-video
wan-2.5             T2V         Previous generation
wan2.5-i2v-preview  I2V         Previous gen I2V
wan-2.2-plus        T2V         Earlier generation
vace-14b            T2V         VACE architecture

Runway

Model                   Duration  Notes
runwayml-gen4-turbo-5   5s        Fast generation
runwayml-gen4-turbo-10  10s       Longer clips

Luma

Model                  Capability  Notes
luma-video-api         T2V         Text-to-video
luma-video-extend-api  Extension   Extend existing videos

Vidu (Shengshu)

Model            Quality   Notes
viduq3-pro       High      Latest generation
viduq2-pro       High      Previous gen, pro
viduq2-pro-fast  Fast      Previous gen, fast pro
viduq2           Standard  Previous gen, standard
viduq2-turbo     Fast      Turbo speed
vidu2.0          Standard  Base model

Grok (xAI)

Model             Notes
grok-video-3      xAI’s video generation model
grok-video-3-10s  10-second variant

Higgsfield

Model                Notes
higgsfield-turbo     Fastest, lower cost
higgsfield-standard  Standard quality
higgsfield-lite      Lightweight

Usage Examples

Text-to-Video (T2V)

The most common use case. All models support this.
response = requests.post(f"{BASE}/videos/generations",
    headers=headers,
    json={
        "model": "veo3.1-pro",
        "prompt": "Aerial drone shot of a coastal city at golden hour, waves crashing against cliffs",
        "duration": 5,
        "aspect_ratio": "16:9",
        "resolution": "1080p"
    }
)

Image-to-Video (I2V)

Animate a static image. Use image_url for a URL or image for base64 data.
# Using image URL
response = requests.post(f"{BASE}/videos/generations",
    headers=headers,
    json={
        "model": "wan2.6-i2v",
        "prompt": "The person slowly turns and smiles at the camera",
        "image_url": "https://example.com/portrait.jpg"
    }
)

# Using base64 image
import base64
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = requests.post(f"{BASE}/videos/generations",
    headers=headers,
    json={
        "model": "kling-v2.1-master",
        "prompt": "Gentle wind blows through the scene",
        "image": f"data:image/jpeg;base64,{image_b64}"
    }
)

Keyframe Control (Start + End Image)

Control both the first and last frames for precise transitions. Currently supported by Seedance 2.0.
response = requests.post(f"{BASE}/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2-0",
        "prompt": "Smooth transition from day to night, city lights gradually turning on",
        "start_image": "https://example.com/city-day.jpg",
        "end_image": "https://example.com/city-night.jpg",
        "duration": 5
    }
)

Video Extension

Extend an existing video’s duration. Use models with extension capability.
response = requests.post(f"{BASE}/videos/generations",
    headers=headers,
    json={
        "model": "kling-video-extend",
        "prompt": "Continue the scene naturally",
        "image_url": "https://example.com/last-frame.jpg"
    }
)

Parameters Reference

Parameter        Type     Description
model            string   Model ID (default: sora-2)
prompt           string   Required. Text description of the video
image_url        string   URL of the starting image (for I2V)
image            string   Base64-encoded image with data URL prefix (for I2V)
duration         integer  Video duration in seconds (1-60, model-dependent)
aspect_ratio     string   16:9, 9:16, 1:1, etc.
resolution       string   1080p, 720p, 4k
fps              integer  Frames per second (1-120)
negative_prompt  string   What to avoid in generation
seed             integer  Random seed for reproducibility
cfg_scale        number   Guidance scale (0-20)
motion_strength  number   Motion intensity (0-1)
start_image      string   URL of the starting keyframe
end_image        string   URL of the ending keyframe
Not all parameters are supported by every model. Unsupported parameters are silently ignored. Check the model’s documentation for supported parameters.
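To tie the table together, here is a sketch of a request that sets the optional tuning parameters. The model and all values are illustrative; since unsupported parameters are silently ignored, a payload like this can be sent to any model.

```python
import requests

API_KEY = "sk-your-api-key"
BASE = "https://api.lemondata.cc/v1"

# Payload combining the optional tuning parameters from the table above.
payload = {
    "model": "kling-v2.6-pro",
    "prompt": "A lighthouse on a cliff during a storm, dramatic lighting",
    "duration": 5,
    "aspect_ratio": "16:9",
    "resolution": "1080p",
    "fps": 24,
    "negative_prompt": "blurry, low quality, watermark",
    "seed": 42,              # fixed seed for reproducible output
    "cfg_scale": 7.5,        # guidance scale (0-20)
    "motion_strength": 0.6,  # motion intensity (0-1)
}

def submit(payload):
    """Submit the generation request and return the task ID to poll."""
    resp = requests.post(f"{BASE}/videos/generations",
                         headers={"Authorization": f"Bearer {API_KEY}"},
                         json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["task_id"]
```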

Model Selection Guide

Best Quality

Seedance 2.0 or Kling v2.6 Pro — cinematic quality, rich detail, natural motion

Fastest Generation

Higgsfield Turbo or Hailuo 2.3 — quick results for prototyping and iteration

Most Versatile

Seedance 2.0 — supports T2V, I2V, keyframe, extension, and editing in one model

Best Value

Wan 2.6 or Hailuo 2.3 — competitive quality at lower cost per generation
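The recommendations above can be condensed into a small lookup, which is handy when the use case is chosen at runtime. The model IDs are taken from the series tables; where the guide names two models, the alternative appears as a comment.

```python
# Use case → suggested model ID, condensed from the selection guide above.
RECOMMENDED = {
    "best_quality":   "seedance-2-0",      # or "kling-v2.6-pro"
    "fastest":        "higgsfield-turbo",  # or "hailuo-2.3-fast"
    "most_versatile": "seedance-2-0",      # T2V/I2V/keyframe/extension/editing
    "best_value":     "wan-2.6",           # or "hailuo-2.3"
}

def pick_model(use_case, default="sora-2"):
    """Return a suggested model ID, falling back to the API default."""
    return RECOMMENDED.get(use_case, default)
```

For example, `pick_model("fastest")` returns `"higgsfield-turbo"`, while an unrecognized use case falls back to the API's default model, `sora-2`.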

Billing

Video generation uses fixed per-generation pricing. You are charged once when the task is submitted, regardless of video duration. If generation fails, the charge is automatically refunded. Check current pricing on the Models page or via the Pricing API.