Overview
LangChain is a popular framework for building LLM applications. LemonData works seamlessly with LangChain's OpenAI integration.

Installation
```shell
pip install langchain langchain-openai
```
Basic Configuration
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    api_key="sk-your-lemondata-key",
    base_url="https://api.lemondata.cc/v1"
)

response = llm.invoke("Hello, how are you?")
print(response.content)
```
Using Different Models
Access any LemonData model:
```python
# OpenAI GPT-4o
gpt4 = ChatOpenAI(
    model="gpt-4o",
    api_key="sk-your-key",
    base_url="https://api.lemondata.cc/v1"
)

# Anthropic Claude
claude = ChatOpenAI(
    model="claude-sonnet-4-5",
    api_key="sk-your-key",
    base_url="https://api.lemondata.cc/v1"
)

# Google Gemini
gemini = ChatOpenAI(
    model="gemini-2.5-flash",
    api_key="sk-your-key",
    base_url="https://api.lemondata.cc/v1"
)

# DeepSeek
deepseek = ChatOpenAI(
    model="deepseek-r1",
    api_key="sk-your-key",
    base_url="https://api.lemondata.cc/v1"
)
```
Conversations with Message History
```python
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?")
]

response = llm.invoke(messages)
print(response.content)
```
Streaming
```python
for chunk in llm.stream("Write a poem about coding"):
    print(chunk.content, end="", flush=True)
```
Async Usage
```python
import asyncio

async def main():
    response = await llm.ainvoke("Hello!")
    print(response.content)

asyncio.run(main())
```
Chains
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that translates {input_language} to {output_language}."),
    ("human", "{text}")
])

chain = prompt | llm | StrOutputParser()

result = chain.invoke({
    "input_language": "English",
    "output_language": "French",
    "text": "Hello, how are you?"
})
print(result)
```
RAG (Retrieval-Augmented Generation)
```python
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS  # pip install langchain-community faiss-cpu
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# Embeddings
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-small",
    api_key="sk-your-key",
    base_url="https://api.lemondata.cc/v1"
)

# Create vector store
texts = ["LemonData supports 300+ AI models", "API is OpenAI compatible"]
vectorstore = FAISS.from_texts(texts, embeddings)
retriever = vectorstore.as_retriever()

# RAG chain
template = """Answer based on context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
)

response = rag_chain.invoke("How many models does LemonData support?")
print(response.content)
```
Agents
The agent APIs in LangChain are still evolving. For new projects, consider using LangGraph for a more flexible agent architecture.
```python
from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool

@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Search results for: {query}"

tools = [search]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

result = executor.invoke({"input": "Search for LemonData pricing"})
print(result["output"])
```
Environment Variables
To keep your code cleaner, use environment variables:
```shell
export OPENAI_API_KEY="sk-your-lemondata-key"
export OPENAI_API_BASE="https://api.lemondata.cc/v1"
```
```python
from langchain_openai import ChatOpenAI

# Will automatically use environment variables
llm = ChatOpenAI(model="gpt-4o")
```
Callbacks and Tracing
```python
from langchain_core.callbacks import StdOutCallbackHandler

llm = ChatOpenAI(
    model="gpt-4o",
    api_key="sk-your-key",
    base_url="https://api.lemondata.cc/v1",
    callbacks=[StdOutCallbackHandler()]
)
```
Best Practices
Choose models by cost
Use cheaper models (such as GPT-4o-mini) for simple tasks within chains.
Implement retries
LangChain has built-in retry logic for transient errors.
Monitor token usage
Use callbacks to track token consumption.