Topic
Prompts
1. Prompt Templates
Basic Template
```python
from langchain.prompts import PromptTemplate

# Basic template example
template = """
Question: {question}
Please answer the question above step by step.
"""

prompt = PromptTemplate(
    input_variables=["question"],
    template=template
)

# Use the template
formatted_prompt = prompt.format(
    question="What is the capital of Jiangsu?"
)
print(formatted_prompt)
```
Output
```shell
Question: What is the capital of Jiangsu?
Please answer the question above step by step.
```
Chat Templates
```python
from langchain.prompts import ChatPromptTemplate
from langchain.prompts.chat import SystemMessagePromptTemplate, HumanMessagePromptTemplate

# System message template
system_template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_prompt = SystemMessagePromptTemplate.from_template(system_template)

# Human message template
human_template = "{text}"
human_prompt = HumanMessagePromptTemplate.from_template(human_template)

# Combine into a chat template
chat_prompt = ChatPromptTemplate.from_messages([
    system_prompt,
    human_prompt
])

# Use the template
messages = chat_prompt.format_messages(
    input_language="English",
    output_language="French",
    text="I love programming."
)
print(messages)
```
Output
```shell
[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}, response_metadata={}), HumanMessage(content='I love programming.', additional_kwargs={}, response_metadata={})]
```
Advanced Features
1. Partial Variable Filling
```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    template="You are a {role}. Help {name} with {task}.",
    input_variables=["name", "task", "role"]
)

# Fill in part of the variables up front
partial_prompt = prompt.partial(role="professional programmer")

# Later, only the remaining variables need to be supplied
final_prompt = partial_prompt.format(
    name="Patrick",
    task="debugging Python code"
)
print(final_prompt)
```
Output
```shell
You are a professional programmer. Help Patrick with debugging Python code.
```
2. Template Validation
```python
from langchain.prompts import PromptTemplate
from pydantic import validator

# Subclass PromptTemplate (a pydantic model) to add a custom validation rule
class CustomPromptTemplate(PromptTemplate):
    @validator('input_variables')
    def validate_variables(cls, v):
        if 'required_field' not in v:
            raise ValueError("required_field must be an input variable")
        return v
```
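A quick way to exercise the rule above (a minimal sketch, assuming the pydantic v1-style `@validator` that classic LangChain's `PromptTemplate` supports; the field names are illustrative only):
```python
# Passes: 'required_field' is declared as an input variable
ok = CustomPromptTemplate(
    template="Process {required_field} for {user}.",
    input_variables=["required_field", "user"]
)

# Fails: pydantic wraps the ValueError raised by the validator in a ValidationError
try:
    CustomPromptTemplate(template="Process {user}.", input_variables=["user"])
except ValueError as e:
    print(e)
```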
2. Example Selectors
Types
- LengthBasedExampleSelector (selects by length; a standalone sketch appears after the example below)
- SemanticSimilarityExampleSelector (selects by semantic similarity)
- NGramOverlapExampleSelector (selects by n-gram overlap)
Implementation Example
```python
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings

# Prepare the examples
examples = [
    {"input": "What is the capital of France?", "output": "The capital of France is Paris."},
    {"input": "What is the largest planet?", "output": "Jupiter is the largest planet."},
    {"input": "Who wrote Romeo and Juliet?", "output": "William Shakespeare wrote Romeo and Juliet."}
]

# Create the selector
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    OpenAIEmbeddings(),
    Chroma,
    k=2  # return the 2 most similar examples
)

# Use the selector
selected_examples = example_selector.select_examples({"input": "What is the capital of Spain?"})
```
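For comparison, a minimal sketch of the LengthBasedExampleSelector from the list above; it needs no embeddings or vector store, only a PromptTemplate used to render (and measure) each example, and the max_length value here is an arbitrary choice:
```python
from langchain.prompts import PromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector

# Template used to render each example when measuring its length
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}"
)

# Keep adding examples until the rendered text would exceed max_length words
length_selector = LengthBasedExampleSelector(
    examples=examples,            # reuses the examples list defined above
    example_prompt=example_prompt,
    max_length=50
)

selected = length_selector.select_examples({"input": "What is the capital of Spain?"})
```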
3. Output Parsers
Common Parsers
- PydanticOutputParser
- CommaSeparatedListOutputParser (a standalone sketch appears further below)
- StructuredOutputParser
- RegexParser
Implementation Examples
Pydantic Parser
```python
from langchain.prompts import PromptTemplate
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
from typing import List

# Target schema the model's output should follow
class Analysis(BaseModel):
    summary: str = Field(description="Brief summary of the main points")
    keywords: List[str] = Field(description="Key terms or concepts")
    sentiment: str = Field(description="Overall sentiment (positive/negative/neutral)")

parser = PydanticOutputParser(pydantic_object=Analysis)

prompt = PromptTemplate(
    template="Analyze the following text:\n{text}\n{format_instructions}",
    input_variables=["text"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)
```
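A sketch of how the pieces connect end to end; `llm_output` below is a stand-in for a real model response (in practice it would come from something like `llm.predict(final_prompt)`), so the exact JSON is illustrative only:
```python
# Render the final prompt that would be sent to the model
final_prompt = prompt.format(text="LangChain makes it easy to build LLM applications.")

# Stand-in for the model's reply
llm_output = '{"summary": "LangChain simplifies building LLM apps.", "keywords": ["LangChain", "LLM"], "sentiment": "positive"}'

# Parse the raw text back into a validated Analysis object
analysis = parser.parse(llm_output)
print(analysis.summary, analysis.keywords, analysis.sentiment)
```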
Structured Output Parser
```python
from langchain.prompts import PromptTemplate
from langchain.output_parsers import StructuredOutputParser, ResponseSchema

# Describe each field the model should return
response_schemas = [
    ResponseSchema(name="answer", description="The final answer to the question"),
    ResponseSchema(name="confidence", description="Confidence score from 0 to 100"),
    ResponseSchema(name="reasoning", description="The step-by-step reasoning process")
]

parser = StructuredOutputParser.from_response_schemas(response_schemas)

prompt = PromptTemplate(
    template="Question: {question}\n{format_instructions}\n",
    input_variables=["question"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)
```
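The CommaSeparatedListOutputParser from the list above needs no schema at all, so it can be tried without calling a model; the sample string stands in for a model response:
```python
from langchain.output_parsers import CommaSeparatedListOutputParser

list_parser = CommaSeparatedListOutputParser()

# Instructions to append to a prompt so the model answers as comma-separated values
print(list_parser.get_format_instructions())

# Parse a sample response into a Python list
print(list_parser.parse("Paris, Madrid, Rome"))  # ['Paris', 'Madrid', 'Rome']
```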
Error Handling
```python
from langchain.output_parsers import OutputFixingParser

# Create a fixing parser; `llm` is an already-configured model instance
fixing_parser = OutputFixingParser.from_llm(
    parser=parser,
    llm=llm
)

try:
    parsed_output = parser.parse(llm_output)
except Exception:
    # Ask the LLM to repair the malformed output, then parse it again
    parsed_output = fixing_parser.parse(llm_output)
```
Best Practices
Template Design
- Use clear instructions
- Provide concrete examples
- Specify the output format
- Include guidance for error handling
Example Selection
- Keep the examples diverse
- Choose a selector that fits the task
- Update the example library regularly
Output Parsing
- Use appropriate validation rules
- Implement graceful error handling
- Consider an output-fixing mechanism
Performance Optimization
- Cache frequently used prompts
- Batch-process examples
- Handle large volumes of requests asynchronously (see the sketch below)
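A minimal sketch of the last two points, assuming classic LangChain's OpenAI LLM wrapper and its async `agenerate` batch API; the model settings and questions are placeholders:
```python
import asyncio
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)  # assumes OPENAI_API_KEY is set in the environment

prompt = PromptTemplate(
    input_variables=["question"],
    template="Question: {question}\nPlease answer step by step."
)

async def answer_all(questions):
    # Format every prompt up front, then send the whole batch to the model asynchronously
    prompts = [prompt.format(question=q) for q in questions]
    result = await llm.agenerate(prompts)
    return [generations[0].text for generations in result.generations]

answers = asyncio.run(answer_all([
    "What is the capital of Jiangsu?",
    "What is the largest planet?"
]))
print(answers)
```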