langchain.chains.structured_output.base
.create_structured_output_runnable¶
- langchain.chains.structured_output.base.create_structured_output_runnable(output_schema: Union[Dict[str, Any], Type[BaseModel]], llm: Runnable, prompt: Optional[BasePromptTemplate] = None, *, output_parser: Optional[Union[BaseOutputParser, BaseGenerationOutputParser]] = None, enforce_function_usage: bool = True, return_single: bool = True, mode: Literal['openai-functions', 'openai-tools', 'openai-json'] = 'openai-functions', **kwargs: Any) → Runnable [source]¶
Deprecated since version 0.1.17: LangChain has introduced a method called `with_structured_output` that is available on ChatModels capable of tool calling. You can read more about the method here: <https://python.langchain.ac.cn/docs/modules/model_io/chat/structured_output/>. Please follow our extraction use case documentation for more guidelines on how to do information extraction with LLMs: <https://python.langchain.ac.cn/docs/use_cases/extraction/>. If you notice other issues, please provide feedback here: <https://github.com/langchain-ai/langchain/discussions/18154>. Use

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_anthropic import ChatAnthropic

class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")

# Or any other chat model that supports tools. See the structured output
# docs for a list of models that support with_structured_output:
# https://python.langchain.ac.cn/docs/modules/model_io/chat/structured_output/
model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0)
structured_llm = model.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats. Make sure to call the Joke function.")
```

instead.
Create a runnable for extracting structured outputs.
- Parameters
output_schema (Union[Dict[str, Any], Type[BaseModel]]) – Either a dictionary or a pydantic.BaseModel class. If a dictionary is passed in, it is assumed to already be a valid JsonSchema. For best results, pydantic.BaseModels should have docstrings describing what the schema represents and descriptions for each of the parameters.
llm (Runnable) – The language model to use. Assumed to support the OpenAI function-calling API when mode is 'openai-functions'. Assumed to support OpenAI's response_format parameter when mode is 'openai-json'.
prompt (Optional[BasePromptTemplate]) – A BasePromptTemplate to pass to the model. If mode is 'openai-json' and the prompt has the input variable 'output_schema', the given output_schema will be converted to a JsonSchema and inserted into the prompt.
output_parser (Optional[Union[BaseOutputParser, BaseGenerationOutputParser]]) – The output parser to use for parsing model outputs. By default it is inferred from the schema type. If a pydantic.BaseModel is passed in, the output parser will try to parse outputs using that pydantic class. Otherwise model outputs are parsed as JSON.
mode (Literal['openai-functions', 'openai-tools', 'openai-json']) – How structured outputs are extracted from the model. If 'openai-functions', OpenAI function calling is used with the deprecated 'functions'/'function_call' schema. If 'openai-tools', OpenAI function calling is used with the latest 'tools'/'tool_choice' schema; this is recommended over 'openai-functions'. If 'openai-json', an OpenAI model is used with response_format set to JSON.
enforce_function_usage (bool) – Only applies when mode is 'openai-tools' or 'openai-functions'. If True, the model is forced to use the given output schema. If False, the model can elect whether to use the output schema.
return_single (bool) – Only applies when mode is 'openai-tools'. Whether to return a single structured output or a list of them. If True and the model does not return any structured outputs, the chain output is None. If False and the model does not return any structured outputs, the chain output is an empty list.
kwargs (Any) – Additional named arguments.
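The parsing and return_single behaviors described above can be sketched offline in plain Python. This is a toy illustration only, assuming stdlib-level behavior: `parse_structured_output` and `select_output` are hypothetical helpers, not part of the LangChain API.

```python
import json
from dataclasses import dataclass

@dataclass
class Dog:
    """Identifying information about a dog."""
    name: str
    color: str

def parse_structured_output(raw: str, schema_cls=None):
    # Hypothetical sketch: if a class is given, validate the parsed JSON
    # into it (the library does this with pydantic); otherwise return
    # the plain dict, mirroring the output_parser default behavior.
    data = json.loads(raw)
    return schema_cls(**data) if schema_cls is not None else data

def select_output(outputs: list, return_single: bool = True):
    # Mirrors the documented return_single semantics: True yields one
    # output or None; False yields the (possibly empty) list.
    if return_single:
        return outputs[0] if outputs else None
    return outputs

raw = '{"name": "Harry", "color": "brown"}'
print(parse_structured_output(raw))            # plain dict
print(parse_structured_output(raw, Dog))       # Dog instance
print(select_output([], return_single=True))   # None
print(select_output([], return_single=False))  # []
```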
- Returns
- A runnable sequence that will return structured output(s) matching the given output_schema.
- Return type
- Runnable
- OpenAI tools example with Pydantic schema (mode="openai-tools")
```python
from typing import Optional

from langchain.chains import create_structured_output_runnable
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class RecordDog(BaseModel):
    '''Record some identifying information about a dog.'''

    name: str = Field(..., description="The dog's name")
    color: str = Field(..., description="The dog's color")
    fav_food: Optional[str] = Field(None, description="The dog's favorite food")

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an extraction algorithm. Please extract every possible instance"),
        ("human", "{input}"),
    ]
)
structured_llm = create_structured_output_runnable(
    RecordDog,
    llm,
    mode="openai-tools",
    enforce_function_usage=True,
    return_single=True,
)
chain = prompt | structured_llm
chain.invoke({"input": "Harry was a chubby brown beagle who loved chicken"})
# -> RecordDog(name="Harry", color="brown", fav_food="chicken")
```
- OpenAI tools example with dict schema (mode="openai-tools")
```python
from langchain.chains import create_structured_output_runnable
from langchain_openai import ChatOpenAI

dog_schema = {
    "type": "function",
    "function": {
        "name": "record_dog",
        "description": "Record some identifying information about a dog.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"description": "The dog's name", "type": "string"},
                "color": {"description": "The dog's color", "type": "string"},
                "fav_food": {"description": "The dog's favorite food", "type": "string"},
            },
            "required": ["name", "color"],
        },
    },
}
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = create_structured_output_runnable(
    dog_schema,
    llm,
    mode="openai-tools",
    enforce_function_usage=True,
    return_single=True,
)
structured_llm.invoke("Harry was a chubby brown beagle who loved chicken")
# -> {'name': 'Harry', 'color': 'brown', 'fav_food': 'chicken'}
```
- OpenAI functions example (mode="openai-functions")
```python
from typing import Optional

from langchain.chains import create_structured_output_runnable
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class Dog(BaseModel):
    '''Identifying information about a dog.'''

    name: str = Field(..., description="The dog's name")
    color: str = Field(..., description="The dog's color")
    fav_food: Optional[str] = Field(None, description="The dog's favorite food")

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = create_structured_output_runnable(Dog, llm, mode="openai-functions")
structured_llm.invoke("Harry was a chubby brown beagle who loved chicken")
# -> Dog(name="Harry", color="brown", fav_food="chicken")
```
- OpenAI functions with prompt example (mode="openai-functions")
```python
from typing import Optional

from langchain.chains import create_structured_output_runnable
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class Dog(BaseModel):
    '''Identifying information about a dog.'''

    name: str = Field(..., description="The dog's name")
    color: str = Field(..., description="The dog's color")
    fav_food: Optional[str] = Field(None, description="The dog's favorite food")

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = create_structured_output_runnable(Dog, llm, mode="openai-functions")
system = '''Extract information about any dogs mentioned in the user input.'''
prompt = ChatPromptTemplate.from_messages(
    [("system", system), ("human", "{input}")]
)
chain = prompt | structured_llm
chain.invoke({"input": "Harry was a chubby brown beagle who loved chicken"})
# -> Dog(name="Harry", color="brown", fav_food="chicken")
```
- OpenAI JSON response format example (mode="openai-json")
```python
from typing import Optional

from langchain.chains import create_structured_output_runnable
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class Dog(BaseModel):
    '''Identifying information about a dog.'''

    name: str = Field(..., description="The dog's name")
    color: str = Field(..., description="The dog's color")
    fav_food: Optional[str] = Field(None, description="The dog's favorite food")

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = create_structured_output_runnable(Dog, llm, mode="openai-json")
system = '''You are a world class assistant for extracting information in structured JSON formats. Extract a valid JSON blob from the user input that matches the following JSON Schema:

{output_schema}'''
prompt = ChatPromptTemplate.from_messages(
    [("system", system), ("human", "{input}")]
)
chain = prompt | structured_llm
chain.invoke({"input": "Harry was a chubby brown beagle who loved chicken"})
```
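Because the system prompt in openai-json mode uses the input variable `output_schema`, the given schema is converted to a JsonSchema and substituted into the prompt. The substitution step can be sketched with the standard library; note that `dog_json_schema` below is a hand-written assumption of what the converted schema roughly looks like, not the library's exact output.

```python
import json

# Assumed JsonSchema for a Dog model (hand-written for illustration only).
dog_json_schema = {
    "title": "Dog",
    "description": "Identifying information about a dog.",
    "type": "object",
    "properties": {
        "name": {"description": "The dog's name", "type": "string"},
        "color": {"description": "The dog's color", "type": "string"},
        "fav_food": {"description": "The dog's favorite food", "type": "string"},
    },
    "required": ["name", "color"],
}

system_template = (
    "You are a world class assistant for extracting information in "
    "structured JSON formats. Extract a valid JSON blob from the user "
    "input that matches the following JSON Schema:\n\n{output_schema}"
)
# str.format only interprets braces in the template string itself, so the
# braces inside the dumped schema pass through untouched.
filled = system_template.format(output_schema=json.dumps(dog_json_schema, indent=2))
print(filled)
```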