langchain_community.chains.ernie_functions.base
.create_structured_output_runnable¶
- langchain_community.chains.ernie_functions.base.create_structured_output_runnable(output_schema: Union[Dict[str, Any], Type[BaseModel]], llm: Runnable, prompt: BasePromptTemplate, *, output_parser: Optional[Union[BaseOutputParser, BaseGenerationOutputParser]] = None, **kwargs: Any) → Runnable [source]¶
Create a runnable that uses Ernie functions to get a structured output.
- Parameters
output_schema (Union[Dict[str, Any], Type[BaseModel]]) – Either a dictionary or a pydantic.BaseModel class. If a dictionary is passed in, it is assumed to already be a valid JsonSchema. For best results, pydantic.BaseModels should have docstrings describing what the schema represents and descriptions for the parameters.
llm (Runnable) – Language model to use, assumed to support the Ernie function-calling API.
prompt (BasePromptTemplate) – BasePromptTemplate to pass to the model.
output_parser (Optional[Union[BaseOutputParser, BaseGenerationOutputParser]]) – BaseLLMOutputParser to use for parsing model outputs. By default it will be inferred from the function types. If pydantic.BaseModels are passed in, the OutputParser will try to parse outputs using those. Otherwise model outputs will simply be parsed as JSON.
kwargs (Any) –
- Returns
A runnable sequence that will pass the given function to the model when run.
- Return type
Runnable
Example
from typing import Optional

from langchain_community.chains.ernie_functions import create_structured_output_runnable
from langchain_community.chat_models import ErnieBotChat
from langchain_core.prompts import ChatPromptTemplate
from langchain.pydantic_v1 import BaseModel, Field


class Dog(BaseModel):
    """Identifying information about a dog."""

    name: str = Field(..., description="The dog's name")
    color: str = Field(..., description="The dog's color")
    fav_food: Optional[str] = Field(None, description="The dog's favorite food")


llm = ErnieBotChat(model_name="ERNIE-Bot-4")
prompt = ChatPromptTemplate.from_messages(
    [
        ("user", "Use the given format to extract information from the following input: {input}"),
        ("assistant", "OK!"),
        ("user", "Tip: Make sure to answer in the correct format"),
    ]
)
structured_llm = create_structured_output_runnable(Dog, llm, prompt)
structured_llm.invoke({"input": "Harry was a chubby brown beagle who loved chicken"})
# -> Dog(name="Harry", color="brown", fav_food="chicken")
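When output_schema is a plain dict rather than a pydantic.BaseModel, no model-based parser is inferred and the model output is simply decoded as JSON. A minimal sketch of that fallback, using a hypothetical hand-written payload in place of a real Ernie function-call result:

```python
import json

# Hypothetical structured-output payload; in practice this string would come
# from the model's function-call arguments, not be written by hand.
raw_output = '{"name": "Harry", "color": "brown", "fav_food": "chicken"}'

# With a dict output_schema there is no pydantic model to validate against,
# so the output is decoded as plain JSON and returned as a dict:
parsed = json.loads(raw_output)
print(parsed["name"])  # -> Harry
```

Passing a pydantic.BaseModel instead gives you validation and typed attribute access on the result, which is why the Dog example above returns a Dog instance rather than a dict.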