langchain_community.chains.openapi.requests_chain.APIRequesterOutputParser¶
Note

APIRequesterOutputParser implements the standard Runnable Interface. 🏃

The Runnable Interface has additional methods that are available on runnables, such as with_types, with_retry, assign, bind, get_graph, and more.
- class langchain_community.chains.openapi.requests_chain.APIRequesterOutputParser[source]¶
Bases: BaseOutputParser
Parse the request and error tags.
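Example (a minimal usage sketch; the sample string below is a placeholder, since the exact tag format this parser expects is defined by the OpenAPI requester prompt it is paired with):

```python
from langchain_community.chains.openapi.requests_chain import APIRequesterOutputParser

parser = APIRequesterOutputParser()

# Placeholder model output; in the OpenAPI requester chain this string would
# contain the request (or error) tags emitted by the LLM.
llm_output = "<model response containing the serialized API request>"

# parse() maps the raw LLM output to a string (the request, or an error message).
print(parser.parse(llm_output))
```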
- async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) List[Output] ¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.
- Parameters
inputs (List[Input]) – A list of inputs to the Runnable.
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – A config to use when invoking the Runnable. The config supports standard keys like 'tags' and 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.
return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.
kwargs (Optional[Any]) – Additional keyword arguments to pass to the Runnable.
- Returns
A list of outputs from the Runnable.
- Return type
List[Output]
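Example (a minimal sketch; the input strings are placeholders for raw LLM outputs):

```python
import asyncio

from langchain_community.chains.openapi.requests_chain import APIRequesterOutputParser

parser = APIRequesterOutputParser()

# Placeholder LLM outputs to be parsed in parallel.
outputs = [
    "<model response for request one>",
    "<model response for request two>",
]

async def main() -> None:
    # abatch awaits ainvoke for each input concurrently via asyncio.gather.
    results = await parser.abatch(outputs)
    print(results)

asyncio.run(main())
```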
- async abatch_as_completed(inputs: Sequence[Input], config: Optional[Union[RunnableConfig, Sequence[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) AsyncIterator[Tuple[int, Union[Output, Exception]]] ¶
Run ainvoke in parallel on a list of inputs, yielding results as they complete.
- Parameters
inputs (Sequence[Input]) – A list of inputs to the Runnable.
config (Optional[Union[RunnableConfig, Sequence[RunnableConfig]]]) – A config to use when invoking the Runnable. The config supports standard keys like 'tags' and 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.
return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.
kwargs (Optional[Any]) – Additional keyword arguments to pass to the Runnable.
- Yields
A tuple of the index of the input and the output from the Runnable.
- Return type
AsyncIterator[Tuple[int, Union[Output, Exception]]]
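Example (a minimal sketch; results arrive in completion order, each paired with the index of its input; the sample strings are placeholders):

```python
import asyncio

from langchain_community.chains.openapi.requests_chain import APIRequesterOutputParser

parser = APIRequesterOutputParser()
outputs = ["<model response one>", "<model response two>"]  # placeholders

async def main() -> None:
    # Each yielded item is a (input_index, output) tuple.
    async for index, result in parser.abatch_as_completed(outputs):
        print(index, result)

asyncio.run(main())
```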
- async ainvoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) T ¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if the Runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
- Parameters
input (Union[str, BaseMessage]) –
config (Optional[RunnableConfig]) –
kwargs (Optional[Any]) –
- Return type
T
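Example (a minimal sketch; the input string is a placeholder for a raw LLM output):

```python
import asyncio

from langchain_community.chains.openapi.requests_chain import APIRequesterOutputParser

parser = APIRequesterOutputParser()

async def main() -> None:
    # ainvoke is the async counterpart of invoke.
    result = await parser.ainvoke("<model response containing the request>")
    print(result)

asyncio.run(main())
```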
- async aparse(text: str) T ¶
Async parse a single string model output into some structure.
- Parameters
text (str) – String output of a language model.
- Returns
Structured output.
- Return type
T
- async aparse_result(result: List[Generation], *, partial: bool = False) T ¶
Async parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
- Parameters
result (List[Generation]) – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input.
partial (bool) – Whether to parse the output as a partial result. This is useful for parsers that can parse partial results. Default is False.
- Returns
Structured output.
- Return type
T
- as_tool(args_schema: Optional[Type[BaseModel]] = None, *, name: Optional[str] = None, description: Optional[str] = None, arg_types: Optional[Dict[str, Type]] = None) BaseTool ¶
Beta
This API is in beta and may change in the future.
Create a BaseTool from a Runnable.
as_tool will instantiate a BaseTool with a name, description, and args_schema from a Runnable. Where possible, schemas are inferred from runnable.get_input_schema. Alternatively (e.g., if the Runnable takes a dict as input and the specific dict keys are not typed), the schema can be specified directly with args_schema. You can also pass arg_types to just specify the required arguments and their types.
- Parameters
args_schema (Optional[Type[BaseModel]]) – The schema for the tool. Defaults to None.
name (Optional[str]) – The name of the tool. Defaults to None.
description (Optional[str]) – The description of the tool. Defaults to None.
arg_types (Optional[Dict[str, Type]]) – A dictionary of argument names to types. Defaults to None.
- Returns
A BaseTool instance.
- Return type
BaseTool

Typed dict input:

```python
from typing import List
from typing_extensions import TypedDict
from langchain_core.runnables import RunnableLambda

class Args(TypedDict):
    a: int
    b: List[int]

def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})
```

dict input, specifying schema via args_schema:

```python
from typing import Any, Dict, List
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import RunnableLambda

def f(x: Dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: List[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})
```

dict input, specifying schema via arg_types:

```python
from typing import Any, Dict, List
from langchain_core.runnables import RunnableLambda

def f(x: Dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"a": int, "b": List[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})
```

String input:

```python
from langchain_core.runnables import RunnableLambda

def f(x: str) -> str:
    return x + "a"

def g(x: str) -> str:
    return x + "z"

runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")
```

New in version 0.2.14.
- async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) AsyncIterator[Output] ¶
Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.
- Parameters
input (Input) – The input to the Runnable.
config (Optional[RunnableConfig]) – The config to use for the Runnable. Defaults to None.
kwargs (Optional[Any]) – Additional keyword arguments to pass to the Runnable.
- Yields
The output of the Runnable.
- Return type
AsyncIterator[Output]
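Example (a minimal sketch; because the default astream just calls ainvoke, a single final chunk is yielded; the input string is a placeholder):

```python
import asyncio

from langchain_community.chains.openapi.requests_chain import APIRequesterOutputParser

parser = APIRequesterOutputParser()

async def main() -> None:
    async for chunk in parser.astream("<model response containing the request>"):
        print(chunk)

asyncio.run(main())
```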
- astream_events(input: Any, config: Optional[RunnableConfig] = None, *, version: Literal['v1', 'v2'], include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) AsyncIterator[Union[StandardStreamEvent, CustomStreamEvent]] ¶
Beta
This API is in beta and may change in the future.
Generate a stream of events.
Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.
A StreamEvent is a dictionary with the following schema:
- event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
- name: str - The name of the Runnable that generated the event.
- run_id: str - Randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
- parent_ids: List[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for the v2 version of the API. The v1 version of the API will return an empty list.
- tags: Optional[List[str]] - The tags of the Runnable that generated the event.
- metadata: Optional[Dict[str, Any]] - The metadata of the Runnable that generated the event.
- data: Dict[str, Any]
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.
ATTENTION: This reference table is for the V2 version of the schema.
| event | name | chunk | input | output |
| --- | --- | --- | --- | --- |
| on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | | |
| on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world") |
| on_llm_start | [model name] | | {'input': 'hello'} | |
| on_llm_stream | [model name] | 'Hello' | | |
| on_llm_end | [model name] | | 'Hello human!' | |
| on_chain_start | format_docs | | | |
| on_chain_stream | format_docs | "hello world!, goodbye world!" | | |
| on_chain_end | format_docs | | [Document(...)] | "hello world!, goodbye world!" |
| on_tool_start | some_tool | | {"x": 1, "y": "2"} | |
| on_tool_end | some_tool | | | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] | | {"query": "hello"} | |
| on_retriever_end | [retriever name] | | {"query": "hello"} | [Document(...), ..] |
| on_prompt_start | [template_name] | | {"question": "hello"} | |
| on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |
In addition to the standard events, users can also dispatch custom events (see example below).
Custom events will only be surfaced in the v2 version of the API!
A custom event has the following format:

| Attribute | Type | Description |
| --- | --- | --- |
| name | str | A user defined name for the event. |
| data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |

Here are declarations associated with the standard events shown above:
format_docs:

```python
def format_docs(docs: List[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)
```

some_tool:

```python
@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}
```

prompt:

```python
template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
```

Example:

```python
from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v2")
]

# will produce the following events (run_id, and parent_ids
# has been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]
```

Example: Dispatch Custom Event

```python
from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio

async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)
```
- Parameters
input (Any) – The input to the Runnable.
config (Optional[RunnableConfig]) – The config to use for the Runnable.
version (Literal['v1', 'v2']) – The version of the schema to use, either v2 or v1. Users should use v2. v1 is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. Custom events will only be surfaced in v2.
include_names (Optional[Sequence[str]]) – Only include events from runnables with matching names.
include_types (Optional[Sequence[str]]) – Only include events from runnables with matching types.
include_tags (Optional[Sequence[str]]) – Only include events from runnables with matching tags.
exclude_names (Optional[Sequence[str]]) – Exclude events from runnables with matching names.
exclude_types (Optional[Sequence[str]]) – Exclude events from runnables with matching types.
exclude_tags (Optional[Sequence[str]]) – Exclude events from runnables with matching tags.
kwargs (Any) – Additional keyword arguments to pass to the Runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.
- Yields
An async stream of StreamEvents.
- Raises
NotImplementedError – If the version is not v1 or v2.
- Return type
AsyncIterator[Union[StandardStreamEvent, CustomStreamEvent]]
- batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) List[Output] ¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.
- Parameters
inputs (List[Input]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Optional[Any]) –
- Return type
List[Output]
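Example (a minimal sketch with placeholder LLM outputs):

```python
from langchain_community.chains.openapi.requests_chain import APIRequesterOutputParser

parser = APIRequesterOutputParser()

# batch() maps invoke() over the inputs using a thread pool executor.
results = parser.batch(
    [
        "<model response for request one>",
        "<model response for request two>",
    ]
)
print(results)
```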
- batch_as_completed(inputs: Sequence[Input], config: Optional[Union[RunnableConfig, Sequence[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) Iterator[Tuple[int, Union[Output, Exception]]] ¶
Run invoke in parallel on a list of inputs, yielding results as they complete.
- Parameters
inputs (Sequence[Input]) –
config (Optional[Union[RunnableConfig, Sequence[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Optional[Any]) –
- Return type
Iterator[Tuple[int, Union[Output, Exception]]]
- configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) RunnableSerializable[Input, Output] ¶
Configure alternatives for Runnables that can be set at runtime.
- Parameters
which (ConfigurableField) – The ConfigurableField instance that will be used to select the alternative.
default_key (str) – The default key to use if no alternative is selected. Defaults to "default".
prefix_keys (bool) – Whether to prefix the keys with the ConfigurableField id. Defaults to False.
**kwargs (Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) – A dictionary of keys to Runnable instances or callables that return Runnable instances.
- Returns
A new Runnable with the alternatives configured.
- Return type
RunnableSerializable[Input, Output]

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-sonnet-20240229"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(
        configurable={"llm": "openai"}
    ).invoke("which organization created you?").content
)
```
- configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) RunnableSerializable[Input, Output] ¶
Configure particular Runnable fields at runtime.
- Parameters
**kwargs (Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) – A dictionary of ConfigurableField instances to configure.
- Returns
A new Runnable with the fields configured.
- Return type
RunnableSerializable[Input, Output]

```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ",
    model.invoke("tell me something about chess").content,
)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(
        configurable={"output_token_number": 200}
    ).invoke("tell me something about chess").content,
)
```
- get_format_instructions() str ¶
Instructions on how the LLM output should be formatted.
- Return type
str
- invoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None) T ¶
Transform a single input into an output. Override to implement.
- Parameters
input (Union[str, BaseMessage]) – The input to the Runnable.
config (Optional[RunnableConfig]) – A config to use when invoking the Runnable. The config supports standard keys like 'tags' and 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details.
- Returns
The output of the Runnable.
- Return type
T
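Example (a minimal sketch; invoke is the standard Runnable entry point and accepts either a string or a BaseMessage; the input string is a placeholder):

```python
from langchain_community.chains.openapi.requests_chain import APIRequesterOutputParser

parser = APIRequesterOutputParser()

# Placeholder for the raw text produced by the requester LLM.
result = parser.invoke("<model response containing the serialized API request>")
print(type(result), result)
```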
- parse(llm_output: str) str [source]¶
Parse the request and error tags.
- Parameters
llm_output (str) –
- Return type
str
- parse_result(result: List[Generation], *, partial: bool = False) T ¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
- Parameters
result (List[Generation]) – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input.
partial (bool) – Whether to parse the output as a partial result. This is useful for parsers that can parse partial results. Default is False.
- Returns
Structured output.
- Return type
T
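Example (a minimal sketch; only the first Generation is parsed, per the description above; the texts are placeholders):

```python
from langchain_core.outputs import Generation

from langchain_community.chains.openapi.requests_chain import APIRequesterOutputParser

parser = APIRequesterOutputParser()

generations = [
    Generation(text="<highest-likelihood model response>"),
    Generation(text="<an alternative candidate that is ignored>"),
]

# Only generations[0] is parsed; later entries are lower-likelihood candidates.
print(parser.parse_result(generations))
```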
- parse_with_prompt(completion: str, prompt: PromptValue) Any ¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
- Parameters
completion (str) – String output of a language model.
prompt (PromptValue) – Input PromptValue.
- Returns
Structured output.
- Return type
Any
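Example (a minimal sketch; the prompt supplies context the parser could use when retrying or fixing output; both strings are placeholders):

```python
from langchain_core.prompt_values import StringPromptValue

from langchain_community.chains.openapi.requests_chain import APIRequesterOutputParser

parser = APIRequesterOutputParser()

# The prompt that produced the completion, wrapped as a PromptValue.
prompt = StringPromptValue(text="<the instructions sent to the requester LLM>")
completion = "<model response containing the serialized API request>"

print(parser.parse_with_prompt(completion, prompt))
```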
- stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) Iterator[Output] ¶
Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output.
- Parameters
input (Input) – The input to the Runnable.
config (Optional[RunnableConfig]) – The config to use for the Runnable. Defaults to None.
kwargs (Optional[Any]) – Additional keyword arguments to pass to the Runnable.
- Yields
The output of the Runnable.
- Return type
Iterator[Output]
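Example (a minimal sketch; because the default stream just calls invoke, a single chunk is yielded; the input string is a placeholder):

```python
from langchain_community.chains.openapi.requests_chain import APIRequesterOutputParser

parser = APIRequesterOutputParser()

for chunk in parser.stream("<model response containing the request>"):
    print(chunk)
```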
- to_json() Union[SerializedConstructor, SerializedNotImplemented] ¶
Serialize the Runnable to JSON.
- Returns
A JSON-serializable representation of the Runnable.
- Return type
Union[SerializedConstructor, SerializedNotImplemented]