langchain.agents.conversational.base.ConversationalAgent¶
- class langchain.agents.conversational.base.ConversationalAgent[source]¶
Bases: Agent
Deprecated since version 0.1.0: Use create_react_agent instead; a migration sketch follows the parameter list below.
An agent that holds a conversation in addition to using tools.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed into a valid model.
- param ai_prefix: str = 'AI'¶
Prefix to use before AI output.
- param allowed_tools: Optional[List[str]] = None¶
Allowed tools for the agent. If None, all tools are allowed.
- param output_parser: AgentOutputParser [Optional]¶
Output parser for the agent.
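Since this class is deprecated, new code should go through create_react_agent instead. The sketch below is a minimal, non-authoritative migration example: the ChatOpenAI model, the DuckDuckGoSearchRun tool, and the "hwchase17/react-chat" hub prompt are illustrative assumptions, not part of this API.
.. code-block:: python

    # Hedged migration sketch from ConversationalAgent to create_react_agent.
    from langchain import hub
    from langchain.agents import AgentExecutor, create_react_agent
    from langchain_community.tools import DuckDuckGoSearchRun  # assumed example tool
    from langchain_openai import ChatOpenAI  # assumed chat model

    llm = ChatOpenAI(temperature=0)
    tools = [DuckDuckGoSearchRun()]

    # "hwchase17/react-chat" is a conversational ReAct prompt on LangChain Hub;
    # it expects the same {chat_history}/{input}/{agent_scratchpad} variables
    # as ConversationalAgent's default prompt.
    prompt = hub.pull("hwchase17/react-chat")

    agent = create_react_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    executor.invoke({"input": "Hi, I'm Bob.", "chat_history": ""})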
- async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) Union[AgentAction, AgentFinish] ¶
Asynchronously decide what to do given the input.
- Parameters
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken so far, along with the observations.
callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to run.
**kwargs (Any) – User inputs.
- Returns
Action specifying which tool to use.
- Return type
Union[AgentAction, AgentFinish]
- classmethod create_prompt(tools: Sequence[BaseTool], prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None) PromptTemplate [source]¶
Create a prompt in the style of the zero-shot agent; a usage sketch follows this entry.
- Parameters
tools (Sequence[BaseTool]) – List of tools the agent will have access to, used to format the prompt.
prefix (str) – String to put before the list of tools. Defaults to PREFIX.
suffix (str) – String to put after the list of tools. Defaults to SUFFIX.
format_instructions (str) – Instructions on how to use the tools. Defaults to FORMAT_INSTRUCTIONS.
ai_prefix (str) – String to use before AI output. Defaults to "AI".
human_prefix (str) – String to use before human output. Defaults to "Human".
input_variables (Optional[List[str]]) – List of input variables the final prompt will expect. Defaults to ["input", "chat_history", "agent_scratchpad"].
- Returns
A PromptTemplate assembled from the pieces here.
- Return type
PromptTemplate
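As a small illustration of create_prompt (a sketch only; the Echo tool is a made-up placeholder):
.. code-block:: python

    # Build the conversational prompt directly, without constructing the agent.
    from langchain.agents import Tool
    from langchain.agents.conversational.base import ConversationalAgent

    tools = [
        Tool(
            name="Echo",
            func=lambda text: text,
            description="Returns the input text unchanged.",
        )
    ]

    prompt = ConversationalAgent.create_prompt(
        tools=tools,
        ai_prefix="Assistant",
        human_prefix="User",
    )
    print(prompt.input_variables)  # ['input', 'chat_history', 'agent_scratchpad']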
- classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None, **kwargs: Any) Agent [source]¶
Construct an agent from an LLM and tools; a construction sketch follows this entry.
- Parameters
llm (BaseLanguageModel) – The language model to use.
tools (Sequence[BaseTool]) – A list of tools to use.
callback_manager (Optional[BaseCallbackManager]) – The callback manager to use. Defaults to None.
output_parser (Optional[AgentOutputParser]) – The output parser to use. Defaults to None.
prefix (str) – The prefix to use in the prompt. Defaults to PREFIX.
suffix (str) – The suffix to use in the prompt. Defaults to SUFFIX.
format_instructions (str) – The format instructions to use. Defaults to FORMAT_INSTRUCTIONS.
ai_prefix (str) – The prefix to use before AI output. Defaults to "AI".
human_prefix (str) – The prefix to use before human output. Defaults to "Human".
input_variables (Optional[List[str]]) – The input variables to use. Defaults to None.
**kwargs (Any) – Any additional keyword arguments to pass to the agent.
- Returns
An agent.
- Return type
Agent
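A minimal construction sketch for this legacy path, assuming an OpenAI chat model and the same placeholder Echo tool as above; the create_react_agent migration shown earlier is preferred for new code:
.. code-block:: python

    # Legacy construction path (deprecated): from_llm_and_tools + AgentExecutor.
    from langchain.agents import AgentExecutor, Tool
    from langchain.agents.conversational.base import ConversationalAgent
    from langchain_openai import ChatOpenAI  # assumed chat model

    llm = ChatOpenAI(temperature=0)
    tools = [
        Tool(
            name="Echo",
            func=lambda text: text,
            description="Returns the input text unchanged.",
        )
    ]

    agent = ConversationalAgent.from_llm_and_tools(llm=llm, tools=tools)
    executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)

    # In practice a memory object supplies {chat_history}; here it is passed directly.
    executor.invoke({"input": "Hello!", "chat_history": ""})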
- get_allowed_tools() Optional[List[str]]¶
Get the allowed tools.
- Return type
Optional[List[str]]
- get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) Dict[str, Any]¶
Create the full inputs for the LLMChain from intermediate steps; a sketch follows this entry.
- Parameters
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken so far, along with the observations.
**kwargs (Any) – User inputs.
- Returns
Full inputs for the LLMChain.
- Return type
Dict[str, Any]
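For illustration only, the sketch below reuses the agent from the construction example above with a made-up intermediate step, and shows the inputs that get_full_inputs assembles for the underlying LLMChain:
.. code-block:: python

    # Hypothetical intermediate step: one prior tool call and its observation.
    from langchain_core.agents import AgentAction

    steps = [
        (AgentAction(tool="Echo", tool_input="hello", log="Thought: ..."), "hello")
    ]
    full_inputs = agent.get_full_inputs(steps, input="Say hi", chat_history="")

    # The scratchpad and stop sequences are added on top of the user inputs.
    print(sorted(full_inputs))  # ['agent_scratchpad', 'chat_history', 'input', 'stop']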
- plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) Union[AgentAction, AgentFinish] ¶
Given the input, decide what to do; a manual-loop sketch follows this entry.
- Parameters
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken so far, along with the observations.
callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to run.
**kwargs (Any) – User inputs.
- Returns
Action specifying which tool to use.
- Return type
Union[AgentAction, AgentFinish]
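The agent executor normally drives this loop; the hedged sketch below (reusing the agent and tools from the construction example above) shows how plan is meant to be called:
.. code-block:: python

    # Manual agent loop around plan(); AgentExecutor does this for you.
    from langchain_core.agents import AgentFinish

    tool_map = {t.name: t for t in tools}
    intermediate_steps = []  # grows as List[Tuple[AgentAction, str]]

    while True:
        result = agent.plan(
            intermediate_steps,
            input="Please echo back the word 'hello'.",
            chat_history="",
        )
        if isinstance(result, AgentFinish):
            print(result.return_values["output"])
            break
        # Otherwise result is an AgentAction: run the tool and record the observation.
        observation = tool_map[result.tool].run(result.tool_input)
        intermediate_steps.append((result, observation))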
- return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) AgentFinish ¶
Return a response when the agent has been stopped due to the maximum number of iterations; a usage sketch follows this entry.
- Parameters
early_stopping_method (str) – Method to use for early stopping.
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken so far, along with the observations.
**kwargs (Any) – User inputs.
- Returns
AgentFinish object.
- Return type
AgentFinish
- Raises
ValueError – If early_stopping_method is not in ['force', 'generate'].
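A brief sketch of this early-stop fallback, reusing the agent and intermediate_steps from the loop sketch above:
.. code-block:: python

    # "force" returns a canned AgentFinish without another LLM call;
    # "generate" makes one final LLM pass; anything else raises ValueError.
    finish = agent.return_stopped_response(
        early_stopping_method="force",
        intermediate_steps=intermediate_steps,
        input="Please echo back the word 'hello'.",
        chat_history="",
    )
    print(finish.return_values["output"])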
- save(file_path: Union[Path, str]) None ¶
Save the agent.
- Parameters
file_path (Union[Path, str]) – Path to the file to save the agent to.
- Return type
None
Example:
.. code-block:: python

    # If working with an agent executor
    agent.agent.save(file_path="path/agent.yaml")
- tool_run_logging_kwargs() Dict ¶
Return logging kwargs for the tool run.
- Return type
Dict
- property llm_prefix: str¶
Prefix to append the LLM call with.
- Returns
"Thought: "
- Return type
str
- property observation_prefix: str¶
Prefix to append the observation with.
- Returns
"Observation: "
- Return type
str
- property return_values: List[str]¶
Return values of the agent.