langchain.agents.conversational_chat.base.ConversationalChatAgent¶
- class langchain.agents.conversational_chat.base.ConversationalChatAgent[source]¶
Bases: Agent
Deprecated since version 0.1.0: Use create_json_chat_agent instead; a brief migration sketch follows the parameter list below.
An agent designed to hold a conversation in addition to using tools.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed into a valid model.
- param allowed_tools: Optional[List[str]] = None¶
Allowed tools for the agent. If None, all tools are allowed.
- param output_parser: AgentOutputParser [Optional]¶
Output parser for the agent.
- param template_tool_response: str = "TOOL RESPONSE: \n---------------------\n{observation}\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else."¶
Template for the tool response.
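Since the class itself is deprecated, the following is a minimal migration sketch using create_json_chat_agent. It assumes the langchain-openai and langchainhub packages are installed and an OpenAI API key is configured; the hwchase17/react-chat-json prompt is the public LangChain Hub prompt commonly used for this agent type, and word_length is a made-up example tool.
.. code-block:: python

    from langchain import hub
    from langchain.agents import AgentExecutor, create_json_chat_agent
    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI

    @tool
    def word_length(word: str) -> int:
        """Return the number of characters in a word."""
        return len(word)

    llm = ChatOpenAI(temperature=0)                 # assumes OPENAI_API_KEY is set
    tools = [word_length]
    prompt = hub.pull("hwchase17/react-chat-json")  # JSON chat prompt from LangChain Hub

    agent = create_json_chat_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    executor.invoke({"input": "How many letters are in 'tool'?", "chat_history": []})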
- async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) Union[AgentAction, AgentFinish]¶
Asynchronously decide what to do, given the input.
- Parameters
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken so far, along with the observations.
callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to run.
**kwargs (Any) – User inputs.
- Returns
Action specifying what tool to use.
- Return type
Union[AgentAction, AgentFinish]
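A minimal sketch of driving one asynchronous planning step directly (AgentExecutor normally does this for you). It assumes agent is a ConversationalChatAgent, for example built via from_llm_and_tools as shown further below; input and chat_history are the variables the default prompt expects.
.. code-block:: python

    import asyncio

    async def one_step(agent):
        # No (AgentAction, observation) pairs have been recorded yet.
        result = await agent.aplan(
            intermediate_steps=[],
            callbacks=None,
            input="What is the capital of France?",
            chat_history=[],
        )
        return result  # AgentAction (call a tool) or AgentFinish (final answer)

    # asyncio.run(one_step(agent))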
- classmethod create_prompt(tools: Sequence[BaseTool], system_message: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables: Optional[List[str]] = None, output_parser: Optional[BaseOutputParser] = None) BasePromptTemplate [source]¶
Create a prompt for the agent.
- Parameters
tools (Sequence[BaseTool]) – The tools to use.
system_message (str) – The system message to use. Defaults to PREFIX.
human_message (str) – The human message to use. Defaults to SUFFIX.
input_variables (Optional[List[str]]) – The input variables to use. Defaults to None.
output_parser (Optional[BaseOutputParser]) – The output parser to use. Defaults to None.
- Returns
A PromptTemplate.
- Return type
BasePromptTemplate
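A small sketch of building the prompt on its own; word_length is a made-up example tool, and the comment lists the input variables the default prompt typically exposes.
.. code-block:: python

    from langchain.agents.conversational_chat.base import ConversationalChatAgent
    from langchain_core.tools import tool

    @tool
    def word_length(word: str) -> int:
        """Return the number of characters in a word."""
        return len(word)

    prompt = ConversationalChatAgent.create_prompt(tools=[word_length])
    # Typically ["input", "chat_history", "agent_scratchpad"]
    print(prompt.input_variables)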
- classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, system_message: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables: Optional[List[str]] = None, **kwargs: Any) Agent [source]¶
Construct an agent from an LLM and tools.
- Parameters
llm (BaseLanguageModel) – The language model to use.
tools (Sequence[BaseTool]) – A list of tools to use.
callback_manager (Optional[BaseCallbackManager]) – The callback manager to use. Defaults to None.
output_parser (Optional[AgentOutputParser]) – The output parser to use. Defaults to None.
system_message (str) – The system message to use. Defaults to PREFIX.
human_message (str) – The human message to use. Defaults to SUFFIX.
input_variables (Optional[List[str]]) – The input variables to use. Defaults to None.
**kwargs (Any) – Any additional arguments.
- Returns
An agent.
- Return type
Agent
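A sketch of the legacy construction path, kept for reference since the class is deprecated. It assumes langchain-openai is installed and an OpenAI API key is set; word_length is a made-up example tool, and the memory key matches the chat_history variable the default prompt expects.
.. code-block:: python

    from langchain.agents import AgentExecutor
    from langchain.agents.conversational_chat.base import ConversationalChatAgent
    from langchain.memory import ConversationBufferMemory
    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI

    @tool
    def word_length(word: str) -> int:
        """Return the number of characters in a word."""
        return len(word)

    llm = ChatOpenAI(temperature=0)   # assumes OPENAI_API_KEY is set
    tools = [word_length]

    agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools)
    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
    executor = AgentExecutor.from_agent_and_tools(
        agent=agent, tools=tools, memory=memory, verbose=True
    )
    executor.invoke({"input": "How long is the word 'hello'?"})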
- get_allowed_tools() Optional[List[str]]¶
Get the allowed tools.
- Return type
Optional[List[str]]
- get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) Dict[str, Any] ¶
Create the full inputs for the LLMChain from the intermediate steps.
- Parameters
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken so far, along with the observations.
**kwargs (Any) – User inputs.
- Returns
Full inputs for the LLMChain.
- Return type
Dict[str, Any]
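A sketch of inspecting the dictionary the agent would feed to its underlying LLMChain; agent is assumed to be a ConversationalChatAgent built as in the example above.
.. code-block:: python

    full_inputs = agent.get_full_inputs(
        intermediate_steps=[],            # no tool calls made yet
        input="How long is the word 'hello'?",
        chat_history=[],
    )
    # Expect keys such as "input", "chat_history", and "agent_scratchpad".
    print(sorted(full_inputs))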
- plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) Union[AgentAction, AgentFinish] ¶
Given input, decide what to do.
- Parameters
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken so far, along with the observations.
callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – Callbacks to run.
**kwargs (Any) – User inputs.
- Returns
Action specifying what tool to use.
- Return type
Union[AgentAction, AgentFinish]
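A sketch of a single synchronous planning step (AgentExecutor normally calls this in a loop); agent is assumed to be built as in the from_llm_and_tools example above.
.. code-block:: python

    from langchain_core.agents import AgentFinish

    step = agent.plan(
        intermediate_steps=[],             # nothing executed yet
        input="What is 2 + 2?",
        chat_history=[],
    )
    if isinstance(step, AgentFinish):
        print(step.return_values["output"])
    else:                                  # an AgentAction: run the chosen tool next
        print(step.tool, step.tool_input)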
- return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) AgentFinish ¶
Return a response when the agent has been stopped due to reaching the maximum number of iterations.
- Parameters
early_stopping_method (str) – The method to use for early stopping.
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken so far, along with the observations.
**kwargs (Any) – User inputs.
- Returns
AgentFinish object.
- Return type
AgentFinish
- Raises
ValueError – If early_stopping_method is not in ['force', 'generate'].
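A sketch of producing a final answer after the executor hits its iteration limit; agent and intermediate_steps are assumed to exist from an earlier run.
.. code-block:: python

    # "force" returns a canned answer; "generate" asks the LLM for one final response.
    finish = agent.return_stopped_response(
        early_stopping_method="force",
        intermediate_steps=intermediate_steps,
        input="How long is the word 'hello'?",
        chat_history=[],
    )
    print(finish.return_values["output"])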
- save(file_path: Union[Path, str]) None ¶
Save the agent.
- Parameters
file_path (Union[Path, str]) – Path to the file to save the agent to.
- Return type
None
Example:
.. code-block:: python

    # If working with an agent executor:
    agent.agent.save(file_path="path/agent.yaml")
- tool_run_logging_kwargs() Dict ¶
Return logging kwargs for the tool run.
- Return type
Dict
- property llm_prefix: str¶
Prefix to append the llm call with.
- Returns
"Thought"
- Return type
str
- property observation_prefix: str¶
Prefix to append the observation with.
- Returns
"Observation"
- Return type
str
- property return_values: List[str]¶
Return values of the agent.
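A small sketch of inspecting these properties on a constructed agent (built, for example, via from_llm_and_tools above); the printed values are the ones documented above.
.. code-block:: python

    print(agent.llm_prefix)          # prefix for LLM calls, documented above
    print(agent.observation_prefix)  # prefix for tool observations
    print(agent.return_values)       # typically ["output"]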