langchain.agents.xml.base.create_xml_agent

langchain.agents.xml.base.create_xml_agent(llm: langchain_core.language_models.base.BaseLanguageModel, tools: typing.Sequence[langchain_core.tools.BaseTool], prompt: langchain_core.prompts.base.BasePromptTemplate, tools_renderer: typing.Callable[[typing.List[langchain_core.tools.BaseTool]], str] = <function render_text_description>, *, stop_sequence: typing.Union[bool, typing.List[str]] = True) → Runnable

Create an agent that uses XML to format its logic.

Parameters
  • llm (BaseLanguageModel) – LLM to use as the agent.

  • tools (Sequence[BaseTool]) – Tools this agent has access to.

  • prompt (BasePromptTemplate) – The prompt to use. Must have input keys: tools (contains descriptions for each tool) and agent_scratchpad (contains previous agent actions and tool outputs).

  • tools_renderer (Callable[[List[BaseTool]], str]) – Controls how the tools are converted into a string before being passed to the LLM. Default is render_text_description.

  • stop_sequence (Union[bool, List[str]]) –

    Boolean or list of str. If True, adds a stop token of "</tool_input>" to avoid hallucinations. If False, does not add a stop token. If a list of str, uses the provided list as the stop tokens.

    Default is True. You may set this to False if the LLM you are using does not support stop sequences. A short sketch customizing this parameter and tools_renderer follows this list.
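
A minimal sketch of customizing both optional parameters, assuming model, tools, and prompt are already defined as in the example below (the bulleted renderer here is purely illustrative):

from typing import List

from langchain.agents import create_xml_agent
from langchain_core.tools import BaseTool

def bulleted_renderer(tools: List[BaseTool]) -> str:
    # Render each tool as "- name: description" rather than the default
    # "name: description" lines produced by render_text_description.
    return "\n".join(f"- {t.name}: {t.description}" for t in tools)

agent = create_xml_agent(
    model,
    tools,
    prompt,
    tools_renderer=bulleted_renderer,
    # Stop generation at either closing tag instead of only "</tool_input>".
    stop_sequence=["</tool_input>", "</final_answer>"],
)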

Returns

A Runnable sequence representing an agent. It takes as input all the same input variables as the prompt passed in. It returns as output either an AgentAction or an AgentFinish.

Return type

Runnable

Example

from langchain import hub
from langchain_community.chat_models import ChatAnthropic
from langchain.agents import AgentExecutor, create_xml_agent

prompt = hub.pull("hwchase17/xml-agent-convo")
model = ChatAnthropic(model="claude-3-haiku-20240307")
tools = ...

agent = create_xml_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

agent_executor.invoke({"input": "hi"})

# Use with chat history
from langchain_core.messages import AIMessage, HumanMessage
agent_executor.invoke(
    {
        "input": "what's my name?",
        # Notice that chat_history is a string
        # since this prompt is aimed at LLMs, not chat models
        "chat_history": "Human: My name is Bob\nAI: Hello Bob!",
    }
)
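
In the example above, tools is left elided. Purely for illustration, one way such a list might be defined is with the @tool decorator (the search tool below is hypothetical):

from langchain_core.tools import tool

@tool
def search(query: str) -> str:
    """Look up a query on the web and return the top result."""
    # Hypothetical stub for illustration only.
    return "64 degrees"

tools = [search]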

Prompt

The prompt must have input keys:
  • tools: contains descriptions for each tool.

  • agent_scratchpad: contains previous agent actions and tool outputs as an XML string.

Here's an example:

from langchain_core.prompts import PromptTemplate

template = '''You are a helpful assistant. Help the user answer any questions.

You have access to the following tools:

{tools}

In order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>
For example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:

<tool>search</tool><tool_input>weather in SF</tool_input>
<observation>64 degrees</observation>

When you are done, respond with a final answer between <final_answer></final_answer>. For example:

<final_answer>The weather in SF is 64 degrees</final_answer>

Begin!

Previous Conversation:
{chat_history}

Question: {input}
{agent_scratchpad}'''
prompt = PromptTemplate.from_template(template)
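
With the default tools_renderer, the {tools} variable in this prompt is filled with one "name: description" line per tool. A minimal sketch, reusing the hypothetical search tool defined earlier:

from langchain_core.tools import render_text_description

print(render_text_description(tools))
# Prints roughly:
# search: Look up a query on the web and return the top result.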