
Call tools

Tools encapsulate a callable function together with its input schema. They can be passed to compatible chat models, allowing the model to decide whether to invoke a tool and determine the appropriate arguments.

You can define your own tools or use prebuilt tools.

Define tools

Define a basic tool with the @tool decorator:

from langchain_core.tools import tool

# highlight-next-line
@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

Define a basic tool with the tool function:

import { tool } from "@langchain/core/tools";
import { z } from "zod";

// highlight-next-line
const multiply = tool(
  (input) => {
    return input.a * input.b;
  },
  {
    name: "multiply",
    description: "Multiply two numbers.",
    schema: z.object({
      a: z.number().describe("First operand"),
      b: z.number().describe("Second operand"),
    }),
  }
);

Run tools

Tools conform to the Runnable interface, which means you can run them with the invoke method:

multiply.invoke({"a": 6, "b": 7})  # returns 42
await multiply.invoke({ a: 6, b: 7 }); // returns 42

If the tool is invoked with type="tool_call", it returns a ToolMessage:

tool_call = {
    "type": "tool_call",
    "id": "1",
    "args": {"a": 42, "b": 7}
}
multiply.invoke(tool_call) # returns a ToolMessage object

Output:

ToolMessage(content='294', name='multiply', tool_call_id='1')
const toolCall = {
  type: "tool_call",
  id: "1",
  name: "multiply",
  args: { a: 42, b: 7 },
};
await multiply.invoke(toolCall); // returns a ToolMessage object

Output:

ToolMessage {
  content: "294",
  name: "multiply",
  tool_call_id: "1"
}

Use in an agent

To create a tool-calling agent, use the prebuilt @[create_react_agent][]:

from langchain_core.tools import tool
# highlight-next-line
from langgraph.prebuilt import create_react_agent

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

# highlight-next-line
agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet",
    tools=[multiply]
)
agent.invoke({"messages": [{"role": "user", "content": "what's 42 x 7?"}]})

To create a tool-calling agent, use the prebuilt createReactAgent:

import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatAnthropic } from "@langchain/anthropic";
// highlight-next-line
import { createReactAgent } from "@langchain/langgraph/prebuilt";

const multiply = tool(
  (input) => {
    return input.a * input.b;
  },
  {
    name: "multiply",
    description: "Multiply two numbers.",
    schema: z.object({
      a: z.number().describe("First operand"),
      b: z.number().describe("Second operand"),
    }),
  }
);

// highlight-next-line
const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: [multiply],
});

await agent.invoke({
  messages: [{ role: "user", content: "what's 42 x 7?" }],
});

Dynamically select tools

Configure tool availability at runtime based on context:

from dataclasses import dataclass
from typing import Literal

from langchain.chat_models import init_chat_model
from langchain_core.tools import tool

from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState
from langgraph.runtime import Runtime


@dataclass
class CustomContext:
    tools: list[Literal["weather", "compass"]]


@tool
def weather() -> str:
    """Returns the current weather conditions."""
    return "It's nice and sunny."


@tool
def compass() -> str:
    """Returns the direction the user is facing."""
    return "North"

model = init_chat_model("anthropic:claude-sonnet-4-20250514")

# highlight-next-line
def configure_model(state: AgentState, runtime: Runtime[CustomContext]):
    """Configure the model with tools based on the runtime context."""
    selected_tools = [
        tool
        for tool in [weather, compass]
        if tool.name in runtime.context.tools
    ]
    return model.bind_tools(selected_tools)


agent = create_react_agent(
    # Dynamically configure the model with tools based on runtime context
    # highlight-next-line
    configure_model,
    # Initialize with all tools enabled
    # highlight-next-line
    tools=[weather, compass]
)

output = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Who are you and what tools do you have access to?",
            }
        ]
    },
    # highlight-next-line
    context=CustomContext(tools=["weather"]),  # Only enable the weather tool
)

print(output["messages"][-1].text())

Added in version 0.6.0

Use in a workflow

If you're writing a custom workflow, you will need to:

  1. Register the tools with the chat model
  2. Call the tool if the model decides to use it

Register tools with the model using model.bind_tools().

from langchain.chat_models import init_chat_model

model = init_chat_model(model="claude-3-5-haiku-latest")

# highlight-next-line
model_with_tools = model.bind_tools([multiply])

Register tools with the model using model.bindTools().

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });

// highlight-next-line
const modelWithTools = model.bindTools([multiply]);

The LLM automatically determines whether a tool call is needed and handles calling the tool with the appropriate arguments.
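As a framework-free sketch of this loop, the dispatch step amounts to looking up each tool call by name and invoking it with its arguments. The dicts below only mimic the shape of AIMessage.tool_calls entries, and the plain multiply function stands in for a LangChain tool:

```python
# Hypothetical stand-in for a LangChain tool.
def multiply(a: int, b: int) -> int:
    return a * b

# Registry mapping tool names to callables.
tools_by_name = {"multiply": multiply}

# What a model might emit after deciding to call the tool
# (same shape as AIMessage.tool_calls entries).
tool_calls = [{"name": "multiply", "args": {"a": 42, "b": 7}, "id": "1"}]

# Dispatch each call by name, keeping one result per call id.
results = {
    call["id"]: tools_by_name[call["name"]](**call["args"])
    for call in tool_calls
}
print(results)  # {'1': 294}
```

The prebuilt components below implement this same pattern, with error handling and message bookkeeping on top.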

Extended example: attach tools to a chat model
from langchain_core.tools import tool
from langchain.chat_models import init_chat_model

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

model = init_chat_model(model="claude-3-5-haiku-latest")
# highlight-next-line
model_with_tools = model.bind_tools([multiply])

response_message = model_with_tools.invoke("what's 42 x 7?")
tool_call = response_message.tool_calls[0]

multiply.invoke(tool_call)
ToolMessage(
    content='294',
    name='multiply',
    tool_call_id='toolu_0176DV4YKSD8FndkeuuLj36c'
)
import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const multiply = tool(
  (input) => {
    return input.a * input.b;
  },
  {
    name: "multiply",
    description: "Multiply two numbers.",
    schema: z.object({
      a: z.number().describe("First operand"),
      b: z.number().describe("Second operand"),
    }),
  }
);

const model = new ChatOpenAI({ model: "gpt-4o" });
// highlight-next-line
const modelWithTools = model.bindTools([multiply]);

const responseMessage = await modelWithTools.invoke("what's 42 x 7?");
const toolCall = responseMessage.tool_calls[0];

await multiply.invoke(toolCall);
ToolMessage {
  content: "294",
  name: "multiply",
  tool_call_id: "toolu_0176DV4YKSD8FndkeuuLj36c"
}

ToolNode

To execute tools in custom workflows, you can use the prebuilt @[ToolNode][] or implement your own custom node.

ToolNode is a specialized node for executing tools in a workflow. It provides the following features:

  • Supports both synchronous and asynchronous tools.
  • Executes multiple tools concurrently.
  • Handles errors during tool execution (handle_tool_errors=True, enabled by default). See handling tool errors for more details.

ToolNode operates on MessagesState:

  • Input: MessagesState, where the last message is an AIMessage with a tool_calls parameter.
  • Output: an updated MessagesState containing ToolMessage results from the executed tools.
# highlight-next-line
from langgraph.prebuilt import ToolNode

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

def get_coolest_cities():
    """Get a list of coolest cities"""
    return "nyc, sf"

# highlight-next-line
tool_node = ToolNode([get_weather, get_coolest_cities])
tool_node.invoke({"messages": [...]})

To execute tools in custom workflows, you can use the prebuilt ToolNode or implement your own custom node.

ToolNode is a specialized node for executing tools in a workflow. It provides the following features:

  • Supports both synchronous and asynchronous tools.
  • Executes multiple tools concurrently.
  • Handles errors during tool execution (handleToolErrors: true, enabled by default). See handling tool errors for more details.

ToolNode operates on MessagesZodState:

  • Input: MessagesZodState, where the last message is an AIMessage with a tool_calls parameter.

  • Output: an updated MessagesZodState containing ToolMessage results from the executed tools.
// highlight-next-line
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(
  (input) => {
    if (["sf", "san francisco"].includes(input.location.toLowerCase())) {
      return "It's 60 degrees and foggy.";
    } else {
      return "It's 90 degrees and sunny.";
    }
  },
  {
    name: "get_weather",
    description: "Call to get the current weather.",
    schema: z.object({
      location: z.string().describe("Location to get the weather for."),
    }),
  }
);

const getCoolestCities = tool(
  () => {
    return "nyc, sf";
  },
  {
    name: "get_coolest_cities",
    description: "Get a list of coolest cities",
    schema: z.object({
      noOp: z.string().optional().describe("No-op parameter."),
    }),
  }
);

// highlight-next-line
const toolNode = new ToolNode([getWeather, getCoolestCities]);
await toolNode.invoke({ messages: [...] });
Single tool call
from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode

# Define tools
@tool
def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

# highlight-next-line
tool_node = ToolNode([get_weather])

message_with_single_tool_call = AIMessage(
    content="",
    tool_calls=[
        {
            "name": "get_weather",
            "args": {"location": "sf"},
            "id": "tool_call_id",
            "type": "tool_call",
        }
    ],
)

tool_node.invoke({"messages": [message_with_single_tool_call]})
{'messages': [ToolMessage(content="It's 60 degrees and foggy.", name='get_weather', tool_call_id='tool_call_id')]}
import { AIMessage } from "@langchain/core/messages";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Define tools
const getWeather = tool(
  (input) => {
    if (["sf", "san francisco"].includes(input.location.toLowerCase())) {
      return "It's 60 degrees and foggy.";
    } else {
      return "It's 90 degrees and sunny.";
    }
  },
  {
    name: "get_weather",
    description: "Call to get the current weather.",
    schema: z.object({
      location: z.string().describe("Location to get the weather for."),
    }),
  }
);

// highlight-next-line
const toolNode = new ToolNode([getWeather]);

const messageWithSingleToolCall = new AIMessage({
  content: "",
  tool_calls: [
    {
      name: "get_weather",
      args: { location: "sf" },
      id: "tool_call_id",
      type: "tool_call",
    }
  ],
});

await toolNode.invoke({ messages: [messageWithSingleToolCall] });
{ messages: [ToolMessage { content: "It's 60 degrees and foggy.", name: "get_weather", tool_call_id: "tool_call_id" }] }
Multiple tool calls
from langchain_core.messages import AIMessage
from langgraph.prebuilt import ToolNode

# Define tools

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

def get_coolest_cities():
    """Get a list of coolest cities"""
    return "nyc, sf"

# highlight-next-line
tool_node = ToolNode([get_weather, get_coolest_cities])

message_with_multiple_tool_calls = AIMessage(
    content="",
    tool_calls=[
        {
            "name": "get_coolest_cities",
            "args": {},
            "id": "tool_call_id_1",
            "type": "tool_call",
        },
        {
            "name": "get_weather",
            "args": {"location": "sf"},
            "id": "tool_call_id_2",
            "type": "tool_call",
        },
    ],
)

# highlight-next-line
tool_node.invoke({"messages": [message_with_multiple_tool_calls]})  # (1)!
  1. ToolNode will run both tools in parallel
{
    'messages': [
        ToolMessage(content='nyc, sf', name='get_coolest_cities', tool_call_id='tool_call_id_1'),
        ToolMessage(content="It's 60 degrees and foggy.", name='get_weather', tool_call_id='tool_call_id_2')
    ]
}
import { AIMessage } from "@langchain/core/messages";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Define tools
const getWeather = tool(
  (input) => {
    if (["sf", "san francisco"].includes(input.location.toLowerCase())) {
      return "It's 60 degrees and foggy.";
    } else {
      return "It's 90 degrees and sunny.";
    }
  },
  {
    name: "get_weather",
    description: "Call to get the current weather.",
    schema: z.object({
      location: z.string().describe("Location to get the weather for."),
    }),
  }
);

const getCoolestCities = tool(
  () => {
    return "nyc, sf";
  },
  {
    name: "get_coolest_cities",
    description: "Get a list of coolest cities",
    schema: z.object({
      noOp: z.string().optional().describe("No-op parameter."),
    }),
  }
);

// highlight-next-line
const toolNode = new ToolNode([getWeather, getCoolestCities]);

const messageWithMultipleToolCalls = new AIMessage({
  content: "",
  tool_calls: [
    {
      name: "get_coolest_cities",
      args: {},
      id: "tool_call_id_1",
      type: "tool_call",
    },
    {
      name: "get_weather",
      args: { location: "sf" },
      id: "tool_call_id_2",
      type: "tool_call",
    },
  ],
});

// highlight-next-line
await toolNode.invoke({ messages: [messageWithMultipleToolCalls] }); // (1)!
  1. ToolNode will run both tools in parallel
{
  messages: [
    ToolMessage { content: "nyc, sf", name: "get_coolest_cities", tool_call_id: "tool_call_id_1" },
    ToolMessage { content: "It's 60 degrees and foggy.", name: "get_weather", tool_call_id: "tool_call_id_2" }
  ]
}
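The parallel execution that ToolNode performs above can be pictured, framework-free, as submitting each tool call to a thread pool and collecting one result per call. This is a sketch of the behavior, not ToolNode's actual implementation; the plain functions stand in for the tools:

```python
from concurrent.futures import ThreadPoolExecutor

def get_weather(location: str) -> str:
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."

def get_coolest_cities() -> str:
    """Get a list of coolest cities."""
    return "nyc, sf"

tools_by_name = {
    "get_weather": get_weather,
    "get_coolest_cities": get_coolest_cities,
}

# Same shape as the AIMessage.tool_calls entries above.
tool_calls = [
    {"name": "get_coolest_cities", "args": {}, "id": "tool_call_id_1"},
    {"name": "get_weather", "args": {"location": "sf"}, "id": "tool_call_id_2"},
]

# Run both calls concurrently; results keep the order of the tool calls.
with ThreadPoolExecutor() as pool:
    futures = [
        pool.submit(tools_by_name[c["name"]], **c["args"]) for c in tool_calls
    ]
    results = [f.result() for f in futures]

print(results)  # ['nyc, sf', "It's 60 degrees and foggy."]
```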
Use with a chat model
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import ToolNode

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

# highlight-next-line
tool_node = ToolNode([get_weather])

model = init_chat_model(model="claude-3-5-haiku-latest")
# highlight-next-line
model_with_tools = model.bind_tools([get_weather])  # (1)!


# highlight-next-line
response_message = model_with_tools.invoke("what's the weather in sf?")
tool_node.invoke({"messages": [response_message]})
  1. Use .bind_tools() to attach the tool schema to the chat model
{'messages': [ToolMessage(content="It's 60 degrees and foggy.", name='get_weather', tool_call_id='toolu_01Pnkgw5JeTRxXAU7tyHT4UW')]}
import { ChatOpenAI } from "@langchain/openai";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(
  (input) => {
    if (["sf", "san francisco"].includes(input.location.toLowerCase())) {
      return "It's 60 degrees and foggy.";
    } else {
      return "It's 90 degrees and sunny.";
    }
  },
  {
    name: "get_weather",
    description: "Call to get the current weather.",
    schema: z.object({
      location: z.string().describe("Location to get the weather for."),
    }),
  }
);

// highlight-next-line
const toolNode = new ToolNode([getWeather]);

const model = new ChatOpenAI({ model: "gpt-4o" });
// highlight-next-line
const modelWithTools = model.bindTools([getWeather]); // (1)!

// highlight-next-line
const responseMessage = await modelWithTools.invoke("what's the weather in sf?");
await toolNode.invoke({ messages: [responseMessage] });
  1. Use .bindTools() to attach the tool schema to the chat model
{ messages: [ToolMessage { content: "It's 60 degrees and foggy.", name: "get_weather", tool_call_id: "toolu_01Pnkgw5JeTRxXAU7tyHT4UW" }] }
Use in a tool-calling agent

Here is an example of building a tool-calling agent from scratch using ToolNode. You can also use LangGraph's prebuilt agent.

from langchain.chat_models import init_chat_model
from langgraph.prebuilt import ToolNode
from langgraph.graph import StateGraph, MessagesState, START, END

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

# highlight-next-line
tool_node = ToolNode([get_weather])

model = init_chat_model(model="claude-3-5-haiku-latest")
# highlight-next-line
model_with_tools = model.bind_tools([get_weather])

def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

def call_model(state: MessagesState):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}

builder = StateGraph(MessagesState)

# Define the two nodes we will cycle between
builder.add_node("call_model", call_model)
# highlight-next-line
builder.add_node("tools", tool_node)

builder.add_edge(START, "call_model")
builder.add_conditional_edges("call_model", should_continue, ["tools", END])
builder.add_edge("tools", "call_model")

graph = builder.compile()

graph.invoke({"messages": [{"role": "user", "content": "what's the weather in sf?"}]})
{
    'messages': [
        HumanMessage(content="what's the weather in sf?"),
        AIMessage(
            content=[{'text': "I'll help you check the weather in San Francisco right now.", 'type': 'text'}, {'id': 'toolu_01A4vwUEgBKxfFVc5H3v1CNs', 'input': {'location': 'San Francisco'}, 'name': 'get_weather', 'type': 'tool_use'}],
            tool_calls=[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'toolu_01A4vwUEgBKxfFVc5H3v1CNs', 'type': 'tool_call'}]
        ),
        ToolMessage(content="It's 60 degrees and foggy."),
        AIMessage(content="The current weather in San Francisco is 60 degrees and foggy. Typical San Francisco weather with its famous marine layer!")
    ]
}
import { ChatOpenAI } from "@langchain/openai";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { StateGraph, MessagesZodState, START, END } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { isAIMessage } from "@langchain/core/messages";

const getWeather = tool(
  (input) => {
    if (["sf", "san francisco"].includes(input.location.toLowerCase())) {
      return "It's 60 degrees and foggy.";
    } else {
      return "It's 90 degrees and sunny.";
    }
  },
  {
    name: "get_weather",
    description: "Call to get the current weather.",
    schema: z.object({
      location: z.string().describe("Location to get the weather for."),
    }),
  }
);

// highlight-next-line
const toolNode = new ToolNode([getWeather]);

const model = new ChatOpenAI({ model: "gpt-4o" });
// highlight-next-line
const modelWithTools = model.bindTools([getWeather]);

const shouldContinue = (state: z.infer<typeof MessagesZodState>) => {
  const messages = state.messages;
  const lastMessage = messages.at(-1);
  if (lastMessage && isAIMessage(lastMessage) && lastMessage.tool_calls?.length) {
    return "tools";
  }
  return END;
};

const callModel = async (state: z.infer<typeof MessagesZodState>) => {
  const messages = state.messages;
  const response = await modelWithTools.invoke(messages);
  return { messages: [response] };
};

const builder = new StateGraph(MessagesZodState)
  // Define the two nodes we will cycle between
  .addNode("agent", callModel)
  // highlight-next-line
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", shouldContinue, ["tools", END])
  .addEdge("tools", "agent");

const graph = builder.compile();

await graph.invoke({
  messages: [{ role: "user", content: "what's the weather in sf?" }]
});
{
  messages: [
    HumanMessage { content: "what's the weather in sf?" },
    AIMessage {
      content: [{ text: "I'll help you check the weather in San Francisco right now.", type: "text" }, { id: "toolu_01A4vwUEgBKxfFVc5H3v1CNs", input: { location: "San Francisco" }, name: "get_weather", type: "tool_use" }],
      tool_calls: [{ name: "get_weather", args: { location: "San Francisco" }, id: "toolu_01A4vwUEgBKxfFVc5H3v1CNs", type: "tool_call" }]
    },
    ToolMessage { content: "It's 60 degrees and foggy." },
    AIMessage { content: "The current weather in San Francisco is 60 degrees and foggy. Typical San Francisco weather with its famous marine layer!" }
  ]
}

Tool customization

For more control over tool behavior, use the @tool decorator.

Parameter descriptions

Generate descriptions automatically from the docstring:

# highlight-next-line
from langchain_core.tools import tool

# highlight-next-line
@tool("multiply_tool", parse_docstring=True)
def multiply(a: int, b: int) -> int:
    """Multiply two numbers.

    Args:
        a: First operand
        b: Second operand
    """
    return a * b

Generate descriptions from the schema:

import { tool } from "@langchain/core/tools";
import { z } from "zod";

// highlight-next-line
const multiply = tool(
  (input) => {
    return input.a * input.b;
  },
  {
    name: "multiply",
    description: "Multiply two numbers.",
    schema: z.object({
      a: z.number().describe("First operand"),
      b: z.number().describe("Second operand"),
    }),
  }
);

Explicit input schema

Define the schema with args_schema:

from pydantic import BaseModel, Field
from langchain_core.tools import tool

class MultiplyInputSchema(BaseModel):
    """Multiply two numbers"""
    a: int = Field(description="First operand")
    b: int = Field(description="Second operand")

# highlight-next-line
@tool("multiply_tool", args_schema=MultiplyInputSchema)
def multiply(a: int, b: int) -> int:
    return a * b

Tool name

Override the default tool name with the first argument or the name property:

from langchain_core.tools import tool

# highlight-next-line
@tool("multiply_tool")
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// highlight-next-line
const multiply = tool(
  (input) => {
    return input.a * input.b;
  },
  {
    name: "multiply_tool", // custom name
    description: "Multiply two numbers.",
    schema: z.object({
      a: z.number().describe("First operand"),
      b: z.number().describe("Second operand"),
    }),
  }
);

Context management

Tools in LangGraph sometimes need context data, such as runtime-only arguments (e.g. user IDs or session details) that should not be controlled by the model. LangGraph offers three ways to manage this kind of context:

Type              | Use case                               | Mutable | Lifetime
Configuration     | Static, immutable runtime data         | No      | Single invocation
Short-term memory | Dynamic data that changes during a run | Yes     | Single invocation
Long-term memory  | Persistent, cross-session data         | Yes     | Across multiple sessions

Configuration

Use configuration when you have immutable runtime data that a tool requires, such as a user identifier. Pass these arguments at invocation time via RunnableConfig and access them inside the tool:

from langchain_core.tools import tool
from langchain_core.runnables import RunnableConfig

@tool
# highlight-next-line
def get_user_info(config: RunnableConfig) -> str:
    """Retrieve user information based on user ID."""
    user_id = config["configurable"].get("user_id")
    return "User is John Smith" if user_id == "user_123" else "Unknown user"

# Example invocation with an agent
agent.invoke(
    {"messages": [{"role": "user", "content": "look up user info"}]},
    # highlight-next-line
    config={"configurable": {"user_id": "user_123"}}
)

Use configuration when you have immutable runtime data that a tool requires, such as a user identifier. Pass these arguments at invocation time via LangGraphRunnableConfig and access them inside the tool:

import { tool } from "@langchain/core/tools";
import { z } from "zod";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const getUserInfo = tool(
  // highlight-next-line
  async (_, config: LangGraphRunnableConfig) => {
    const userId = config?.configurable?.user_id;
    return userId === "user_123" ? "User is John Smith" : "Unknown user";
  },
  {
    name: "get_user_info",
    description: "Retrieve user information based on user ID.",
    schema: z.object({}),
  }
);

// Example invocation with an agent
await agent.invoke(
  { messages: [{ role: "user", content: "look up user info" }] },
  // highlight-next-line
  { configurable: { user_id: "user_123" } }
);
Extended example: access configuration in a tool
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

def get_user_info(
    # highlight-next-line
    config: RunnableConfig,
) -> str:
    """Look up user info."""
    # highlight-next-line
    user_id = config["configurable"].get("user_id")
    return "User is John Smith" if user_id == "user_123" else "Unknown user"

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_user_info],
)

agent.invoke(
    {"messages": [{"role": "user", "content": "look up user information"}]},
    # highlight-next-line
    config={"configurable": {"user_id": "user_123"}}
)
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";
import { ChatAnthropic } from "@langchain/anthropic";

const getUserInfo = tool(
  // highlight-next-line
  async (_, config: LangGraphRunnableConfig) => {
    // highlight-next-line
    const userId = config?.configurable?.user_id;
    return userId === "user_123" ? "User is John Smith" : "Unknown user";
  },
  {
    name: "get_user_info",
    description: "Look up user info.",
    schema: z.object({}),
  }
);

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: [getUserInfo],
});

await agent.invoke(
  { messages: [{ role: "user", content: "look up user information" }] },
  // highlight-next-line
  { configurable: { user_id: "user_123" } }
);

Short-term memory

Short-term memory maintains dynamic state that changes during a single execution.

To access (read) the graph state inside a tool, you can use the special parameter annotation @[InjectedState][]:

from typing import Annotated, NotRequired
from langchain_core.tools import tool
from langgraph.prebuilt import InjectedState, create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState

class CustomState(AgentState):
    # A user_name field for short-term state
    user_name: NotRequired[str]

@tool
def get_user_name(
    # highlight-next-line
    state: Annotated[CustomState, InjectedState]
) -> str:
    """Retrieve the current user-name from state."""
    # Return the stored name, or a default if it has not been set
    return state.get("user_name", "Unknown user")

# Example agent setup
agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_user_name],
    state_schema=CustomState,
)

# Invocation: read the name from state (initially empty)
agent.invoke({"messages": "what's my name?"})

To access (read) the graph state inside a tool, you can use the @[getContextVariable][] function:

import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { getContextVariable } from "@langchain/core/context";
import { MessagesZodState } from "@langchain/langgraph";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const getUserName = tool(
  // highlight-next-line
  async (_, config: LangGraphRunnableConfig) => {
    // highlight-next-line
    const currentState = getContextVariable("currentState") as z.infer<
      typeof MessagesZodState
    > & { userName?: string };
    return currentState?.userName || "Unknown user";
  },
  {
    name: "get_user_name",
    description: "Retrieve the current user name from state.",
    schema: z.object({}),
  }
);

To update user_name and append a confirmation message, use a tool that returns a Command:

from typing import Annotated
from langgraph.types import Command
from langchain_core.messages import ToolMessage
from langchain_core.tools import tool, InjectedToolCallId

@tool
def update_user_name(
    new_name: str,
    tool_call_id: Annotated[str, InjectedToolCallId]
) -> Command:
    """Update user-name in short-term memory."""
    # highlight-next-line
    return Command(update={
        # highlight-next-line
        "user_name": new_name,
        # highlight-next-line
        "messages": [
            # highlight-next-line
            ToolMessage(f"Updated user name to {new_name}", tool_call_id=tool_call_id)
            # highlight-next-line
        ]
        # highlight-next-line
    })

To update short-term memory, you can use a tool that returns a Command to update the state:

import { Command } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const updateUserName = tool(
  async (input) => {
    // highlight-next-line
    return new Command({
      // highlight-next-line
      update: {
        // highlight-next-line
        userName: input.newName,
        // highlight-next-line
        messages: [
          // highlight-next-line
          {
            // highlight-next-line
            role: "assistant",
            // highlight-next-line
            content: `Updated user name to ${input.newName}`,
            // highlight-next-line
          },
          // highlight-next-line
        ],
        // highlight-next-line
      },
      // highlight-next-line
    });
  },
  {
    name: "update_user_name",
    description: "Update user name in short-term memory.",
    schema: z.object({
      newName: z.string().describe("The new user name"),
    }),
  }
);

Important

If you want to use tools that return a Command and update the graph state, you can either use the prebuilt @[create_react_agent][] / @[ToolNode][] components, or implement your own tool-executing node that collects the Command objects returned by the tools and returns a list of them, e.g.:

def call_tools(state):
    ...
    commands = [tools_by_name[tool_call["name"]].invoke(tool_call) for tool_call in tool_calls]
    return commands

If you want to use tools that return a Command and update the graph state, you can either use the prebuilt @[createReactAgent][create_react_agent] / @[ToolNode] components, or implement your own tool-executing node that collects the Command objects returned by the tools and returns a list of them, e.g.:

const callTools = async (state: State) => {
  // ...
  const commands = await Promise.all(
    toolCalls.map(toolCall => toolsByName[toolCall.name].invoke(toolCall))
  );
  return commands;
};

Long-term memory

Use long-term memory to store user-specific or application-specific data across conversations. This is useful for applications like chatbots, where you may want to remember user preferences or other information.

To use long-term memory, you will need to:

  1. Configure a store to persist data across invocations.
  2. Access the store from within your tools.

To access information in the store:

from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.graph import StateGraph
# highlight-next-line
from langgraph.config import get_store

@tool
def get_user_info(config: RunnableConfig) -> str:
    """Look up user info."""
    # Same as the store provided to `builder.compile(store=store)`
    # or `create_react_agent`
    # highlight-next-line
    store = get_store()
    user_id = config["configurable"].get("user_id")
    # highlight-next-line
    user_info = store.get(("users",), user_id)
    return str(user_info.value) if user_info else "Unknown user"

builder = StateGraph(...)
...
graph = builder.compile(store=store)

To access information in the store:

import { tool } from "@langchain/core/tools";
import { z } from "zod";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const getUserInfo = tool(
  async (_, config: LangGraphRunnableConfig) => {
    // Same as the store provided to `builder.compile({ store })`
    // or `createReactAgent`
    // highlight-next-line
    const store = config.store;
    if (!store) throw new Error("Store not provided");

    const userId = config?.configurable?.user_id;
    // highlight-next-line
    const userInfo = await store.get(["users"], userId);
    return userInfo?.value ? JSON.stringify(userInfo.value) : "Unknown user";
  },
  {
    name: "get_user_info",
    description: "Look up user info.",
    schema: z.object({}),
  }
);
Access long-term memory
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.config import get_store
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore

# highlight-next-line
store = InMemoryStore() # (1)!

# highlight-next-line
store.put(  # (2)!
    ("users",),  # (3)!
    "user_123",  # (4)!
    {
        "name": "John Smith",
        "language": "English",
    } # (5)!
)

@tool
def get_user_info(config: RunnableConfig) -> str:
    """Look up user info."""
    # Same as the store provided to `create_react_agent`
    # highlight-next-line
    store = get_store() # (6)!
    user_id = config["configurable"].get("user_id")
    # highlight-next-line
    user_info = store.get(("users",), user_id) # (7)!
    return str(user_info.value) if user_info else "Unknown user"

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_user_info],
    # highlight-next-line
    store=store # (8)!
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "look up user information"}]},
    # highlight-next-line
    config={"configurable": {"user_id": "user_123"}}
)
  1. InMemoryStore is a store that keeps data in memory. In production, you would typically use a database or another persistent store. See the store documentation for more options. If you deploy with LangGraph Platform, the platform provides a production-ready store for you.
  2. In this example, we write some sample data to the store using the put method. See the @[BaseStore.put] API reference for more details.
  3. The first argument is the namespace, used to group related data together. Here, we use the users namespace to group user data.
  4. The key within the namespace. This example uses the user ID as the key.
  5. The data we want to store for the given user.
  6. The get_store function is used to access the store. You can call it from anywhere in your code, including tools and prompts. It returns the store that was passed in when the agent was created.
  7. The get method is used to retrieve data from the store. The first argument is the namespace and the second is the key. It returns a StoreValue object containing the value and metadata about the value.
  8. The store is passed to the agent, which allows the agent to access the store when running tools. You can also use the get_store function to access the store from anywhere in your code.
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { InMemoryStore } from "@langchain/langgraph";
import { ChatAnthropic } from "@langchain/anthropic";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

// highlight-next-line
const store = new InMemoryStore(); // (1)!

// highlight-next-line
await store.put(  // (2)!
  ["users"],  // (3)!
  "user_123",  // (4)!
  {
    name: "John Smith",
    language: "English",
  } // (5)!
);

const getUserInfo = tool(
  async (_, config: LangGraphRunnableConfig) => {
    // 与提供给 `createReactAgent` 的相同
    // highlight-next-line
    const store = config.store; // (6)!
    if (!store) throw new Error("Store not provided");

    const userId = config?.configurable?.user_id;
    // highlight-next-line
    const userInfo = await store.get(["users"], userId); // (7)!
    return userInfo?.value ? JSON.stringify(userInfo.value) : "Unknown user";
  },
  {
    name: "get_user_info",
    description: "Look up user info.",
    schema: z.object({}),
  }
);

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: [getUserInfo],
  // highlight-next-line
  store: store // (8)!
});

// Run the agent
await agent.invoke(
  { messages: [{ role: "user", content: "look up user information" }] },
  // highlight-next-line
  { configurable: { user_id: "user_123" } }
);
  1. InMemoryStore is a store that keeps data in memory. In production you would typically use a database or another persistent store. See the store documentation for more options. If you deploy with LangGraph Platform, the platform provides a production-ready store for you.
  2. In this example we write some sample data to the store using the put method. See the BaseStore.put API reference for more details.
  3. The first argument is the namespace, which is used to group related data together. Here we use the users namespace to group user data.
  4. The key within the namespace. This example uses the user ID as the key.
  5. The data we want to store for the given user.
  6. The store is accessible from the config object passed to the tool. This lets the tool access the store at runtime.
  7. The get method retrieves data from the store. The first argument is the namespace and the second is the key. It returns a StoreValue object, which contains the value along with metadata about it.
  8. The store is passed to the agent. This gives the agent access to the store when running tools.

Update information in the store:

from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.graph import StateGraph
# highlight-next-line
from langgraph.config import get_store

@tool
def save_user_info(user_info: str, config: RunnableConfig) -> str:
    """Save user info."""
    # Same store as the one provided to `builder.compile(store=store)`
    # or `create_react_agent`
    # highlight-next-line
    store = get_store()
    user_id = config["configurable"].get("user_id")
    # highlight-next-line
    store.put(("users",), user_id, user_info)
    return "Successfully saved user info."

builder = StateGraph(...)
...
graph = builder.compile(store=store)

Update information in the store:

import { tool } from "@langchain/core/tools";
import { z } from "zod";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const saveUserInfo = tool(
  async (input, config: LangGraphRunnableConfig) => {
    // Same store as the one provided to `builder.compile({ store })`
    // or `createReactAgent`
    // highlight-next-line
    const store = config.store;
    if (!store) throw new Error("Store not provided");

    const userId = config?.configurable?.user_id;
    // highlight-next-line
    await store.put(["users"], userId, input.userInfo);
    return "Successfully saved user info.";
  },
  {
    name: "save_user_info",
    description: "Save user info.",
    schema: z.object({
      userInfo: z.string().describe("User information to save"),
    }),
  }
);
Updating long-term memory
from typing_extensions import TypedDict

from langchain_core.tools import tool
from langgraph.config import get_store
from langchain_core.runnables import RunnableConfig
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore

store = InMemoryStore() # (1)!

class UserInfo(TypedDict): # (2)!
    name: str

@tool
def save_user_info(user_info: UserInfo, config: RunnableConfig) -> str: # (3)!
    """Save user info."""
    # Same store as the one provided to `create_react_agent`
    # highlight-next-line
    store = get_store() # (4)!
    user_id = config["configurable"].get("user_id")
    # highlight-next-line
    store.put(("users",), user_id, user_info) # (5)!
    return "Successfully saved user info."

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[save_user_info],
    # highlight-next-line
    store=store
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "My name is John Smith"}]},
    # highlight-next-line
    config={"configurable": {"user_id": "user_123"}} # (6)!
)

# You can access the store directly to read the value
store.get(("users",), "user_123").value
  1. InMemoryStore is a store that keeps data in memory. In production you would typically use a database or another persistent store. See the store documentation for more options. If you deploy with LangGraph Platform, the platform provides a production-ready store for you.
  2. The UserInfo class is a TypedDict that defines the structure of the user information. The LLM uses it to format its response according to the schema.
  3. The save_user_info function is a tool that lets the agent update user information. This can be useful for chat applications where users want to update their profile.
  4. The get_store function is used to access the store. You can call it from anywhere in your code, including tools and prompts. It returns the store that was passed to the agent when it was created.
  5. The put method stores data in the store. The first argument is the namespace and the second is the key. This stores the user information in the store.
  6. The user_id is passed in the config. It identifies the user whose information is being updated.
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { InMemoryStore } from "@langchain/langgraph";
import { ChatAnthropic } from "@langchain/anthropic";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const store = new InMemoryStore(); // (1)!

const UserInfoSchema = z.object({ // (2)!
  name: z.string(),
});

const saveUserInfo = tool(
  async (input, config: LangGraphRunnableConfig) => { // (3)!
    // Same store as the one provided to `createReactAgent`
    // highlight-next-line
    const store = config.store; // (4)!
    if (!store) throw new Error("Store not provided");

    const userId = config?.configurable?.user_id;
    // highlight-next-line
    await store.put(["users"], userId, input); // (5)!
    return "Successfully saved user info.";
  },
  {
    name: "save_user_info",
    description: "Save user info.",
    schema: UserInfoSchema,
  }
);

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: [saveUserInfo],
  // highlight-next-line
  store: store
});

// Run the agent
await agent.invoke(
  { messages: [{ role: "user", content: "My name is John Smith" }] },
  // highlight-next-line
  { configurable: { user_id: "user_123" } } // (6)!
);

// You can access the store directly to read the value
const userInfo = await store.get(["users"], "user_123");
console.log(userInfo?.value);
  1. InMemoryStore is a store that keeps data in memory. In production you would typically use a database or another persistent store. See the store documentation for more options. If you deploy with LangGraph Platform, the platform provides a production-ready store for you.
  2. UserInfoSchema is a Zod schema that defines the structure of the user information. The LLM uses it to format its response according to the schema.
  3. The saveUserInfo function is a tool that lets the agent update user information. This can be useful for chat applications where users want to update their profile.
  4. The store is accessible from the config object passed to the tool. This lets the tool access the store at runtime.
  5. The put method stores data in the store. The first argument is the namespace and the second is the key. This stores the user information in the store.
  6. The user_id is passed in the config. It identifies the user whose information is being updated.

Advanced tool features

Immediate return

Use return_direct=True to return a tool's result immediately, without executing additional logic.

This is useful for tools that should not trigger further processing or tool calls, letting you return the result directly to the user.

# highlight-next-line
@tool(return_direct=True)
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

Use returnDirect: true to return a tool's result immediately, without executing additional logic.

This is useful for tools that should not trigger further processing or tool calls, letting you return the result directly to the user.

import { tool } from "@langchain/core/tools";
import { z } from "zod";

// highlight-next-line
const add = tool(
  (input) => {
    return input.a + input.b;
  },
  {
    name: "add",
    description: "Add two numbers",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
    // highlight-next-line
    returnDirect: true,
  }
);
Extended example: using return_direct in a prebuilt agent
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

# highlight-next-line
@tool(return_direct=True)
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[add]
)

agent.invoke(
    {"messages": [{"role": "user", "content": "what's 3 + 5?"}]}
)
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatAnthropic } from "@langchain/anthropic";

// highlight-next-line
const add = tool(
  (input) => {
    return input.a + input.b;
  },
  {
    name: "add",
    description: "Add two numbers",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
    // highlight-next-line
    returnDirect: true,
  }
);

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: [add]
});

await agent.invoke({
  messages: [{ role: "user", content: "what's 3 + 5?" }]
});

When not using prebuilt components

If you are building a custom workflow and do not rely on create_react_agent or ToolNode, you also need to implement the control flow that handles return_direct=True.

If you are building a custom workflow and do not rely on createReactAgent or ToolNode, you also need to implement the control flow that handles returnDirect: true.
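That control flow can be sketched in plain Python. Everything below (`route_after_tools`, `FakeTool`, the message shape) is an illustrative assumption rather than LangGraph API: a conditional edge after the tools node would end the run when the executed tool was declared return-direct, and otherwise hand control back to the model.

```python
# Hypothetical routing helper for a custom workflow without ToolNode.
# `tools_by_name` and the message shape are illustrative assumptions.

class FakeTool:
    """Minimal stand-in for a tool definition."""
    def __init__(self, name: str, return_direct: bool = False):
        self.name = name
        self.return_direct = return_direct

def route_after_tools(state: dict, tools_by_name: dict) -> str:
    """Return "end" if the last executed tool was return-direct,
    otherwise route back to the model node."""
    last = state["messages"][-1]  # assumed to be the latest tool result
    tool = tools_by_name.get(last["name"])
    if tool is not None and tool.return_direct:
        return "end"    # surface the tool output to the user as-is
    return "model"      # continue the tool-calling loop

tools = {
    "add": FakeTool("add", return_direct=True),
    "multiply": FakeTool("multiply"),
}

route_after_tools({"messages": [{"name": "add", "content": "8"}]}, tools)
# → "end"
```

In a real graph this function would be wired in as the conditional edge that follows your tool-execution node.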

Forcing tool use

If you need to force the use of a specific tool, configure it at the model level using the tool_choice parameter of the bind_tools method.

Force the use of a specific tool via tool_choice:

@tool(return_direct=True)
def greet(user_name: str) -> str:
    """Greet user."""
    return f"Hello {user_name}!"

tools = [greet]

configured_model = model.bind_tools(
    tools,
    # Force the use of the 'greet' tool
    # highlight-next-line
    tool_choice={"type": "tool", "name": "greet"}
)
const greet = tool(
  (input) => {
    return `Hello ${input.userName}!`;
  },
  {
    name: "greet",
    description: "Greet user.",
    schema: z.object({
      userName: z.string(),
    }),
    returnDirect: true,
  }
);

const tools = [greet];

const configuredModel = model.bindTools(
  tools,
  // Force the use of the 'greet' tool
  // highlight-next-line
  { tool_choice: { type: "tool", name: "greet" } }
);
Extended example: forcing tool use in an agent

To force the agent to use a specific tool, set the tool_choice option in model.bind_tools():

from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

model = init_chat_model("anthropic:claude-3-7-sonnet-latest")

# highlight-next-line
@tool(return_direct=True)
def greet(user_name: str) -> str:
    """Greet user."""
    return f"Hello {user_name}!"

tools = [greet]

agent = create_react_agent(
    # highlight-next-line
    model=model.bind_tools(tools, tool_choice={"type": "tool", "name": "greet"}),
    tools=tools
)

agent.invoke(
    {"messages": [{"role": "user", "content": "Hi, I am Bob"}]}
)

To force the agent to use a specific tool, set the tool_choice option in model.bindTools():

import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

// highlight-next-line
const greet = tool(
  (input) => {
    return `Hello ${input.userName}!`;
  },
  {
    name: "greet",
    description: "Greet user.",
    schema: z.object({
      userName: z.string(),
    }),
    // highlight-next-line
    returnDirect: true,
  }
);

const tools = [greet];
const model = new ChatOpenAI({ model: "gpt-4o" });

const agent = createReactAgent({
  // highlight-next-line
  llm: model.bindTools(tools, { tool_choice: { type: "tool", name: "greet" } }),
  tools: tools
});

await agent.invoke({
  messages: [{ role: "user", content: "Hi, I am Bob" }]
});

Avoiding infinite loops

Forcing tool use without a stopping condition can create an infinite loop. Use one of the following safeguards:

Tool choice configuration

The tool_choice parameter configures which tool the model should use when it decides to call a tool. This is useful when you want to ensure that a specific tool is always called for a particular task, or when you want to override the model's default behavior of choosing a tool based on its internal logic.

Note that not all models support this feature, and the exact configuration may vary depending on the model you use.
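Independently of tool choice, a run-level step cap is one way to guarantee termination. As a hedged sketch (assuming the `agent` built as in the examples above), LangGraph's recursion_limit run config bounds the number of steps a run may take:

```python
# Bound the run so a forced tool cannot loop forever.
# `agent` is the tool-calling agent built earlier; 10 is an arbitrary cap.
agent.invoke(
    {"messages": [{"role": "user", "content": "Hi, I am Bob"}]},
    config={"recursion_limit": 10},  # raises GraphRecursionError when exceeded
)
```

Pairing this with return_direct=True on the forced tool, as in the greet example above, stops the loop after a single tool call.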

Disabling parallel calls

For supported providers, you can disable parallel tool calling by setting parallel_tool_calls=False in the model.bind_tools() method:

model.bind_tools(
    tools,
    # highlight-next-line
    parallel_tool_calls=False
)

For supported providers, you can disable parallel tool calling by setting parallel_tool_calls: false in the model.bindTools() method:

model.bindTools(
  tools,
  // highlight-next-line
  { parallel_tool_calls: false }
);
Extended example: disabling parallel tool calls in a prebuilt agent
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

model = init_chat_model("anthropic:claude-3-5-sonnet-latest", temperature=0)
tools = [add, multiply]
agent = create_react_agent(
    # Disable parallel tool calls
    # highlight-next-line
    model=model.bind_tools(tools, parallel_tool_calls=False),
    tools=tools
)

agent.invoke(
    {"messages": [{"role": "user", "content": "what's 3 + 5 and 4 * 7?"}]}
)
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

const add = tool(
  (input) => {
    return input.a + input.b;
  },
  {
    name: "add",
    description: "Add two numbers",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
  }
);

const multiply = tool(
  (input) => {
    return input.a * input.b;
  },
  {
    name: "multiply",
    description: "Multiply two numbers.",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
  }
);

const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });
const tools = [add, multiply];

const agent = createReactAgent({
  // Disable parallel tool calls
  // highlight-next-line
  llm: model.bindTools(tools, { parallel_tool_calls: false }),
  tools: tools
});

await agent.invoke({
  messages: [{ role: "user", content: "what's 3 + 5 and 4 * 7?" }]
});

Handling errors

LangGraph provides built-in error handling for tool execution through the prebuilt @[ToolNode][] component, which can be used on its own or within prebuilt agents.

By default, ToolNode catches exceptions raised during tool execution and returns them as ToolMessage objects with an error status.

from langchain_core.messages import AIMessage
from langgraph.prebuilt import ToolNode

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    if a == 42:
        raise ValueError("The ultimate error")
    return a * b

# Default error handling (enabled by default)
tool_node = ToolNode([multiply])

message = AIMessage(
    content="",
    tool_calls=[{
        "name": "multiply",
        "args": {"a": 42, "b": 7},
        "id": "tool_call_id",
        "type": "tool_call"
    }]
)

result = tool_node.invoke({"messages": [message]})

Output:

{'messages': [
    ToolMessage(
        content="Error: ValueError('The ultimate error')\n Please fix your mistakes.",
        name='multiply',
        tool_call_id='tool_call_id',
        status='error'
    )
]}

LangGraph provides built-in error handling for tool execution through the prebuilt ToolNode component, which can be used on its own or within prebuilt agents.

By default, ToolNode catches exceptions raised during tool execution and returns them as ToolMessage objects with an error status.

import { AIMessage } from "@langchain/core/messages";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const multiply = tool(
  (input) => {
    if (input.a === 42) {
      throw new Error("The ultimate error");
    }
    return input.a * input.b;
  },
  {
    name: "multiply",
    description: "Multiply two numbers",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
  }
);

// Default error handling (enabled by default)
const toolNode = new ToolNode([multiply]);

const message = new AIMessage({
  content: "",
  tool_calls: [
    {
      name: "multiply",
      args: { a: 42, b: 7 },
      id: "tool_call_id",
      type: "tool_call",
    },
  ],
});

const result = await toolNode.invoke({ messages: [message] });

Output:

{ messages: [
  ToolMessage {
    content: "Error: The ultimate error\n Please fix your mistakes.",
    name: "multiply",
    tool_call_id: "tool_call_id",
    status: "error"
  }
]}

Disabling error handling

To propagate exceptions directly, disable error handling:

tool_node = ToolNode([multiply], handle_tool_errors=False)
const toolNode = new ToolNode([multiply], { handleToolErrors: false });

With error handling disabled, exceptions raised by tools propagate upward and must be managed explicitly.
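What explicit management can look like, as a minimal self-contained sketch: `flaky_tool` and the fallback policy below are illustrative stand-ins, not LangGraph API.

```python
def flaky_tool(a: int) -> int:
    """Stand-in for a tool whose invocation may raise."""
    if a == 42:
        raise ValueError("The ultimate error")
    return a * 2

def call_with_fallback(a: int) -> str:
    # With error handling disabled, the exception reaches this frame;
    # choose a policy: retry, re-prompt the model, or surface a message.
    try:
        return str(flaky_tool(a))
    except ValueError as exc:
        return f"Tool failed: {exc}"

call_with_fallback(3)   # → "6"
call_with_fallback(42)  # → "Tool failed: The ultimate error"
```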

Custom error messages

Provide a custom error message by setting the error handling parameter to a string:

tool_node = ToolNode(
    [multiply],
    handle_tool_errors="Can't use 42 as the first operand, please switch operands!"
)

Example output:

{'messages': [
    ToolMessage(
        content="Can't use 42 as the first operand, please switch operands!",
        name='multiply',
        tool_call_id='tool_call_id',
        status='error'
    )
]}
const toolNode = new ToolNode([multiply], {
  handleToolErrors:
    "Can't use 42 as the first operand, please switch operands!",
});

Example output:

{ messages: [
  ToolMessage {
    content: "Can't use 42 as the first operand, please switch operands!",
    name: "multiply",
    tool_call_id: "tool_call_id",
    status: "error"
  }
]}

Error handling in agents

Error handling in the prebuilt agent (create_react_agent) leverages ToolNode:

from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[multiply]
)

# Default error handling
agent.invoke({"messages": [{"role": "user", "content": "what's 42 x 7?"}]})

To disable or customize error handling in the prebuilt agent, pass a configured ToolNode explicitly:

custom_tool_node = ToolNode(
    [multiply],
    handle_tool_errors="Cannot use 42 as a first operand!"
)

agent_custom = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=custom_tool_node
)

agent_custom.invoke({"messages": [{"role": "user", "content": "what's 42 x 7?"}]})

Error handling in the prebuilt agent (createReactAgent) leverages ToolNode:

import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatAnthropic } from "@langchain/anthropic";

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: [multiply],
});

// Default error handling
await agent.invoke({
  messages: [{ role: "user", content: "what's 42 x 7?" }],
});

To disable or customize error handling in the prebuilt agent, pass a configured ToolNode explicitly:

const customToolNode = new ToolNode([multiply], {
  handleToolErrors: "Cannot use 42 as a first operand!",
});

const agentCustom = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: customToolNode,
});

await agentCustom.invoke({
  messages: [{ role: "user", content: "what's 42 x 7?" }],
});

Handling large numbers of tools

As the number of available tools grows, you may want to limit the LLM's choices in order to reduce token consumption and manage sources of error in LLM reasoning.

To do this, you can dynamically adjust the tools available to the model by retrieving relevant tools at runtime using semantic search.

See the langgraph-bigtool prebuilt library for a ready-to-use implementation.
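As a toy illustration of that retrieval step, the sketch below uses plain word overlap in place of the embedding-based semantic search a real system (such as langgraph-bigtool) would perform; all names here are made up:

```python
# Score each tool's description against the query and keep the top-k.
# A production system would rank by embedding similarity instead.

def select_tools(query: str, tool_descriptions: dict, k: int = 2) -> list:
    qwords = set(query.lower().split())
    scored = sorted(
        tool_descriptions.items(),
        key=lambda item: len(qwords & set(item[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

tools = {
    "get_weather": "look up the current weather for a city",
    "send_email": "send an email to a recipient",
    "multiply": "multiply two numbers",
}

select_tools("what is the weather in the city today", tools, k=1)
# → ["get_weather"]
```

Only the selected subset would then be bound to the model for that turn, keeping the tool-schema payload (and token cost) small.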

Prebuilt tools

LLM provider tools

You can use a model provider's prebuilt tools by passing a dictionary with the tool specification to the tools parameter of create_react_agent. For example, to use OpenAI's web_search_preview tool:

from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    model="openai:gpt-4o-mini",
    tools=[{"type": "web_search_preview"}]
)
response = agent.invoke(
    {"messages": ["What was a positive news story from today?"]}
)

Check the documentation for the specific model you are using to see which tools are available and how to use them.

You can use a model provider's prebuilt tools by passing a dictionary with the tool specification to the tools parameter of createReactAgent. For example, to use OpenAI's web_search_preview tool:

import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o-mini" }),
  tools: [{ type: "web_search_preview" }],
});

const response = await agent.invoke({
  messages: [
    { role: "user", content: "What was a positive news story from today?" },
  ],
});

Check the documentation for the specific model you are using to see which tools are available and how to use them.

LangChain tools

In addition, LangChain supports a wide range of prebuilt tool integrations for interacting with APIs, databases, file systems, web data, and more. These tools extend agent functionality and enable rapid development.

You can browse the full list of available integrations in the LangChain integrations directory.

Some commonly used tool categories include:

  • Search: Bing, SerpAPI, Tavily
  • Code interpreters: Python REPL, Node.js REPL
  • Databases: SQL, MongoDB, Redis
  • Web data: web scraping and browsing
  • APIs: OpenWeatherMap, NewsAPI, and more

These integrations can be configured and added to your agent using the same tools parameter shown in the examples above.

You can browse the full list of available integrations in the LangChain integrations directory.

Some commonly used tool categories include:

  • Search: Tavily, SerpAPI
  • Code interpreters: web browsers, calculators
  • Databases: SQL, vector databases
  • Web data: web scraping and browsing
  • APIs: various API integrations

These integrations can be configured and added to your agent using the same tools parameter shown in the examples above.