Add memory¶
The chatbot can now use tools to answer user questions, but it doesn't remember the context of previous interactions. This limits its ability to have coherent, multi-turn conversations.
LangGraph solves this problem through **persistent checkpointing**. If you provide a checkpointer when compiling the graph and a thread_id when calling the graph, LangGraph automatically saves the state after each step. When you invoke the graph again using the same thread_id, the graph loads its saved state, allowing the chatbot to pick up where it left off.
We will see later that **checkpointing** is _much more powerful_ than simple chat memory - it lets you save and resume complex state at any time for error recovery, human-in-the-loop workflows, time travel interactions, and more. But first, let's add checkpointing to enable multi-turn conversations.
Note
This tutorial builds on Add tools.
1. Create a MemorySaver checkpointer¶
Create a MemorySaver checkpointer:
This is an in-memory checkpointer, which is convenient for the tutorial. In a production application, however, you would likely change this to use SqliteSaver or PostgresSaver and connect a database.
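To make the mechanism concrete, here is a toy plain-Python sketch (not LangGraph's actual implementation) of what an in-memory checkpointer does: it keeps the latest state per thread_id and hands it back on the next call with the same id.

```python
class TinyMemorySaver:
    """Toy stand-in for an in-memory checkpointer: stores the latest
    state snapshot per thread_id in a plain dict."""

    def __init__(self):
        self._store = {}

    def put(self, thread_id, state):
        # Save (overwrite) the checkpoint for this thread.
        self._store[thread_id] = dict(state)

    def get(self, thread_id):
        # Load the saved checkpoint, or an empty state for a new thread.
        return dict(self._store.get(thread_id, {"messages": []}))


saver = TinyMemorySaver()
saver.put("1", {"messages": ["Hi there! My name is Will."]})

# Same thread_id -> the saved state comes back.
print(saver.get("1")["messages"])  # ['Hi there! My name is Will.']
# A different thread_id starts fresh.
print(saver.get("2")["messages"])  # []
```

A real checkpointer additionally records one snapshot per step (so you can rewind), but the lookup-by-thread idea is the same.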
2. Compile the graph¶
Compile the graph with the provided checkpointer, which will checkpoint the State as the graph works through each node:
const graph = new StateGraph(State)
.addNode("chatbot", chatbot)
.addNode("tools", new ToolNode(tools))
.addConditionalEdges("chatbot", toolsCondition, ["tools", END])
.addEdge("tools", "chatbot")
.addEdge(START, "chatbot")
.compile({ checkpointer: memory });
3. Interact with your chatbot¶
Now you can interact with your bot!
- Pick a thread to use as the key for this conversation.
- Call your chatbot:
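The thread key travels in a config dict (the same shape used with thread_id "2" later in this tutorial); the value "1" is just an arbitrary conversation id:

```python
# The conversation thread is identified by a `thread_id` inside the
# `configurable` key of the config dict. Any string works as the id.
config = {"configurable": {"thread_id": "1"}}
```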
user_input = "Hi there! My name is Will."

# The config is the **second positional argument** of stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()

================================ Human Message =================================

Hi there! My name is Will.
================================== Ai Message ==================================

Hello Will! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to know or discuss?

Note
The config was provided as the **second positional argument** when calling our graph. Importantly, it is _not_ nested within the graph inputs ({'messages': []}).

const userInput = "Hi there! My name is Will.";

const events = await graph.stream(
  { messages: [{ type: "human", content: userInput }] },
  { configurable: { thread_id: "1" }, streamMode: "values" }
);
for await (const event of events) {
  const lastMessage = event.messages.at(-1);
  console.log(`${lastMessage?.getType()}: ${lastMessage?.text}`);
}

human: Hi there! My name is Will.
ai: Hello Will! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to know or discuss?

Note
The config was provided as the **second argument** when calling our graph. Importantly, it is _not_ nested within the graph inputs ({"messages": []}).
4. Ask a follow-up question¶
Ask a follow-up question:
user_input = "Remember my name?"
# The config is the **second positional argument** of stream() or invoke()!
events = graph.stream(
{"messages": [{"role": "user", "content": user_input}]},
config,
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
================================ Human Message =================================
Remember my name?
================================== Ai Message ==================================
Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.
const userInput2 = "Remember my name?";
const events2 = await graph.stream(
{ messages: [{ type: "human", content: userInput2 }] },
{ configurable: { thread_id: "1" }, streamMode: "values" }
);
for await (const event of events2) {
const lastMessage = event.messages.at(-1);
console.log(`${lastMessage?.getType()}: ${lastMessage?.text}`);
}
Notice that we aren't using an external list for memory: it's all handled by the checkpointer! You can inspect the full execution in this LangSmith trace to see exactly what's going on.
Don't believe me? Try this using a different config.
# The only difference is we change the `thread_id` here to "2" instead of "1"
events = graph.stream(
{"messages": [{"role": "user", "content": user_input}]},
{"configurable": {"thread_id": "2"}},
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
================================ Human Message =================================
Remember my name?
================================== Ai Message ==================================
I apologize, but I don't have any previous context or memory of your name. As an AI assistant, I don't retain information from past conversations. Each interaction starts fresh. Could you please tell me your name so I can address you properly in this conversation?
const events3 = await graph.stream(
{ messages: [{ type: "human", content: userInput2 }] },
// The only difference is we change the `thread_id` here to "2" instead of "1"
{ configurable: { thread_id: "2" }, streamMode: "values" }
);
for await (const event of events3) {
const lastMessage = event.messages.at(-1);
console.log(`${lastMessage?.getType()}: ${lastMessage?.text}`);
}
human: Remember my name?
ai: I don't have the ability to remember personal information about users between interactions. However, I'm here to help you with any questions or topics you want to discuss!
Notice that the **only** change we've made is to modify the thread_id in the config. See this call's LangSmith trace to compare.
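The thread isolation above can be mimicked in a few lines of plain Python (a toy model with a hard-coded reply, not LangGraph's code): each turn loads the history for its thread_id, appends the new messages, and saves it back, so thread "2" never sees thread "1"'s messages.

```python
store = {}  # thread_id -> accumulated message list (toy checkpointer)

def run_turn(thread_id, user_message):
    # Load whatever history this thread has accumulated so far.
    history = store.get(thread_id, [])
    history = history + [("human", user_message)]
    # A real graph would call the LLM here; we fake the reply based on
    # whether the name appears anywhere in this thread's history.
    if any("Will" in text for _, text in history):
        reply = "I remember you, Will!"
    else:
        reply = "Who are you?"
    history = history + [("ai", reply)]
    store[thread_id] = history  # checkpoint after the step
    return reply

run_turn("1", "Hi there! My name is Will.")
print(run_turn("1", "Remember my name?"))  # I remember you, Will!
print(run_turn("2", "Remember my name?"))  # Who are you?
```

The "only change is the thread_id" behaviour falls out directly: the second call keys into a different, empty history.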
5. Inspect the state¶
By now, we have made a few checkpoints across two different threads. But what goes into a checkpoint? To inspect a graph's state for a given config at any time, call get_state(config).
StateSnapshot(values={'messages': [HumanMessage(content='Hi there! My name is Will.', additional_kwargs={}, response_metadata={}, id='8c1ca919-c553-4ebf-95d4-b59a2d61e078'), AIMessage(content="Hello Will! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to know or discuss?", additional_kwargs={}, response_metadata={'id': 'msg_01WTQebPhNwmMrmmWojJ9KXJ', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 405, 'output_tokens': 32}}, id='run-58587b77-8c82-41e6-8a90-d62c444a261d-0', usage_metadata={'input_tokens': 405, 'output_tokens': 32, 'total_tokens': 437}), HumanMessage(content='Remember my name?', additional_kwargs={}, response_metadata={}, id='daba7df6-ad75-4d6b-8057-745881cea1ca'), AIMessage(content="Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.", additional_kwargs={}, response_metadata={'id': 'msg_01E41KitY74HpENRgXx94vag', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 444, 'output_tokens': 58}}, id='run-ffeaae5c-4d2d-4ddb-bd59-5d5cbf2a5af8-0', usage_metadata={'input_tokens': 444, 'output_tokens': 58, 'total_tokens': 502})]}, next=(), config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d06e-93e0-6acc-8004-f2ac846575d2'}}, metadata={'source': 'loop', 'writes': {'chatbot': {'messages': [AIMessage(content="Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.", additional_kwargs={}, response_metadata={'id': 'msg_01E41KitY74HpENRgXx94vag', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 444, 'output_tokens': 58}}, id='run-ffeaae5c-4d2d-4ddb-bd59-5d5cbf2a5af8-0', usage_metadata={'input_tokens': 444, 'output_tokens': 58, 'total_tokens': 502})]}}, 'step': 4, 'parents': {}}, created_at='2024-09-27T19:30:10.820758+00:00', parent_config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d06e-859f-6206-8003-e1bd3c264b8f'}}, tasks=())
By now, we have made a few checkpoints across two different threads. But what goes into a checkpoint? To inspect a graph's state for a given config at any time, call getState(config).
{
values: {
messages: [
HumanMessage {
"id": "32fabcef-b3b8-481f-8bcb-fd83399a5f8d",
"content": "Hi there! My name is Will.",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-BrPbTsCJbVqBvXWySlYoTJvM75Kv8",
"content": "Hello Will! How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {},
"tool_calls": [],
"invalid_tool_calls": []
},
HumanMessage {
"id": "561c3aad-f8fc-4fac-94a6-54269a220856",
"content": "Remember my name?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-BrPbU4BhhsUikGbW37hYuF5vvnnE2",
"content": "Yes, I remember your name, Will! How can I help you today?",
"additional_kwargs": {},
"response_metadata": {},
"tool_calls": [],
"invalid_tool_calls": []
}
]
},
next: [],
tasks: [],
metadata: {
source: 'loop',
step: 4,
parents: {},
thread_id: '1'
},
config: {
configurable: {
thread_id: '1',
checkpoint_id: '1f05cccc-9bb6-6270-8004-1d2108bcec77',
checkpoint_ns: ''
}
},
createdAt: '2025-07-09T13:58:27.607Z',
parentConfig: {
configurable: {
thread_id: '1',
checkpoint_ns: '',
checkpoint_id: '1f05cccc-78fa-68d0-8003-ffb01a76b599'
}
}
}
import * as assert from "node:assert";
// Because the graph ended this turn, `next` is empty.
// (If you fetch the state from within a graph invocation, next tells
// you which node will execute next.)
assert.deepEqual(snapshot.next, []);
The snapshot above contains the current state values, the corresponding config, and the next node to process. In our case, the graph has reached the END state, so next is empty.
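As a rough mental model (a simplified toy with far fewer fields than the real StateSnapshot), a snapshot bundles the state values with the config that identifies the checkpoint and the tuple of nodes still to run:

```python
from typing import NamedTuple

class ToySnapshot(NamedTuple):
    values: dict   # current channel values, e.g. {"messages": [...]}
    next: tuple    # nodes that would run next; empty once the run finished
    config: dict   # identifies the checkpoint (thread_id, checkpoint_id, ...)

snapshot = ToySnapshot(
    values={"messages": ["Hi there! My name is Will."]},
    next=(),  # the graph completed this turn, so nothing is pending
    config={"configurable": {"thread_id": "1"}},
)

assert snapshot.next == ()  # empty because the run has completed
```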
Congratulations! Your chatbot can now maintain conversation state across sessions thanks to LangGraph's checkpointing system. This opens up exciting possibilities for more natural, contextual interactions. LangGraph's checkpointing even handles **arbitrarily complex graph states**, which is far more expressive and powerful than simple chat memory.
Check out the code snippet below to review the graph from this tutorial:
from typing import Annotated
from langchain.chat_models import init_chat_model
from langchain_tavily import TavilySearch
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
tool = TavilySearch(max_results=2)
tools = [tool]
# Initialize the chat model (as set up in the previous tutorial steps)
llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
"chatbot",
tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = InMemorySaver()
graph = graph_builder.compile(checkpointer=memory)
import { END, MessagesZodState, START } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { TavilySearch } from "@langchain/tavily";
import { MemorySaver } from "@langchain/langgraph";
import { StateGraph } from "@langchain/langgraph";
import { ToolNode, toolsCondition } from "@langchain/langgraph/prebuilt";
import { z } from "zod";
const State = z.object({
messages: MessagesZodState.shape.messages,
});
const tools = [new TavilySearch({ maxResults: 2 })];
const llm = new ChatOpenAI({ model: "gpt-4o-mini" }).bindTools(tools);
const memory = new MemorySaver();
const graph = new StateGraph(State)
  .addNode("chatbot", async (state) => ({
    messages: [await llm.invoke(state.messages)],
  }))
  .addNode("tools", new ToolNode(tools))
  .addConditionalEdges("chatbot", toolsCondition, ["tools", END])
  .addEdge("tools", "chatbot")
  .addEdge(START, "chatbot")
  .compile({ checkpointer: memory });
Next steps¶
In the next tutorial, you will add human-in-the-loop to the chatbot to handle situations where it may need guidance or verification before proceeding.