Introduction to LangGraph
In this tutorial, we will build a support chatbot in LangGraph that can:
- Answer common questions by searching the web
- Maintain conversation state across calls
- Route complex queries to a human for review
- Use custom state to control its behavior
- Rewind and explore alternative conversation paths
We'll start with a basic chatbot and progressively add more sophisticated capabilities, introducing key LangGraph concepts along the way.
Setup
First, install the required packages:
%%capture --no-stderr
%pip install -U langgraph langsmith
# Used for this tutorial; not a requirement for LangGraph
%pip install -U langchain_anthropic
Next, set your API keys:
import getpass
import os
def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("ANTHROPIC_API_KEY")
(Encouraged) LangSmith makes it a lot easier to see what's going on "under the hood."
_set_env("LANGSMITH_API_KEY")
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "LangGraph Tutorial"
Part 1: Build a Basic Chatbot
We'll first create a simple chatbot using LangGraph. This chatbot will respond directly to user messages. Though simple, it will illustrate the core concepts of building with LangGraph. By the end of this section, you will have built a rudimentary chatbot.
Start by creating a StateGraph. A StateGraph object defines the structure of our chatbot as a "state machine". We'll add nodes to represent the LLM and functions our chatbot can call, and edges to specify how the bot should transition between these functions.
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
class State(TypedDict):
    # Messages have the type "list". The `add_messages` function
    # in the annotation defines how this state key should be updated
    # (in this case, it appends messages to the list, rather than overwriting them)
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
Notice that we've defined our State as a TypedDict with a single key: messages. The messages key is annotated with the add_messages function, which tells LangGraph to append new messages to the existing list rather than overwriting it.
So now our graph knows two things:
- Every node we define will receive the current State as input and return a value that updates that state.
- messages will be appended to the current list, rather than directly overwritten. This is communicated via the prebuilt add_messages function in the Annotated syntax.
Next, add a "chatbot" node. Nodes represent units of work. They are typically regular python functions.
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-haiku-20240307")
def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}
# The first argument is the unique node name
# The second argument is the function or object that will be called whenever
# the node is used.
graph_builder.add_node("chatbot", chatbot)
Notice how the chatbot node function takes the current State as input and returns an updated messages list. This is the basic pattern for all LangGraph node functions.
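Stripped of the LLM call, the pattern is simply "state dict in, partial update out". The toy node below is illustrative only, using plain (role, text) tuples in place of message objects:

```python
def echo_node(state: dict) -> dict:
    # Like any LangGraph node: receive the current state, return a partial
    # update; the add_messages reducer then merges it into state["messages"].
    _, text = state["messages"][-1]
    return {"messages": [("assistant", f"you said: {text}")]}


update = echo_node({"messages": [("user", "hello")]})
print(update)  # → {'messages': [('assistant', 'you said: hello')]}
```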
The add_messages function in our State will append the LLM's response messages to whatever messages are already in the state.
Next, add an entry point. This tells our graph where to start its work each time we run it.
graph_builder.set_entry_point("chatbot")
Similarly, set a finish point. This instructs the graph: "any time this node is run, you can exit."
graph_builder.set_finish_point("chatbot")
Finally, we'll want to be able to run our graph. To do so, call "compile()" on the graph builder. This creates a "CompiledGraph" we can invoke on our state.
graph = graph_builder.compile()
You can visualize the graph using the get_graph method and one of the "draw" methods, like draw_ascii or draw_png. The draw methods each require additional dependencies.
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Now let's run the chatbot!
Tip: You can exit the chat loop at any time by typing "quit", "exit", or "q".
while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)
User: what's langgraph all about?
Assistant: Langgraph is a new open-source deep learning framework that focuses on enabling efficient training and deployment of large language models. Some key things to know about Langgraph: 1. Efficient Training: Langgraph is designed to accelerate the training of large language models by leveraging advanced optimization techniques and parallelization strategies. 2. Modular Architecture: Langgraph has a modular architecture that allows for easy customization and extension of language models, making it flexible for a variety of NLP tasks. 3. Hardware Acceleration: The framework is optimized for both CPU and GPU hardware, allowing for efficient model deployment on a wide range of devices. 4. Scalability: Langgraph is designed to handle large-scale language models with billions of parameters, enabling the development of state-of-the-art NLP applications. 5. Open-Source: Langgraph is an open-source project, allowing developers and researchers to collaborate, contribute, and build upon the framework. 6. Performance: The goal of Langgraph is to provide superior performance and efficiency compared to existing deep learning frameworks, particularly for training and deploying large language models. Overall, Langgraph is a promising new deep learning framework that aims to address the challenges of building and deploying advanced natural language processing models at scale. It is an active area of research and development, with the potential to drive further advancements in the field of language AI.
User: hm that doesn't seem right...
Assistant: I'm sorry, I don't have enough context to determine what doesn't seem right. Could you please provide more details about what you're referring to? That would help me better understand and respond appropriately.
User: q
Goodbye!
Congratulations! You've built your first chatbot using LangGraph. This bot can engage in basic conversation by taking user input and generating responses using an LLM. You can inspect a LangSmith Trace for the call above at the provided link.
However, you may have noticed that the bot's knowledge is limited to what's in its training data. In the next part, we'll add a web search tool to expand the bot's knowledge and make it more capable.
Below is the full code for this section for your reference:
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

llm = ChatAnthropic(model="claude-3-haiku-20240307")


def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}
# The first argument is the unique node name
# The second argument is the function or object that will be called whenever
# the node is used.
graph_builder.add_node("chatbot", chatbot)
graph_builder.set_entry_point("chatbot")
graph_builder.set_finish_point("chatbot")
graph = graph_builder.compile()
Part 2: Enhancing the Chatbot with Tools
To handle queries our chatbot can't answer "from memory", we'll integrate a web search tool. Our bot can use this tool to find relevant information and provide better responses.
Requirements
Before we start, make sure you have the necessary packages installed and API keys set up:
First, install the requirements to use the Tavily Search Engine, and set your TAVILY_API_KEY.
%%capture --no-stderr
%pip install -U tavily-python
_set_env("TAVILY_API_KEY")
Next, define the tool:
from langchain_community.tools.tavily_search import TavilySearchResults
tool = TavilySearchResults(max_results=2)
tools = [tool]
tool.invoke("What's a 'node' in LangGraph?")
[{'url': 'https://medium.com/@cplog/introduction-to-langgraph-a-beginners-guide-14f9be027141', 'content': 'Nodes: Nodes are the building blocks of your LangGraph. Each node represents a function or a computation step. You define nodes to perform specific tasks, such as processing input, making ...'}, {'url': 'https://js.langchain.com/docs/langgraph', 'content': "Assuming you have done the above Quick Start, you can build off it like:\nHere, we manually define the first tool call that we will make.\nNotice that it does that same thing as agent would have done (adds the agentOutcome key).\n LangGraph\n🦜🕸️LangGraph.js\n⚡ Building language agents as graphs ⚡\nOverview\u200b\nLangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain.js.\n Therefore, we will use an object with one key (messages) with the value as an object: { value: Function, default?: () => any }\nThe default key must be a factory that returns the default value for that attribute.\n Streaming Node Output\u200b\nOne of the benefits of using LangGraph is that it is easy to stream output as it's produced by each node.\n What this means is that only one of the downstream edges will be taken, and which one that is depends on the results of the start node.\n"}]
The results are page summaries our chat bot can use to answer questions.
Next, we'll start defining our graph. The following is all the same as in Part 1, except we have added bind_tools on our LLM. This lets the LLM know the correct JSON format to use if it wants to use our search engine.
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

llm = ChatAnthropic(model="claude-3-haiku-20240307")
# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
Next we need to create a function to actually run the tools if they are called. We'll do this by adding the tools to a new node.
Below, implement a BasicToolNode that checks the most recent message in the state and calls tools if the message contains tool_calls. It relies on the LLM's tool-calling support, which is available in Anthropic, OpenAI, Google Gemini, and a number of other LLM providers.
We will later replace this with LangGraph's prebuilt ToolNode to speed things up, but building it ourselves first is instructive.
import json
from langchain_core.messages import ToolMessage
class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools: list) -> None:
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(
                tool_call["args"]
            )
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}


tool_node = BasicToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
With the tool node added, we can define the conditional_edges.
Recall that edges route the control flow from one node to the next. Conditional edges usually contain "if" statements to route to different nodes depending on the current graph state. These functions receive the current graph state and return a string or list of strings indicating which node(s) to call next.
Below, define a router function called route_tools that checks for tool_calls in the chatbot's output. Provide this function to the graph by calling add_conditional_edges, which tells the graph that whenever the chatbot node completes, it should check this function to see where to go next.
The condition will route to tools if tool calls are present and "__end__" if not.
Later, we will replace this with the prebuilt tools_condition to be more concise, but implementing it ourselves first makes things more clear.
from typing import Literal
def route_tools(
    state: State,
) -> Literal["tools", "__end__"]:
    """Use in the conditional_edge to route to the ToolNode if the last message
    has tool calls. Otherwise, route to the end."""
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return "__end__"
# The `tools_condition` function returns "tools" if the chatbot asks to use a tool, and "__end__" if
# it is fine directly responding. This conditional routing defines the main agent loop.
graph_builder.add_conditional_edges(
    "chatbot",
    route_tools,
    # The following dictionary lets you tell the graph to interpret the condition's
    # outputs as a specific node. It defaults to the identity function, but if you
    # want to use a node named something other than "tools", you can update the
    # dictionary value accordingly, e.g., "tools": "my_tools"
    {"tools": "tools", "__end__": "__end__"},
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()
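To sanity-check the routing decision in isolation, you can replay it against stand-in messages. Msg and route below are illustrative stand-ins, not LangGraph APIs; Msg mimics only the tool_calls attribute that route_tools inspects.

```python
from dataclasses import dataclass, field


@dataclass
class Msg:
    # Stand-in for an AIMessage; carries only the tool_calls attribute
    # that the routing condition inspects.
    tool_calls: list = field(default_factory=list)


def route(ai_message) -> str:
    # Same decision route_tools makes on the last message in the state.
    if getattr(ai_message, "tool_calls", None):
        return "tools"
    return "__end__"


print(route(Msg(tool_calls=[{"name": "tavily_search_results_json"}])))  # → tools
print(route(Msg()))  # → __end__
```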
Notice that conditional edges start from a single node. This tells the graph "any time the 'chatbot' node runs, either go to 'tools' if it calls a tool, or end the loop if it responds directly."
Like the prebuilt tools_condition, our function returns the "__end__" string if no tool calls are made. When the graph transitions to __end__, it has no more tasks to complete and ceases execution. Because the condition can return __end__, we don't need to explicitly set a finish_point this time. Our graph already has a way to finish!
Let's visualize the graph we've built. The following function has some additional dependencies to run that are unimportant for this tutorial.
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Now we can ask the bot questions outside its training data.
from langchain_core.messages import BaseMessage
while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            if isinstance(value["messages"][-1], BaseMessage):
                print("Assistant:", value["messages"][-1].content)
User: what's langgraph all about?
Assistant: [{'id': 'toolu_01L1TABSBXsHPsebWiMPNqf1', 'input': {'query': 'langgraph'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}] Assistant: [{"url": "https://langchain-ai.github.io/langgraph/", "content": "LangGraph is framework agnostic (each node is a regular python function). It extends the core Runnable API (shared interface for streaming, async, and batch calls) to make it easy to: Seamless state management across multiple turns of conversation or tool usage. The ability to flexibly route between nodes based on dynamic criteria."}, {"url": "https://blog.langchain.dev/langgraph-multi-agent-workflows/", "content": "As a part of the launch, we highlighted two simple runtimes: one that is the equivalent of the AgentExecutor in langchain, and a second that was a version of that aimed at message passing and chat models.\n It's important to note that these three examples are only a few of the possible examples we could highlight - there are almost assuredly other examples out there and we look forward to seeing what the community comes up with!\n LangGraph: Multi-Agent Workflows\nLinks\nLast week we highlighted LangGraph - a new package (available in both Python and JS) to better enable creation of LLM workflows containing cycles, which are a critical component of most agent runtimes. \"\nAnother key difference between Autogen and LangGraph is that LangGraph is fully integrated into the LangChain ecosystem, meaning you take fully advantage of all the LangChain integrations and LangSmith observability.\n As part of this launch, we're also excited to highlight a few applications built on top of LangGraph that utilize the concept of multiple agents.\n"}] Assistant: Based on the search results, LangGraph is a framework-agnostic Python and JavaScript library that extends the core Runnable API from the LangChain project to enable the creation of more complex workflows involving multiple agents or components. 
Some key things about LangGraph: - It makes it easier to manage state across multiple turns of conversation or tool usage, and to dynamically route between different nodes/components based on criteria. - It is integrated with the LangChain ecosystem, allowing you to take advantage of LangChain integrations and observability features. - It enables the creation of multi-agent workflows, where different components or agents can be chained together in more flexible and complex ways than the standard LangChain AgentExecutor. - The core idea is to provide a more powerful and flexible framework for building LLM-powered applications and workflows, beyond what is possible with just the core LangChain tools. Overall, LangGraph seems to be a useful addition to the LangChain toolkit, focused on enabling more advanced, multi-agent style applications and workflows powered by large language models.
User: neat!
Assistant: I'm afraid I don't have enough context to provide a substantive response to "neat!". As an AI assistant, I'm designed to have conversations and provide information to users, but I need more details or a specific question from you in order to give a helpful reply. Could you please rephrase your request or provide some additional context? I'd be happy to assist further once I understand what you're looking for.
User: what?
Assistant: I'm afraid I don't have enough context to provide a meaningful response to "what?". Could you please rephrase your request or provide more details about what you are asking? I'd be happy to try to assist you further once I have a clearer understanding of your query.
User: q
Goodbye!
Congrats! You've created a conversational agent in langgraph that can use a search engine to retrieve updated information when needed. Now it can handle a wider range of user queries. To inspect all the steps your agent just took, check out this LangSmith trace.
Our chatbot still can't remember past interactions on its own, limiting its ability to have coherent, multi-turn conversations. In the next part, we'll add memory to address this.
The full code for the graph we've created in this section is reproduced below, replacing our BasicToolNode with the prebuilt ToolNode, and our route_tools condition with the prebuilt tools_condition.
from typing import Annotated, Union
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
"chatbot",
tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()
Part 3: Adding Memory to the Chatbot
Our chatbot can now use tools to answer user questions, but it doesn't remember the context of previous interactions. This limits its ability to have coherent, multi-turn conversations.
LangGraph solves this problem through persistent checkpointing. If you provide a checkpointer when compiling the graph and a thread_id when calling your graph, LangGraph automatically saves the state after each step. When you invoke the graph again using the same thread_id, the graph loads its saved state, allowing the chatbot to pick up where it left off.
We will see later that checkpointing is much more powerful than simple chat memory - it lets you save and resume complex state at any time for error recovery, human-in-the-loop workflows, time travel interactions, and more. But before we get too ahead of ourselves, let's add checkpointing to enable multi-turn conversations.
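The thread-keyed persistence can be pictured with a toy in-memory store. This is a conceptual sketch only, not LangGraph's checkpointer API:

```python
class ToyCheckpointer:
    """Toy thread-keyed state store illustrating the checkpointing idea."""

    def __init__(self) -> None:
        self._store: dict[str, dict] = {}

    def save(self, thread_id: str, state: dict) -> None:
        # Persist a snapshot of the state after each step.
        self._store[thread_id] = {"messages": list(state["messages"])}

    def load(self, thread_id: str) -> dict:
        # Resume from the saved snapshot, or start fresh for a new thread.
        return self._store.get(thread_id, {"messages": []})


cp = ToyCheckpointer()
state = cp.load("1")
state["messages"].append(("user", "Hi there! My name is Will."))
cp.save("1", state)

# The same thread_id resumes the saved conversation...
print(len(cp.load("1")["messages"]))  # → 1
# ...while a different thread_id starts with empty state.
print(cp.load("2")["messages"])  # → []
```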
To get started, create a SqliteSaver checkpointer.
from langgraph.checkpoint.sqlite import SqliteSaver
memory = SqliteSaver.from_conn_string(":memory:")
Notice that we've specified ":memory:" as the SQLite DB path. This is convenient for our tutorial (it saves it all in-memory). In a production application, you would likely change this to connect to your own DB and/or use one of the other checkpointer classes.
Next define the graph. Now that you've already built your own BasicToolNode, we'll replace it with LangGraph's prebuilt ToolNode and tools_condition, since these do some nice things like parallel API execution. Apart from that, the following is all copied from Part 2.
from typing import Annotated, Union
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
"chatbot",
tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
/Users/wfh/code/lc/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: The method `ChatAnthropic.bind_tools` is in beta. It is actively being worked on, so the API may change. warn_beta(
Finally, compile the graph with the provided checkpointer.
graph = graph_builder.compile(checkpointer=memory)
Notice the connectivity of the graph hasn't changed since Part 2. All we are doing is checkpointing the State as the graph works through each node.
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Now you can interact with your bot! First, pick a thread to use as the key for this conversation.
config = {"configurable": {"thread_id": "1"}}
Next, call your chat bot.
user_input = "Hi there! My name is Will."
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
{"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message ================================= Hi there! My name is Will. ================================== Ai Message ================================== It's nice to meet you, Will! I'm an AI assistant created by Anthropic. I'm here to help you with any questions or tasks you may have. Please let me know how I can assist you today.
Note: The config was provided as the second positional argument when calling our graph. Importantly, it is not nested within the graph inputs ({'messages': []}).
Let's ask a followup: see if it remembers your name.
user_input = "Remember my name?"
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
{"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message ================================= Remember my name? ================================== Ai Message ================================== Of course, your name is Will. It's nice to meet you again!
Notice that we aren't managing memory with an external list: it's all handled by the checkpointer! You can inspect the full execution in this LangSmith trace to see what's going on.
Don't believe me? Try this using a different config.
# The only difference is we change the `thread_id` here to "2" instead of "1"
events = graph.stream(
{"messages": [("user", user_input)]},
{"configurable": {"thread_id": "2"}},
stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message ================================= Remember my name? ================================== Ai Message ================================== I'm afraid I don't actually have the capability to remember your name. As an AI assistant, I don't have a persistent memory of our previous conversations or interactions. I respond based on the current context provided to me. Could you please restate your name or provide more information so I can try to assist you?
Notice that the only change we've made is to modify the thread_id in the config. See this call's LangSmith trace for comparison.
By now, we have made a few checkpoints across two different threads. But what goes into a checkpoint? To inspect a graph's state for a given config at any time, call get_state(config).
snapshot = graph.get_state(config)
snapshot
StateSnapshot(values={'messages': [HumanMessage(content='Hi there! My name is Will.', id='aad97d7f-8845-4f9e-b723-2af3b7c97590'), AIMessage(content="It's nice to meet you, Will! I'm an AI assistant created by Anthropic. I'm here to help you with any questions or tasks you may have. Please let me know how I can assist you today.", response_metadata={'id': 'msg_01VCz7Y5jVmMZXibBtnECyvJ', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 375, 'output_tokens': 49}}, id='run-66cf1695-5ba8-4fd8-a79d-ded9ee3c3b33-0'), HumanMessage(content='Remember my name?', id='ac1e9971-dbee-4622-9e63-5015dee05c20'), AIMessage(content="Of course, your name is Will. It's nice to meet you again!", response_metadata={'id': 'msg_01RsJ6GaQth7r9soxbF7TSpQ', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 431, 'output_tokens': 19}}, id='run-890149d3-214f-44e8-9717-57ec4ef68224-0')]}, next=(), config={'configurable': {'thread_id': '1', 'thread_ts': '2024-05-06T22:23:20.430350+00:00'}}, parent_config=None)
snapshot.next # (since the graph ended this turn, `next` is empty. If you fetch a state from within a graph invocation, next tells which node will execute next)
()
The snapshot above contains the current state values, corresponding config, and the next
node to process. In our case, the graph has reached an __end__
state, so next
is empty.
Congratulations! Your chatbot can now maintain conversation state across sessions thanks to LangGraph's checkpointing system. This opens up exciting possibilities for more natural, contextual interactions. LangGraph's checkpointing even handles arbitrary complex graph states, which is much more expressive and powerful than simple chat memory.
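The thread-scoped persistence above can be pictured with a small toy sketch. This is not LangGraph's actual checkpointer — `ToyCheckpointer` and its methods are invented for illustration — but it captures the idea: each `thread_id` maps to its own saved history, so switching the `thread_id` in the config switches which conversation state gets restored.

```python
class ToyCheckpointer:
    """Toy sketch: one saved state history per thread_id."""

    def __init__(self):
        self._store = {}  # thread_id -> list of saved states

    def save(self, thread_id, state):
        self._store.setdefault(thread_id, []).append(dict(state))

    def latest(self, thread_id):
        history = self._store.get(thread_id)
        return dict(history[-1]) if history else {"messages": []}


cp = ToyCheckpointer()

state = cp.latest("1")
state["messages"] = state["messages"] + [("user", "Hi there! My name is Will.")]
cp.save("1", state)

# A new thread_id starts from a blank state...
assert cp.latest("2") == {"messages": []}
# ...while the original thread still remembers its history.
assert cp.latest("1")["messages"][-1] == ("user", "Hi there! My name is Will.")
```

The real checkpointer saves a full state snapshot after each graph step, which is why `get_state(config)` can return the conversation exactly where it left off.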
In the next part, we'll introduce human oversight to our bot to handle situations where it may need guidance or verification before proceeding.
Check out the code snippet below to review our graph from this section.
from typing import Annotated, Union
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
"chatbot",
tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = SqliteSaver.from_conn_string(":memory:")
graph = graph_builder.compile(checkpointer=memory)
Part 4: Human-in-the-loop
Agents can be unreliable and may need human input to successfully accomplish tasks. Similarly, for some actions, you may want to require human approval before running to ensure that everything is running as intended.
LangGraph supports human-in-the-loop workflows in a number of ways. In this section, we will use LangGraph's interrupt_before functionality to always interrupt the graph before the tool node executes.
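Conceptually, interrupt_before behaves like a runner that halts just before a listed node and can later resume from that exact point. The following is a self-contained toy sketch of that idea (invented for illustration; it is not how LangGraph is implemented internally):

```python
def run(steps, state, interrupt_before=(), start=0):
    """Run (name, fn) steps in order; pause *before* an interrupted node.

    Returns (state, next_index): next_index is the paused node's position,
    or None when the sequence ran to completion.
    """
    for i in range(start, len(steps)):
        name, fn = steps[i]
        # Pause before an interrupted node -- unless we are resuming at it.
        if name in interrupt_before and i != start:
            return state, i
        state = fn(state)
    return state, None


steps = [
    ("chatbot", lambda s: s + ["ai: let me search"]),
    ("tools",   lambda s: s + ["tool: search results"]),
    ("chatbot", lambda s: s + ["ai: final answer"]),
]

state, nxt = run(steps, [], interrupt_before=["tools"])
assert state == ["ai: let me search"] and nxt == 1  # paused before "tools"

# After a human reviews the pending tool call, resume where we left off:
state, nxt = run(steps, state, interrupt_before=["tools"], start=nxt)
assert nxt is None and state[-1] == "ai: final answer"
```

In LangGraph, the checkpointer plays the role of the saved `(state, next_index)` pair, which is why streaming `None` with the same config resumes the paused run.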
First, start from our existing code. The following is copied from Part 3.
from typing import Annotated, Union
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
memory = SqliteSaver.from_conn_string(":memory:")
class State(TypedDict):
messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
"chatbot",
tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
Now, compile the graph, specifying that it should interrupt_before the action node.
graph = graph_builder.compile(
checkpointer=memory,
# This is new!
interrupt_before=["tools"],
# Note: can also interrupt __after__ actions, if desired.
# interrupt_after=["tools"]
)
user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "1"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
{"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================ Human Message ================================= I'm learning LangGraph. Could you do some research on it for me? ================================== Ai Message ================================== [{'text': "Okay, let's do some research on LangGraph:", 'type': 'text'}, {'id': 'toolu_01Be7aRgMEv9cg6ezaFjiCry', 'input': {'query': 'LangGraph'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}] Tool Calls: tavily_search_results_json (toolu_01Be7aRgMEv9cg6ezaFjiCry) Call ID: toolu_01Be7aRgMEv9cg6ezaFjiCry Args: query: LangGraph
Let's inspect the graph state to confirm it worked.
snapshot = graph.get_state(config)
snapshot.next
('action',)
Notice that unlike last time, the "next" node is set to 'action'. We've interrupted here! Let's check the tool invocation.
existing_message = snapshot.values["messages"][-1]
existing_message.tool_calls
[{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph'}, 'id': 'toolu_01Be7aRgMEv9cg6ezaFjiCry'}]
This query seems reasonable. Nothing to filter here. The simplest thing the human can do is just let the graph continue executing. Let's do that below.
Next, continue the graph! Passing in None
will just let the graph continue where it left off, without adding anything new to the state.
# `None` will append nothing new to the current state, letting it resume as if it had never been interrupted
events = graph.stream(None, config, stream_mode="values")
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================= Tool Message ================================= Name: tavily_search_results_json [{"url": "https://github.com/langchain-ai/langgraph", "content": "LangGraph is a Python package that extends LangChain Expression Language with the ability to coordinate multiple chains across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam and can be used for agent-like behaviors, such as chatbots, with LLMs."}, {"url": "https://python.langchain.com/docs/langgraph/", "content": "LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain . It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam ."}] ================================== Ai Message ================================== Based on the search results, LangGraph seems to be a Python library that extends the LangChain library to enable more complex, multi-step interactions with large language models (LLMs). Some key points: - LangGraph allows coordinating multiple "chains" (or actors) over multiple steps of computation, in a cyclic manner. This enables more advanced agent-like behaviors like chatbots. - It is inspired by distributed graph processing frameworks like Pregel and Apache Beam. - LangGraph is built on top of the LangChain library, which provides a framework for building applications with LLMs. So in summary, LangGraph appears to be a powerful tool for building more sophisticated applications and agents using large language models, by allowing you to coordinate multiple steps and actors in a flexible, graph-like manner. It extends the capabilities of the base LangChain library. Let me know if you need any clarification or have additional questions!
Review this call's LangSmith trace to see the exact work that was done in the above call. Notice that the state is loaded in the first step so that your chatbot can continue where it left off.
Congrats! You've used an interrupt
to add human-in-the-loop execution to your chatbot, allowing for human oversight and intervention when needed. This opens up the potential UIs you can create with your AI systems. Since we have already added a checkpointer, the graph can be paused indefinitely and resumed at any time as if nothing had happened.
Next, we'll explore how to further customize the bot's behavior using custom state updates.
Below is a copy of the code you used in this section. The only difference between this and the previous parts is the addition of the interrupt_before
argument.
from typing import Annotated, Union
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.graph import MessageGraph, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
"chatbot",
tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = SqliteSaver.from_conn_string(":memory:")
graph = graph_builder.compile(
checkpointer=memory,
# This is new!
interrupt_before=["tools"],
# Note: can also interrupt __after__ actions, if desired.
# interrupt_after=["tools"]
)
Part 5: Manually Updating the State
In the previous section, we showed how to interrupt a graph so that a human could inspect its actions. This lets the human read the state, but if they want to change the agent's course, they'll need write access.
Thankfully, LangGraph lets you manually update state! Updating the state lets you control the agent's trajectory by modifying its actions (even modifying the past!). This capability is particularly useful when you want to correct the agent's mistakes, explore alternative paths, or guide the agent towards a specific goal.
We'll show how to update a checkpointed state below. As before, first, define your graph. We'll reuse the exact same graph as before.
from typing import Annotated, Union
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.graph import MessageGraph, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
"chatbot",
tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = SqliteSaver.from_conn_string(":memory:")
graph = graph_builder.compile(
checkpointer=memory,
# This is new!
interrupt_before=["tools"],
# Note: can also interrupt **after** actions, if desired.
# interrupt_after=["tools"]
)
user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "1"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream({"messages": [("user", user_input)]}, config)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
snapshot = graph.get_state(config)
existing_message = snapshot.values["messages"][-1]
existing_message.pretty_print()
================================== Ai Message ==================================
[{'id': 'toolu_01DTyDpJ1kKdNps5yxv3AGJd', 'input': {'query': 'LangGraph'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_01DTyDpJ1kKdNps5yxv3AGJd)
Call ID: toolu_01DTyDpJ1kKdNps5yxv3AGJd
Args:
query: LangGraph
So far, all of this is an exact repeat of the previous section. The LLM just requested to use the search engine tool and our graph was interrupted. If we proceed as before, the tool will be called to search the web.
But what if the user wants to intercede? What if we think the chat bot doesn't need to use the tool?
Let's directly provide the correct response!
from langchain_core.messages import AIMessage, ToolMessage
answer = (
"LangGraph is a library for building stateful, multi-actor applications with LLMs."
)
new_messages = [
# The LLM API expects some ToolMessage to match its tool call. We'll satisfy that here.
ToolMessage(content=answer, tool_call_id=existing_message.tool_calls[0]["id"]),
# And then directly "put words in the LLM's mouth" by populating its response.
AIMessage(content=answer),
]
new_messages[-1].pretty_print()
graph.update_state(
# Which state to update
config,
# The updated values to provide. The messages in our `State` are "append-only", meaning this will be appended
# to the existing state. We will review how to update existing messages in the next section!
{"messages": new_messages},
)
print("\n\nLast 2 messages;")
print(graph.get_state(config).values["messages"][-2:])
================================== Ai Message ==================================
LangGraph is a library for building stateful, multi-actor applications with LLMs.
Last 2 messages;
[ToolMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs.', id='14589ef1-15db-4a75-82a6-d57c40a216d0', tool_call_id='toolu_01DTyDpJ1kKdNps5yxv3AGJd'), AIMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs.', id='1c657bfb-7690-44c7-a26d-d0d22453013d')]
Now the graph is complete, since we've provided the final response message! Since state updates simulate a graph step, they even generate corresponding traces. Inspect the LangSmith trace of the update_state call above to see what's going on.
Notice that our new messages are appended to the messages already in the state. Remember how we defined the State type?
class State(TypedDict):
messages: Annotated[list, add_messages]
We annotated messages
with the pre-built add_messages
function. This instructs the graph to always append values to the existing list, rather than overwriting the list directly. The same logic is applied here, so the messages we passed to update_state
were appended in the same way!
The update_state
function operates as if it were one of the nodes in your graph! By default, the update operation uses the node that was last executed, but you can manually specify it below. Let's add an update and tell the graph to treat it as if it came from the "chatbot".
graph.update_state(
config,
{"messages": [AIMessage(content="I'm an AI expert!")]},
# Which node for this function to act as. It will automatically continue
# processing as if this node just ran.
as_node="chatbot",
)
{'configurable': {'thread_id': '1', 'thread_ts': '2024-05-06T22:27:57.350721+00:00'}}
Check out the LangSmith trace for this update call at the provided link. Notice from the trace that the graph continues into the tools_condition
edge. We just told the graph to treat the update as_node="chatbot"
. If we follow the diagram below and start from the chatbot
node, we naturally end up in the tools_condition
edge and then __end__
since our updated message lacks tool calls.
from IPython.display import Image, display
try:
display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
# This requires some extra dependencies and is optional
pass
Inspect the current state as before to confirm the checkpoint reflects our manual updates.
snapshot = graph.get_state(config)
print(snapshot.values["messages"][-3:])
print(snapshot.next)
[ToolMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs.', id='14589ef1-15db-4a75-82a6-d57c40a216d0', tool_call_id='toolu_01DTyDpJ1kKdNps5yxv3AGJd'), AIMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs.', id='1c657bfb-7690-44c7-a26d-d0d22453013d'), AIMessage(content="I'm an AI expert!", id='acd668e3-ba31-42c0-843c-00d0994d5885')] ()
Notice that we've continued to add AI messages to the state. Since we are acting as the chatbot and responding with an AIMessage that doesn't contain tool_calls, the graph knows that it has entered a finished state (next is empty).
What if you want to overwrite existing messages?
The add_messages
function we used to annotate our graph's State
above controls how updates are made to the messages
key. This function looks at any message IDs in the new messages
list. If the ID matches a message in the existing state, add_messages
overwrites the existing message with the new content.
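The merge rule can be sketched in a few lines of plain Python. This is a simplified stand-in for the library function, using dicts in place of message objects, but it follows the same rule: append by default, overwrite when an incoming id matches an existing one.

```python
def add_messages(existing, new):
    """Toy sketch: append new messages, overwriting any existing message with a matching id."""
    merged = list(existing)
    position = {m["id"]: i for i, m in enumerate(merged)}
    for msg in new:
        if msg["id"] in position:
            merged[position[msg["id"]]] = msg  # matching id -> replace in place
        else:
            merged.append(msg)                 # unseen id -> append
    return merged


state = [{"id": "a", "content": "search query: LangGraph"}]
state = add_messages(state, [
    # Same id as an existing message: it is replaced, not duplicated.
    {"id": "a", "content": "search query: LangGraph human-in-the-loop workflow"},
    # Unseen id: appended as usual.
    {"id": "b", "content": "tool result"},
])
assert [m["content"] for m in state] == [
    "search query: LangGraph human-in-the-loop workflow",
    "tool result",
]
```

This is why reusing a message's id in update_state (as we do next) rewrites that message in place instead of adding a duplicate.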
As an example, let's update the tool invocation to make sure we get good results from our search engine! First, start a new thread:
user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "2"}} # we'll use thread_id = 2 here
events = graph.stream(
{"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================ Human Message ================================= I'm learning LangGraph. Could you do some research on it for me? ================================== Ai Message ================================== [{'id': 'toolu_013MvjoDHnv476ZGzyPFZhrR', 'input': {'query': 'LangGraph'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}] Tool Calls: tavily_search_results_json (toolu_013MvjoDHnv476ZGzyPFZhrR) Call ID: toolu_013MvjoDHnv476ZGzyPFZhrR Args: query: LangGraph
Next, let's update the tool invocation for our agent. Maybe we want to search for human-in-the-loop workflows in particular.
from langchain_core.messages import AIMessage
snapshot = graph.get_state(config)
existing_message = snapshot.values["messages"][-1]
print("Original")
print("Message ID", existing_message.id)
print(existing_message.tool_calls[0])
new_tool_call = existing_message.tool_calls[0].copy()
new_tool_call["args"]["query"] = "LangGraph human-in-the-loop workflow"
new_message = AIMessage(
content=existing_message.content,
tool_calls=[new_tool_call],
    # Important! The ID is how LangGraph knows to REPLACE the message in the state rather than APPEND this message
id=existing_message.id,
)
print("Updated")
print(new_message.tool_calls[0])
print("Message ID", new_message.id)
graph.update_state(config, {"messages": [new_message]})
print("\n\nTool calls")
graph.get_state(config).values["messages"][-1].tool_calls
Original Message ID run-59283969-1076-45fe-bee8-ebfccab163c3-0 {'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph'}, 'id': 'toolu_013MvjoDHnv476ZGzyPFZhrR'} Updated {'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph human-in-the-loop workflow'}, 'id': 'toolu_013MvjoDHnv476ZGzyPFZhrR'} Message ID run-59283969-1076-45fe-bee8-ebfccab163c3-0 Tool calls
[{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph human-in-the-loop workflow'}, 'id': 'toolu_013MvjoDHnv476ZGzyPFZhrR'}]
Notice that we've modified the AI's tool invocation to search for "LangGraph human-in-the-loop workflow" instead of the simple "LangGraph".
Check out the LangSmith trace to see the state update call - you can see our new message has successfully updated the previous AI message.
Resume the graph by streaming with an input of None
and the existing config.
events = graph.stream(None, config, stream_mode="values")
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================= Tool Message ================================= Name: tavily_search_results_json [{"url": "https://langchain-ai.github.io/langgraph/how-tos/human-in-the-loop/", "content": "Human-in-the-loop\u00b6 When creating LangGraph agents, it is often nice to add a human in the loop component. This can be helpful when giving them access to tools. ... from langgraph.graph import MessageGraph, END # Define a new graph workflow = MessageGraph # Define the two nodes we will cycle between workflow. add_node (\"agent\", call_model) ..."}, {"url": "https://langchain-ai.github.io/langgraph/how-tos/chat_agent_executor_with_function_calling/human-in-the-loop/", "content": "Human-in-the-loop. In this example we will build a ReAct Agent that has a human in the loop. We will use the human to approve specific actions. This examples builds off the base chat executor. It is highly recommended you learn about that executor before going through this notebook. You can find documentation for that example here."}] ================================== Ai Message ================================== Based on the search results, LangGraph appears to be a framework for building AI agents that can interact with humans in a conversational way. The key points I gathered are: - LangGraph allows for "human-in-the-loop" workflows, where a human can be involved in approving or reviewing actions taken by the AI agent. - This can be useful for giving the AI agent access to various tools and capabilities, with the human able to provide oversight and guidance. - The framework includes components like "MessageGraph" for defining the conversational flow between the agent and human. Overall, LangGraph seems to be a way to create conversational AI agents that can leverage human input and guidance, rather than operating in a fully autonomous way. Let me know if you need any clarification or have additional questions!
Check out the trace to see the tool call and later LLM response. Notice that now the graph queries the search engine using our updated query term - we were able to manually override the LLM's search here!
All of this is reflected in the graph's checkpointed memory, meaning if we continue the conversation, it will recall all the modified state.
events = graph.stream(
{
"messages": (
"user",
"Remember what I'm learning about?",
)
},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================ Human Message ================================= Remember what I'm learning about? ================================== Ai Message ================================== Ah yes, now I remember - you mentioned earlier that you are learning about LangGraph. LangGraph is the framework I researched in my previous response, which is for building conversational AI agents that can incorporate human input and oversight. So based on our earlier discussion, it seems you are currently learning about and exploring the LangGraph system for creating human-in-the-loop AI agents. Please let me know if I have the right understanding now.
Congratulations! You've used interrupt_before
and update_state
to manually modify the state as a part of a human-in-the-loop workflow. Interruptions and state modifications let you control how the agent behaves. Combined with persistent checkpointing, it means you can pause
an action and resume
at any point. Your user doesn't have to be available when the graph interrupts!
The graph code for this section is identical to previous ones. The key snippets to remember are to add .compile(..., interrupt_before=[...])
(or interrupt_after
) if you want to explicitly pause the graph whenever it reaches a node. Then you can use update_state
to modify the checkpoint and control how the graph should proceed.
Part 6: Customizing State
So far, we've relied on a simple state (it's just a list of messages!). You can go far with this simple state, but if you want to define complex behavior without relying on the message list, you can add additional fields to the state. In this section, we will extend our chat bot with a new node to illustrate this.
In the examples above, we involved a human deterministically: the graph always interrupted whenever a tool was invoked. Suppose we wanted our chat bot to have the choice of relying on a human.
One way to do this is to create a passthrough "human" node, before which the graph will always stop. We will only execute this node if the LLM invokes a "human" tool. For our convenience, we will include an "ask_human" flag in our graph state that we will flip if the LLM calls this tool.
Below, define this new graph, with an updated State:
from typing import Annotated, Union
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
messages: Annotated[list, add_messages]
# This flag is new
ask_human: bool
Next, define a schema to show the model, letting it decide whether to request assistance.
from langchain_core.pydantic_v1 import BaseModel
class RequestAssistance(BaseModel):
"""Escalate the conversation to an expert. Use this if you are unable to assist directly or if the user requires support beyond your permissions.
To use this function, relay the user's 'request' so the expert can provide the right guidance.
"""
request: str
Next, define the chatbot node. The primary modification here is to flip the ask_human flag if we see that the chat bot has invoked the RequestAssistance tool.
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
# We can bind the llm to a tool definition, a pydantic model, or a json schema
llm_with_tools = llm.bind_tools(tools + [RequestAssistance])
def chatbot(state: State):
response = llm_with_tools.invoke(state["messages"])
ask_human = False
if (
response.tool_calls
and response.tool_calls[0]["name"] == RequestAssistance.__name__
):
ask_human = True
return {"messages": [response], "ask_human": ask_human}
Next, create the graph builder and add the chatbot and tools nodes to the graph, same as before.
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools=[tool]))
Next, create the "human" node
. This node
function is mostly a placeholder in our graph that will trigger an interrupt. If the human does not manually update the state during the interrupt
, it inserts a tool message so the LLM knows the user was requested but didn't respond. This node also unsets the ask_human
flag so the graph knows not to revisit the node unless further requests are made.
from langchain_core.messages import AIMessage, ToolMessage
def create_response(response: str, ai_message: AIMessage):
return ToolMessage(
content=response,
tool_call_id=ai_message.tool_calls[0]["id"],
)
def human_node(state: State):
new_messages = []
if not isinstance(state["messages"][-1], ToolMessage):
# Typically, the user will have updated the state during the interrupt.
# If they choose not to, we will include a placeholder ToolMessage to
# let the LLM continue.
new_messages.append(
create_response("No response from human.", state["messages"][-1])
)
return {
# Append the new messages
"messages": new_messages,
# Unset the flag
"ask_human": False,
}
graph_builder.add_node("human", human_node)
Next, define the conditional logic. The select_next_node
will route to the human
node if the flag is set. Otherwise, it lets the prebuilt tools_condition
function choose the next node.
Recall that the tools_condition function simply checks to see if the chatbot has responded with any tool_calls in its response message. If so, it routes to the tools node. Otherwise, it ends the graph.
回想一下, tools_condition
函数只是检查 chatbot
是否在其响应消息中回复了任何 tool_calls
。如果是,它将路由到 action
节点。否则,它将结束图表。
def select_next_node(state: State):
    if state["ask_human"]:
        return "human"
    # Otherwise, we can route as before
    return tools_condition(state)


graph_builder.add_conditional_edges(
    "chatbot",
    select_next_node,
    {"human": "human", "tools": "tools", "__end__": "__end__"},
)
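The routing can be exercised without building the graph at all. In this sketch, fake_tools_condition is a hypothetical stand-in for the prebuilt tools_condition (which inspects the last message for tool calls), and select_next_node_demo mirrors the function above using plain dicts so the three possible outcomes are easy to see:

```python
# A self-contained sketch of the routing above, using plain dicts for
# messages. fake_tools_condition is a hypothetical stand-in for the
# prebuilt tools_condition, which checks the last message for tool calls.
def fake_tools_condition(state):
    last = state["messages"][-1]
    return "tools" if last.get("tool_calls") else "__end__"


def select_next_node_demo(state):
    if state["ask_human"]:
        return "human"
    # Otherwise, route as before
    return fake_tools_condition(state)


print(select_next_node_demo({"ask_human": True, "messages": [{}]}))  # human
print(select_next_node_demo({"ask_human": False, "messages": [{"tool_calls": [{"name": "search"}]}]}))  # tools
print(select_next_node_demo({"ask_human": False, "messages": [{}]}))  # __end__
```

The dict passed to add_conditional_edges then maps each of these return values to an actual node name (or to the end of the graph).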
Finally, add the simple directed edges and compile the graph. These edges instruct the graph to always flow from node a to node b whenever a finishes executing.
# The rest is the same
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("human", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = SqliteSaver.from_conn_string(":memory:")
graph = graph_builder.compile(
    checkpointer=memory,
    # We interrupt before 'human' here instead.
    interrupt_before=["human"],
)
If you have the visualization dependencies installed, you can see the graph structure below:
from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
The chat bot can either request help from a human (chatbot->select->human), invoke the search engine tool (chatbot->select->action), or directly respond (chatbot->select->end). Once an action or request has been made, the graph will transition back to the chatbot node to continue operations.
Let's see this graph in action. We will request expert assistance to illustrate our graph.
user_input = "I need some expert guidance for building this AI agent. Could you request assistance for me?"
config = {"configurable": {"thread_id": "1"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message ================================= I need some expert guidance for building this AI agent. Could you request assistance for me? ================================== Ai Message ================================== [{'id': 'toolu_017XaQuVsoAyfXeTfDyv55Pc', 'input': {'request': 'I need some expert guidance for building this AI agent.'}, 'name': 'RequestAssistance', 'type': 'tool_use'}] Tool Calls: RequestAssistance (toolu_017XaQuVsoAyfXeTfDyv55Pc) Call ID: toolu_017XaQuVsoAyfXeTfDyv55Pc Args: request: I need some expert guidance for building this AI agent.
Notice: the LLM has invoked the "RequestAssistance" tool we provided it, and the interrupt has been set. Let's inspect the graph state to confirm.
snapshot = graph.get_state(config)
snapshot.next
('human',)
The graph state is indeed interrupted before the 'human' node. We can act as the "expert" in this scenario and manually update the state by adding a new ToolMessage with our input.
Next, respond to the chatbot's request by:
- Creating a ToolMessage with our response. This will be passed back to the chatbot.
- Calling update_state to manually update the graph state.
ai_message = snapshot.values["messages"][-1]
human_response = (
    "We, the experts are here to help! We'd recommend you check out LangGraph to build your agent."
    " It's much more reliable and extensible than simple autonomous agents."
)
tool_message = create_response(human_response, ai_message)
graph.update_state(config, {"messages": [tool_message]})
{'configurable': {'thread_id': '1', 'thread_ts': '2024-05-06T22:31:39.973392+00:00'}}
You can inspect the state to confirm our response was added.
graph.get_state(config).values["messages"]
[HumanMessage(content='I need some expert guidance for building this AI agent. Could you request assistance for me?', id='ab75eb9d-cce7-4e44-8de7-b0b375a86972'), AIMessage(content=[{'id': 'toolu_017XaQuVsoAyfXeTfDyv55Pc', 'input': {'request': 'I need some expert guidance for building this AI agent.'}, 'name': 'RequestAssistance', 'type': 'tool_use'}], response_metadata={'id': 'msg_0199PiK6kmVAbeo1qmephKDq', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 486, 'output_tokens': 63}}, id='run-ff07f108-5055-4343-8910-2fa40ead3fb9-0', tool_calls=[{'name': 'RequestAssistance', 'args': {'request': 'I need some expert guidance for building this AI agent.'}, 'id': 'toolu_017XaQuVsoAyfXeTfDyv55Pc'}]), ToolMessage(content="We, the experts are here to help! We'd recommend you check out LangGraph to build your agent. It's much more reliable and extensible than simple autonomous agents.", id='19f2eb9f-a742-46aa-9047-60909c30e64a', tool_call_id='toolu_017XaQuVsoAyfXeTfDyv55Pc')]
Next, resume the graph by invoking it with None as the input.
events = graph.stream(None, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================= Tool Message ================================= We, the experts are here to help! We'd recommend you check out LangGraph to build your agent. It's much more reliable and extensible than simple autonomous agents. ================================== Ai Message ================================== It looks like the experts have provided some guidance on how to build your AI agent. They suggested checking out LangGraph, which they say is more reliable and extensible than simple autonomous agents. Please let me know if you need any other assistance - I'm happy to help coordinate with the expert team further.
Notice that the chat bot has incorporated the updated state in its final response. Since everything was checkpointed, the "expert" human in the loop could perform the update at any time without impacting the graph's execution.
Congratulations! You've now added an additional node to your assistant graph to let the chat bot decide for itself whether or not it needs to interrupt execution. You did so by updating the graph State with a new ask_human field and modifying the interruption logic when compiling the graph. This lets you dynamically include a human in the loop while maintaining full memory every time you execute the graph.
We're almost done with the tutorial, but there is one more concept we'd like to review before finishing that connects checkpointing and state updates.
This section's code is reproduced below for your reference.
from typing import Annotated, Union

from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import AIMessage, BaseMessage, ToolMessage
from langchain_core.pydantic_v1 import BaseModel
from typing_extensions import TypedDict

from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]
    # This flag is new
    ask_human: bool


class RequestAssistance(BaseModel):
    """Escalate the conversation to an expert. Use this if you are unable to assist directly or if the user requires support beyond your permissions.

    To use this function, relay the user's 'request' so the expert can provide the right guidance.
    """

    request: str


tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
# We can bind the llm to a tool definition, a pydantic model, or a json schema
llm_with_tools = llm.bind_tools(tools + [RequestAssistance])


def chatbot(state: State):
    response = llm_with_tools.invoke(state["messages"])
    ask_human = False
    if (
        response.tool_calls
        and response.tool_calls[0]["name"] == RequestAssistance.__name__
    ):
        ask_human = True
    return {"messages": [response], "ask_human": ask_human}


graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools=[tool]))


def create_response(response: str, ai_message: AIMessage):
    return ToolMessage(
        content=response,
        tool_call_id=ai_message.tool_calls[0]["id"],
    )


def human_node(state: State):
    new_messages = []
    if not isinstance(state["messages"][-1], ToolMessage):
        # Typically, the user will have updated the state during the interrupt.
        # If they choose not to, we will include a placeholder ToolMessage to
        # let the LLM continue.
        new_messages.append(
            create_response("No response from human.", state["messages"][-1])
        )
    return {
        # Append the new messages
        "messages": new_messages,
        # Unset the flag
        "ask_human": False,
    }


graph_builder.add_node("human", human_node)


def select_next_node(state: State):
    if state["ask_human"]:
        return "human"
    # Otherwise, we can route as before
    return tools_condition(state)


graph_builder.add_conditional_edges(
    "chatbot",
    select_next_node,
    {"human": "human", "tools": "tools", "__end__": "__end__"},
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("human", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = SqliteSaver.from_conn_string(":memory:")
graph = graph_builder.compile(
    checkpointer=memory,
    interrupt_before=["human"],
)
Part 7: Time Travel¶
In a typical chat bot workflow, the user interacts with the bot 1 or more times to accomplish a task. In the previous sections, we saw how to add memory and a human-in-the-loop to be able to checkpoint our graph state and manually override the state to control future responses.
But what if you want to let your user start from a previous response and "branch off" to explore a separate outcome? Or what if you want users to be able to "rewind" your assistant's work to fix some mistakes or try a different strategy (common in applications like autonomous software engineers)?
You can create both of these experiences and more using LangGraph's built-in "time travel" functionality.
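Before using the real thing, a toy model may help build intuition. The sketch below is not LangGraph's API, just a plain-Python illustration of why per-step checkpoints make both rewinding and branching possible: every step records a full snapshot, and "resuming" from an earlier snapshot forks a new branch rather than overwriting what came after.

```python
# Toy checkpointing model: each step records a full snapshot of the state.
history = []


def run_step(state, update):
    new_state = {"messages": state["messages"] + [update]}
    history.append(new_state)  # checkpoint after every step
    return new_state


state = {"messages": []}
state = run_step(state, "user: hi")
state = run_step(state, "tool: search results")
state = run_step(state, "ai: final answer")

# "Time travel": pick an earlier checkpoint and resume from it.
to_replay = history[0]  # the snapshot right after the first step
branch = run_step(to_replay, "ai: alternate answer")  # fork from the past

print(history[2]["messages"])  # ['user: hi', 'tool: search results', 'ai: final answer']
print(branch["messages"])      # ['user: hi', 'ai: alternate answer']
```

LangGraph's checkpointer plays the role of the history list here, keyed by thread and timestamp instead of list position.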
In this section, you will "rewind" your graph by fetching a checkpoint using the graph's get_state_history method. You can then resume execution at this previous point in time.
First, recall our chatbot graph. We don't need to make any changes from before:
from typing import Annotated, Union, Literal

from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import AIMessage, BaseMessage, ToolMessage
from langchain_core.pydantic_v1 import BaseModel
from typing_extensions import TypedDict

from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]
    # This flag is new
    ask_human: bool


class RequestAssistance(BaseModel):
    """Escalate the conversation to an expert. Use this if you are unable to assist directly or if the user requires support beyond your permissions.

    To use this function, relay the user's 'request' so the expert can provide the right guidance.
    """

    request: str


tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
# We can bind the llm to a tool definition, a pydantic model, or a json schema
llm_with_tools = llm.bind_tools(tools + [RequestAssistance])


def chatbot(state: State):
    response = llm_with_tools.invoke(state["messages"])
    ask_human = False
    if (
        response.tool_calls
        and response.tool_calls[0]["name"] == RequestAssistance.__name__
    ):
        ask_human = True
    return {"messages": [response], "ask_human": ask_human}


graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools=[tool]))


def create_response(response: str, ai_message: AIMessage):
    return ToolMessage(
        content=response,
        tool_call_id=ai_message.tool_calls[0]["id"],
    )


def human_node(state: State):
    new_messages = []
    if not isinstance(state["messages"][-1], ToolMessage):
        # Typically, the user will have updated the state during the interrupt.
        # If they choose not to, we will include a placeholder ToolMessage to
        # let the LLM continue.
        new_messages.append(
            create_response("No response from human.", state["messages"][-1])
        )
    return {
        # Append the new messages
        "messages": new_messages,
        # Unset the flag
        "ask_human": False,
    }


graph_builder.add_node("human", human_node)


def select_next_node(state: State) -> Literal["human", "tools", "__end__"]:
    if state["ask_human"]:
        return "human"
    # Otherwise, we can route as before
    return tools_condition(state)


graph_builder.add_conditional_edges(
    "chatbot",
    select_next_node,
    {"human": "human", "tools": "tools", "__end__": "__end__"},
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("human", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = SqliteSaver.from_conn_string(":memory:")
graph = graph_builder.compile(
    checkpointer=memory,
    interrupt_before=["human"],
)
from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Let's have our graph take a couple steps. Every step will be checkpointed in its state history:
config = {"configurable": {"thread_id": "1"}}
events = graph.stream(
    {
        "messages": [
            ("user", "I'm learning LangGraph. Could you do some research on it for me?")
        ]
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message ================================= I'm learning LangGraph. Could you do some research on it for me? ================================== Ai Message ================================== [{'text': "Okay, let me look into LangGraph for you. Here's what I found:", 'type': 'text'}, {'id': 'toolu_011AQ2FT4RupVka2LVMV3Gci', 'input': {'query': 'LangGraph'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}] Tool Calls: tavily_search_results_json (toolu_011AQ2FT4RupVka2LVMV3Gci) Call ID: toolu_011AQ2FT4RupVka2LVMV3Gci Args: query: LangGraph ================================= Tool Message ================================= Name: tavily_search_results_json [{"url": "https://langchain-ai.github.io/langgraph/", "content": "LangGraph is framework agnostic (each node is a regular python function). It extends the core Runnable API (shared interface for streaming, async, and batch calls) to make it easy to: Seamless state management across multiple turns of conversation or tool usage. The ability to flexibly route between nodes based on dynamic criteria."}, {"url": "https://blog.langchain.dev/langgraph-multi-agent-workflows/", "content": "As a part of the launch, we highlighted two simple runtimes: one that is the equivalent of the AgentExecutor in langchain, and a second that was a version of that aimed at message passing and chat models.\n It's important to note that these three examples are only a few of the possible examples we could highlight - there are almost assuredly other examples out there and we look forward to seeing what the community comes up with!\n LangGraph: Multi-Agent Workflows\nLinks\nLast week we highlighted LangGraph - a new package (available in both Python and JS) to better enable creation of LLM workflows containing cycles, which are a critical component of most agent runtimes. 
\"\nAnother key difference between Autogen and LangGraph is that LangGraph is fully integrated into the LangChain ecosystem, meaning you take fully advantage of all the LangChain integrations and LangSmith observability.\n As part of this launch, we're also excited to highlight a few applications built on top of LangGraph that utilize the concept of multiple agents.\n"}] ================================== Ai Message ================================== Based on the search results, here's what I've learned about LangGraph: - LangGraph is a framework-agnostic tool that extends the Runnable API to make it easier to manage state and routing between different nodes or agents in a conversational workflow. - It's part of the LangChain ecosystem, so it integrates with other LangChain tools and observability features. - LangGraph enables the creation of multi-agent workflows, where you can have different "nodes" or agents that can communicate and pass information to each other. - This allows for more complex conversational flows and the ability to chain together different capabilities, tools, or models. - The key benefits seem to be around state management, flexible routing between agents, and the ability to create more sophisticated and dynamic conversational workflows. Let me know if you need any clarification or have additional questions! I'm happy to do more research on LangGraph if you need further details.
events = graph.stream(
    {
        "messages": [
            ("user", "Ya that's helpful. Maybe I'll build an autonomous agent with it!")
        ]
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message ================================= Ya that's helpful. Maybe I'll build an autonomous agent with it! ================================== Ai Message ================================== [{'text': "That's great that you're interested in building an autonomous agent using LangGraph! Here are a few additional thoughts on how you could approach that:", 'type': 'text'}, {'id': 'toolu_01L3V9FhZG5Qx9jqRGfWGtS2', 'input': {'query': 'building autonomous agents with langgraph'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}] Tool Calls: tavily_search_results_json (toolu_01L3V9FhZG5Qx9jqRGfWGtS2) Call ID: toolu_01L3V9FhZG5Qx9jqRGfWGtS2 Args: query: building autonomous agents with langgraph ================================= Tool Message ================================= Name: tavily_search_results_json [{"url": "https://github.com/langchain-ai/langgraphjs", "content": "LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain.js.It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam.The current interface exposed is one inspired by ..."}, {"url": "https://github.com/langchain-ai/langgraph", "content": "LangGraph is a library for building stateful, multi-actor applications with LLMs. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam.The current interface exposed is one inspired by NetworkX.. The main use is for adding cycles to your LLM ..."}] ================================== Ai Message ================================== The key things to keep in mind: 1. 
LangGraph is designed to help coordinate multiple "agents" or "actors" that can pass information back and forth. This allows you to build more complex, multi-step workflows. 2. You'll likely want to define different nodes or agents that handle specific tasks or capabilities. LangGraph makes it easy to route between these agents based on the state of the conversation. 3. Make sure to leverage the LangChain ecosystem - things like prompts, memory, agents, tools etc. LangGraph integrates with these to give you a powerful set of building blocks. 4. Pay close attention to state management - LangGraph helps you manage state across multiple interactions, which is crucial for an autonomous agent. 5. Consider how you'll handle things like user intent, context, and goal-driven behavior. LangGraph gives you the flexibility to implement these kinds of complex behaviors. Let me know if you have any other specific questions as you start prototyping your autonomous agent! I'm happy to provide more guidance.
Now that we've had the agent take a couple of steps, we can replay the full state history to see everything that occurred.
to_replay = None
for state in graph.get_state_history(config):
    print("Num Messages: ", len(state.values["messages"]), "Next: ", state.next)
    print("-" * 80)
    if len(state.values["messages"]) == 6:
        # We are somewhat arbitrarily selecting a specific state based on the number of chat messages in the state.
        to_replay = state
Num Messages: 8 Next: () -------------------------------------------------------------------------------- Num Messages: 7 Next: ('chatbot',) -------------------------------------------------------------------------------- Num Messages: 6 Next: ('action',) -------------------------------------------------------------------------------- Num Messages: 5 Next: ('chatbot',) -------------------------------------------------------------------------------- Num Messages: 4 Next: () -------------------------------------------------------------------------------- Num Messages: 3 Next: ('chatbot',) -------------------------------------------------------------------------------- Num Messages: 2 Next: ('action',) -------------------------------------------------------------------------------- Num Messages: 1 Next: ('chatbot',) --------------------------------------------------------------------------------
Notice that checkpoints are saved for every step of the graph. This spans invocations, so you can rewind across a full thread's history. We've picked out to_replay as a state to resume from. This is the state after the chatbot node in the second graph invocation above.
Resuming from this point should call the action node next.
print(to_replay.next)
print(to_replay.config)
('action',) {'configurable': {'thread_id': '1', 'thread_ts': '2024-05-06T22:33:10.211424+00:00'}}
Notice that the checkpoint's config (to_replay.config) contains a thread_ts timestamp. Providing this thread_ts value tells LangGraph's checkpointer to load the state from that moment in time. Let's try it below:
# The `thread_ts` in the `to_replay.config` corresponds to a state we've persisted to our checkpointer.
for event in graph.stream(None, to_replay.config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================= Tool Message ================================= Name: tavily_search_results_json [{"url": "https://valentinaalto.medium.com/getting-started-with-langgraph-66388e023754", "content": "Sign up\nSign in\nSign up\nSign in\nMember-only story\nGetting Started with LangGraph\nBuilding multi-agents application with graph frameworks\nValentina Alto\nFollow\n--\nShare\nOver the last year, LangChain has established itself as one of the most popular AI framework available in the market. This new library, introduced in January\u2026\n--\n--\nWritten by Valentina Alto\nData&AI Specialist at @Microsoft | MSc in Data Science | AI, Machine Learning and Running enthusiast\nHelp\nStatus\nAbout\nCareers\nBlog\nPrivacy\nTerms\nText to speech\nTeams Since the concept of multi-agent applications \u2014 the ones exhibiting different agents, each having a specific personality and tools to access \u2014 is getting real and mainstream (see the rise of libraries projects like AutoGen), LangChain\u2019s developers introduced a new library to make it easier to manage these kind of agentic applications. Nevertheless, those chains were lacking the capability of introducing cycles into their runtime, meaning that there is no out-of-the-box framework to enable the LLM to reason over the next best action in a kind of for-loop scenario. 
The main feature of LangChain \u2014 as the name suggests \u2014 is its ability to easily create the so-called chains."}, {"url": "https://blog.langchain.dev/langgraph-multi-agent-workflows/", "content": "As a part of the launch, we highlighted two simple runtimes: one that is the equivalent of the AgentExecutor in langchain, and a second that was a version of that aimed at message passing and chat models.\n It's important to note that these three examples are only a few of the possible examples we could highlight - there are almost assuredly other examples out there and we look forward to seeing what the community comes up with!\n LangGraph: Multi-Agent Workflows\nLinks\nLast week we highlighted LangGraph - a new package (available in both Python and JS) to better enable creation of LLM workflows containing cycles, which are a critical component of most agent runtimes. \"\nAnother key difference between Autogen and LangGraph is that LangGraph is fully integrated into the LangChain ecosystem, meaning you take fully advantage of all the LangChain integrations and LangSmith observability.\n As part of this launch, we're also excited to highlight a few applications built on top of LangGraph that utilize the concept of multiple agents.\n"}] ================================== Ai Message ================================== The key things I gathered are: - LangGraph is well-suited for building multi-agent applications, where you have different agents with their own capabilities, tools, and personality. - It allows you to create more complex workflows with cycles and feedback loops, which is critical for building autonomous agents that can reason about their next best actions. - The integration with LangChain means you can leverage other useful features like state management, observability, and integrations with various language models and data sources. Some tips for building an autonomous agent with LangGraph: 1. 
Define the different agents/nodes in your workflow and their specific responsibilities/capabilities. 2. Set up the connections and routing between the agents so they can pass information and decisions back and forth. 3. Implement logic within each agent to assess the current state and determine the optimal next action. 4. Use LangChain features like memory and toolkits to give your agents access to relevant information and abilities. 5. Monitor the overall system behavior and iteratively improve the agent interactions and decision-making. Let me know if you have any other questions! I'm happy to provide more guidance as you start building your autonomous agent with LangGraph.
Notice that the graph resumed execution from the action node. You can tell this is the case since the first value printed above is the response from our search engine tool.
Congratulations! You've now used time-travel checkpoint traversal in LangGraph. Being able to rewind and explore alternative paths opens up a world of possibilities for debugging, experimentation, and interactive applications.
Conclusion¶
Congrats! You've completed the intro tutorial and built a chat bot in LangGraph that supports tool calling, persistent memory, human-in-the-loop interactivity, and even time-travel!
The LangGraph documentation is a great resource for diving deeper into the library's capabilities.