How to integrate LangGraph with AutoGen, CrewAI, and other frameworks

This guide shows how to integrate AutoGen agents with LangGraph to take advantage of features like persistence, streaming, and memory, and then deploy the integrated solution to LangGraph Platform for scalable production use. In this guide we build a LangGraph chatbot that wraps an AutoGen agent, but you can apply the same approach to other frameworks.

Integrating AutoGen with LangGraph provides several benefits, including persistence, streaming, and memory support.

Prerequisites

  • Python 3.9+
  • AutoGen: pip install autogen
  • LangGraph: pip install langgraph
  • An OpenAI API key

Setup

Set up your environment:

import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")

1. Define the AutoGen agent

Create an AutoGen agent that can execute code. This example is adapted from AutoGen's official tutorial.

import autogen
import os

config_list = [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]

llm_config = {
    "timeout": 600,
    "cache_seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

autogen_agent = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "web",
        "use_docker": False,
    },  # Set use_docker=True to run the generated code in Docker if it is available. Using Docker is safer than running the generated code directly.
    llm_config=llm_config,
    system_message="Reply TERMINATE if the task has been solved at full satisfaction. Otherwise, reply CONTINUE, or the reason why the task is not solved yet.",
)
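
The `is_termination_msg` lambda above is what stops the AutoGen chat loop. A minimal stand-alone sketch of that predicate, with invented sample messages for illustration:

```python
# Sketch of the termination predicate passed to UserProxyAgent above.
# The chat ends when a message's content, after trailing whitespace is
# stripped, ends with the sentinel word TERMINATE.
def is_termination_msg(msg: dict) -> bool:
    return msg.get("content", "").rstrip().endswith("TERMINATE")

print(is_termination_msg({"content": "All done.\nTERMINATE"}))   # True
print(is_termination_msg({"content": "TERMINATE, then more"}))   # False
print(is_termination_msg({"role": "assistant"}))                 # False (no content)
```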

2. Create the graph

We will now create a LangGraph chatbot graph that calls the AutoGen agent.

from langchain_core.messages import convert_to_openai_messages
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.memory import InMemorySaver

def call_autogen_agent(state: MessagesState):
    # Convert LangGraph messages to AutoGen's OpenAI format
    messages = convert_to_openai_messages(state["messages"])

    # Get the last user message
    last_message = messages[-1]

    # Pass the previous message history as context (excluding the last message)
    carryover = messages[:-1] if len(messages) > 1 else []

    # Initiate the chat with AutoGen
    response = user_proxy.initiate_chat(
        autogen_agent,
        message=last_message,
        carryover=carryover
    )

    # Extract the final response from the agent
    final_content = response.chat_history[-1]["content"]

    # Return the response in LangGraph format
    return {"messages": {"role": "assistant", "content": final_content}}

# Create a checkpointer to give the graph memory for persistence
checkpointer = InMemorySaver()

# Build the graph
builder = StateGraph(MessagesState)
builder.add_node("autogen", call_autogen_agent)
builder.add_edge(START, "autogen")

# Compile with the checkpointer to enable persistence
graph = builder.compile(checkpointer=checkpointer)

from IPython.display import display, Image

display(Image(graph.get_graph().draw_mermaid_png()))
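
The last-message/carryover split inside `call_autogen_agent` can be illustrated with plain OpenAI-style dicts (the sample messages below are invented):

```python
# Illustrates how call_autogen_agent splits the converted history:
# the newest message becomes the prompt, and everything before it is
# handed to AutoGen as carryover context.
messages = [
    {"role": "user", "content": "Find the Fibonacci numbers between 10 and 30"},
    {"role": "assistant", "content": "They are 13 and 21."},
    {"role": "user", "content": "Multiply the last number by 3"},
]

last_message = messages[-1]
carryover = messages[:-1] if len(messages) > 1 else []

print(last_message["content"])  # Multiply the last number by 3
print(len(carryover))           # 2
```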


3. Test the graph locally

Before deploying to LangGraph Platform, you can test the graph locally:

# Pass a thread ID to persist the agent's outputs for later interactions
# highlight-next-line
config = {"configurable": {"thread_id": "1"}}

for chunk in graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": "Find the numbers between 10 and 30 in the Fibonacci sequence",
            }
        ]
    },
    # highlight-next-line
    config,
):
    print(chunk)

Output:

user_proxy (to assistant):

Find the numbers between 10 and 30 in the Fibonacci sequence

--------------------------------------------------------------------------------
assistant (to user_proxy):

To find numbers between 10 and 30 in the Fibonacci sequence, we can generate the Fibonacci sequence and check which numbers fall within this range. Here's a plan:

1. Generate Fibonacci numbers starting from 0.
2. Continue generating until the numbers exceed 30.
3. Collect and print the numbers that are between 10 and 30.

...
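
The plan the assistant describes can be sketched directly in Python (a stand-alone illustration of the computation, separate from the agent run):

```python
# Generate Fibonacci numbers and keep those within [low, high],
# following the plan in the assistant's reply above.
def fibonacci_between(low: int, high: int) -> list[int]:
    result = []
    a, b = 0, 1
    while a <= high:
        if a >= low:
            result.append(a)
        a, b = b, a + b
    return result

print(fibonacci_between(10, 30))  # [13, 21]
```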

Because we leverage LangGraph's persistence, we can now continue the conversation using the same thread ID -- LangGraph automatically passes the previous history to the AutoGen agent:

for chunk in graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": "Multiply the last number by 3",
            }
        ]
    },
    # highlight-next-line
    config,
):
    print(chunk)

Output:

user_proxy (to assistant):

Multiply the last number by 3
Context:
Find the numbers between 10 and 30 in the Fibonacci sequence
The Fibonacci numbers between 10 and 30 are 13 and 21.

These numbers are part of the Fibonacci sequence, which is generated by adding the two preceding numbers to get the next number, starting from 0 and 1.

The sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

As you can see, 13 and 21 are the only numbers in this sequence that fall between 10 and 30.

TERMINATE

--------------------------------------------------------------------------------
assistant (to user_proxy):

The last number in the Fibonacci sequence between 10 and 30 is 21. Multiplying 21 by 3 gives:

21 * 3 = 63

TERMINATE

--------------------------------------------------------------------------------
{'autogen': {'messages': {'role': 'assistant', 'content': 'The last number in the Fibonacci sequence between 10 and 30 is 21. Multiplying 21 by 3 gives:\n\n21 * 3 = 63\n\nTERMINATE'}}}
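
Conceptually, thread-scoped persistence works like a store keyed by `thread_id`: each thread accumulates its own history, and an unrelated thread starts fresh. The toy class below is an invented illustration of that behavior, not LangGraph's actual checkpointer API:

```python
# Toy illustration of thread-scoped checkpointing: each thread_id
# accumulates its own message history, so a second turn on the same
# thread sees the first turn's messages. This mimics the observable
# behavior of InMemorySaver; the class itself is NOT LangGraph's API.
class ToyCheckpointer:
    def __init__(self):
        self._threads: dict[str, list[dict]] = {}

    def run_turn(self, thread_id: str, user_message: dict) -> list[dict]:
        history = self._threads.setdefault(thread_id, [])
        history.append(user_message)
        return history

saver = ToyCheckpointer()
saver.run_turn("1", {"role": "user", "content": "Find Fibonacci numbers 10-30"})
turn2 = saver.run_turn("1", {"role": "user", "content": "Multiply the last number by 3"})
other = saver.run_turn("2", {"role": "user", "content": "Hello"})

print(len(turn2))  # 2 -- thread "1" kept its earlier message
print(len(other))  # 1 -- thread "2" starts fresh
```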

4. Prepare for deployment

To deploy to LangGraph Platform, create a file structure like this:

my-autogen-agent/
├── agent.py          # Your main agent code
├── requirements.txt  # Python dependencies
└── langgraph.json    # LangGraph configuration

agent.py:
import os
import autogen
from langchain_core.messages import convert_to_openai_messages
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.memory import InMemorySaver

# AutoGen configuration
config_list = [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]

llm_config = {
    "timeout": 600,
    "cache_seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

# Create the AutoGen agents
autogen_agent = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "/tmp/autogen_work",
        "use_docker": False,
    },
    llm_config=llm_config,
    system_message="Reply TERMINATE if the task has been solved at full satisfaction.",
)

def call_autogen_agent(state: MessagesState):
    """Node function that calls the AutoGen agent"""
    messages = convert_to_openai_messages(state["messages"])
    last_message = messages[-1]
    carryover = messages[:-1] if len(messages) > 1 else []

    response = user_proxy.initiate_chat(
        autogen_agent,
        message=last_message,
        carryover=carryover
    )

    final_content = response.chat_history[-1]["content"]
    return {"messages": {"role": "assistant", "content": final_content}}

# Create and compile the graph
def create_graph():
    checkpointer = InMemorySaver()
    builder = StateGraph(MessagesState)
    builder.add_node("autogen", call_autogen_agent)
    builder.add_edge(START, "autogen")
    return builder.compile(checkpointer=checkpointer)

# Export the graph for LangGraph Platform
graph = create_graph()
requirements.txt:

langgraph>=0.1.0
ag2>=0.2.0
langchain-core>=0.1.0
langchain-openai>=0.0.5

langgraph.json:

{
  "dependencies": ["."],
  "graphs": {
    "autogen_agent": "./agent.py:graph"
  },
  "env": ".env"
}

5. Deploy to LangGraph Platform

Deploy the graph using the LangGraph Platform CLI:

pip install -U langgraph-cli
langgraph deploy --config langgraph.json