AutoGen Study Notes Series (7): Tutorial - Managing State

Published: 2025-03-07

This post covers the Managing State section of the Tutorial chapter in the official AutoGen docs. It looks at how state is managed inside a Team, and in particular how to save and load that state, which is essential for agent systems.

Official docs: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/state.html

[Note] One function name in the AutoGen library is semantically ambiguous: save_state() only retrieves the state; persisting it locally is left to you. Fortunately the state is described in JSON form, so that part is easy to implement.


Managing State

Earlier posts covered in detail how to build and use a Team and how to control the turn-taking inside it. We also mentioned that as long as you do not call team.reset(), the Team's internal context state is preserved. So why would you want to write that context state to disk?

  1. Cost savings: if you want a private Team with long-term memory, saving its historical state is a must; otherwise every restart requires resending your entire conversation history to the Team, and tokens get burned quickly;
  2. Debugging: exceptions are inevitable while testing code, so persisting the context is also essential to avoid losing everything when the program is interrupted unexpectedly;
  3. Easy migration: the context state is like a person's memory; as long as the Team structure stays the same, you can copy that memory any number of times and use it elsewhere, which makes it a great tool for parallel tasks (see the sketch after this list);
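
To make point 3 concrete, here is a minimal sketch (my own addition, not from the official tutorial) that forks one saved state into several fresh agents. It assumes the ./agent_state.json file produced by the demo later in this post already exists, and it reuses the AssistantAgent / load_state() API shown there:

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
import asyncio, json, os

os.environ["OPENAI_API_KEY"] = "your OpenAI API key"

# Fork one saved state into several fresh agents so they can work in parallel.
# Assumes "./agent_state.json" was produced by the save_state() demo below.
async def fork_agents(state_path: str, copies: int) -> list[AssistantAgent]:
    with open(state_path, "r") as f:
        state = json.load(f)
    agents = []
    for _ in range(copies):
        agent = AssistantAgent(
            name="assistant_agent",
            system_message="You are a helpful assistant",
            model_client=OpenAIChatCompletionClient(model="gpt-4o-2024-08-06"),
        )
        await agent.load_state(state)  # every copy starts with the same "memory"
        agents.append(agent)
    return agents

# agents = asyncio.run(fork_agents("./agent_state.json", 3))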

AutoGen gives both Agent and Team a convenient save_state() member function for retrieving state. Its return value is a Python dict that converts easily to JSON text, which means you can even edit an Agent's or Team's state by hand.
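
As a quick illustration of that round trip (my own sketch, not from the tutorial), the dict can be dumped to JSON, edited by hand, and fed back to load_state(). The "llm_context" / "messages" layout below follows the AssistantAgentState JSON shown later in this post:

import json

# A dict shaped like what `await assistant_agent.save_state()` returns
# (layout copied from the AssistantAgentState JSON shown later in this post).
agent_state = {
    "type": "AssistantAgentState",
    "version": "1.0.0",
    "llm_context": {"messages": []},
}

# The state is a plain dict, so it serializes straight to JSON text ...
text = json.dumps(agent_state, indent=4)

# ... and can be edited by hand before being restored with load_state().
edited = json.loads(text)
edited["llm_context"]["messages"].append(
    {"content": "Write a 3 line poem on lake tangayika", "source": "user", "type": "UserMessage"}
)
print(edited["llm_context"]["messages"])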


Saving and Loading Agents

For an Agent, use save_state() to retrieve its state and load_state() to restore it.

  • First, a demo that saves the Agent's state to disk after a run:
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
import os, asyncio, json

os.environ["OPENAI_API_KEY"] = "your OpenAI API key"
assistant_agent = AssistantAgent(
    name="assistant_agent",
    system_message="You are a helpful assistant",
    model_client=OpenAIChatCompletionClient(
        model="gpt-4o-2024-08-06",
    ),
)

#----------------------------------------------------#
# Helper that persists the Agent's state to a local JSON file
def persisting_state(file_name:str, state):
    with open(file_name, "w") as f:
        json.dump(state, f)

# Run a short interaction with the Agent
async def main():
    response = await assistant_agent.on_messages(
        [TextMessage(content="Write a 3 line poem on lake tangayika", source="user")], CancellationToken()
    )
    print(response.chat_message.content)
    print('-' * 50)
    # Retrieve the state only after the run has finished
    agent_state = await assistant_agent.save_state()
    print(agent_state)
    # Write the Agent's state to disk
    persisting_state("./agent_state.json", agent_state)

asyncio.run(main())

The run looks like this:

$ python demo.py 

[Screenshot: console output]

A state file for this Agent appears in the current directory. Its messages field is the chat history between the Agent and the LLM, i.e. the so-called memory:

{
    "type": "AssistantAgentState",
    "version": "1.0.0",
    "llm_context": {
        "messages": [
            {
                "content": "Write a 3 line poem on lake tangayika",
                "source": "user",
                "type": "UserMessage"
            },
            {
                "content": "In waters deep, where ancient secrets keep,  \nTanganyika mirrors the sky's boundless sweep,  \nSilent guardian of tales and dreams, it sleeps.  ",
                "source": "assistant_agent",
                "type": "AssistantMessage"
            }
        ]
    }
}
  • Next, a demo that reads the Agent state back from disk:
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
import os, asyncio, json

os.environ["OPENAI_API_KEY"] = "your OpenAI API key"

#----------------------------------------------------#
# Helper that loads a local JSON file
def load_state(file_path:str) -> dict:
    with open(file_path, "r") as f:
        state = json.load(f)
        return state

# Define a new Agent; at this point it has no memory and starts from an empty state
new_assistant_agent = AssistantAgent(
    name="assistant_agent",
    system_message="You are a helpful assistant",
    model_client=OpenAIChatCompletionClient(
        model="gpt-4o-2024-08-06",
    ),
)

async def main():
    # Load the Agent state from disk
    agent_state = load_state("./agent_state.json")
    await new_assistant_agent.load_state(agent_state)
    # Have the new Agent exchange one message with the model
    response = await new_assistant_agent.on_messages(
        [TextMessage(content="What was the last line of the previous poem you wrote", source="user")], CancellationToken()
    )
    print(response.chat_message.content)
    print('-' * 50)
    agent_state = await new_assistant_agent.save_state()
    print(agent_state)
    
asyncio.run(main())

The run looks like this:

$ python demo.py

[Screenshot: console output]

If you now save this new Agent's state again, you will see the following:

{
    "type": "AssistantAgentState",
    "version": "1.0.0",
    "llm_context": {
        "messages": [
            // This part is the previous Agent's chat history
            {
                "content": "Write a 3 line poem on lake tangayika",
                "source": "user",
                "type": "UserMessage"
            },
            {
                "content": "In waters deep, where ancient secrets keep,  \nTanganyika mirrors the sky's boundless sweep,  \nSilent guardian of tales and dreams, it sleeps.  ",
                "source": "assistant_agent",
                "type": "AssistantMessage"
            },
            // From here on is the new Agent's chat history
            {
                "content": "What was the last line of the previous poem you wrote",
                "source": "user",
                "type": "UserMessage"
            },
            {
                "content": "The last line of the previous poem I wrote was:  \nSilent guardian of tales and dreams, it sleeps.",
                "source": "assistant_agent",
                "type": "AssistantMessage"
            }
        ]
    }
}

Saving and Loading Teams

Just like retrieving and loading an Agent's state above, a Team's state is handled with the same-named functions save_state() and load_state().

  • First, a demo that saves the team's state:
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
import os, asyncio, json

os.environ["OPENAI_API_KEY"] = "your OpenAI API key"
def persisting_state(file_name:str, state):
    with open(file_name, "w") as f:
        json.dump(state, f)

assistant_agent = AssistantAgent(
    name="assistant_agent",
    system_message="You are a helpful assistant",
    model_client=OpenAIChatCompletionClient(
        model="gpt-4o-2024-08-06",
    ),
)
agent_team = RoundRobinGroupChat([assistant_agent], termination_condition=MaxMessageTermination(max_messages=2))

#----------------------------------------------------#
async def main():
    stream = agent_team.run_stream(task="Write a beautiful poem 3-line about lake tangayika")
    await Console(stream)
    # Retrieve the team's state
    team_state = await agent_team.save_state()
    print(team_state)
    persisting_state("./team_state.json", team_state)
    
asyncio.run(main())

The run looks like this:

$ python demo.py

[Screenshot: console output]

Notice that a Team's state contains quite a bit more than a single Agent's:

{
    "type": "TeamState",
    "version": "1.0.0",
    "agent_states": {
        "group_chat_manager/10c60d46-7e4d-4de0-ae64-e2b7eaf97119": {
            "type": "RoundRobinManagerState",
            "version": "1.0.0",
            "message_thread": [
                {
                    "source": "user",
                    "models_usage": null,
                    "content": "Write a beautiful poem 3-line about lake tangayika",
                    "type": "TextMessage"
                },
                {
                    "source": "assistant_agent",
                    "models_usage": {
                        "prompt_tokens": 29,
                        "completion_tokens": 34
                    },
                    "content": "In Tanganyika's embrace, skies mirror deep,  \nWhispering waves cradle secrets to keep,  \nNature's timeless dance, where the heart finds sleep.  ",
                    "type": "TextMessage"
                }
            ],
            "current_turn": 0,
            "next_speaker_index": 0
        },
        "collect_output_messages/10c60d46-7e4d-4de0-ae64-e2b7eaf97119": {},
        "assistant_agent/10c60d46-7e4d-4de0-ae64-e2b7eaf97119": {
            "type": "ChatAgentContainerState",
            "version": "1.0.0",
            "agent_state": {
                "type": "AssistantAgentState",
                "version": "1.0.0",
                "llm_context": {
                    "messages": [
                        {
                            "content": "Write a beautiful poem 3-line about lake tangayika",
                            "source": "user",
                            "type": "UserMessage"
                        },
                        {
                            "content": "In Tanganyika's embrace, skies mirror deep,  \nWhispering waves cradle secrets to keep,  \nNature's timeless dance, where the heart finds sleep.  ",
                            "source": "assistant_agent",
                            "type": "AssistantMessage"
                        }
                    ]
                }
            },
            "message_buffer": []
        }
    },
    "team_id": "10c60d46-7e4d-4de0-ae64-e2b7eaf97119"
}
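
Before loading it back, here is a small sketch (my own addition) that walks the team_state.json dict saved above and pulls out each member's chat history; the "agent_states" -> "agent_state" -> "llm_context" key layout is taken from the JSON shown above and may differ for other team types:

import json

# Walk the saved team state and print each member's LLM context.
with open("./team_state.json", "r") as f:
    team_state = json.load(f)

for key, sub_state in team_state["agent_states"].items():
    messages = sub_state.get("agent_state", {}).get("llm_context", {}).get("messages", [])
    print(f"{key}: {len(messages)} message(s) in LLM context")
    for msg in messages:
        print(f"  [{msg['source']}] {msg['content'][:60]}")
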
  • Then, a demo that loads the locally saved state:
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
import os, asyncio, json

os.environ["OPENAI_API_KEY"] = "your OpenAI API key"
def load_state(file_path:str) -> dict:
    with open(file_path, "r") as f:
        state = json.load(f)
        return state
        
assistant_agent = AssistantAgent(
    name="assistant_agent",
    system_message="You are a helpful assistant",
    model_client=OpenAIChatCompletionClient(
        model="gpt-4o-2024-08-06",
    ),
)
agent_team = RoundRobinGroupChat([assistant_agent], termination_condition=MaxMessageTermination(max_messages=2))

#----------------------------------------------------#
async def main():
    # Load the team's state from disk
    team_state = load_state("./team_state.json")
    await agent_team.load_state(team_state)
    stream = agent_team.run_stream(task="What was the last line of the poem you wrote?")
    await Console(stream)
    print('-' * 50)
    team_state = await agent_team.save_state()
    print(team_state)
    
asyncio.run(main())

The run looks like this:

$ python demo.py

[Screenshot: console output]


Persisting State (File or Database)

This last short part is about writing Agent and Team state to disk and loading it back. The examples above already used these functions; the official tutorial does not wrap them up, so here are the helpers used above one more time:

# Write state to a local file
def persisting_state(file_name:str, state):
    with open(file_name, "w") as f:
        json.dump(state, f)

# Load state from a local file
def load_state(file_path:str) -> dict:
    with open(file_path, "r") as f:
        state = json.load(f)
        return state
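
The heading also mentions a database. As a rough sketch under my own assumptions (the table name and schema below are made up for illustration and are not part of AutoGen), the same JSON text can just as well be stored in SQLite using the standard library:

import json, sqlite3

# Store and retrieve serialized state in SQLite instead of a flat file.
def save_state_to_db(db_path: str, state_key: str, state: dict) -> None:
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS agent_state (state_key TEXT PRIMARY KEY, state TEXT)")
        conn.execute(
            "INSERT OR REPLACE INTO agent_state (state_key, state) VALUES (?, ?)",
            (state_key, json.dumps(state)),
        )

def load_state_from_db(db_path: str, state_key: str) -> dict:
    with sqlite3.connect(db_path) as conn:
        row = conn.execute("SELECT state FROM agent_state WHERE state_key = ?", (state_key,)).fetchone()
        return json.loads(row[0]) if row else {}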

Take a breath

That wraps up the Tutorial chapter of the official docs. Next we move on to the Advanced part of the official tutorial. The revolution is not yet complete; comrades must keep pushing on.