Autogen_core: Agent and Agent Runtime

Contents

1. Code

```python
from dataclasses import dataclass

from autogen_core import AgentId, MessageContext, RoutedAgent, message_handler


@dataclass
class MyMessageType:
    content: str


class MyAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("MyAgent")

    @message_handler
    async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:
        print(f"{self.id.type} received message: {message.content}")
```
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


class MyAssistant(RoutedAgent):
    def __init__(self, name: str) -> None:
        super().__init__(name)
        model_client = OpenAIChatCompletionClient(
            model="GLM-4-Air-0111",
            api_key="your api key",  # Replace with your actual API key
            base_url="https://open.bigmodel.cn/api/paas/v4/",
            model_capabilities={
                "vision": True,
                "function_calling": True,
                "json_output": True,
            },
        )
        self._delegate = AssistantAgent(name, model_client=model_client)

    @message_handler
    async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:
        print(f"{self.id.type} received message: {message.content}")
        response = await self._delegate.on_messages(
            [TextMessage(content=message.content, source="user")], ctx.cancellation_token
        )
        print(f"{self.id.type} responded: {response.chat_message.content}")
```
```python
from autogen_core import SingleThreadedAgentRuntime

runtime = SingleThreadedAgentRuntime()
await MyAgent.register(runtime, "my_agent", lambda: MyAgent())
await MyAssistant.register(runtime, "my_assistant", lambda: MyAssistant("my_assistant"))
```

```
AgentType(type='my_assistant')
```
```python
runtime.start()  # Start processing messages in the background.
await runtime.send_message(MyMessageType("Hello, World!"), AgentId("my_agent", "default"))
await runtime.send_message(MyMessageType("Hello, World!"), AgentId("my_assistant", "default"))
await runtime.stop()  # Stop processing messages in the background.
```

```
my_agent received message: Hello, World!
my_assistant received message: Hello, World!
my_assistant responded: Hello! How can I help you today?
```

2. Code Explanation

This code shows how to create and run two agents with the AutoGen framework, where each agent receives and responds to a specific message type. The logic breaks down as follows:

Part 1: Defining the Message Type and the Agent

```python
from dataclasses import dataclass

from autogen_core import AgentId, MessageContext, RoutedAgent, message_handler


@dataclass
class MyMessageType:
    content: str


class MyAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("MyAgent")

    @message_handler
    async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:
        print(f"{self.id.type} received message: {message.content}")
```
  1. Define the message type: @dataclass declares a data class MyMessageType that represents a message with a single content field.
  2. Define the agent class: MyAgent subclasses RoutedAgent; its initializer calls the parent initializer with a description string for the agent.
  3. Message handler: the @message_handler decorator marks the async method handle_my_message_type as the handler for MyMessageType messages; when one arrives, the agent prints its content.
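To make the routing idea concrete, here is a minimal self-contained sketch of how a decorator can record a handler's message type from its annotation and dispatch by type(message). This is a toy illustration only, not autogen_core's actual implementation; MiniRoutedAgent, mini_handler, and Greeting are all hypothetical names invented for this example.

```python
import asyncio
import inspect
import typing
from dataclasses import dataclass


@dataclass
class Greeting:
    content: str


def mini_handler(func):
    # Record the handler's message type from its first non-self annotation.
    # This is roughly the idea behind autogen_core's @message_handler.
    params = list(inspect.signature(func).parameters)
    func._handles = typing.get_type_hints(func)[params[1]]
    return func


class MiniRoutedAgent:
    """Toy stand-in for RoutedAgent: routes each incoming message to the
    handler whose annotated parameter type matches type(message)."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._handlers = {}
        for _, method in inspect.getmembers(self, predicate=inspect.ismethod):
            msg_type = getattr(method, "_handles", None)
            if msg_type is not None:
                self._handlers[msg_type] = method

    async def on_message(self, message):
        return await self._handlers[type(message)](message)


class EchoAgent(MiniRoutedAgent):
    @mini_handler
    async def handle_greeting(self, message: Greeting) -> str:
        return f"{self.name} received: {message.content}"


result = asyncio.run(EchoAgent("echo").on_message(Greeting("hi")))
print(result)  # echo received: hi
```

The real framework does considerably more (serialization, cancellation, multiple handlers), but the type-annotation-driven dispatch is the core pattern.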

Part 2: Defining the Assistant Agent

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


class MyAssistant(RoutedAgent):
    def __init__(self, name: str) -> None:
        super().__init__(name)
        model_client = OpenAIChatCompletionClient(
            model="GLM-4-Air-0111",
            api_key="your api key",  # Replace with your actual API key
            base_url="https://open.bigmodel.cn/api/paas/v4/",
            model_capabilities={
                "vision": True,
                "function_calling": True,
                "json_output": True,
            },
        )
        self._delegate = AssistantAgent(name, model_client=model_client)

    @message_handler
    async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:
        print(f"{self.id.type} received message: {message.content}")
        response = await self._delegate.on_messages(
            [TextMessage(content=message.content, source="user")], ctx.cancellation_token
        )
        print(f"{self.id.type} responded: {response.chat_message.content}")
```
  1. Define the assistant agent class: MyAssistant also subclasses RoutedAgent.
  2. Initialize the assistant: the initializer builds an OpenAIChatCompletionClient (here pointed at an OpenAI-compatible endpoint) and wraps it in an AssistantAgent, which the agent stores as its delegate.
  3. Message handler: handle_my_message_type prints the incoming content, forwards it to the delegate as a TextMessage via on_messages, and prints the model's reply.
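The delegation pattern above can be isolated from the model-client details. The following sketch swaps the real OpenAIChatCompletionClient for a stub so it runs offline; StubChatClient, its complete method, and DelegatingAgent are hypothetical names for illustration only.

```python
import asyncio


class StubChatClient:
    """Stand-in for a real model client (e.g. OpenAIChatCompletionClient).
    It just echoes a canned reply so the example needs no API key."""

    async def complete(self, prompt: str) -> str:
        return f"Echo: {prompt}"


class DelegatingAgent:
    """Mirrors the MyAssistant pattern: the agent only routes messages,
    while an injected delegate object performs the actual model call."""

    def __init__(self, name: str, client: StubChatClient) -> None:
        self.name = name
        self._delegate = client  # analogous to self._delegate = AssistantAgent(...)

    async def handle(self, content: str) -> str:
        reply = await self._delegate.complete(content)
        return f"{self.name} responded: {reply}"


reply = asyncio.run(DelegatingAgent("assistant", StubChatClient()).handle("Hello"))
print(reply)  # assistant responded: Echo: Hello
```

Keeping the client injectable like this makes the agent easy to test and lets you swap endpoints (or stubs) without touching the handler logic.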

Part 3: Registering and Running the Agents

```python
from autogen_core import SingleThreadedAgentRuntime

runtime = SingleThreadedAgentRuntime()
await MyAgent.register(runtime, "my_agent", lambda: MyAgent())
await MyAssistant.register(runtime, "my_assistant", lambda: MyAssistant("my_assistant"))
```
  1. Create the runtime: a SingleThreadedAgentRuntime instance manages the agents' execution.
  2. Register the agents: register installs a factory function for each agent type rather than an instance; it returns an AgentType, which is why the notebook output shows AgentType(type='my_assistant').
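The factory-based registration can be sketched with a toy registry that instantiates agents lazily, one instance per (type, key) pair, similar in spirit to how AgentId("my_agent", "default") resolves to an instance. MiniRuntime and its methods are hypothetical names for this illustration, not AutoGen APIs.

```python
class MiniRuntime:
    """Toy registry: stores one factory per agent type and creates an
    instance lazily the first time a (type, key) id is looked up."""

    def __init__(self) -> None:
        self._factories = {}
        self._instances = {}

    def register(self, agent_type: str, factory) -> None:
        self._factories[agent_type] = factory

    def get(self, agent_type: str, key: str = "default"):
        # Same (type, key) -> same instance; a new key -> a fresh instance.
        if (agent_type, key) not in self._instances:
            self._instances[(agent_type, key)] = self._factories[agent_type]()
        return self._instances[(agent_type, key)]


rt = MiniRuntime()
rt.register("my_agent", lambda: object())
a = rt.get("my_agent")
b = rt.get("my_agent")
c = rt.get("my_agent", key="other")
print(a is b, a is c)  # True False
```

Registering factories instead of instances is what lets the runtime own the agent lifecycle and create separate instances per key on demand.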

Part 4: Sending Messages and Stopping Processing

```python
runtime.start()  # Start processing messages in the background.
await runtime.send_message(MyMessageType("Hello, World!"), AgentId("my_agent", "default"))
await runtime.send_message(MyMessageType("Hello, World!"), AgentId("my_assistant", "default"))
await runtime.stop()  # Stop processing messages in the background.
```
  1. Start processing: runtime.start() begins handling messages in the background.
  2. Send messages: runtime.send_message delivers a MyMessageType message to each agent, addressed by AgentId.
  3. Stop processing: await runtime.stop() shuts down background message processing.
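The start/send/stop lifecycle can be sketched as a background task draining an asyncio queue. This is a simplified illustration of the idea, not SingleThreadedAgentRuntime's actual implementation; MiniQueueRuntime and its methods are invented for this example.

```python
import asyncio


class MiniQueueRuntime:
    """Toy message loop: start() launches a background task that drains an
    asyncio.Queue; stop() waits for queued messages to finish, then exits."""

    def __init__(self) -> None:
        self._queue: asyncio.Queue = asyncio.Queue()
        self._task = None
        self.delivered = []

    def start(self) -> None:
        self._task = asyncio.create_task(self._run())

    async def _run(self) -> None:
        while True:
            handler, message = await self._queue.get()
            handler(message)  # dispatch to the target handler
            self._queue.task_done()

    async def send_message(self, message, handler) -> None:
        await self._queue.put((handler, message))

    async def stop(self) -> None:
        await self._queue.join()  # let already-queued messages finish first
        self._task.cancel()


async def main():
    rt = MiniQueueRuntime()
    rt.start()
    await rt.send_message("Hello, World!", rt.delivered.append)
    await rt.send_message("Second", rt.delivered.append)
    await rt.stop()
    return rt.delivered


delivered = asyncio.run(main())
print(delivered)  # ['Hello, World!', 'Second']
```

Note that send_message only enqueues; delivery happens on the background task, which is why the runtime must be started before messages flow and stopped only after they have been processed.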

Summary

This example creates two agents with the AutoGen framework: MyAgent simply receives and prints messages, while MyAssistant forwards incoming messages to a chat model and prints the response. The SingleThreadedAgentRuntime ties it together, letting you start message processing, deliver messages, and shut the agents down.

3. A Similar Example

```python
from dataclasses import dataclass

from autogen_core import AgentId, MessageContext, RoutedAgent, message_handler


# Define a new message type that contains a number and the operation to be performed
@dataclass
class MathOperationMessage:
    number1: float
    number2: float
    operation: str  # operation can be 'add', 'subtract', 'multiply', 'divide'


class MathAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("MathAgent")

    @message_handler
    async def handle_math_operation_message(self, message: MathOperationMessage, ctx: MessageContext) -> None:
        if message.operation == "add":
            result = message.number1 + message.number2
        elif message.operation == "subtract":
            result = message.number1 - message.number2
        elif message.operation == "multiply":
            result = message.number1 * message.number2
        elif message.operation == "divide":
            if message.number2 != 0:
                result = message.number1 / message.number2
            else:
                result = "Error: Division by zero"
        else:
            result = "Error: Unknown operation"

        print(f"{self.id.type} received message: {message}")
        print(f"{self.id.type} performed operation: {message.operation} on {message.number1} and {message.number2}")
        print(f"{self.id.type} result: {result}")


# Define another assistant agent that can respond with basic information
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


class MathAssistant(RoutedAgent):
    def __init__(self, name: str) -> None:
        super().__init__(name)
        model_client = OpenAIChatCompletionClient(
            model="GLM-4-Air-0111",
            api_key="your api key",  # Replace with your actual API key
            base_url="https://open.bigmodel.cn/api/paas/v4/",
            model_capabilities={
                "vision": True,
                "function_calling": True,
                "json_output": True,
            },
        )
        self._delegate = AssistantAgent(name, model_client=model_client)

    @message_handler
    async def handle_math_message(self, message: MathOperationMessage, ctx: MessageContext) -> None:
        print(f"{self.id.type} received message: {message}")
        response = await self._delegate.on_messages(
            [TextMessage(content=f"Performing operation: {message.operation} on {message.number1} and {message.number2}", source="user")],
            ctx.cancellation_token
        )
        print(f"{self.id.type} responded: {response.chat_message.content}")


# Set up the runtime and agents
from autogen_core import SingleThreadedAgentRuntime

runtime = SingleThreadedAgentRuntime()
await MathAgent.register(runtime, "math_agent", lambda: MathAgent())
await MathAssistant.register(runtime, "math_assistant", lambda: MathAssistant("math_assistant"))

# Start processing messages
runtime.start()

# Send a message to the MathAgent for processing
await runtime.send_message(MathOperationMessage(3, 5, "add"), AgentId("math_agent", "default"))
await runtime.send_message(MathOperationMessage(10, 2, "subtract"), AgentId("math_agent", "default"))

# Send a message to the MathAssistant to provide context about the operation
await runtime.send_message(MathOperationMessage(7, 4, "multiply"), AgentId("math_assistant", "default"))

# Stop the runtime
await runtime.stop()
```
````
math_agent received message: MathOperationMessage(number1=3, number2=5, operation='add')
math_agent performed operation: add on 3 and 5
math_agent result: 8
math_agent received message: MathOperationMessage(number1=10, number2=2, operation='subtract')
math_agent performed operation: subtract on 10 and 2
math_agent result: 8
math_assistant received message: MathOperationMessage(number1=7, number2=4, operation='multiply')
math_assistant responded: ```
7 * 4 = 28
```

TERMINATE
````

Reference: https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/framework/agent-and-agent-runtime.html
