Autogen_core: Agent and Agent Runtime

Contents

1. Code
2. Code Explanation
3. A Similar Example

1. Code

```python
from dataclasses import dataclass

from autogen_core import AgentId, MessageContext, RoutedAgent, message_handler


@dataclass
class MyMessageType:
    content: str


class MyAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("MyAgent")

    @message_handler
    async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:
        print(f"{self.id.type} received message: {message.content}")
```

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

class MyAssistant(RoutedAgent):
    def __init__(self, name: str) -> None:
        super().__init__(name)
        model_client = OpenAIChatCompletionClient(
            model="GLM-4-Air-0111",
            api_key="your api key",
            base_url="https://open.bigmodel.cn/api/paas/v4/",
            model_capabilities={
                "vision": True,
                "function_calling": True,
                "json_output": True,
            },
        )
        self._delegate = AssistantAgent(name, model_client=model_client)

    @message_handler
    async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:
        print(f"{self.id.type} received message: {message.content}")
        response = await self._delegate.on_messages(
            [TextMessage(content=message.content, source="user")], ctx.cancellation_token
        )
        print(f"{self.id.type} responded: {response.chat_message.content}")
```

```python
from autogen_core import SingleThreadedAgentRuntime

runtime = SingleThreadedAgentRuntime()
await MyAgent.register(runtime, "my_agent", lambda: MyAgent())
await MyAssistant.register(runtime, "my_assistant", lambda: MyAssistant("my_assistant"))
```

```
AgentType(type='my_assistant')
```

```python
runtime.start()  # Start processing messages in the background.
await runtime.send_message(MyMessageType("Hello, World!"), AgentId("my_agent", "default"))
await runtime.send_message(MyMessageType("Hello, World!"), AgentId("my_assistant", "default"))
await runtime.stop()  # Stop processing messages in the background.
```

```
my_agent received message: Hello, World!
my_assistant received message: Hello, World!
my_assistant responded: Hello! How can I help you today?
```

2. Code Explanation

This code shows how to create and run two agents with the AutoGen framework, where each agent can receive and respond to a specific message type. The logic breaks down as follows:

Part 1: Defining the Message Type and the Agent

```python
from dataclasses import dataclass

from autogen_core import AgentId, MessageContext, RoutedAgent, message_handler


@dataclass
class MyMessageType:
    content: str


class MyAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("MyAgent")

    @message_handler
    async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:
        print(f"{self.id.type} received message: {message.content}")
```

  1. Define the message type: the @dataclass decorator defines a data class MyMessageType that represents a message with a single content field.
  2. Define the agent class: MyAgent inherits from RoutedAgent; its constructor calls the parent constructor with a description string ("MyAgent").
  3. Message handler: the @message_handler decorator marks the async method handle_my_message_type as the handler for MyMessageType messages; when such a message arrives, the handler prints its content. Routing is based on the handler's message type annotation, so an agent can declare several handlers, one per message type (see the sketch below).
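
Because routing is driven by the type annotation on the message parameter, a single RoutedAgent can declare several handlers and the runtime dispatches each message to the matching one. Below is a minimal, hypothetical sketch of that pattern (the payload classes and handler names are illustrative and not part of the original example):

```python
from dataclasses import dataclass

from autogen_core import MessageContext, RoutedAgent, message_handler


@dataclass
class TextPayload:
    content: str


@dataclass
class ImagePayload:
    url: str


class MultiHandlerAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("Agent with one handler per message type")

    @message_handler
    async def on_text(self, message: TextPayload, ctx: MessageContext) -> None:
        # Invoked only for TextPayload messages.
        print(f"{self.id.type} got text: {message.content}")

    @message_handler
    async def on_image(self, message: ImagePayload, ctx: MessageContext) -> None:
        # Invoked only for ImagePayload messages.
        print(f"{self.id.type} got image URL: {message.url}")
```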

Part 2: Defining the Assistant Agent

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

class MyAssistant(RoutedAgent):
    def __init__(self, name: str) -> None:
        super().__init__(name)
        model_client = OpenAIChatCompletionClient(
            model="GLM-4-Air-0111",
            api_key="your api key",
            base_url="https://open.bigmodel.cn/api/paas/v4/",
            model_capabilities={
                "vision": True,
                "function_calling": True,
                "json_output": True,
            },
        )
        self._delegate = AssistantAgent(name, model_client=model_client)

    @message_handler
    async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:
        print(f"{self.id.type} received message: {message.content}")
        response = await self._delegate.on_messages(
            [TextMessage(content=message.content, source="user")], ctx.cancellation_token
        )
        print(f"{self.id.type} responded: {response.chat_message.content}")
```

  1. Define the assistant agent class: MyAssistant also inherits from RoutedAgent.
  2. Initialize the assistant agent: the constructor builds an OpenAIChatCompletionClient (here pointing at a GLM model through an OpenAI-compatible endpoint) and wraps it in an AssistantAgent, which is stored as the agent's delegate.
  3. Message handler: the async method handle_my_message_type prints the incoming message, forwards its content to the delegate via on_messages as a TextMessage, and prints the model's reply. A handler can also return a value to the sender instead of only printing (see the sketch below).
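
The handlers in this example only print, but a handler may also declare a return type and return a value; the sender then receives that value as the result of runtime.send_message, giving a simple request/response pattern. A minimal sketch under that assumption (EchoAgent and its message types are illustrative, not part of the original):

```python
from dataclasses import dataclass

from autogen_core import MessageContext, RoutedAgent, message_handler


@dataclass
class EchoRequest:
    content: str


@dataclass
class EchoResponse:
    content: str


class EchoAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("EchoAgent")

    @message_handler
    async def handle_echo(self, message: EchoRequest, ctx: MessageContext) -> EchoResponse:
        # The returned value becomes the result awaited by the caller of send_message.
        return EchoResponse(content=message.content.upper())
```

After registering EchoAgent with the runtime, awaiting runtime.send_message(EchoRequest("hi"), AgentId("echo_agent", "default")) would yield an EchoResponse.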

Part 3: Registering and Running the Agents

```python
from autogen_core import SingleThreadedAgentRuntime

runtime = SingleThreadedAgentRuntime()
await MyAgent.register(runtime, "my_agent", lambda: MyAgent())
await MyAssistant.register(runtime, "my_assistant", lambda: MyAssistant("my_assistant"))
```

  1. Create the runtime: a SingleThreadedAgentRuntime instance manages the agents and delivers messages to them.
  2. Register the agents: the register class method binds an agent type name to a factory; here MyAgent is registered as "my_agent" and MyAssistant as "my_assistant". Instances are created lazily, one per AgentId key, when messages first arrive (see the sketch below).
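
Registration only stores the factory; the runtime builds an agent instance on demand the first time a message is addressed to a given AgentId, and each distinct key under the same type gets its own instance. A hypothetical sketch reusing MyAssistant (the "support_assistant" type name and the customer keys are made up for illustration):

```python
from autogen_core import AgentId, SingleThreadedAgentRuntime

runtime = SingleThreadedAgentRuntime()
# The factory is only invoked when a message first reaches an instance.
await MyAssistant.register(runtime, "support_assistant", lambda: MyAssistant("support_assistant"))

runtime.start()
# Same registered type, two different keys: the runtime creates two separate instances.
await runtime.send_message(MyMessageType("ticket 1"), AgentId("support_assistant", "customer-a"))
await runtime.send_message(MyMessageType("ticket 2"), AgentId("support_assistant", "customer-b"))
await runtime.stop()
```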

Part 4: Sending Messages and Stopping the Runtime

```python
runtime.start()  # Start processing messages in the background.
await runtime.send_message(MyMessageType("Hello, World!"), AgentId("my_agent", "default"))
await runtime.send_message(MyMessageType("Hello, World!"), AgentId("my_assistant", "default"))
await runtime.stop()  # Stop processing messages in the background.
```

  1. Start message processing: runtime.start() begins processing messages in the background.
  2. Send messages: runtime.send_message delivers a MyMessageType message to each agent, addressed by an AgentId made of the registered type and an instance key ("default").
  3. Stop message processing: await runtime.stop() stops the background processing (an alternative that waits for the queue to drain is sketched below).
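
Instead of stopping immediately after the sends, the runtime can also be asked to stop once its message queue has drained, which is convenient when messages are published rather than individually awaited. A short sketch, assuming the stop_when_idle() coroutine of SingleThreadedAgentRuntime:

```python
runtime.start()
await runtime.send_message(MyMessageType("One more message"), AgentId("my_agent", "default"))
# Wait until all queued messages have been processed, then stop the runtime.
await runtime.stop_when_idle()
```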

Summary

This code shows how to build two agents with the AutoGen framework: MyAgent receives and prints messages, while MyAssistant receives messages and generates responses through an OpenAI-compatible chat model. The SingleThreadedAgentRuntime is used to start message processing, deliver messages to the agents, and stop processing again.
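
The top-level await statements above assume a notebook or another environment with a running event loop. In a plain Python script, the same flow would be wrapped in an async main function, roughly like this:

```python
import asyncio

from autogen_core import AgentId, SingleThreadedAgentRuntime


async def main() -> None:
    runtime = SingleThreadedAgentRuntime()
    await MyAgent.register(runtime, "my_agent", lambda: MyAgent())
    await MyAssistant.register(runtime, "my_assistant", lambda: MyAssistant("my_assistant"))

    runtime.start()
    await runtime.send_message(MyMessageType("Hello, World!"), AgentId("my_agent", "default"))
    await runtime.send_message(MyMessageType("Hello, World!"), AgentId("my_assistant", "default"))
    await runtime.stop()


if __name__ == "__main__":
    asyncio.run(main())
```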

3. A Similar Example

```python
from dataclasses import dataclass

from autogen_core import AgentId, MessageContext, RoutedAgent, message_handler


# Define a new message type that contains a number and the operation to be performed
@dataclass
class MathOperationMessage:
    number1: float
    number2: float
    operation: str  # operation can be 'add', 'subtract', 'multiply', 'divide'


class MathAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("MathAgent")

    @message_handler
    async def handle_math_operation_message(self, message: MathOperationMessage, ctx: MessageContext) -> None:
        if message.operation == "add":
            result = message.number1 + message.number2
        elif message.operation == "subtract":
            result = message.number1 - message.number2
        elif message.operation == "multiply":
            result = message.number1 * message.number2
        elif message.operation == "divide":
            if message.number2 != 0:
                result = message.number1 / message.number2
            else:
                result = "Error: Division by zero"
        else:
            result = "Error: Unknown operation"

        print(f"{self.id.type} received message: {message}")
        print(f"{self.id.type} performed operation: {message.operation} on {message.number1} and {message.number2}")
        print(f"{self.id.type} result: {result}")


# Define another assistant agent that can respond with basic information
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

class MathAssistant(RoutedAgent):
    def __init__(self, name: str) -> None:
        super().__init__(name)
        model_client = OpenAIChatCompletionClient(
            model="GLM-4-Air-0111",
            api_key="your api key",  # Replace with your actual API key
            base_url="https://open.bigmodel.cn/api/paas/v4/",
            model_capabilities={
                "vision": True,
                "function_calling": True,
                "json_output": True,
            }
        )
        self._delegate = AssistantAgent(name, model_client=model_client)

    @message_handler
    async def handle_math_message(self, message: MathOperationMessage, ctx: MessageContext) -> None:
        print(f"{self.id.type} received message: {message}")
        response = await self._delegate.on_messages(
            [TextMessage(content=f"Performing operation: {message.operation} on {message.number1} and {message.number2}", source="user")],
            ctx.cancellation_token
        )
        print(f"{self.id.type} responded: {response.chat_message.content}")


# Set up the runtime and agents
from autogen_core import SingleThreadedAgentRuntime

runtime = SingleThreadedAgentRuntime()
await MathAgent.register(runtime, "math_agent", lambda: MathAgent())
await MathAssistant.register(runtime, "math_assistant", lambda: MathAssistant("math_assistant"))

# Start processing messages
runtime.start()

# Send a message to the MathAgent for processing
await runtime.send_message(MathOperationMessage(3, 5, "add"), AgentId("math_agent", "default"))
await runtime.send_message(MathOperationMessage(10, 2, "subtract"), AgentId("math_agent", "default"))

# Send a message to the MathAssistant to provide context about the operation
await runtime.send_message(MathOperationMessage(7, 4, "multiply"), AgentId("math_assistant", "default"))

# Stop the runtime
await runtime.stop()
```

````
math_agent received message: MathOperationMessage(number1=3, number2=5, operation='add')
math_agent performed operation: add on 3 and 5
math_agent result: 8
math_agent received message: MathOperationMessage(number1=10, number2=2, operation='subtract')
math_agent performed operation: subtract on 10 and 2
math_agent result: 8
math_assistant received message: MathOperationMessage(number1=7, number2=4, operation='multiply')
math_assistant responded: ```
7 * 4 = 28
```

TERMINATE

````

Reference: https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/framework/agent-and-agent-runtime.html
