Autogen4j: the Java version of Microsoft AutoGen

https://github.com/HamaWhiteGG/autogen4j

A Java version of Microsoft AutoGen, enabling next-generation large language model applications.

1. What is AutoGen

AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.

The examples below can also be found in the autogen4j-example module.

2. Quickstart

2.1 Maven Repository

Prerequisites for building:

  • Java 17 or later
  • Unix-like environment (we use Linux, Mac OS X)
  • Maven (we recommend version 3.8.6 and require at least 3.5.4)
xml
<dependency>
    <groupId>io.github.hamawhitegg</groupId>
    <artifactId>autogen4j-core</artifactId>
    <version>0.1.0</version>
</dependency>

2.2 Environment Setup

Using Autogen4j requires access to the OpenAI API, so you need to set the OPENAI_API_KEY environment variable.

shell
export OPENAI_API_KEY=xxx
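
To confirm the key is actually visible to the JVM before running the examples, a quick standalone check like the one below can help. This EnvCheck class is only an illustrative sketch and is not part of the Autogen4j API.

java
// minimal sketch (not part of Autogen4j): fail fast if OPENAI_API_KEY is missing
public class EnvCheck {
    public static void main(String[] args) {
        String apiKey = System.getenv("OPENAI_API_KEY");
        if (apiKey == null || apiKey.isBlank()) {
            throw new IllegalStateException("OPENAI_API_KEY is not set");
        }
        System.out.println("OPENAI_API_KEY is set (" + apiKey.length() + " characters)");
    }
}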

3. Multi-Agent Conversation Framework

Autogen4j enables next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.

By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.

Features of this use case include:

  • Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
  • Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
  • Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.

3.1 Auto Feedback From Code Execution Example


java
// create an AssistantAgent named "assistant"
var assistant = AssistantAgent.builder()
        .name("assistant")
        .build();

var codeExecutionConfig = CodeExecutionConfig.builder()
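        // working directory where code blocks from the conversation are saved and executed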
        .workDir("data/coding")
        .build();
// create a UserProxyAgent instance named "user_proxy"
var userProxy = UserProxyAgent.builder()
        .name("user_proxy")
        .humanInputMode(NEVER)
        .maxConsecutiveAutoReply(10)
        .isTerminationMsg(e -> e.getContent().strip().endsWith("TERMINATE"))
        .codeExecutionConfig(codeExecutionConfig)
        .build();

// the assistant receives a message from the user_proxy, which contains the task description
userProxy.initiateChat(assistant,
        "What date is today? Compare the year-to-date gain for META and TESLA.");

// followup of the previous question
userProxy.send(assistant,
        "Plot a chart of their stock price change YTD and save to stock_price_ytd.png.");

The figure below shows an example conversation flow with Autogen4j.

After running, you can check the file coding_output.log for the output logs.

The final output is as shown in the following picture.

3.2 Group Chat Example


java
var codeExecutionConfig = CodeExecutionConfig.builder()
        .workDir("data/group_chat")
        .lastMessagesNumber(2)
        .build();

// create a UserProxyAgent instance named "user_proxy"
var userProxy = UserProxyAgent.builder()
        .name("user_proxy")
        .systemMessage("A human admin.")
        .humanInputMode(TERMINATE)
        .codeExecutionConfig(codeExecutionConfig)
        .build();

// create an AssistantAgent named "coder"
var coder = AssistantAgent.builder()
        .name("coder")
        .build();

// create an AssistantAgent named "pm"
var pm = AssistantAgent.builder()
        .name("product_manager")
        .systemMessage("Creative in software product ideas.")
        .build();

var groupChat = GroupChat.builder()
        .agents(List.of(userProxy, coder, pm))
        .maxRound(12)
        .build();

// create a GroupChatManager named "manager"
var manager = GroupChatManager.builder()
        .groupChat(groupChat)
        .build();

userProxy.initiateChat(manager,
        "Find a latest paper about gpt-4 on arxiv and find its potential applications in software.");

After running, you can check the file group_chat_output.log for the output logs.

4. Run Test Cases from Source

shell
git clone https://github.com/HamaWhiteGG/autogen4j.git
cd autogen4j

# export JAVA_HOME=JDK17_INSTALL_HOME && mvn clean test
mvn clean test

This project uses Spotless to format the code.

If you make any modifications, please remember to format the code using the following command.

shell
# export JAVA_HOME=JDK17_INSTALL_HOME && mvn spotless:apply
mvn spotless:apply

5. Support

Don't hesitate to ask!

Open an issue if you find a bug or need any help.
