Autogen4j: the Java version of Microsoft AutoGen

https://github.com/HamaWhiteGG/autogen4j

Java version of Microsoft AutoGen, enabling next-generation large language model applications.

1. What is AutoGen

AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.

The examples below can also be found in the autogen4j-example module.

2. Quickstart

2.1 Maven Repository

Prerequisites for building:

  • Java 17 or later
  • Unix-like environment (we use Linux, Mac OS X)
  • Maven (we recommend version 3.8.6 and require at least 3.5.4)
Add the autogen4j-core dependency to your project:

```xml
<dependency>
    <groupId>io.github.hamawhitegg</groupId>
    <artifactId>autogen4j-core</artifactId>
    <version>0.1.0</version>
</dependency>
```

2.2 Environment Setup

Autogen4j uses the OpenAI API, so you need to set the OPENAI_API_KEY environment variable.

```shell
export OPENAI_API_KEY=xxx
```
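
Autogen4j picks this key up from the environment, so before running any example you may want to confirm the variable is actually visible to the JVM. A minimal sanity check using only the Java standard library (not part of the Autogen4j API):

```java
// plain-Java check for the environment variable set above
var apiKey = System.getenv("OPENAI_API_KEY");
if (apiKey == null || apiKey.isBlank()) {
    System.err.println("OPENAI_API_KEY is not set; calls to the OpenAI API will fail.");
} else {
    System.out.println("OPENAI_API_KEY is set (" + apiKey.length() + " characters).");
}
```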

3. Multi-Agent Conversation Framework

AutoGen enables next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.

By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.

Features of this framework include:

  • Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
  • Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
  • Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
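
As a minimal illustration of the customization and human-participation points above, here is a sketch that only recombines the builder calls used in the examples later in this README; the agent names, system message, and task prompt are invented for illustration:

```java
// an assistant customized with its own system message
var reviewer = AssistantAgent.builder()
        .name("reviewer")
        .systemMessage("You review Java code and point out bugs and improvements.")
        .build();

// a user proxy that consults the human before the chat is allowed to terminate
var human = UserProxyAgent.builder()
        .name("human")
        .humanInputMode(TERMINATE)
        .maxConsecutiveAutoReply(5)
        .build();

human.initiateChat(reviewer, "Review the error handling in our CLI entry point.");
```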

3.1 Auto Feedback From Code Execution Example

```java
// create an AssistantAgent named "assistant"
var assistant = AssistantAgent.builder()
        .name("assistant")
        .build();

var codeExecutionConfig = CodeExecutionConfig.builder()
        .workDir("data/coding")
        .build();
// create a UserProxyAgent instance named "user_proxy"
var userProxy = UserProxyAgent.builder()
        .name("user_proxy")
        .humanInputMode(NEVER)
        .maxConsecutiveAutoReply(10)
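        // treat a reply that ends with "TERMINATE" as the signal to stop the conversation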
        .isTerminationMsg(e -> e.getContent().strip().endsWith("TERMINATE"))
        .codeExecutionConfig(codeExecutionConfig)
        .build();

// the assistant receives a message from the user_proxy, which contains the task description
userProxy.initiateChat(assistant,
        "What date is today? Compare the year-to-date gain for META and TESLA.");

// followup of the previous question
userProxy.send(assistant,
        "Plot a chart of their stock price change YTD and save to stock_price_ytd.png.");
```

The figure below shows an example conversation flow with Autogen4j.

After running, you can check the file coding_output.log for the output logs.

The final output is as shown in the following picture.
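
The chart requested in the follow-up message should end up under the workDir configured above, assuming (as in AutoGen) that generated code is executed with that directory as its working directory. A quick way to confirm the run produced it, using only the Java standard library:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// the work directory was set to "data/coding" in the CodeExecutionConfig above,
// so a chart saved with a relative path should land there
var chart = Path.of("data/coding", "stock_price_ytd.png");
System.out.println(Files.exists(chart)
        ? "Chart generated at " + chart.toAbsolutePath()
        : "Chart not found; check coding_output.log for errors.");
```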

3.2 Group Chat Example

```java
var codeExecutionConfig = CodeExecutionConfig.builder()
        .workDir("data/group_chat")
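        // consider only the last 2 messages when looking for code to execute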
        .lastMessagesNumber(2)
        .build();

// create a UserProxyAgent instance named "user_proxy"
var userProxy = UserProxyAgent.builder()
        .name("user_proxy")
        .systemMessage("A human admin.")
        .humanInputMode(TERMINATE)
        .codeExecutionConfig(codeExecutionConfig)
        .build();

// create an AssistantAgent named "coder"
var coder = AssistantAgent.builder()
        .name("coder")
        .build();

// create an AssistantAgent named "pm"
var pm = AssistantAgent.builder()
        .name("product_manager")
        .systemMessage("Creative in software product ideas.")
        .build();

var groupChat = GroupChat.builder()
        .agents(List.of(userProxy, coder, pm))
        .maxRound(12)
        .build();

// create a GroupChatManager named "manager"
var manager = GroupChatManager.builder()
        .groupChat(groupChat)
        .build();

userProxy.initiateChat(manager,
        "Find a latest paper about gpt-4 on arxiv and find its potential applications in software.");
```

After running, you can check the file group_chat_output.log for the output logs.
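
Because the group is simply the agents list handed to GroupChat.builder(), extending the chat with another specialist only means building one more agent and including it in that list. A sketch reusing the agents defined above; the critic role and its system message are invented for illustration:

```java
// an extra reviewer agent; name and system message are illustrative only
var critic = AssistantAgent.builder()
        .name("critic")
        .systemMessage("Double-check the plans and code from the other agents and point out risks.")
        .build();

var extendedGroupChat = GroupChat.builder()
        .agents(List.of(userProxy, coder, pm, critic))
        .maxRound(12)
        .build();

var extendedManager = GroupChatManager.builder()
        .groupChat(extendedGroupChat)
        .build();

userProxy.initiateChat(extendedManager,
        "Find a latest paper about gpt-4 on arxiv and find its potential applications in software.");
```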

4. Run Test Cases from Source

```shell
git clone https://github.com/HamaWhiteGG/autogen4j.git
cd autogen4j

# export JAVA_HOME=JDK17_INSTALL_HOME && mvn clean test
mvn clean test
```

This project uses Spotless to format the code.

If you make any modifications, please remember to format the code using the following command.

```shell
# export JAVA_HOME=JDK17_INSTALL_HOME && mvn spotless:apply
mvn spotless:apply
```

5. Support

Don't hesitate to ask!

Open an issue if you find a bug or need any help.
