Autogen4j: the Java version of Microsoft AutoGen

https://github.com/HamaWhiteGG/autogen4j

A Java version of Microsoft AutoGen, enabling next-generation large language model applications.

1. What is AutoGen

AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.

The following examples can be found in the autogen4j-example module.

2. Quickstart

2.1 Maven Repository

Prerequisites for building:

  • Java 17 or later
  • Unix-like environment (we use Linux, Mac OS X)
  • Maven (we recommend version 3.8.6 and require at least 3.5.4)
```xml
<dependency>
    <groupId>io.github.hamawhitegg</groupId>
    <artifactId>autogen4j-core</artifactId>
    <version>0.1.0</version>
</dependency>
```

2.2 Environment Setup

Autogen4j uses the OpenAI API, so you need to set the corresponding environment variable.

```shell
export OPENAI_API_KEY=xxx
```
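To fail fast when the key is missing, a small startup check can run before any agent is built. This helper class is illustrative only, not part of Autogen4j:

```java
public class EnvCheck {
    public static void main(String[] args) {
        // Read the API key from the environment; null means it was never exported.
        String key = System.getenv("OPENAI_API_KEY");
        if (key == null || key.isBlank()) {
            System.out.println("OPENAI_API_KEY is not set");
        } else {
            System.out.println("OPENAI_API_KEY is set");
        }
    }
}
```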

3. Multi-Agent Conversation Framework

Autogen4j enables next-generation LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.

By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.

Features of this use case include:

  • Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
  • Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
  • Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.

3.1 Auto Feedback From Code Execution Example


```java
// create an AssistantAgent named "assistant"
var assistant = AssistantAgent.builder()
        .name("assistant")
        .build();

var codeExecutionConfig = CodeExecutionConfig.builder()
        .workDir("data/coding")
        .build();
// create a UserProxyAgent instance named "user_proxy"
var userProxy = UserProxyAgent.builder()
        .name("user_proxy")
        .humanInputMode(NEVER)
        .maxConsecutiveAutoReply(10)
        .isTerminationMsg(e -> e.getContent().strip().endsWith("TERMINATE"))
        .codeExecutionConfig(codeExecutionConfig)
        .build();

// the assistant receives a message from the user_proxy, which contains the task description
userProxy.initiateChat(assistant,
        "What date is today? Compare the year-to-date gain for META and TESLA.");

// follow-up to the previous question
userProxy.send(assistant,
        "Plot a chart of their stock price change YTD and save to stock_price_ytd.png.");
```
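The isTerminationMsg predicate above is what stops the auto-reply loop: it is just a function from a message to a boolean. Stripped of the agent machinery, the same check can be illustrated on plain strings (Predicate<String> stands in here for the real message type):

```java
import java.util.function.Predicate;

public class TerminationCheckDemo {
    public static void main(String[] args) {
        // Same check as in isTerminationMsg: strip trailing whitespace,
        // then look for the TERMINATE sentinel at the end of the message.
        Predicate<String> isTermination = content -> content.strip().endsWith("TERMINATE");

        System.out.println(isTermination.test("The YTD gains are computed. TERMINATE")); // true
        System.out.println(isTermination.test("Running the script now..."));             // false
    }
}
```

Because humanInputMode is NEVER, this predicate is the only brake besides maxConsecutiveAutoReply(10), so the assistant's system prompt must make it say TERMINATE when the task is done.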

The figure below shows an example conversation flow with Autogen4j.

After running, you can check the file coding_output.log for the output logs.

The final output is as shown in the following picture.

3.2 Group Chat Example


```java
var codeExecutionConfig = CodeExecutionConfig.builder()
        .workDir("data/group_chat")
        .lastMessagesNumber(2)
        .build();

// create a UserProxyAgent instance named "user_proxy"
var userProxy = UserProxyAgent.builder()
        .name("user_proxy")
        .systemMessage("A human admin.")
        .humanInputMode(TERMINATE)
        .codeExecutionConfig(codeExecutionConfig)
        .build();

// create an AssistantAgent named "coder"
var coder = AssistantAgent.builder()
        .name("coder")
        .build();

// create an AssistantAgent named "product_manager"
var pm = AssistantAgent.builder()
        .name("product_manager")
        .systemMessage("Creative in software product ideas.")
        .build();

var groupChat = GroupChat.builder()
        .agents(List.of(userProxy, coder, pm))
        .maxRound(12)
        .build();

// create a GroupChatManager named "manager"
var manager = GroupChatManager.builder()
        .groupChat(groupChat)
        .build();

userProxy.initiateChat(manager,
        "Find a latest paper about gpt-4 on arxiv and find its potential applications in software.");
```

After running, you can check the file group_chat_output.log for the output logs.
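The maxRound(12) setting caps the total number of speaker turns in the group chat. As a rough illustration of that cap (not Autogen4j's actual speaker-selection logic, which may pick the next speaker based on context rather than strict rotation), a simple round-robin schedule over the three registered agents would look like this:

```java
import java.util.List;

public class RoundRobinSketch {
    public static void main(String[] args) {
        // The three agents registered with GroupChat above.
        List<String> agents = List.of("user_proxy", "coder", "product_manager");
        int maxRound = 12; // same cap as groupChat.maxRound(12)

        // Hand the floor to each agent in turn until the round cap is hit.
        for (int round = 0; round < maxRound; round++) {
            String speaker = agents.get(round % agents.size());
            System.out.println("round " + (round + 1) + ": " + speaker);
        }
    }
}
```

With 3 agents and 12 rounds, each agent gets exactly 4 turns before the chat stops, regardless of whether the task is finished.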

4. Run Test Cases from Source

```shell
git clone https://github.com/HamaWhiteGG/autogen4j.git
cd autogen4j

# export JAVA_HOME=JDK17_INSTALL_HOME && mvn clean test
mvn clean test
```

This project uses Spotless to format the code.

If you make any modifications, please remember to format the code using the following command.

```shell
# export JAVA_HOME=JDK17_INSTALL_HOME && mvn spotless:apply
mvn spotless:apply
```

5. Support

Don't hesitate to ask!

Open an issue if you find a bug or need any help.
