Autogen4j: the Java version of Microsoft AutoGen

https://github.com/HamaWhiteGG/autogen4j

A Java version of Microsoft AutoGen, enabling next-gen large language model applications.

1. What is AutoGen

AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.

The following examples can be found in the autogen4j-example module.

2. Quickstart

2.1 Maven Repository

Prerequisites for building:

  • Java 17 or later
  • Unix-like environment (we use Linux, Mac OS X)
  • Maven (we recommend version 3.8.6 and require at least 3.5.4)

Add the following Maven dependency to your project:

<dependency>
    <groupId>io.github.hamawhitegg</groupId>
    <artifactId>autogen4j-core</artifactId>
    <version>0.1.0</version>
</dependency>

2.2 Environment Setup

Using Autogen4j requires OpenAI's API, so you need to set the OPENAI_API_KEY environment variable:

export OPENAI_API_KEY=xxx
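
As a quick sanity check, you can verify that the key is visible to the JVM before building any agents. This is a minimal sketch using only the JDK; it assumes nothing about Autogen4j beyond its reading the standard OPENAI_API_KEY variable.

// Fail fast if the API key is missing (plain JDK check, not an Autogen4j API).
String apiKey = System.getenv("OPENAI_API_KEY");
if (apiKey == null || apiKey.isBlank()) {
    throw new IllegalStateException("OPENAI_API_KEY is not set");
}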

3. Multi-Agent Conversation Framework

AutoGen enables next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.

By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.

Features of this use case include:

  • Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
  • Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
  • Human participation: AutoGen seamlessly allows human participation, so humans can provide input and feedback to the agents as needed (see the sketch after this list).
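
To make the customization and human-participation points concrete, the minimal sketch below builds a customized assistant and a user proxy whose human input mode is explicit. It uses only builder calls that appear in the full examples later in this document; the agent name and system message are made up for illustration.

// A customized assistant: the system message steers its role ("reviewer" is illustrative).
var reviewer = AssistantAgent.builder()
        .name("reviewer")
        .systemMessage("A careful code reviewer.")
        .build();

// A user proxy where human participation is controlled by the input mode:
// NEVER means it replies fully automatically, while TERMINATE
// (used in the group chat example below) lets a human step in.
var humanInLoop = UserProxyAgent.builder()
        .name("user_proxy")
        .humanInputMode(TERMINATE)
        .maxConsecutiveAutoReply(10)
        .build();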

3.1 Auto Feedback From Code Execution Example

The following code shows an AssistantAgent and a UserProxyAgent collaborating on a task; the user proxy executes the code the assistant writes and automatically feeds the results back:

// create an AssistantAgent named "assistant"
var assistant = AssistantAgent.builder()
        .name("assistant")
        .build();

var codeExecutionConfig = CodeExecutionConfig.builder()
        .workDir("data/coding")
        .build();
// create a UserProxyAgent instance named "user_proxy"
var userProxy = UserProxyAgent.builder()
        .name("user_proxy")
        .humanInputMode(NEVER)
        .maxConsecutiveAutoReply(10)
        .isTerminationMsg(e -> e.getContent().strip().endsWith("TERMINATE"))
        .codeExecutionConfig(codeExecutionConfig)
        .build();

// the assistant receives a message from the user_proxy, which contains the task description
userProxy.initiateChat(assistant,
        "What date is today? Compare the year-to-date gain for META and TESLA.");

// followup of the previous question
userProxy.send(assistant,
        "Plot a chart of their stock price change YTD and save to stock_price_ytd.png.");

In this example, the user proxy sends the task description to the assistant, the assistant replies with code, and the user proxy executes that code and feeds the result back until the task is solved or a TERMINATE message ends the conversation.

After running, you can check the file coding_output.log for the output logs. The generated chart is saved as stock_price_ytd.png under the data/coding working directory.
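
If you want to confirm this programmatically, a small check like the one below is enough. It uses only the JDK plus the workDir and file name from the example above.

// Verify that the chart requested in the follow-up message was written to the workDir.
var chart = java.nio.file.Path.of("data/coding", "stock_price_ytd.png");
System.out.println("Chart generated: " + java.nio.file.Files.exists(chart));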

3.2 Group Chat Example

The following code sets up a group chat between a user proxy, a coder agent, and a product manager agent, coordinated by a GroupChatManager:

var codeExecutionConfig = CodeExecutionConfig.builder()
        .workDir("data/group_chat")
        .lastMessagesNumber(2)
        .build();

// create a UserProxyAgent instance named "user_proxy"
var userProxy = UserProxyAgent.builder()
        .name("user_proxy")
        .systemMessage("A human admin.")
        .humanInputMode(TERMINATE)
        .codeExecutionConfig(codeExecutionConfig)
        .build();

// create an AssistantAgent named "coder"
var coder = AssistantAgent.builder()
        .name("coder")
        .build();

// create an AssistantAgent named "pm"
var pm = AssistantAgent.builder()
        .name("product_manager")
        .systemMessage("Creative in software product ideas.")
        .build();

var groupChat = GroupChat.builder()
        .agents(List.of(userProxy, coder, pm))
        .maxRound(12)
        .build();

// create a GroupChatManager named "manager"
var manager = GroupChatManager.builder()
        .groupChat(groupChat)
        .build();

userProxy.initiateChat(manager,
        "Find a latest paper about gpt-4 on arxiv and find its potential applications in software.");

After running, you can check the file group_chat_output.log for the output logs.
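
The group is easy to extend with the same builder calls. The sketch below adds a hypothetical third assistant and rebuilds the group chat and manager around it; the agent name and system message are illustrative only.

// Hypothetical extra agent; only builder methods already shown above are used.
var critic = AssistantAgent.builder()
        .name("critic")
        .systemMessage("Reviews proposed plans and points out risks.")
        .build();

var extendedGroupChat = GroupChat.builder()
        .agents(List.of(userProxy, coder, pm, critic))
        .maxRound(12)
        .build();

var extendedManager = GroupChatManager.builder()
        .groupChat(extendedGroupChat)
        .build();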

4. Run Test Cases from Source

git clone https://github.com/HamaWhiteGG/autogen4j.git
cd autogen4j

# export JAVA_HOME=JDK17_INSTALL_HOME && mvn clean test
mvn clean test

This project uses Spotless to format the code.

If you make any modifications, please remember to format the code using the following command.

# export JAVA_HOME=JDK17_INSTALL_HOME && mvn spotless:apply
mvn spotless:apply

5. Support

Don't hesitate to ask!

Open an issue if you find a bug or need any help.
