Kafka pseudo-cluster deployment using KRaft mode

1: Pull the Kafka UI management image

bash
docker pull provectuslabs/kafka-ui

2: Pull the Kafka broker image

bash
docker pull bitnami/kafka

3: docker-compose.yml

yml
version: '3.8'
services:
  kafka-1:
    container_name: kafka1
    image: bitnami/kafka
    ports:
      - "19092:19092"
      - "19093:19093"
     
    volumes:
#      - /etc/localtime:/etc/localtime:ro
      - G:\temptemptemp\kafkaCluster\kafka\datas\kafka1:/bitnami/kafka:rw
    environment:
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.11.50:19092
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CREATE_TOPICS=test-topic:1:1
      - KAFKA_BROKER_ID=1
      - KAFKA_KRAFT_CLUSTER_ID=1P5TYc6qRpmNeZ2ZIps4TQ
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:19092,CONTROLLER://:19093
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@192.168.11.50:19093,2@192.168.11.50:29093,3@192.168.11.50:39093
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_ENABLE_KRAFT=yes
     
  kafka-2:
    container_name: kafka2
    image: bitnami/kafka
    ports:
      - "29092:29092"
      - "29093:29093"
      
    volumes:
#      - /etc/localtime:/etc/localtime:ro
      - G:\temptemptemp\kafkaCluster\kafka\datas\kafka2:/bitnami/kafka:rw
    environment:
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.11.50:29092
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CREATE_TOPICS=test-topic:1:1
      - KAFKA_BROKER_ID=2
      - KAFKA_KRAFT_CLUSTER_ID=1P5TYc6qRpmNeZ2ZIps4TQ
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:29092,CONTROLLER://:29093
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@192.168.11.50:19093,2@192.168.11.50:29093,3@192.168.11.50:39093
      - KAFKA_CFG_NODE_ID=2
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_ENABLE_KRAFT=yes
      
    depends_on:
      - kafka-1
      
  kafka-3:
    container_name: kafka3
    image: bitnami/kafka
    ports:
      - "39092:39092"
      - "39093:39093"
      
    volumes:
#      - /etc/localtime:/etc/localtime:ro
      - G:\temptemptemp\kafkaCluster\kafka\datas\kafka3:/bitnami/kafka:rw
    environment:
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.11.50:39092
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CREATE_TOPICS=test-topic:1:1
      - KAFKA_BROKER_ID=3
      - KAFKA_KRAFT_CLUSTER_ID=1P5TYc6qRpmNeZ2ZIps4TQ
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:39092,CONTROLLER://:39093
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@192.168.11.50:19093,2@192.168.11.50:29093,3@192.168.11.50:39093
      - KAFKA_CFG_NODE_ID=3
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_ENABLE_KRAFT=yes
      
    depends_on:
      - kafka-1
      - kafka-2

  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui
    ports:
      - "8080:8080"
    environment:
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=192.168.11.50:19092,192.168.11.50:29092,192.168.11.50:39092
      - KAFKA_KRAFT_CLUSTER_ID=1P5TYc6qRpmNeZ2ZIps4TQ
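
With the compose file in place, the whole stack can be started and verified from the host. The commands below are a minimal sketch: they assume Docker Compose v2 (`docker compose`), the container name `kafka1` from the file above, and that the Bitnami image keeps the Kafka CLI scripts on the container PATH; adjust names and paths if your setup differs.

bash
# start the three brokers and kafka-ui in the background
docker compose up -d

# confirm all four containers are running
docker compose ps

# create the demo topic used later by the Spring Boot example, then list topics via broker 1
docker exec kafka1 kafka-topics.sh --bootstrap-server 192.168.11.50:19092 --create --topic topic --partitions 3 --replication-factor 3
docker exec kafka1 kafka-topics.sh --bootstrap-server 192.168.11.50:19092 --list

# inspect the KRaft controller quorum (requires Kafka 3.3+)
docker exec kafka1 kafka-metadata-quorum.sh --bootstrap-server 192.168.11.50:19092 describe --status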

4: Spring Boot project: producing to and consuming from Kafka

4-1: application.yml

yml
server:
  port: 9088
spring:
  kafka:
    consumer:
      bootstrap-servers: localhost:19092,localhost:29092,localhost:39092
      group-id: test-group
      auto-offset-reset: earliest
    producer:
      bootstrap-servers: localhost:19092,localhost:29092,localhost:39092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer

4-2: Consumer

java
package com.example.kafkademo.config;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

/**
 * @Author xu
 * @create 2023/9/27 19
 */
@Service
public class KafkaConsumerService {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConsumerService.class);

    @KafkaListener(topics = "topic")
    public void receiveMessage(String message) {
        LOGGER.info("received message='{}'", message);
    }
}
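
The listener above subscribes to the topic named "topic". Once the Spring Boot application is running, a quick way to exercise it is to push a message from inside the cluster with the console producer; a sketch assuming the container name `kafka1` and that the Bitnami image exposes the Kafka CLI scripts on its PATH:

bash
# send one test message to the "topic" topic; the @KafkaListener above should log it
echo "hello from the console producer" | docker exec -i kafka1 kafka-console-producer.sh --bootstrap-server 192.168.11.50:19092 --topic topic

# optionally read the topic back directly to confirm the message was stored
docker exec kafka1 kafka-console-consumer.sh --bootstrap-server 192.168.11.50:19092 --topic topic --from-beginning --max-messages 1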

4-3: Producer

java
package com.example.kafkademo.config;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

/**
 * @Author xu
 * @create 2023/9/27 19
 */
@Service
public class KafkaProducerService {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaProducerService.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendMessage(String topic, String message) {
        LOGGER.info("sending message='{}' to topic='{}'", message, topic);
        kafkaTemplate.send(topic, message);
    }
}

4-4: Controller

java
package com.example.kafkademo.controller;

import com.example.kafkademo.config.KafkaProducerService;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.annotation.Resource;

/**
 * @Author xu
 * @create 2023/9/27 19
 */
@RestController
public class KafkaController {
    @Resource
    KafkaProducerService kafkaProducerService;

    @PostMapping("/publish")
    public String publish(String topic, String content) {
        // forward the request parameters to Kafka instead of hard-coded strings
        kafkaProducerService.sendMessage(topic, content);
        System.out.println(content);
        return content;
    }
}
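
With the application started on port 9088 (from application.yml), the endpoint can be exercised with curl. A minimal sketch; `topic` and `content` are bound from the request parameters, so sending `topic=topic` routes the message to the topic the listener watches:

bash
# publish a message through the Spring Boot endpoint
curl -X POST "http://localhost:9088/publish" -d "topic=topic" -d "content=hello-kafka"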

4-5: pom.xml

xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.16</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>kafkaDemo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>kafkaDemo</name>
    <description>kafkaDemo</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>
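
Building and running the demo follows the standard Spring Boot Maven workflow; a sketch assuming a local Maven installation (use `./mvnw` instead if the project includes the Maven wrapper):

bash
# run the application directly from Maven
mvn spring-boot:run

# or build an executable jar and run it
mvn clean package
java -jar target/kafkaDemo-0.0.1-SNAPSHOT.jar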

5: Starting Kafka from the command line

bash
docker run -d --name kafka1 -p 19092:19092 -p 19093:19093 -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.11.50:19092 -e ALLOW_PLAINTEXT_LISTENER=yes -e KAFKA_BROKER_ID=1 -e KAFKA_KRAFT_CLUSTER_ID=1P5TYc6qRpmNeZ2ZIps4TQ -e KAFKA_CFG_PROCESS_ROLES=broker,controller -e KAFKA_CFG_LISTENERS=PLAINTEXT://:19092,CONTROLLER://:19093 -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER  -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@192.168.11.50:19093 -e KAFKA_CFG_NODE_ID=1 -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT -e KAFKA_ENABLE_KRAFT=yes -e KAFKA_CFG_KRAFT_MODE=kraft bitnami/kafka
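
This starts a single standalone KRaft node, since the controller quorum lists only node 1. A quick health check, again assuming the Kafka CLI scripts are on the container PATH in the Bitnami image:

bash
# tail the startup log to confirm the broker came up
docker logs --tail 50 kafka1

# create and list a topic on the single node
docker exec kafka1 kafka-topics.sh --bootstrap-server 192.168.11.50:19092 --create --topic topic --partitions 1 --replication-factor 1
docker exec kafka1 kafka-topics.sh --bootstrap-server 192.168.11.50:19092 --list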

6: Starting kafka-ui from the command line

bash
docker run -d --name kafka-ui -p 8080:8080 -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=192.168.11.50:19092 provectuslabs/kafka-ui