Setting Up Kafka 3.6.2 on Windows: A Step-by-Step Guide

Some bugs encountered during setup are recorded at the end of this document.

Setup

Download the .tgz binary package from https://kafka.apache.org/downloads and extract it to a local directory.

Prerequisites

  1. Kafka Downloaded: Ensure the Kafka binaries are downloaded and extracted as above.
  2. Java Installed: Kafka requires Java. Make sure the JDK is installed (you can verify as shown below).
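
To verify the Java installation, open a Command Prompt and run the standard version check; it should print the installed JDK version:

java -version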

Edit the Config Files

  • edit the config/server.properties file (comments in .properties files must sit on their own line, never after a value; a consolidated Windows example follows after this list):

    broker.id=0

    # on Windows use D:/tmp/kafka-logs or D:\\tmp\\kafka-logs
    log.dirs=/tmp/kafka-logs

    zookeeper.connect=localhost:2181

    listeners=PLAINTEXT://:9092

  • edit the config/zookeeper.properties file:

    # on Windows use D:/bigdata/zk or D:\\bigdata\\zk
    dataDir=/bigdata/zk
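
Putting the broker settings together, here is a minimal Windows-oriented server.properties sketch; the directory paths are assumptions carried over from this guide, so adjust them to your own layout:

broker.id=0
log.dirs=D:/tmp/kafka-logs
zookeeper.connect=localhost:2181
listeners=PLAINTEXT://:9092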

1. Start Kafka Server

Zookeeper must be running before the Kafka server starts. Run both commands from the root of the extracted Kafka directory.

Start Zookeeper:
.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
Start Kafka server:
.\bin\windows\kafka-server-start.bat .\config\server.properties

2. Create a Kafka Topic

Before producing and consuming messages, you need a topic.

.\bin\windows\kafka-topics.bat --create --topic test --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
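
To confirm the topic exists, you can describe it with the same tool (standard kafka-topics.bat options):

.\bin\windows\kafka-topics.bat --describe --topic test --bootstrap-server localhost:9092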

3. Set Up Kafka Producer

Use the Kafka console producer to send messages to the topic.

Open a new Command Prompt and run:

.\bin\windows\kafka-console-producer.bat --topic test --bootstrap-server localhost:9092

Type messages in the console to send them to the Kafka topic.
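
The console producer can also send keyed messages using its standard parse.key and key.separator properties; after starting it this way, type lines such as mykey:myvalue:

.\bin\windows\kafka-console-producer.bat --topic test --bootstrap-server localhost:9092 --property parse.key=true --property key.separator=: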

4. Set Up Kafka Consumer

Open another Command Prompt to start the consumer that reads messages from the topic.

.\bin\windows\kafka-console-consumer.bat --topic test --bootstrap-server localhost:9092 --from-beginning

You should see messages appear in the consumer console as you type them in the producer console.
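
To display keys alongside values, the console consumer supports a print.key property:

.\bin\windows\kafka-console-consumer.bat --topic test --bootstrap-server localhost:9092 --from-beginning --property print.key=true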

Connecting Kafka with Code

Here are examples in Java and Python.

Java Example

First, add the Kafka client dependency to your pom.xml if using Maven:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>3.6.2</version>
</dependency>

Producer Example

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class KafkaProducerExample {
    public static void main(String[] args) {
        // Broker address and serializers for the String key/value pair
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // Send one record to the "test" topic; close() flushes buffered records first
        producer.send(new ProducerRecord<>("test", "key", "value"));
        producer.close();
    }
}
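
send() is asynchronous, so the example above does not report whether delivery succeeded. KafkaProducer.send has a standard overload that takes a callback; a minimal sketch, reusing the producer from the example above:

producer.send(new ProducerRecord<>("test", "key", "value"), (metadata, exception) -> {
    if (exception != null) {
        exception.printStackTrace(); // delivery failed
    } else {
        System.out.printf("sent to %s-%d at offset %d%n",
                metadata.topic(), metadata.partition(), metadata.offset());
    }
});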

Consumer Example

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class KafkaConsumerExample {
    public static void main(String[] args) {
        // Broker address, consumer group, and deserializers
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Start from the earliest offset the first time this group reads the topic
        props.put("auto.offset.reset", "earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("test"));

        // Poll in a loop; each poll returns records received since the last call
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            records.forEach(record -> {
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            });
        }
    }
}
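
The loop above never exits, so the consumer is never closed. A common shutdown pattern (described in the KafkaConsumer Javadoc) is to call wakeup() from a shutdown hook and catch the resulting WakeupException; a sketch, reusing the consumer from the example above:

final Thread mainThread = Thread.currentThread();
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    consumer.wakeup(); // makes the blocked poll() throw WakeupException
    try { mainThread.join(); } catch (InterruptedException ignored) { }
}));

try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        records.forEach(record -> System.out.println(record.value()));
    }
} catch (org.apache.kafka.common.errors.WakeupException e) {
    // expected on shutdown; poll() was interrupted by wakeup()
} finally {
    consumer.close(); // commits offsets and leaves the group cleanly
}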
Python Example

First, install the Kafka Python client:

pip install kafka-python

Producer Example

from kafka import KafkaProducer

# Connect to the local broker
producer = KafkaProducer(bootstrap_servers='localhost:9092')
# send() expects bytes by default, hence the b'' literal
producer.send('test', b'Hello, Kafka!')
# close() flushes any buffered messages before disconnecting
producer.close()
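
In kafka-python, send() returns a future; blocking on it confirms delivery (get() raises on failure). A minimal sketch:

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')
future = producer.send('test', key=b'mykey', value=b'Hello, Kafka!')
metadata = future.get(timeout=10)  # blocks until the broker acknowledges, or raises
print(metadata.topic, metadata.partition, metadata.offset)
producer.close()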

Consumer Example

from kafka import KafkaConsumer

# auto_offset_reset='earliest' starts from the beginning when the group has no stored offset
consumer = KafkaConsumer('test', bootstrap_servers='localhost:9092', auto_offset_reset='earliest')
for message in consumer:
    # key and value are raw bytes unless a deserializer is configured
    print(f"Key: {message.key}, Value: {message.value}")

Note

  • Start Zookeeper and Kafka server: Ensure they are running correctly.
  • Create topics: Use Kafka commands to create the required topics.

Some Bugs

'wmic' is not recognized as an internal or external command, operable program or batch file

Open System Properties and click Environment Variables. In the system variables section, find PATH (any capitalization works) and add this entry to it:

%SystemRoot%\System32\Wbem

ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$) java.nio.file.InvalidPathException: Illegal char < > at index 2: D: mpdownloadkafkakafka_2.13-3.6.2log\meta.properties.tmp

Correct the Path Format:

In a .properties file, a single backslash is an escape character, so the \t in D:\tmp is read as a tab character, which produces the Illegal char < > error above. Ensure that paths in the configuration use double backslashes (\\) or single forward slashes (/).
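
For example, either of these forms of the log.dirs line from this guide works on Windows:

log.dirs=D:/tmp/kafka-logs
log.dirs=D:\\tmp\\kafka-logs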

WARN [SocketServer listenerType=ZK_BROKER, nodeId=0] Unexpected error from /0:0:0:0:0:0:0:1 (channelId=0:0:0:0:0:0:0:1:9092-0:0:0:0:0:0:0:1:62710-1); closing connection (org.apache.kafka.common.network.Selector) org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)

  • Edit server.properties:

    The broker-side limit is socket.request.max.bytes (default 104857600, i.e. 100MB). Open the server.properties file in the Kafka config directory and increase it; remember that comments in .properties files must sit on their own line:

    # increase as needed; default is 104857600 (100MB)
    socket.request.max.bytes=209715200

  • Edit producer.properties and consumer.properties (if applicable):

    max.request.size is a producer setting, not a broker setting; ensure it does not exceed the broker's socket.request.max.bytes. The corresponding consumer-side fetch limit is fetch.max.bytes:

    max.request.size=209715200

    fetch.max.bytes=209715200

Note: a receive size of exactly 1195725856 is the ASCII encoding of "GET ", which usually means an HTTP client connected to the Kafka PLAINTEXT port; in that case, point that client at the correct address instead of raising these limits.
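
The producer-side limit can also be set in code. A minimal sketch, adding one line to the Properties in the Java producer example above:

props.put("max.request.size", "209715200"); // producer-side cap; keep it <= the broker's socket.request.max.bytes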
