Windows Setup for Kafka 3.6.2: Step-by-Step Guide

Some common bugs encountered during setup are recorded at the end of this document.

Setup

From https://kafka.apache.org/downloads, download the binary .tgz package and extract it locally.

Prerequisites

  1. Kafka Downloaded: Ensure the Kafka binary package is downloaded and extracted as described above.
  2. Java Installed: Kafka requires Java. Make sure the JDK is installed.

Edit the Config Files

  • Edit the config/server.properties file:

    broker.id=0

    log.dirs=/tmp/kafka-logs # on Windows use e.g. D:\\tmp\\kafka-logs

    zookeeper.connect=localhost:2181

    listeners=PLAINTEXT://:9092

  • Edit the config/zookeeper.properties file:

    dataDir=/bigdata/zk # on Windows use e.g. D:\\bigdata\\zk

1. Start Kafka Server

Start Zookeeper first, then the Kafka server; run both commands from the extracted Kafka directory.

Start Zookeeper:
.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
Start the Kafka server:
.\bin\windows\kafka-server-start.bat .\config\server.properties

2. Create a Kafka Topic

Before producing and consuming messages, create a topic:

.\bin\windows\kafka-topics.bat --create --topic test --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
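
To confirm that the topic was created, you can describe it (an optional check):

.\bin\windows\kafka-topics.bat --describe --topic test --bootstrap-server localhost:9092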

3. Set Up Kafka Producer

Use the Kafka console producer to send messages to the topic.

Open a new Command Prompt and run:

.\bin\windows\kafka-console-producer.bat --topic test --bootstrap-server localhost:9092

Type messages in the console to send them to the Kafka topic.
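
The console producer can also send keyed messages; a small optional variation (the ':' separator here is just an example) is:

.\bin\windows\kafka-console-producer.bat --topic test --bootstrap-server localhost:9092 --property "parse.key=true" --property "key.separator=:"

Each line typed as key:value is then sent with that key.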

4. Set Up Kafka Consumer

Open another Command Prompt to start the consumer that reads messages from the topic.

.\bin\windows\kafka-console-consumer.bat --topic test --bootstrap-server localhost:9092 --from-beginning

You should see messages appear in the consumer console as you type them in the producer console.
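
To consume as part of a consumer group (matching the test-group id used in the code examples below), you can pass a group name:

.\bin\windows\kafka-console-consumer.bat --topic test --bootstrap-server localhost:9092 --group test-group --from-beginning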

Connecting Kafka with Code

Here are examples in Java and Python.

Java Example

First, add the Kafka client dependency to pom.xml if using Maven:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>3.6.2</version>
</dependency>

Producer Example

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class KafkaProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // send() is asynchronous; close() flushes pending records before exiting
        producer.send(new ProducerRecord<>("test", "key", "value"));
        producer.close();
    }
}
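
send() returns immediately; if you want confirmation that the record actually reached the broker, one option (a minimal sketch, a variation of the send call above rather than part of the original example) is to pass a callback:

producer.send(new ProducerRecord<>("test", "key", "value"), (metadata, exception) -> {
    if (exception != null) {
        exception.printStackTrace(); // delivery failed
    } else {
        // delivery succeeded; metadata tells you where the record landed
        System.out.printf("delivered to %s-%d@%d%n",
                metadata.topic(), metadata.partition(), metadata.offset());
    }
});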

Consumer Example

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class KafkaConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("test"));

        while (true) {
            // poll() returns whatever records have arrived since the previous poll
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            records.forEach(record -> {
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            });
        }
    }
}

Python Example

First, install the Kafka Python client:

pip install kafka-python

Producer Example

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('test', b'Hello, Kafka!')
producer.close()
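
send() in kafka-python is also asynchronous; a small variation (a sketch, where the key bytes are just an illustration) attaches a key and flushes explicitly before closing:

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')
# records with the same key are routed to the same partition
producer.send('test', key=b'my-key', value=b'Hello, Kafka!')
producer.flush()  # block until buffered records have been sent
producer.close()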

Consumer Example

from kafka import KafkaConsumer

consumer = KafkaConsumer('test', bootstrap_servers='localhost:9092', auto_offset_reset='earliest')
for message in consumer:
    print(f"Key: {message.key}, Value: {message.value}")
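
By default the consumer yields raw bytes; a common variation (a sketch, reusing the test-group id from earlier) joins a consumer group and decodes values to strings:

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    'test',
    bootstrap_servers='localhost:9092',
    group_id='test-group',                           # commit offsets as part of a group
    auto_offset_reset='earliest',
    value_deserializer=lambda v: v.decode('utf-8'),  # decode bytes to str
)
for message in consumer:
    print(f"{message.topic}:{message.partition}:{message.offset} -> {message.value}")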

Note

  • Start Zookeeper and the Kafka server: ensure both are running correctly.
  • Create topics: Use Kafka commands to create the required topics.

Some Bugs

'wmic' is not recognized as an internal or external command, operable program or batch file

Fix: open System Properties and click Environment Variables. In the system variables section, find PATH (any capitalization) and add this entry to it:

%SystemRoot%\System32\Wbem
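
Then open a new Command Prompt (so the updated PATH is picked up) and check that wmic resolves:

where wmic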

ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$) java.nio.file.InvalidPathException: Illegal char < > at index 2: D: mpdownloadkafkakafka_2.13-3.6.2log\meta.properties.tmp

Correct the Path Format:

Ensure that the path specified in the configuration does not contain illegal characters. In a .properties file a single backslash is treated as an escape character (which is what mangles the path in the error above), so Windows paths should use either double backslashes (\\) or forward slashes (/).
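
For example, the log directory used earlier in this guide can be written in config/server.properties in either of these forms:

log.dirs=D:/tmp/kafka-logs
# or, with escaped backslashes:
# log.dirs=D:\\tmp\\kafka-logs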

WARN [SocketServer listenerType=ZK_BROKER, nodeId=0] Unexpected error from /0:0:0:0:0:0:0:1 (channelId=0:0:0:0:0:0:0:1:9092-0:0:0:0:0:0:0:1:62710-1); closing connection (org.apache.kafka.common.network.Selector) org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)

  • Edit server.properties:

    Open the server.properties file in the Kafka config directory and increase the max.request.size property. Add or modify the following lines:

    max.request.size=209715200 # Increase this value as needed, default is 104857600 (100MB)

    socket.request.max.bytes=209715200 # Ensure this matches or exceeds max.request.size

  • Edit consumer.properties and producer.properties (if applicable):

    If you have consumer and producer configurations, ensure that these properties are set appropriately there as well:

    max.request.size=209715200
