1. Environment Preparation
1️⃣ Pull the image

```bash
docker pull apache/kafka:3.8.1
```
2️⃣ Create the data directories

```bash
mkdir -p /opt/services-data/kafka/{data,logs}
cd /opt/services-data/kafka
```
3️⃣ Set permissions (⚠️ required)

```bash
chown -R 1000:1000 /opt/services-data/kafka
chmod -R 755 /opt/services-data/kafka
```

👉 Reason: the Kafka container runs as UID 1000 (appuser) by default, so the mounted directories must be writable by that user.
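Before starting the container it is worth double-checking that the mode actually took effect. A minimal sketch of such a check, using a temporary directory as a stand-in for /opt/services-data/kafka (GNU `stat` assumed):

```bash
# Sketch: verify a data directory carries mode 755 before handing it to the
# container; Kafka's appuser (UID 1000) must be able to traverse and write it.
# A temporary directory stands in for /opt/services-data/kafka here.
dir=$(mktemp -d)
chmod 755 "$dir"
mode=$(stat -c '%a' "$dir")   # GNU stat; on macOS use: stat -f '%Lp' "$dir"
if [ "$mode" = "755" ]; then
  echo "permissions OK ($mode)"
else
  echo "unexpected mode: $mode" >&2
fi
rm -rf "$dir"
```

The same `stat` call against the real directory (plus `stat -c '%u:%g'` for ownership) confirms the `chown` as well.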
2. Generate a Cluster ID (required for KRaft)

```bash
docker run --rm apache/kafka:3.8.1 \
  /opt/kafka/bin/kafka-storage.sh random-uuid
```

👉 Example output:

```text
06M1KKGfRuyK7XWurAYXkA
```

👉 Write it down; you will need it later.
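If Docker is not yet available on the machine where you are preparing the config, an ID in the same format can be produced locally. This is a hedged sketch, not the official tool: `kafka-storage.sh random-uuid` emits a URL-safe base64 encoding of 16 random bytes with the padding stripped (22 characters), which the snippet below reproduces with standard utilities.

```bash
# Sketch: produce a cluster ID in the same 22-character format as
# `kafka-storage.sh random-uuid`: 16 random bytes, URL-safe base64, no padding.
CLUSTER_ID=$(head -c 16 /dev/urandom | base64 | tr '+/' '-_' | tr -d '=\n')
echo "$CLUSTER_ID"
echo "length: ${#CLUSTER_ID}"   # always 22
```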
3. Write docker-compose.yml

```yaml
version: '3.8'
services:
  kafka:
    image: apache/kafka:3.8.1
    container_name: kafka
    restart: always
    ports:
      - "9092:9092"
      - "9093:9093"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.122:9092
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@192.168.0.122:9093
      KAFKA_LOG_DIRS: /var/lib/kafka/data
      CLUSTER_ID: 06M1KKGfRuyK7XWurAYXkA
      # Required for a single-node cluster
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
    volumes:
      - ./data:/var/lib/kafka/data
      - ./logs:/opt/kafka/logs
```
⚠️ Must-change item

```yaml
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.122:9092
```

👉 Change this to your server's IP.
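One caveat with a single advertised listener: clients inside the same Docker network are also told to connect via the host IP. If other containers should reach the broker by service name instead, a common pattern is to add a separate internal listener. The sketch below is an example under assumptions (the `INTERNAL` name, port 29092, and the `kafka` hostname follow this guide's compose file, not anything Kafka mandates):

```yaml
# Sketch: one listener for the Docker network (INTERNAL), one for the host (PLAINTEXT).
KAFKA_LISTENERS: INTERNAL://:29092,PLAINTEXT://:9092,CONTROLLER://:9093
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:29092,PLAINTEXT://192.168.0.122:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
```

Port 29092 does not need to appear under `ports:`; it is only reachable on the Docker network.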
4. Start Kafka

```bash
docker-compose up -d
```

Check the logs:

```bash
docker logs -f kafka
```

Success indicator:

```text
Kafka Server started
```
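Instead of watching the log by hand, a script can poll the broker port until it accepts connections. A minimal sketch using bash's `/dev/tcp` redirection (the function name and retry count are this guide's inventions):

```bash
# Sketch: block until host:port accepts a TCP connection, or fail after N tries.
# Relies on bash's /dev/tcp pseudo-device, so run it with bash, not sh.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30}
  local i
  for ((i = 0; i < tries; i++)); do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "port $port on $host is open"
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for $host:$port" >&2
  return 1
}

# Example: wait_for_port 192.168.0.122 9092
```

An open port only shows the listener is up; the "Kafka Server started" log line remains the definitive signal.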
5. Verify Functionality

1️⃣ Enter the container

```bash
docker exec -it kafka bash
cd /opt/kafka/bin
```
2️⃣ Create a topic

```bash
./kafka-topics.sh \
  --create \
  --topic test \
  --bootstrap-server 192.168.0.122:9092 \
  --partitions 1 \
  --replication-factor 1
```
3️⃣ Start a producer

```bash
./kafka-console-producer.sh \
  --topic test \
  --bootstrap-server 192.168.0.122:9092
```

Type:

```text
hello
kafka
```
4️⃣ Start a consumer

```bash
./kafka-console-consumer.sh \
  --topic test \
  --bootstrap-server 192.168.0.122:9092 \
  --from-beginning
```

👉 Expected output:

```text
hello
kafka
```
6. Summary

```text
✔ KRaft mode, no ZooKeeper required
✔ Simple deployment with Docker Compose
✔ Data directories persisted to the host
✔ External access supported
```