Single-node Kafka + ZooKeeper deployment on a Kubernetes cluster

Background:

Note: This guide deploys a single-node ZooKeeper + Kafka stack on a Kubernetes cluster, using a Deployment, a Service, and a PVC (persistent storage) for each component.

1. Deploying the ZooKeeper service:

Note: the image used here is dockerhub.jiang.com/jiang-public/zookeeper:3.5.9.

Step 1: Image download address:

registry.cn-hangzhou.aliyuncs.com/images-speed-up/zookeeper:3.5.9
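
If the image has to be re-hosted in a private registry first (the Deployment below pulls from dockerhub.jiang.com), a minimal mirroring sketch looks like this; the target repository path is taken from the Deployment YAML and assumes you are already logged in to that registry:

# pull from the public mirror, retag for the private registry, then push
docker pull registry.cn-hangzhou.aliyuncs.com/images-speed-up/zookeeper:3.5.9
docker tag registry.cn-hangzhou.aliyuncs.com/images-speed-up/zookeeper:3.5.9 dockerhub.jiang.com/jiang-public/zookeeper:3.5.9
docker push dockerhub.jiang.com/jiang-public/zookeeper:3.5.9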

Step 2: Deployment controller YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-kultz
  namespace: sit
  labels:
    app: zookeeper-kultz
    name: zookeeper
    version: v3.5.9
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-kultz
      name: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper-kultz
        name: zookeeper
        version: v3.5.9
    spec:
      volumes:
        - name: zookeeper-pvc
          persistentVolumeClaim:
            claimName: zookeeper-pvc
      containers:
        - name: zookeeper
          image: 'dockerhub.jiang.com/jiang-public/zookeeper:3.5.9'
          ports:
            - containerPort: 2181
              protocol: TCP
          env:
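            # Bitnami-style image: 'yes' lets clients connect without authentication (acceptable for a test environment)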
            - name: ALLOW_ANONYMOUS_LOGIN
              value: 'yes'
          resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 800m
              memory: 2Gi
          volumeMounts:
            - name: zookeeper-pvc
              mountPath: /bitnami/zookeeper/data
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext:
        runAsUser: 0
        fsGroup: 0
      imagePullSecrets:
        - name: user-1-registrysecret
      affinity: {}
      schedulerName: default-scheduler
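  # Recreate (rather than RollingUpdate) avoids the old and new pod trying to mount the same ReadWriteOnce PVC at once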
  strategy:
    type: Recreate
  minReadySeconds: 10
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

Note: two things to watch here:

1. The env variable ALLOW_ANONYMOUS_LOGIN must be set to 'yes'.

2. In securityContext, runAsUser: 0 and fsGroup: 0 must be set; otherwise the container fails with:
mkdir: cannot create directory '/bitnami/zookeeper/data': Permission denied

and the ZooKeeper service never starts.
The PVC is mounted at /bitnami/zookeeper/data, which is the dataDir configured in zoo.cfg.
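
A quick sanity check after the pod starts is to confirm the rollout and that dataDir in zoo.cfg really points at the mounted path (the config path below assumes the Bitnami image layout):

kubectl -n sit get pods -l name=zookeeper
# dataDir should be /bitnami/zookeeper/data, i.e. the PVC mount
kubectl -n sit exec deploy/zookeeper-kultz -- grep dataDir /opt/bitnami/zookeeper/conf/zoo.cfg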

Step 3: PVC storage YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
  namespace: sit
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hpe-san
  volumeMode: Filesystem

Note: the volume is dynamically provisioned by a StorageClass (hpe-san), so that part is not covered in detail here.
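
Before relying on it, it is worth confirming that the StorageClass exists and that the claim actually binds:

kubectl get storageclass hpe-san
kubectl -n sit get pvc zookeeper-pvc    # STATUS should be Bound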

Step 4: ZooKeeper Service YAML:

apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: sit
  labels:
    name: zookeeper
    system/appName: nginxdemo0516
spec:
  ports:
    - name: tcp-port-0
      protocol: TCP
      port: 2181
      targetPort: 2181
  selector:
    name: zookeeper
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}

Step 5: Apply the ZooKeeper manifests:

Create the PVC first:

# kubectl apply -f zookeeper-pvc.yaml

Then create the Deployment:

# kubectl apply -f zookeeper-deploy.yaml

Finally, create the ZooKeeper Service:

#  kubectl apply -f zookeeper-svc.yaml
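
Once all three objects exist, a rough end-to-end check is to wait for the rollout and then talk to ZooKeeper through its Service name (the zkCli.sh path assumes the Bitnami image layout):

kubectl -n sit rollout status deploy/zookeeper-kultz
kubectl -n sit get pods,svc -l name=zookeeper
# connect via the Service DNS name that Kafka will use later
kubectl -n sit exec deploy/zookeeper-kultz -- /opt/bitnami/zookeeper/bin/zkCli.sh -server zookeeper.sit:2181 ls /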

2. Deploying the Kafka service:

Note: the image used here is dockerhub.jiang.com/jiang-public/kafka:3.2.1.

Step 1: Image download address:

registry.cn-hangzhou.aliyuncs.com/images-speed-up/kafka:3.2.1

Step 2: Deployment controller YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-jbhpb
  namespace: sit
  labels:
    app: kafka-jbhpb
    name: kafka
    version: v3.2.1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-jbhpb
      name: kafka
  template:
    metadata:
      labels:
        app: kafka-jbhpb
        name: kafka
        version: v3.2.1
    spec:
      volumes:
        - name: kafka-pvc
          persistentVolumeClaim:
            claimName: kafka-pvc
      containers:
        - name: kafka
          image: 'dockerhub.jiang.com/jiang-public/kafka:3.2.1'
          ports:
            - containerPort: 9092
              protocol: TCP
          env:
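            # zookeeper.sit = <service>.<namespace>, i.e. the ZooKeeper Service created above
            # ALLOW_PLAINTEXT_LISTENER lets the Bitnami-style image run a plain (non-TLS) listener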
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 'zookeeper.sit:2181'
            - name: ALLOW_PLAINTEXT_LISTENER
              value: 'yes'
          resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 800m
              memory: 2Gi
          volumeMounts:
            - name: kafka-pvc
              mountPath: /bitnami/kafka/data/
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext:
        runAsUser: 0
        fsGroup: 0
      imagePullSecrets:
        - name: user-1-registrysecret
      affinity: {}
      schedulerName: default-scheduler
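  # Recreate (rather than RollingUpdate) avoids the old and new pod trying to mount the same ReadWriteOnce PVC at once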
  strategy:
    type: Recreate
  minReadySeconds: 10
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

Note: two things to watch here:

1. The env variables:

KAFKA_ZOOKEEPER_CONNECT='zookeeper.sit:2181'

ALLOW_PLAINTEXT_LISTENER=yes

2. In securityContext, runAsUser: 0 and fsGroup: 0 must be set; otherwise mounting the volume fails with:
mkdir: cannot create directory '/bitnami/kafka/data': Permission denied

and the Kafka service never starts.
The PVC is mounted at /bitnami/kafka/data, which is the log.dirs path configured in server.properties.
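
To double-check the wiring, the relevant settings can be read straight out of the running container (the config path assumes the Bitnami image layout):

kubectl -n sit exec deploy/kafka-jbhpb -- grep -E 'zookeeper.connect=|log.dirs=' /opt/bitnami/kafka/config/server.properties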

Step 3: PVC storage YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kafka-pvc
  namespace: sit
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: hpe-san
  volumeMode: Filesystem

Step 4: Kafka Service YAML:

apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: sit
  labels:
    name: kafka
    system/appName: nginxdemo0516
spec:
  ports:
    - name: tcp-port-0
      protocol: TCP
      port: 9092
      targetPort: 9092
  selector:
    name: kafka
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}

Step 5: Apply the Kafka manifests:

Create the PVC first:

# kubectl apply -f kafka-pvc.yaml

Then create the Deployment:

# kubectl apply -f kafka-deploy.yaml

Finally, create the Kafka Service:

#  kubectl apply -f kafka-svc.yaml
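
As with ZooKeeper, confirm the rollout and skim the broker log before running functional tests:

kubectl -n sit rollout status deploy/kafka-jbhpb
kubectl -n sit get pods,svc -l name=kafka
# the log should end with a "started (kafka.server.KafkaServer)" line
kubectl -n sit logs deploy/kafka-jbhpb --tail=20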

3. Testing Kafka:

Run the following commands inside the Kafka container.
Create a topic:

root@kafka-jbhpb-78bb6df4dc-xhmp6:/opt/bitnami/kafka/bin# ./kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 --replication-factor 1
Created topic my-topic.

List topics:

root@kafka-jbhpb-78bb6df4dc-xhmp6:/opt/bitnami/kafka/bin# ./kafka-topics.sh --bootstrap-server localhost:9092 --list
my-topic

If the topic shows up in the list, the deployment is working.
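
For a fuller check, a produce/consume round trip from the same bin directory confirms that messages actually flow; the test message below is only an example:

# write one message to the topic
echo "hello from k8s" | ./kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic
# read it back (exits after one message)
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning --max-messages 1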
