Deploying single-node kafka + zookeeper on a Kubernetes cluster

Background:

Note: this single-node zookeeper + kafka deployment on a Kubernetes cluster uses a Deployment controller, a Service, and a PVC for persistent storage.

1、Deploy the zookeeper service:

Note: the image used here is dockerhub.jiang.com/jiang-public/zookeeper:3.5.9

1. Image download address:

registry.cn-hangzhou.aliyuncs.com/images-speed-up/zookeeper:3.5.9
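The Deployment below pulls from a private registry (dockerhub.jiang.com). If you mirror the public image into your own registry, the workflow is roughly as follows; the registry names here follow this article and stand in for your own:

```shell
# Pull from the Aliyun mirror, retag for the private registry referenced
# in the Deployment YAML, and push. Replace dockerhub.jiang.com with your registry.
docker pull registry.cn-hangzhou.aliyuncs.com/images-speed-up/zookeeper:3.5.9
docker tag registry.cn-hangzhou.aliyuncs.com/images-speed-up/zookeeper:3.5.9 \
  dockerhub.jiang.com/jiang-public/zookeeper:3.5.9
docker push dockerhub.jiang.com/jiang-public/zookeeper:3.5.9
```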

2. Deployment controller YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-kultz
  namespace: sit
  labels:
    app: zookeeper-kultz
    name: zookeeper
    version: v3.5.9
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-kultz
      name: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper-kultz
        name: zookeeper
        version: v3.5.9
    spec:
      volumes:
        - name: zookeeper-pvc
          persistentVolumeClaim:
            claimName: zookeeper-pvc
      containers:
        - name: zookeeper
          image: 'dockerhub.jiang.com/jiang-public/zookeeper:3.5.9'
          ports:
            - containerPort: 2181
              protocol: TCP
          env:
            - name: ALLOW_ANONYMOUS_LOGIN
              value: 'yes'
          resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 800m
              memory: 2Gi
          volumeMounts:
            - name: zookeeper-pvc
              mountPath: /bitnami/zookeeper/data
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext:
        runAsUser: 0
        fsGroup: 0
      imagePullSecrets:
        - name: user-1-registrysecret
      affinity: {}
      schedulerName: default-scheduler
  strategy:
    type: Recreate
  minReadySeconds: 10
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

Note: two things to watch here:

1、The env configuration: ALLOW_ANONYMOUS_LOGIN='yes'

2、The securityContext configuration: runAsUser: 0 and fsGroup: 0 must be set, otherwise the pod fails with:
mkdir: cannot create directory '/bitnami/zookeeper/data': Permission denied

Without these settings the zookeeper service will not start.
The PVC is mounted at /bitnami/zookeeper/data, which is the dataDir path configured in zoo.cfg.
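For reference, the relevant settings in the Bitnami image's zoo.cfg (path assumed from the standard Bitnami 3.5.9 layout) look roughly like:

```
# /opt/bitnami/zookeeper/conf/zoo.cfg (excerpt)
dataDir=/bitnami/zookeeper/data
clientPort=2181
```

This is why the PVC must be mounted at exactly /bitnami/zookeeper/data: any other mount path would leave the data directory on the container's ephemeral filesystem.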

3. PVC storage YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
  namespace: sit
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hpe-san
  volumeMode: Filesystem

Note: this relies on an existing StorageClass (hpe-san) for dynamic provisioning, so the storage backend is not covered in detail here.
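For completeness, a hypothetical sketch of what such a StorageClass might look like; the provisioner name is an assumption and depends on the CSI driver actually installed in your cluster:

```yaml
# Hypothetical sketch -- adjust provisioner/parameters to your CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-san
provisioner: csi.hpe.com        # assumed HPE CSI driver name
reclaimPolicy: Delete
volumeBindingMode: Immediate
```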

4. zookeeper Service YAML:

apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: sit
  labels:
    name: zookeeper
    system/appName: nginxdemo0516
spec:
  ports:
    - name: tcp-port-0
      protocol: TCP
      port: 2181
      targetPort: 2181
  selector:
    name: zookeeper
  type: ClusterIP
  sessionAffinity: None

5. Run the zookeeper service:

First, create the PVC:

# kubectl apply -f zookeeper-pvc.yaml

Next, create the Deployment:

# kubectl apply -f zookeeper-deploy.yaml

Finally, create the zookeeper Service:

#  kubectl apply -f zookeeper-svc.yaml
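After applying the three manifests it is worth verifying the result. A sketch of typical checks, with resource names taken from the YAML above and zkServer.sh assumed to be on PATH in the Bitnami image:

```shell
# Check that the PVC is Bound, the pod is Running, and the Service has endpoints.
kubectl get pvc zookeeper-pvc -n sit
kubectl get pods -n sit -l name=zookeeper
kubectl get endpoints zookeeper -n sit
# Ask ZooKeeper itself whether it is serving (standalone mode expected).
kubectl exec -n sit deploy/zookeeper-kultz -- zkServer.sh status
```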

2、Deploy the kafka service:

Note: the image used here is dockerhub.jiang.com/jiang-public/kafka:3.2.1

1. Image download address:

registry.cn-hangzhou.aliyuncs.com/images-speed-up/kafka:3.2.1

2. Deployment controller YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-jbhpb
  namespace: sit
  labels:
    app: kafka-jbhpb
    name: kafka
    version: v3.2.1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-jbhpb
      name: kafka
  template:
    metadata:
      labels:
        app: kafka-jbhpb
        name: kafka
        version: v3.2.1
    spec:
      volumes:
        - name: kafka-pvc
          persistentVolumeClaim:
            claimName: kafka-pvc
      containers:
        - name: kafka
          image: 'dockerhub.jiang.com/jiang-public/kafka:3.2.1'
          ports:
            - containerPort: 9092
              protocol: TCP
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 'zookeeper.sit:2181'
            - name: ALLOW_PLAINTEXT_LISTENER
              value: 'yes'
          resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 800m
              memory: 2Gi
          volumeMounts:
            - name: kafka-pvc
              mountPath: /bitnami/kafka/data/
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext:
        runAsUser: 0
        fsGroup: 0
      imagePullSecrets:
        - name: user-1-registrysecret
      affinity: {}
      schedulerName: default-scheduler
  strategy:
    type: Recreate
  minReadySeconds: 10
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

Note: two things to watch here:

1、The env configuration:

KAFKA_ZOOKEEPER_CONNECT='zookeeper.sit:2181' (the <service>.<namespace> cluster-DNS name of the zookeeper Service created above)

ALLOW_PLAINTEXT_LISTENER=yes

2、The securityContext configuration: runAsUser: 0 and fsGroup: 0 must be set, otherwise mounting the storage fails with:
mkdir: cannot create directory '/bitnami/kafka/data': Permission denied

Without these settings the kafka service will not start.
The PVC is mounted at /bitnami/kafka/data, which is the log.dirs path configured in server.properties.
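Assuming the image maps KAFKA_ZOOKEEPER_CONNECT into server.properties the way this deployment relies on, the effective broker settings look roughly like (path assumed from the standard Bitnami layout):

```
# /opt/bitnami/kafka/config/server.properties (excerpt, after env mapping)
log.dirs=/bitnami/kafka/data
zookeeper.connect=zookeeper.sit:2181
```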

3. PVC storage YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kafka-pvc
  namespace: sit
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: hpe-san
  volumeMode: Filesystem

4. kafka Service YAML:

apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: sit
  labels:
    name: kafka
    system/appName: nginxdemo0516
spec:
  ports:
    - name: tcp-port-0
      protocol: TCP
      port: 9092
      targetPort: 9092
  selector:
    name: kafka
  type: ClusterIP
  sessionAffinity: None

5. Run the kafka service:

First, create the PVC:

# kubectl apply -f kafka-pvc.yaml

Next, create the Deployment:

# kubectl apply -f kafka-deploy.yaml

Finally, create the kafka Service:

#  kubectl apply -f kafka-svc.yaml
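As with zookeeper, verify the result after applying; a sketch of typical checks, with resource names taken from the YAML above:

```shell
# Check that the PVC is Bound and the broker pod is Running.
kubectl get pvc kafka-pvc -n sit
kubectl get pods -n sit -l name=kafka
# The broker logs a "started" line once it is fully up.
kubectl logs -n sit deploy/kafka-jbhpb | grep -i started
```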

3、Test kafka functionality:

Run the following inside the kafka container.

Create a topic:
root@kafka-jbhpb-78bb6df4dc-xhmp6:/opt/bitnami/kafka/bin# ./kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 --replication-factor 1
Created topic my-topic.

List topics:

root@kafka-jbhpb-78bb6df4dc-xhmp6:/opt/bitnami/kafka/bin# ./kafka-topics.sh --bootstrap-server localhost:9092 --list
my-topic

The topic shows up in the list, so the deployment is working.
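Beyond listing topics, an end-to-end smoke test can produce a message and read it back with the console tools shipped in the same bin/ directory:

```shell
# Run inside the kafka container: produce one message, then consume it back.
echo "hello kafka" | ./kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic \
  --from-beginning --max-messages 1
```

If the consumer prints the message and exits, the broker is storing and serving data correctly.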
