[Kubernetes] Logging Platform: EFK + Logstash + Kafka (Hands-On)

I. Environment Preparation

(1) Download the image packages (3 in total):

elasticsearch-7-12-1.tar.gz
fluentd-containerd.tar.gz
kibana-7-12-1.tar.gz

(2) Import the images on the worker node(s):

bash
ctr -n=k8s.io images import elasticsearch-7-12-1.tar.gz 
ctr -n=k8s.io images import kibana-7-12-1.tar.gz 
ctr -n=k8s.io images import fluentd-containerd.tar.gz
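
Optionally verify that the images are now in containerd's k8s.io namespace (a quick check; the grep pattern assumes the image names used by the manifests below):

bash
ctr -n=k8s.io images ls | grep -E 'elasticsearch|kibana|fluentd'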

(3) Import the image on the master node:

bash
ctr -n=k8s.io images import fluentd-containerd.tar.gz

II. Detailed Hands-On Steps

1. Create the namespace

bash
kubectl create ns kube-logging
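
To confirm the namespace exists before creating resources in it:

bash
kubectl get ns kube-logging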

2. Create the Elasticsearch Service

yaml
# vim elasticsearch_svc.yaml 

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging      # target namespace
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None              # headless Service
  ports:
    - port: 9200               # REST API port
      name: rest
    - port: 9300                # inter-node transport port
      name: inter-node
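
The manifests in this walkthrough are not applied automatically; assuming the file is saved as elasticsearch_svc.yaml, apply it and confirm the Service is headless (CLUSTER-IP shows None):

bash
kubectl apply -f elasticsearch_svc.yaml
kubectl get svc elasticsearch -n kube-logging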

3. Install Elasticsearch (kind: StatefulSet)

(1) Set up NFS

Run the following commands on all nodes. NFS will then back a StorageClass that provides dynamic volume provisioning.

1-1 Install NFS
bash
# install nfs-utils via yum
yum install nfs-utils -y

# start the NFS service
systemctl start nfs

# enable NFS at boot
systemctl enable nfs.service
1-2 Configure NFS

Configure the NFS export only on the master node, which acts as the NFS server.

bash
# create an NFS shared directory on the master
mkdir /data/v1 -p

# edit the /etc/exports file
vim /etc/exports
/data/v1 *(rw,no_root_squash)

# reload the configuration so it takes effect
exportfs -arv

# restart NFS
systemctl restart nfs
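
To confirm the export is visible to clients (a quick check; replace 192.168.40.180 with your own NFS server IP, the same value used later in NFS_SERVER):

bash
# list the directories exported by the NFS server
showmount -e 192.168.40.180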
(2) Create the storage provisioner (NFS-based)
2-1 Create the ServiceAccount
yaml
# vim serviceaccount.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner

Note: a ServiceAccount lets the processes inside a Pod call the Kubernetes API or other external services. By specifying this ServiceAccount when the Pod is created, the Pod runs with the permissions granted to that account.
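
Apply the manifest (assuming the file name serviceaccount.yaml used above) and confirm the account exists in the default namespace:

bash
kubectl apply -f serviceaccount.yaml
kubectl get sa nfs-provisioner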

2-2 Grant permissions to the ServiceAccount
bash
kubectl create clusterrolebinding nfs-provisioner-clusterrolebinding --clusterrole=cluster-admin --serviceaccount=default:nfs-provisioner
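
You can verify that the binding took effect with an impersonated access check; with cluster-admin this should print "yes":

bash
kubectl auth can-i create persistentvolumes --as=system:serviceaccount:default:nfs-provisioner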
2-3 Create the nfs-provisioner (kind: Deployment)

On the worker node(s), import the image package nfs-subdir-external-provisioner.tar.gz:

bash
ctr -n=k8s.io images import nfs-subdir-external-provisioner.tar.gz

Create the provisioner:

yaml
# vim deployment.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: 192.168.40.180 # IP of the NFS server; replace with the IP of the machine where you installed NFS
            - name: NFS_PATH
              value: /data/v1  # directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.40.180   # must match NFS_SERVER above
            path: /data/v1           # must match NFS_PATH above
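
Apply the Deployment (assuming the file name deployment.yaml used above):

bash
kubectl apply -f deployment.yaml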
bash
# verify that the nfs-provisioner Pod is running
kubectl get pods | grep nfs
(3) Create the StorageClass
yaml
# vim class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage
provisioner: example.com/nfs

Note:

provisioner: example.com/nfs

This value must match the PROVISIONER_NAME value configured in the nfs-provisioner Deployment.
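
Apply and verify the StorageClass (assuming the file name class.yaml used above):

bash
kubectl apply -f class.yaml
kubectl get storageclass do-block-storage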

(4) Install Elasticsearch

On the worker node(s), import the image package elasticsearch-7-12-1.tar.gz:

bash
ctr -n=k8s.io images import elasticsearch-7-12-1.tar.gz
yaml
# vim elasticsearch-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.12.1
        imagePullPolicy: IfNotPresent
        resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-logs
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.seed_hosts
            value: "es-cluster-0.elasticsearch.kube-logging.svc.cluster.local,es-cluster-1.elasticsearch.kube-logging.svc.cluster.local,es-cluster-2.elasticsearch.kube-logging.svc.cluster.local"   # 创建3个pod的完整域名,可简写
          - name: cluster.initial_master_nodes
            value: "es-cluster-0,es-cluster-1,es-cluster-2"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 10Gi

Note: in chown -R 1000:1000 /usr/share/elasticsearch/data, the 1000:1000 is a UID (user ID) and GID (group ID) pair separated by a colon.

On most Linux distributions, UID and GID 1000 are assigned to the first non-root user (i.e., the first user account created after installing the system).

This means the command changes the owner and group of the files or directories to the user and group whose UID and GID are 1000.
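
Apply the StatefulSet and watch the three Pods come up in order (es-cluster-0 through es-cluster-2). Once they are Running, cluster health can be checked through a temporary port-forward (a sketch; it assumes local port 9200 is free):

bash
kubectl apply -f elasticsearch-statefulset.yaml
kubectl get pods -n kube-logging -l app=elasticsearch -w

# in a second terminal: forward the REST port and query cluster health
kubectl port-forward es-cluster-0 9200:9200 -n kube-logging &
curl http://localhost:9200/_cluster/health?pretty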


4. Install Kibana

Create the front-end Service and the single back-end Pod it proxies to.

yaml
# vim kibana.yaml

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  type: NodePort
  ports:
  - port: 5601
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image:  kibana:7.12.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_HOSTS   # Kibana 7.x reads ELASTICSEARCH_HOSTS; ELASTICSEARCH_URL was the 6.x variable
            value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
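
Apply the manifest (assuming the file name kibana.yaml used above); the commands below then verify the Pod and Service:

bash
kubectl apply -f kibana.yaml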
bash
kubectl get pods -n kube-logging

kubectl get svc -n kube-logging

Browser access URL:

http://<any node IP>:<NodePort of the kibana Service (32059 in this example)>
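
The NodePort (32059 here) is assigned by Kubernetes; you can read it straight from the Service with a jsonpath query (the field path assumes the single-port Service defined above):

bash
kubectl get svc kibana -n kube-logging -o jsonpath='{.spec.ports[0].nodePort}'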


5. Install Fluentd (kind: DaemonSet)

The DaemonSet controller guarantees that every node in the cluster runs an identical Fluentd Pod replica.

This allows logs to be collected from every node in the k8s cluster: the stdout/stderr of application containers is redirected into JSON files on the node, and Fluentd tails those files.

Fluentd can not only convert container logs into a specified format and ship them to the Elasticsearch cluster, but also collect the logs of kubelet, kube-proxy, and docker.

Create the ServiceAccount for the Fluentd service and grant it role-based permissions:

yaml
# vim fluentd.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: fluentd
        image: docker.io/fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch7-1
        imagePullPolicy: IfNotPresent
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch.kube-logging.svc.cluster.local"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENTD_SYSTEMD_CONF
            value: disable            
          - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
            value: "cri"
          - name: FLUENT_CONTAINER_TAIL_PARSER_TIME_FORMAT
            value: "%Y-%m-%dT%H:%M:%S.%L%z"            
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/log/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/log/containers

[Note] Log parsing: add the following two environment variables only when the container runtime is containerd; when the runtime is docker, they are not needed.

  - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
    value: "cri"
  - name: FLUENT_CONTAINER_TAIL_PARSER_TIME_FORMAT
    value: "%Y-%m-%dT%H:%M:%S.%L%z"

6. Configure the connection
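
In Kibana, open Stack Management → Index Patterns and create an index pattern such as logstash-* (the fluentd-kubernetes-daemonset image writes Logstash-style index names by default), then browse logs in Discover and filter with KQL, for example kubernetes.namespace_name : "kube-logging". To confirm from the command line that Fluentd is actually creating indices (a sketch, reusing the port-forward approach shown earlier):

bash
kubectl port-forward es-cluster-0 9200:9200 -n kube-logging &
curl 'http://localhost:9200/_cat/indices?v'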

Kibana Query Language (KQL) reference:
https://www.elastic.co/guide/en/kibana/7.12/kuery-query.html
