EFK Setup (Deployed on Kubernetes)


I. Environment information

1. Servers and Kubernetes version

IP address        Hostname   Role          Version
192.168.40.180    master1    master node   1.27
192.168.40.181    node1      worker node   1.27
192.168.40.182    node2      worker node   1.27

2. Component versions

No.  Name                    Version  Purpose
1    elasticsearch           7.12.1   A real-time distributed search and analytics engine used for full-text search, structured search, and analytics
2    kibana                  7.12.1   Provides analysis and a web visualization UI for Elasticsearch, generating tables and charts across many dimensions
3    fluentd                 v1.16    A data collection engine that collects and parses log data and ships it to Elasticsearch
4    nfs-client-provisioner  v4.0.0   NFS provisioner

Upload the fluentd component to master1 and node2, and upload all of the components to node1.

Link: https://pan.baidu.com/s/1u2U87Jp4TzJxs7nfVqKM-w

Extraction code: fcpp

Link: https://pan.baidu.com/s/1ttMaqmeVNpOAJD8G3-6tag

Extraction code: qho8

Link: https://pan.baidu.com/s/1cQSkGz0NO_rrulas2EYv5Q

Extraction code: rxjx

II. Install the NFS provisioner

1. Install the NFS service

Run on all three nodes:

shell
yum -y install  nfs-utils

2. Start the NFS service and enable it at boot

Run on all three nodes:

shell
# Start the service
systemctl start nfs
# Enable it at boot
systemctl enable nfs.service
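
Note: on some newer distributions (for example CentOS/RHEL 8 and later) the NFS server unit is named nfs-server rather than nfs. If the commands above report that the unit does not exist, the equivalent commands are:

shell
systemctl start nfs-server
systemctl enable nfs-server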

3. Create a shared directory on master1

shell
# Create the directory
mkdir /data/v1 -p

# Edit the /etc/exports file and add the export
vim /etc/exports
/data/v1 *(rw,no_root_squash)

# Reload the exports so the configuration takes effect
exportfs -arv
systemctl restart nfs
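
Optionally verify the export from any node (192.168.40.180 is the NFS server address used in this guide):

shell
showmount -e 192.168.40.180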

4. Create the NFS storage provisioner

Run on master1.

4.1 Create the service account needed to run nfs-provisioner

shell
vim serviceaccount.yaml 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner

Apply the configuration:

shell
kubectl apply -f serviceaccount.yaml

4.2 Grant permissions to the service account

shell
kubectl create clusterrolebinding nfs-provisioner-clusterrolebinding --clusterrole=cluster-admin --serviceaccount=default:nfs-provisioner

Upload nfs-subdir-external-provisioner.tar.gz to node1 and import it into the containerd image store manually:

shell
ctr -n=k8s.io images import nfs-subdir-external-provisioner.tar.gz
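
You can confirm that the image is now in the containerd image store on node1:

shell
ctr -n=k8s.io images ls | grep nfs-subdir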

4.3 Create a Deployment to run the nfs-provisioner pod

shell
vim deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: 192.168.40.180
              # Set this to the IP address of the NFS server, i.e. the machine where you installed the NFS service
            - name: NFS_PATH
              value: /data/v1
              # The directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.40.180
            path: /data/v1

Apply the configuration:

shell
kubectl apply -f deployment.yaml

Check that the pod was created successfully:

shell
kubectl get pods | grep nfs
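
If the pod is not yet Running, you can wait for the Deployment to finish rolling out, for example:

shell
kubectl rollout status deployment/nfs-provisioner
kubectl get pods -l app=nfs-provisioner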


5. Create the StorageClass

Run on master1.

shell
vim es_class.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage
provisioner: example.com/nfs

Apply the configuration:

shell
kubectl apply -f es_class.yaml
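
Verify that the StorageClass exists:

shell
kubectl get storageclass do-block-storage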

III. Install Elasticsearch

Run on master1.

1. Create the kube-logging namespace

shell
vim kube-logging.yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging

Apply the configuration:

shell
kubectl apply -f kube-logging.yaml

2. Create the headless Service for Elasticsearch

shell
vim elasticsearch_svc.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node

Apply the configuration:

shell
kubectl apply -f elasticsearch_svc.yaml
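
Optionally verify that the headless Service was created:

shell
kubectl get svc elasticsearch -n kube-logging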

3. Create the StatefulSet

shell
vim elasticsearch-statefulset.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image:  docker.io/library/elasticsearch:7.12.1
        imagePullPolicy: IfNotPresent
        resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-logs
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.seed_hosts
            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
          - name: cluster.initial_master_nodes
            value: "es-cluster-0,es-cluster-1,es-cluster-2"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 10Gi

Apply the configuration:

shell
kubectl apply -f elasticsearch-statefulset.yaml
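
Once all three es-cluster pods are Running, you can check the cluster health, for example by port-forwarding to one of the pods from master1 (run the port-forward in a second terminal or background it):

shell
kubectl get pods -n kube-logging -l app=elasticsearch
kubectl port-forward es-cluster-0 9200:9200 -n kube-logging &
curl "http://localhost:9200/_cluster/health?pretty"

A green status indicates that all three nodes have joined the cluster.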

IV. Install Kibana

shell
vim kibana.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  type: NodePort
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image:  docker.io/library/kibana:7.12.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_URL
            value: http://elasticsearch:9200
        ports:
        - containerPort: 5601

Apply the configuration:

shell
kubectl apply -f kibana.yaml

Open http://<any cluster node IP>:<NodePort> in a browser (in this environment the assigned NodePort was 31552; the Service above does not pin a nodePort, so look up the port assigned in your cluster as shown below). If the Kibana welcome page appears, Kibana has been successfully deployed to the Kubernetes cluster.
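
To find the NodePort that was assigned to the kibana Service:

shell
kubectl get svc kibana -n kube-logging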

V. Install Fluentd

We deploy Fluentd with a DaemonSet controller so that every node in the cluster runs an identical Fluentd pod replica, which lets us collect the logs of every node in the Kubernetes cluster. In Kubernetes, the stdout/stderr of container applications is redirected to log files (JSON or CRI format) on the node; Fluentd can tail and filter these files, convert the records into the desired format, and send them to the Elasticsearch cluster. Besides container logs, Fluentd can also collect the logs of kubelet, kube-proxy, and docker.

shell
vim  fluentd.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: fluentd
        image: docker.io/fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch7-1
        imagePullPolicy: IfNotPresent
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch.kube-logging.svc.cluster.local"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENTD_SYSTEMD_CONF
            value: disable
          - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
            value: "cri"
          - name: FLUENT_CONTAINER_TAIL_PARSER_TIME_FORMAT
            value: "%Y-%m-%dT%H:%M:%S.%L%z"
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/log/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/log/containers

Apply the configuration:

shell
kubectl apply -f fluentd.yaml
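
Verify that one fluentd pod is running on every node (including the control-plane node, thanks to the toleration above):

shell
kubectl get pods -n kube-logging -l app=fluentd -o wide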

Once Fluentd has started successfully, go to the Kibana dashboard and click Discover in the left-hand menu to reach the index pattern configuration page.
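
Kibana will ask for an index pattern on first use. With the fluentd-kubernetes-daemonset image used here, the Elasticsearch output writes logstash-style index names by default, so an index pattern such as logstash-* should match the collected logs (this assumes the image's default FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX has not been overridden). You can also list the indices directly to confirm that Fluentd is shipping data, e.g. reusing the port-forward to es-cluster-0 from section III:

shell
curl "http://localhost:9200/_cat/indices?v"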

