Building an EFK Logging Platform on Kubernetes (K8s)

Table of Contents

1. Installing the EFK components in Kubernetes

1.1 Installing the Elasticsearch component

1.2 Installing the Kibana component

1.3 Installing the Fluentd component


Copying and pasting the YAML configurations directly from this document may introduce formatting errors, so the YAML files and local image packages needed for this lab have been bundled and uploaded to a cloud drive:

Link: https://pan.baidu.com/s/15Ryaoa0_9ABQElLw9y28DA

Extraction code: xdbm

Common log collection solutions on Kubernetes:

https://chenyun.blog.csdn.net/article/details/142336441

Elasticsearch version: the official Docker Hub currently goes up to 7.12.1.

https://registry.hub.docker.com/_/elasticsearch?tab=tags&page=1&ordering=last_updated

Kibana version: the official Docker Hub currently goes up to 7.12.1.

https://registry.hub.docker.com/_/kibana?tab=tags&page=1&ordering=last_updated

Fluentd version: the official Docker Hub currently goes up to 1.9.2.

https://registry.hub.docker.com/_/fluentd?tab=tags&page=1&ordering=last_updated

1. Installing the EFK components in Kubernetes

Upload elasticsearch-7-12-1.tar.gz, fluentd-v1-9-1.tar.gz, and kibana-7-12-1.tar.gz to the xianchaomaster1 and xianchaonode1 machines, then load them into Docker manually:

docker load -i elasticsearch-7-12-1.tar.gz

docker load -i kibana-7-12-1.tar.gz

docker load -i fluentd-v1-9-1.tar.gz
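To confirm the images loaded correctly, a quick optional check:

docker images | grep -E "elasticsearch|kibana|fluentd"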

Installing the NFS provisioner

# Install the NFS service on the xianchaomaster1 node of the k8s cluster; the IP of xianchaomaster1 is 192.168.40.180

# Install NFS with yum

[root@xianchaomaster1 ~]# yum install nfs-utils -y

[root@xianchaonode1 ~]# yum install nfs-utils -y

# Start the NFS service

[root@xianchaomaster1 ~]# systemctl start nfs

[root@xianchaonode1 ~]# systemctl start nfs

# Enable NFS to start on boot

[root@xianchaomaster1 ~]# systemctl enable nfs.service

[root@xianchaonode1 ~]# systemctl enable nfs.service

# Create an NFS shared directory on xianchaomaster1

[root@xianchaomaster1 ~]# mkdir /data/v1 -p

# Edit the /etc/exports file

[root@xianchaomaster1 ~]# vim /etc/exports

/data/v1 *(rw,no_root_squash)

# Reload the configuration so the changes take effect

[root@xianchaomaster1 ~]# exportfs -arv

[root@xianchaomaster1 ~]# systemctl restart nfs
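The export can now be verified from any machine with nfs-utils installed (an optional sanity check; 192.168.40.180 is the NFS server IP from above):

[root@xianchaonode1 ~]# showmount -e 192.168.40.180

The export list should include /data/v1.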

# Create a provisioner that uses NFS as its storage backend

1. Create the sa (ServiceAccount) needed to run nfs-provisioner

[root@xianchaomaster1 nfs]# cat serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner

[root@xianchaomaster1 nfs]# kubectl apply -f serviceaccount.yaml

serviceaccount/nfs-provisioner created

Background: what is an sa?

sa is short for ServiceAccount.

A ServiceAccount exists to let the processes inside a Pod call the Kubernetes API or other external services.

Once we create a Pod with a ServiceAccount specified, the Pod runs with the permissions of that account, as the sketch below shows.
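A minimal sketch of what that looks like (the pod name sa-demo and the busybox image are placeholders for illustration only):

apiVersion: v1
kind: Pod
metadata:
  name: sa-demo                        # hypothetical pod, for illustration only
spec:
  serviceAccountName: nfs-provisioner  # processes in this pod act with this account's permissions
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]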

2. Grant permissions to the sa

[root@xianchaomaster1]# kubectl create clusterrolebinding nfs-provisioner-clusterrolebinding --clusterrole=cluster-admin --serviceaccount=default:nfs-provisioner

# Upload nfs-subdir-external-provisioner.tar.gz to xianchaonode1 and load it manually.

[root@xianchaonode1 ~]# docker load -i nfs-subdir-external-provisioner.tar.gz

# Run nfs-provisioner in a pod created by a Deployment

[root@xianchaomaster1]# kubectl apply -f deployment.yaml

Explanation of the deployment.yaml file:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-provisioner
        image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: example.com/nfs
        - name: NFS_SERVER
          value: 192.168.40.180   # IP of the NFS server; use the IP of the machine where you installed NFS
        - name: NFS_PATH
          value: /data/v1         # directory exported by the NFS server
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.40.180
          path: /data/v1

# Verify that the nfs provisioner was created successfully

[root@xianchaomaster1]# kubectl get pods | grep nfs

# Output like the following indicates success:

nfs-provisioner-5975849bb4-92dhq 1/1 Running 3 11h

# Create the storageclass

[root@xianchaomaster1]# kubectl apply -f class.yaml

The contents of class.yaml are as follows:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage
provisioner: example.com/nfs

Note: the provisioner value, example.com/nfs, must match the value of the PROVISIONER_NAME variable configured in the nfs provisioner's Deployment. Dynamic provisioning can then be sanity-checked with a throwaway PVC, as sketched below.
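A minimal test claim (a sketch; the name test-claim is made up for this check):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim                   # hypothetical name, used only for this check
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: do-block-storage # the StorageClass created above
  resources:
    requests:
      storage: 1Mi

After applying it, kubectl get pvc should show the claim Bound within a few seconds; delete it once verified.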

1.1 Installing the Elasticsearch component

All of the following installation steps are performed on the k8s control node:

1. Create the kube-logging namespace

[root@xianchaomaster1 efk]# cat kube-logging.yaml

kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging

[root@xianchaomaster1 efk]# kubectl apply -f kube-logging.yaml

2. Check that the kube-logging namespace was created

kubectl get namespaces | grep kube-logging

Output like the following indicates success:

kube-logging Active 1m

3. Install the Elasticsearch component

# Create a headless service

[root@xianchaomaster1 efk]# cat elasticsearch_svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node

[root@xianchaomaster1 efk]# kubectl apply -f elasticsearch_svc.yaml

Check that the elasticsearch service was created:

[root@xianchaomaster1 efk]# kubectl get services --namespace=kube-logging

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)

elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP
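Because clusterIP is None, this headless service gives each StatefulSet pod a stable DNS name instead of load-balancing through a virtual IP. The DNS records can be checked from a temporary pod (a sketch; nslookup output varies by busybox version):

[root@xianchaomaster1 efk]# kubectl run dns-test -n kube-logging --rm -it --image=busybox --restart=Never -- nslookup elasticsearch

Once the StatefulSet below is running, the lookup returns one A record per es-cluster pod rather than a single service IP.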

# Create the storageclass

[root@xianchaomaster1 efk]# cat es_class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage
provisioner: example.com/nfs

[root@xianchaomaster1 efk]# kubectl apply -f es_class.yaml

[root@xianchaomaster1 efk]# cat elasticsearch-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.12.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1,es-cluster-2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 10Gi

[root@xianchaomaster1 efk]# kubectl apply -f elasticsearch-statefulset.yaml

[root@xianchaomaster1 efk]# kubectl get pods -n kube-logging

NAME READY STATUS RESTARTS AGE

es-cluster-0 1/1 Running 6 11h

es-cluster-1 1/1 Running 2 11h

es-cluster-2 1/1 Running 2 11h

[root@xianchaomaster1 efk]# kubectl get svc -n kube-logging

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)

elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP
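Before moving on, it is worth confirming that the three nodes actually formed one cluster. One way (a sketch, run from the control node) is to port-forward to a pod and query the health API:

[root@xianchaomaster1 efk]# kubectl port-forward es-cluster-0 9200:9200 -n kube-logging &
[root@xianchaomaster1 efk]# curl "http://localhost:9200/_cluster/health?pretty"

A healthy cluster reports "status" : "green" and "number_of_nodes" : 3.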

1.2 Installing the Kibana component

[root@xianchaomaster1 efk]# cat kibana.yaml

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.12.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601

Once the configuration is complete, create the resources with kubectl:

[root@xianchaomaster1 efk]# kubectl apply -f kibana.yaml

[root@xianchaomaster1 efk]# kubectl get pods -n kube-logging

NAME READY STATUS RESTARTS AGE

es-cluster-0 1/1 Running 6 11h

es-cluster-1 1/1 Running 2 11h

es-cluster-2 1/1 Running 2 11h

kibana-84cf7f59c-vvm6q 1/1 Running 2 11h

[root@xianchaomaster1 efk]# kubectl get svc -n kube-logging

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)

elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP

kibana ClusterIP 10.108.195.109 <none> 5601/TCP

Change the service type to NodePort:

kubectl edit svc kibana -n kube-logging

Change type: ClusterIP to type: NodePort, then save and exit.
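The same change can also be made non-interactively, as an alternative to kubectl edit:

[root@xianchaomaster1 efk]# kubectl patch svc kibana -n kube-logging -p '{"spec":{"type":"NodePort"}}'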

[root@xianchaomaster1 efk]# kubectl get svc -n kube-logging

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)

elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP

kibana NodePort 10.108.195.109 <none> 5601:32329/TCP

Open http://<any-k8s-node-IP>:32329 in a browser (32329 is the NodePort assigned above; yours may differ). If you see the welcome page, Kibana has been successfully deployed to the Kubernetes cluster.
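If the page does not load, Kibana's status API offers a quick reachability check from any node (the address assumes the node IP and NodePort used in this lab):

[root@xianchaomaster1 efk]# curl -s http://192.168.40.180:32329/api/status

A JSON status payload in response means Kibana itself is up and the NodePort is reachable.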

1.3 Installing the Fluentd component

We deploy the fluentd component with a DaemonSet controller, which guarantees that every node in the cluster runs an identical fluentd pod replica and therefore that logs from every node in the k8s cluster are collected. In a k8s cluster, the stdout/stderr of containerized applications is redirected into JSON files on the node; fluentd can tail and filter these files, transform the logs into a specified format, and ship them to the Elasticsearch cluster. Besides container logs, fluentd can also collect the logs of kubelet, kube-proxy, and docker.
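On a Docker-runtime node this layout can be inspected directly: /var/log/containers holds per-container symlinks that resolve into /var/lib/docker/containers, which is why the DaemonSet below mounts both host paths (an optional look, assuming the Docker runtime):

[root@xianchaonode1 ~]# ls /var/log/containers/ | head -n 3
[root@xianchaonode1 ~]# ls /var/lib/docker/containers/ | head -n 3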

[root@xianchaomaster1 efk]# cat fluentd.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluentd:v1.9.1-debian-1.0
        imagePullPolicy: IfNotPresent
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.kube-logging.svc.cluster.local"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENTD_SYSTEMD_CONF
          value: disable
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

[root@xianchaomaster1 efk]# kubectl apply -f fluentd.yaml

[root@xianchaomaster1 efk]# kubectl get pods -n kube-logging

NAME READY STATUS RESTARTS AGE

es-cluster-0 1/1 Running 6 11h

es-cluster-1 1/1 Running 2 11h

es-cluster-2 1/1 Running 2 11h

fluentd-m8rgp 1/1 Running 3 11h

fluentd-wbl4z 1/1 Running 0 11h

kibana-84cf7f59c-vvm6q 1/1 Running 2 11h
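Before opening Kibana, you can confirm that logs are actually reaching Elasticsearch (a sketch reusing the port-forward approach from section 1.1; the logstash-* index naming assumes the image's default Elasticsearch output settings):

[root@xianchaomaster1 efk]# kubectl port-forward es-cluster-0 9200:9200 -n kube-logging &
[root@xianchaomaster1 efk]# curl "http://localhost:9200/_cat/indices?v"

Daily indices named logstash-YYYY.MM.DD should appear in the listing.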

Once Fluentd has started successfully, go to the Kibana Dashboard page and click Discover in the left-hand menu; the following configuration page appears:

Here we configure the Elasticsearch index pattern we need. The Fluentd configuration used above ships logs in the logstash format, so entering logstash-* in the text box matches all of the log data in the Elasticsearch cluster. Click Next step to continue:

Click Next step; the following page appears:

Select @timestamp and create the index pattern.

Click Discover on the left and the collected logs are displayed. Results can be filtered with KQL, as sketched below.
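An example KQL filter for the Discover search bar (the kubernetes.* field names assume the Kubernetes metadata filter that ships with the standard fluentd Kubernetes daemonset images):

kubernetes.namespace_name : "kube-logging"

See the KQL guide linked below for the full query syntax.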

Reference: Kibana Query Language | Kibana Guide [7.12] | Elastic
