1. ELK Architecture
1.1 Deploying the ES cluster
https://mirrors.tuna.tsinghua.edu.cn/elasticstack/apt/7.x/pool/main/e/elasticsearch/
1. Download the package
root@es-server1:~# wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/apt/7.x/pool/main/e/elasticsearch/elasticsearch-7.12.0-amd64.deb
2. Install the ES service (ES depends on a Java runtime; the package from the Tsinghua mirror bundles one by default)
root@es-server1:~# dpkg -i elasticsearch-7.12.0-amd64.deb
3. Edit the ES configuration file
root@es-server1:~# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: log-cluster            #cluster name (must be identical on every ES node)
node.name: node1                     #node name (must be unique per ES node)
path.logs: /data/log/elasticsearch   #directory for log data
network.host: 172.17.1.11            #IP this ES node binds to
http.port: 9200                      #ES service port
discovery.seed_hosts: ["172.17.1.11", "172.17.1.12","172.17.1.13"]          #IPs of the ES cluster nodes
cluster.initial_master_nodes: ["172.17.1.11", "172.17.1.12","172.17.1.13"]  #nodes eligible for the initial master election
action.destructive_requires_name: true  #improves operational safety: when set to true, destructive operations (such as deleting an index) must name their target explicitly; wildcards and the _all option are rejected
4. Enable the ES service at boot and start it
root@es-server1:~# systemctl enable elasticsearch.service
root@es-server1:~# systemctl start elasticsearch.service
5. Check that the port is listening
root@es-server1:~# ss -ntl|grep 9200
LISTEN 0 4096 [::ffff:172.17.1.11]:9200 *:*
Note: on the remaining ES nodes, only the network.host and node.name parameters in the ES config file need to change.
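Once all three nodes are up, the cluster health API gives a quick sanity check (a minimal sketch; any node's IP works):
root@es-server1:~# curl -s http://172.17.1.11:9200/_cluster/health?pretty   #expect "status" : "green" and "number_of_nodes" : 3
root@es-server1:~# curl -s http://172.17.1.11:9200/_cat/nodes?v             #the elected master is marked with * in the master column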
1.2 Deploying Kibana
https://mirrors.tuna.tsinghua.edu.cn/elasticstack/apt/7.x/pool/main/k/kibana/kibana-7.12.0-amd64.deb
(1) Download the package (its version must match the ES version)
root@es-server1:~# wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/apt/7.x/pool/main/k/kibana/kibana-7.12.0-amd64.deb
(2) Install the Kibana service
root@es-server1:~# dpkg -i kibana-7.12.0-amd64.deb
(3) Edit the Kibana configuration file
root@es-server1:/etc/kibana# vim kibana.yml
server.port: 5601            #Kibana service port
server.host: "172.17.1.11"   #IP Kibana binds to
elasticsearch.hosts: ["http://172.17.1.11:9200","http://172.17.1.12:9200","http://172.17.1.13:9200"]   #ES cluster endpoints Kibana connects to
i18n.locale: "zh-CN"         #set the Kibana UI language to Chinese
(4) Enable the Kibana service at boot, start it, and check the port
root@es-server1:/etc/kibana# systemctl enable kibana.service
root@es-server1:/etc/kibana# systemctl start kibana.service
root@es-server1:/etc/kibana# ss -ntl|grep 5601
LISTEN 0 511 172.17.1.11:5601 0.0.0.0:*
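A quick health check against the Kibana status API (a minimal sketch; no login is needed on this unsecured setup):
root@es-server1:/etc/kibana# curl -s http://172.17.1.11:5601/api/status   #the overall state in the returned JSON should be green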
1.3 Deploying ZooKeeper
(1) Download the ZooKeeper package
root@zk-kfk-1:~# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.6.4/apache-zookeeper-3.6.4-bin.tar.gz
(2) Extract the package into the target directory
root@zk-kfk-1:~# mkdir /app
root@zk-kfk-1:~# tar -xvf apache-zookeeper-3.6.4-bin.tar.gz -C /app
(3) Create a symlink (run inside /app so the link resolves)
root@zk-kfk-1:~# cd /app && ln -sv apache-zookeeper-3.6.4-bin zookeeper
(4) Edit the ZooKeeper configuration file
root@zk-kfk-1:~# cd /app/zookeeper/conf/
root@zk-kfk-1:/app/zookeeper/conf# cp zoo_sample.cfg zoo.cfg
root@zk-kfk-1:/app/zookeeper/conf# vim zoo.cfg
dataDir=/data/zookeeper   #data directory
clientPort=2181           #port clients use to connect to ZooKeeper
server.1=172.17.1.14:2888:3888
server.2=172.17.1.15:2888:3888
server.3=172.17.1.16:2888:3888
(5) Set the ZooKeeper node id (the data directory must exist first)
root@zk-kfk-1:/app/zookeeper/conf# mkdir -p /data/zookeeper
root@zk-kfk-1:/app/zookeeper/conf# echo "1" > /data/zookeeper/myid
(6) Create the ZooKeeper systemd unit
root@zk-kfk-1:/app/zookeeper/conf# cat /etc/systemd/system/zookeeper.service
[Unit]
Description=Apache ZooKeeper Server
Documentation=http://zookeeper.apache.org
Requires=network.target
After=network.target
[Service]
Type=forking
User=root
Group=root
ExecStart=/app/zookeeper/bin/zkServer.sh start
ExecStop=/app/zookeeper/bin/zkServer.sh stop
WorkingDirectory=/data/zookeeper
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
(7) Enable ZooKeeper at boot and start it
root@zk-kfk-1:/app/zookeeper/conf# systemctl enable zookeeper.service
root@zk-kfk-1:/app/zookeeper/conf# systemctl start zookeeper.service
(8) Verify
root@zk-kfk-1:/app/zookeeper/conf# ss -ntl|grep 2181
LISTEN 0 50 *:2181 *:*
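After repeating these steps on the other two nodes (changing only the myid value), each node can report its role (a minimal sketch):
root@zk-kfk-1:/app/zookeeper/conf# /app/zookeeper/bin/zkServer.sh status   #one node should report Mode: leader, the other two Mode: follower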
1.4 Deploying Kafka
(1) Download the Kafka package
root@zk-kfk-1:/app# wget https://archive.apache.org/dist/kafka/3.6.0/kafka_2.12-3.6.0.tgz
(2) Extract to the target path
root@zk-kfk-1:/app# tar -xvf kafka_2.12-3.6.0.tgz -C /app
(3) Create a symlink
root@zk-kfk-1:/app# ln -sv kafka_2.12-3.6.0 kafka
(4) Edit the Kafka configuration file
root@zk-kfk-1:/app/kafka/config# vim server.properties
broker.id=14                             #Kafka broker id; must be unique on every node
listeners=PLAINTEXT://172.17.1.14:9092   #listen address
log.dirs=/data/kafka-logs                #path where Kafka stores its data logs
zookeeper.connect=172.17.1.14:2181,172.17.1.15:2181,172.17.1.16:2181   #ZooKeeper cluster IPs
(5) Install the Java runtime
root@zk-kfk-1:/app/kafka/config# apt -y install openjdk-11-jdk
(6) Create the Kafka systemd unit
root@zk-kfk-1:/app/kafka/config# cat /etc/systemd/system/kafka.service
[Unit]
Description=Apache Kafka Server
Documentation=http://kafka.apache.org/documentation.html
Requires=zookeeper.service
After=zookeeper.service
[Service]
Type=simple
User=root
Group=root
Environment="JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64"
ExecStart=/app/kafka/bin/kafka-server-start.sh /app/kafka/config/server.properties
ExecStop=/app/kafka/bin/kafka-server-stop.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
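The unit still needs to be enabled and started; once all three brokers are up, a throwaway topic confirms they can talk to each other (a minimal sketch; the name test-topic is arbitrary):
root@zk-kfk-1:/app/kafka/config# systemctl daemon-reload && systemctl enable --now kafka.service
root@zk-kfk-1:/app/kafka/config# ss -ntl | grep 9092
root@zk-kfk-1:/app/kafka/config# /app/kafka/bin/kafka-topics.sh --bootstrap-server 172.17.1.14:9092 --create --topic test-topic --partitions 3 --replication-factor 3
root@zk-kfk-1:/app/kafka/config# /app/kafka/bin/kafka-topics.sh --bootstrap-server 172.17.1.14:9092 --describe --topic test-topic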
1.5 Deploying Logstash
(1) Download the package
root@logstach:~# wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/apt/7.x/pool/main/l/logstash/logstash-7.12.0-amd64.deb
(2) Install the Java runtime and the Logstash service
root@logstach:~# dpkg -i logstash-7.12.0-amd64.deb
root@logstach:~# apt -y install openjdk-11-jdk
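Later, once pipeline files are placed under /etc/logstash/conf.d/, they can be syntax-checked before (re)starting the service (a minimal sketch):
root@logstach:~# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/ --config.test_and_exit
root@logstach:~# systemctl enable --now logstash.service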
2. Log Collection Methods
2.1 Collecting logs with a DaemonSet
Running the log collection service as a DaemonSet mainly covers these log types:
- Per-node collection: a DaemonSet-deployed collector picks up json-file style logs (standard output /dev/stdout and standard error /dev/stderr), i.e. the stdout/stderr logs produced by the applications
- Host system logs and other logs stored as files on the node
Log collection architecture diagram:
Log collection flow:
- The application container in a pod writes its JSON-format logs to a file on the node host (e.g. json-file1)
- The logstash container deployed on the node collects the application container's log file (json-file1) stored on that host
- The logstash container forwards the collected application logs to the Kafka cluster
- The logstash service pulls the pod application logs from the Kafka cluster and forwards them to the ES cluster, which indexes and analyzes them
- Kibana queries ES for the logs and renders them; users view the logs through the Kibana web UI
Note: the node directory holding the application container logs must be mounted into the logstash container via a hostPath volume so logstash can read the container logs
2.1.1 Building the logstash image with a Dockerfile
root@k8s-harbor:~/dockerfile/web/daemonset-logstash# ll
total 16
drwxr-xr-x 2 root root 89 Mar 11 21:23 ./
drwxrwxr-x 11 root root 145 Mar 11 21:00 ../
-rw-r--r-- 1 root root 219 Mar 11 21:11 Dockerfile
-rwxr-xr-x 1 root root 169 Mar 11 21:23 build-commond.sh*
-rw-r--r-- 1 root root 805 Mar 11 21:07 logstash.conf
-rw-r--r-- 1 root root 92 Mar 11 21:07 logstash.yml
1. Logstash pipeline configuration
root@k8s-harbor:~/dockerfile/web/daemonset-logstash# cat logstash.conf
input {    #log sources
  file {
    #path => "/var/lib/docker/containers/*/*-json.log"   #docker
    path => "/var/log/pods/*/*/*.log"     #Logstash watches every log file matching this path
    start_position => "beginning"         #on startup, read files from the beginning instead of only newly appended lines
    type => "jsonfile-daemonset-applog"   #tag this input stream with a type for later routing (roughly analogous to an ES index)
  }
  file {
    path => "/var/log/*.log"
    start_position => "beginning"
    type => "jsonfile-daemonset-syslog"
  }
}
output {    #ship the logs to kafka
  if [type] == "jsonfile-daemonset-applog" {
    kafka {    #kafka is the output destination
      bootstrap_servers => "${KAFKA_SERVER}"   #kafka cluster address, taken from an environment variable so the image needs no manual edits later
      topic_id => "${TOPIC_ID}"                #kafka topic
      batch_size => 16384                      #maximum batch size, in bytes, that logstash sends to kafka per request
      codec => "${CODEC}"                      #encoding of the payload
    }
  }
  if [type] == "jsonfile-daemonset-syslog" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}"   #note: system logs are not JSON
    }
  }
}
2. logstash.yml
root@k8s-harbor:~/dockerfile/web/daemonset-logstash# cat logstash.yml
http.host: "0.0.0.0"
#xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]   #keep xpack monitoring disabled; it is a paid feature, and leaving it enabled without a license prevents logstash from working properly
3. The Logstash Dockerfile
root@k8s-harbor:~/dockerfile/web/daemonset-logstash# cat Dockerfile
FROM logstash:7.12.0
USER root
WORKDIR /usr/share/logstash
#RUN rm -rf config/logstash-sample.conf
ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf
4. Image build script
root@k8s-harbor:~/dockerfile/web/daemonset-logstash# cat build-commond.sh
#!/bin/bash
docker build -t harbor.qiange.com/baseimages/logstash:v7.12.0-json-file-log-v4 .
docker push harbor.qiange.com/baseimages/logstash:v7.12.0-json-file-log-v4
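The image can be smoke-tested before rolling it out; the official logstash image passes flag arguments straight to logstash, so the bundled pipeline can be parsed in place (a minimal sketch; the env values mirror what the DaemonSet below sets):
root@k8s-harbor:~/dockerfile/web/daemonset-logstash# docker run --rm -e KAFKA_SERVER="172.17.1.14:9092" -e TOPIC_ID="jsonfile-log-topic" -e CODEC="json" harbor.qiange.com/baseimages/logstash:v7.12.0-json-file-log-v4 --config.test_and_exit -f /usr/share/logstash/pipeline/logstash.conf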
2.1.2 Deploying the logstash container with a DaemonSet
root@k8s-master1:/app/yaml/daemonset-logstash# cat DaemonSet-logstash.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: logstash-logging
spec:
  selector:
    matchLabels:
      name: logstash-elasticsearch
  template:
    metadata:
      labels:
        name: logstash-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: logstash-elasticsearch
        image: harbor.qiange.com/baseimages/logstash:v7.12.0-json-file-log-v4
        env:
        - name: "KAFKA_SERVER"
          value: "172.17.1.14:9092,172.17.1.15:9092,172.17.1.16:9092"
        - name: "TOPIC_ID"
          value: "jsonfile-log-topic"
        - name: "CODEC"
          value: "json"
        # resources:
        #   limits:
        #     cpu: 1000m
        #     memory: 1024Mi
        #   requests:
        #     cpu: 500m
        #     memory: 1024Mi
        volumeMounts:
        - name: varlog                   #mount for host system logs
          mountPath: /var/log            #mount point for host system logs
        - name: varlibdockercontainers   #mount for container logs; must match the collection path in logstash.conf
          #mountPath: /var/lib/docker/containers   #mount path when using docker
          mountPath: /var/log/pods                 #mount path when using containerd; must match logstash's log collection path
          readOnly: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                    #host system logs
      - name: varlibdockercontainers
        hostPath:
          #path: /var/lib/docker/containers #host log path when using docker (only one path may be active)
          path: /var/log/pods               #host log path when using containerd
#Verify
root@k8s-master1:/app/yaml/daemonset-logstash# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-59df8b6856-tx7h2 1/1 Running 22 (46m ago) 50d
calico-node-45vdl 1/1 Running 0 19m
calico-node-5dqjz 1/1 Running 0 17m
calico-node-9z6cn 1/1 Running 0 17m
calico-node-bt6zr 1/1 Running 0 16m
calico-node-ct56r 1/1 Running 0 17m
calico-node-dnbwt 1/1 Running 0 16m
coredns-67cb59d684-9jrbw 1/1 Running 14 (46m ago) 46d
logstash-elasticsearch-457h6 1/1 Running 0 7m56s
logstash-elasticsearch-b6g75 1/1 Running 0 7m56s
logstash-elasticsearch-bc5dd 1/1 Running 0 7m56s
logstash-elasticsearch-bhc85 1/1 Running 0 7m56s
logstash-elasticsearch-ctt5r 1/1 Running 0 7m56s
logstash-elasticsearch-kt6bh 1/1 Running 0 7m56s
2.1.3 Configuring the logstash service to forward data to the ES cluster
root@logstach:~# cd /etc/logstash/conf.d/
root@logstach:/etc/logstash/conf.d# cat logsatsh-daemonset-jsonfile-kafka-to-es.conf
input {
  kafka {
    bootstrap_servers => "172.17.1.14:9092,172.17.1.15:9092,172.17.1.16:9092"
    topics => ["jsonfile-log-topic"]
    codec => "json"
  }
}
output {
  #if [fields][type] == "app1-access-log" {
  if [type] == "jsonfile-daemonset-applog" {
    elasticsearch {
      hosts => ["172.17.1.11:9200","172.17.1.12:9200","172.17.1.13:9200"]
      index => "jsonfile-daemonset-applog-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "jsonfile-daemonset-syslog" {
    elasticsearch {
      hosts => ["172.17.1.11:9200","172.17.1.12:9200","172.17.1.13:9200"]
      index => "jsonfile-daemonset-syslog-%{+YYYY.MM.dd}"
    }
  }
}
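Restart logstash to load the pipeline, then confirm the daily indices appear in ES (a minimal sketch):
root@logstach:/etc/logstash/conf.d# systemctl restart logstash.service
root@logstach:/etc/logstash/conf.d# curl -s http://172.17.1.11:9200/_cat/indices?v | grep jsonfile-daemonset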
2.2 Collecting logs in sidecar mode
A sidecar container (multiple containers per pod) collects the logs of one or more business containers inside the same pod (log sharing between the business containers and the sidecar is usually implemented with an emptyDir volume)
Architecture diagram
Log collection flow
- The application container writes its logs to an emptyDir volume; the sidecar container in the same pod mounts the emptyDir and reads the application logs from it
- The sidecar then writes the logs to Kafka, and logstash pulls them from Kafka
- Logstash forwards the logs it pulled from Kafka to the ES cluster, which indexes and analyzes them
- Users view the logs in Kibana, which queries ES for the log data and renders it
2.2.1 Building the sidecar container image
root@k8s-harbor:~/dockerfile/web/sidecar-logstash# ll
total 16
drwxr-xr-x 2 root root 89 Mar 13 12:26 ./
drwxrwxr-x 12 root root 169 Mar 12 17:20 ../
-rw-r--r-- 1 root root 221 Mar 12 17:26 Dockerfile
-rwxr-xr-x 1 root root 299 Mar 13 10:01 build-commond.sh*
-rw-r--r-- 1 root root 742 Mar 13 09:18 logstash.conf
-rw-r--r-- 1 root root 92 May 23 2022 logstash.yml
#Logstash pipeline configuration
root@k8s-harbor:~/dockerfile/web/sidecar-logstash# cat logstash.conf
input {
  file {
    path => "/var/log/applogs/catalina.out"
    start_position => "beginning"
    type => "app1-sidecar-catalina-log"
  }
  file {
    path => "/var/log/applogs/localhost_access_log.*.txt"
    start_position => "beginning"
    type => "app1-sidecar-access-log"
  }
}
output {
  if [type] == "app1-sidecar-catalina-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384   #maximum batch size, in bytes, that logstash sends to kafka per request
      codec => "${CODEC}"
    }
  }
  if [type] == "app1-sidecar-access-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}"
    }
  }
}
#logstash.yml
root@k8s-harbor:~/dockerfile/web/sidecar-logstash# cat logstash.yml
http.host: "0.0.0.0"
#xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
#Dockerfile for the logstash sidecar image
root@k8s-harbor:~/dockerfile/web/sidecar-logstash# cat Dockerfile
FROM logstash:7.12.0
USER root
WORKDIR /usr/share/logstash
#RUN rm -rf config/logstash-sample.conf
ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf
#Image build script
root@k8s-harbor:~/dockerfile/web/sidecar-logstash# cat build-commond.sh
#!/bin/bash
docker build -t harbor.qiange.com/baseimages/logstash:v7.12.0-sidecar-1 .
docker push harbor.qiange.com/baseimages/logstash:v7.12.0-sidecar-1
#nerdctl build -t harbor.magedu.net/baseimages/logstash:v7.12.1-sidecar .
#nerdctl push harbor.magedu.net/baseimages/logstash:v7.12.1-sidecar
2.2.2 Deploying the application and sidecar containers with a Deployment
root@k8s-master1:/app/yaml/sidecar-logstash# cat tomcat-app1.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-deployment-label
  name: magedu-tomcat-app1-deployment   #name of this deployment
  namespace: qiange
spec:
  replicas: 3
  selector:
    matchLabels:
      app: magedu-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-selector
    spec:
      containers:
      - name: sidecar-container
        image: harbor.qiange.com/baseimages/logstash:v7.12.0-sidecar-1
        imagePullPolicy: IfNotPresent
        #imagePullPolicy: Always
        env:
        - name: "KAFKA_SERVER"
          value: "172.17.1.14:9092,172.17.1.15:9092,172.17.1.16:9092"
        - name: "TOPIC_ID"
          value: "tomcat-app-topic"
        - name: "CODEC"
          value: "json"
        volumeMounts:
        - name: applogs
          mountPath: /var/log/applogs
      - name: magedu-tomcat-app1-container
        image: harbor.qiange.com/tomcat/tomcat-app1:v1
        imagePullPolicy: IfNotPresent
        #imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
        volumeMounts:
        - name: applogs
          mountPath: /apps/tomcat/logs
        startupProbe:
          httpGet:
            path: /app1/index.jsp
            port: 8080
          initialDelaySeconds: 5   #delay before the first probe
          failureThreshold: 3      #failures needed before the probe is considered failed
          periodSeconds: 3         #probe interval
        readinessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /app1/index.jsp
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /app1/index.jsp
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
      volumes:
      - name: applogs   #emptyDir shared between the business container and the sidecar so the sidecar can collect the business container's logs
        emptyDir: {}
2.2.3 Service for accessing the application
root@k8s-master1:/app/yaml/sidecar-logstash# cat tomcat-service.yaml
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-app1-service-label
  name: magedu-tomcat-app1-service
  namespace: qiange
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30080
  selector:
    app: magedu-tomcat-app1-selector
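Apply both manifests and hit the NodePort to generate some access-log traffic (a minimal sketch; replace <node-ip> with any worker node's IP):
root@k8s-master1:/app/yaml/sidecar-logstash# kubectl apply -f tomcat-app1.yaml -f tomcat-service.yaml
root@k8s-master1:/app/yaml/sidecar-logstash# kubectl get pod -n qiange -o wide
root@k8s-master1:/app/yaml/sidecar-logstash# curl http://<node-ip>:30080/app1/index.jsp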
2.2.4 Configuring the logstash service to forward data to the ES cluster
root@logstach:/etc/logstash/conf.d# cat logsatsh-sidecar-kafka-to-es.conf
input {
  kafka {
    bootstrap_servers => "172.17.1.14:9092,172.17.1.15:9092,172.17.1.16:9092"
    topics => ["tomcat-app-topic"]
    codec => "json"
  }
}
output {
  #if [fields][type] == "app1-access-log" {
  if [type] == "app1-sidecar-access-log" {
    elasticsearch {
      hosts => ["172.17.1.11:9200","172.17.1.12:9200","172.17.1.13:9200"]
      index => "sidecar-app1-accesslog-%{+YYYY.MM.dd}"
    }
  }
  #if [fields][type] == "app1-catalina-log" {
  if [type] == "app1-sidecar-catalina-log" {
    elasticsearch {
      hosts => ["172.17.1.11:9200","172.17.1.12:9200","172.17.1.13:9200"]
      index => "sidecar-app1-catalinalog-%{+YYYY.MM.dd}"
    }
  }
  # stdout {
  #   codec => rubydebug
  # }
}
root@logstach:/etc/logstash/conf.d# systemctl restart logstash.service
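If ES stays empty, reading the topic directly on a Kafka node narrows the problem down (a minimal sketch):
root@zk-kfk-1:/app/kafka# bin/kafka-console-consumer.sh --bootstrap-server 172.17.1.14:9092 --topic tomcat-app-topic --from-beginning --max-messages 5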
2.2.5 Verification
2.2.5.1 Pods start normally
2.2.5.2 Kafka receives the log messages
2.2.5.3 ES receives the log data
Note: if Kafka has the data but nothing arrives in ES, access the application a few times or adjust the pod replica count
Application logs
System logs
2.3 Collecting logs with filebeat built into the application container
2.3.1 Building the filebeat image
root@k8s-harbor:~/dockerfile/web/filebeat-tomcat# ll
total 31856
drwxr-xr-x 3 root root 172 Mar 14 15:14 ./
drwxrwxr-x 14 root root 208 Mar 13 13:16 ../
-rw-rw-r-- 1 root root 408 Mar 14 15:14 Dockerfile
drwxr-xr-x 2 root root 23 Feb 20 10:38 app1/
-rw-r--r-- 1 root root 193 Feb 20 10:52 app1.tar.gz
-rwxrwxr-x 1 root root 155 Mar 13 13:26 build-command.sh*
-rw-r--r-- 1 root root 32590134 Mar 13 21:54 filebeat-7.12.0-x86_64.rpm
-rw-r--r-- 1 root root 727 Mar 13 22:06 filebeat.yml
-rw-rw-r-- 1 root root 265 Mar 13 14:10 run_tomcat.sh
-rw-rw-r-- 1 root root 7593 Jan 19 10:21 server.xml
#Dockerfile
root@k8s-harbor:~/dockerfile/web/filebeat-tomcat# cat Dockerfile
FROM harbor.qiange.com/tomcat/tomcat-app1:v1
ADD run_tomcat.sh /apps/tomcat/bin
ADD filebeat-7.12.0-x86_64.rpm /root
RUN yum -y remove filebeat && rpm -ivh /root/filebeat-7.12.0-x86_64.rpm
RUN rm -f /etc/filebeat/filebeat.yml
ADD filebeat.yml /etc/filebeat
RUN chmod 777 /apps/tomcat/bin/run_tomcat.sh && chown www.www /apps/tomcat/bin/run_tomcat.sh
CMD ["/bin/sh", "-c", "/apps/tomcat/bin/run_tomcat.sh"]
#filebeat configuration file
root@k8s-harbor:~/dockerfile/web/filebeat-tomcat# cat filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/catalina.out
  fields:
    type: filebeat-tomcat-catalina
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/localhost_access_log.*.txt
  fields:
    type: filebeat-tomcat-accesslog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.kafka:
  hosts: ["172.17.1.14:9092","172.17.1.15:9092","172.17.1.16:9092"]
  required_acks: 1
  topic: "filebeat-magedu-app1"
  compression: gzip
  max_message_bytes: 1000000
#output.redis:
#  hosts: ["172.31.2.105:6379"]
#  key: "k8s-magedu-app1"
#  db: 1
#  timeout: 5
#  password: "123456"
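The config can be validated on any host where the filebeat rpm is installed before baking it into the image (a minimal sketch):
root@k8s-harbor:~/dockerfile/web/filebeat-tomcat# filebeat test config -c filebeat.yml
root@k8s-harbor:~/dockerfile/web/filebeat-tomcat# filebeat test output -c filebeat.yml   #checks connectivity to the kafka brokers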
#Service startup script
root@k8s-harbor:~/dockerfile/web/filebeat-tomcat# cat run_tomcat.sh
#!/bin/bash
/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
su - root -c "/apps/tomcat/bin/catalina.sh start"
tail -f /etc/hosts
#Image build script
root@k8s-harbor:~/dockerfile/web/filebeat-tomcat# cat build-command.sh
#!/bin/bash
docker build -t harbor.qiange.com/tomcat/tomcat-app1-filebeat-7.12.0:v1 .
docker push harbor.qiange.com/tomcat/tomcat-app1-filebeat-7.12.0:v1
2.3.2 Deploying the application with a Deployment controller
root@k8s-master1:/app/yaml/filebeat-log# cat tomcat-app2.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-filebeat-deployment-label
  name: magedu-tomcat-app1-filebeat-deployment
  namespace: qiange
spec:
  replicas: 3
  selector:
    matchLabels:
      app: magedu-tomcat-app1-filebeat-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-filebeat-selector
    spec:
      containers:
      - name: magedu-tomcat-app1-filebeat-container
        image: harbor.qiange.com/tomcat/tomcat-app1-filebeat-7.12.0:v1
        imagePullPolicy: IfNotPresent
        #imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
2.3.3 Deploying the Service
root@k8s-master1:/app/yaml/filebeat-log# cat tomcat-service.yaml
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-app1-filebeat-service-label
  name: magedu-tomcat-app1-filebeat-service
  namespace: qiange
spec:
  type: NodePort
  ports:
  - name: http
    port: 81
    protocol: TCP
    targetPort: 8080
    nodePort: 30092
  selector:
    app: magedu-tomcat-app1-filebeat-selector
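Apply the manifests and generate some traffic so filebeat has something to ship (a minimal sketch; replace <node-ip> with any worker node's IP):
root@k8s-master1:/app/yaml/filebeat-log# kubectl apply -f tomcat-app2.yaml -f tomcat-service.yaml
root@k8s-master1:/app/yaml/filebeat-log# curl http://<node-ip>:30092/app1/index.jsp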
2.3.4 Configuring the logstash service to forward data to the ES cluster
root@logstach:~# cd /etc/logstash/conf.d/
root@logstach:/etc/logstash/conf.d# cat logstash-filebeat-process-kafka-to-es.conf
input {
  kafka {
    bootstrap_servers => "172.17.1.14:9092,172.17.1.15:9092,172.17.1.16:9092"
    topics => ["filebeat-magedu-app1"]
    codec => "json"
  }
}
output {
  if [fields][type] == "filebeat-tomcat-catalina" {
    elasticsearch {
      hosts => ["172.17.1.11:9200","172.17.1.12:9200","172.17.1.13:9200"]
      index => "filebeat-tomcat-catalina-%{+YYYY.MM.dd}"
    }
  }
  if [fields][type] == "filebeat-tomcat-accesslog" {
    elasticsearch {
      hosts => ["172.17.1.11:9200","172.17.1.12:9200","172.17.1.13:9200"]
      index => "filebeat-tomcat-accesslog-%{+YYYY.MM.dd}"
    }
  }
}
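Restart logstash and confirm the filebeat indices show up; note that filebeat nests the custom type under [fields][type], which is why these conditionals differ from the earlier pipelines (a minimal sketch):
root@logstach:/etc/logstash/conf.d# systemctl restart logstash.service
root@logstach:/etc/logstash/conf.d# curl -s http://172.17.1.11:9200/_cat/indices?v | grep filebeat-tomcat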
2.3.5 Verification
2.3.5.1 Pods start normally
2.3.5.2 Kafka receives the log data
2.3.5.3 ES receives the data forwarded from Kafka
Application logs:
System logs: