Quickly building a logging platform with docker-compose

Deploying ELK

  • Create the deployment directories

    mkdir -p /data/apps/elk/{elasticsearch,kibana,logstash}
    mkdir -p /data/apps/elk/elasticsearch/{config,data,plugins,logs,cert}
    chmod -R 777 /data/apps/elk/{elasticsearch,kibana,logstash}
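
    Note: on most Linux hosts Elasticsearch will not start in Docker unless vm.max_map_count is at least 262144. A quick check and fix on the host (run as root; add it to /etc/sysctl.conf to make it permanent):

    # check the current value
    sysctl vm.max_map_count
    # raise it for the running kernel
    sysctl -w vm.max_map_count=262144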

  • Create the logstash config file

    cd /data/apps/elk/
    touch /data/apps/elk/logstash/logstash.conf

  • Configure logstash.conf (if you only collect logs with filebeat, logstash does not have to be deployed at all)

    input {
      tcp {
        mode => "server"
        host => "0.0.0.0"
        port => 4560
      }
    }

    output {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }

  • Create and configure elasticsearch.yml

    touch /data/apps/elk/elasticsearch/config/elasticsearch.yml

    cluster.name: "docker-cluster"
    network.host: 0.0.0.0

    Leave the following commented out for now; enable it once the certificate has been generated:

    #xpack.security.enabled: true
    #xpack.security.transport.ssl.enabled: true
    #xpack.security.transport.ssl.verification_mode: certificate
    #xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/elastic-certificates.p12
    #xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/elastic-certificates.p12
    #xpack.security.transport.ssl.truststore.type: PKCS12
    #xpack.security.transport.ssl.keystore.type: PKCS12

  • Create /data/apps/elk/kibana/kibana.yml and configure it as follows

    server.name: kibana
    server.host: "0"
    elasticsearch.hosts: [ "http://elasticsearch:9200" ]
    xpack.monitoring.ui.container.elasticsearch.enabled: true

    To switch the UI to Chinese:

    i18n.locale: zh-CN

    # Elasticsearch username and password; leave these commented out until xpack is enabled on ES
    #elasticsearch.username: "kibana"
    #elasticsearch.password: "****************"

  • Create the docker-compose.yml file

    touch /data/apps/elk/docker-compose.yml

  • Configure docker-compose.yml according to your environment; this is the final version

    version: '3.5'
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
        container_name: elasticsearch
        privileged: true
        user: root
        environment:
          # set the cluster name to elasticsearch
          - cluster.name=elasticsearch
          # start in single-node mode
          - discovery.type=single-node
          # JVM heap size
          - ES_JAVA_OPTS=-Xms512m -Xmx512m
        volumes:
          - /data/apps/elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins
          - /data/apps/elk/elasticsearch/data:/usr/share/elasticsearch/data
          - /data/apps/elk/elasticsearch/logs:/usr/share/elasticsearch/logs
          - /data/apps/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
          #- /data/apps/elk/elasticsearch/cert/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
        ports:
          - 9200:9200
          - 9300:9300
        networks:
          - elk-network

      logstash:
        image: docker.elastic.co/logstash/logstash:7.6.2
        container_name: logstash
        ports:
          - 4560:4560
        privileged: true
        environment:
          - TZ=Asia/Shanghai
        volumes:
          # mount the logstash pipeline config
          - /data/apps/elk/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
        depends_on:
          - elasticsearch
        links:
          # the elasticsearch service can also be reached under the alias "es"
          - elasticsearch:es
        networks:
          - elk-network

      kibana:
        image: docker.elastic.co/kibana/kibana:7.6.2
        container_name: kibana
        ports:
          - 5601:5601
        privileged: true
        links:
          # the elasticsearch service can also be reached under the alias "es"
          - elasticsearch:es
        depends_on:
          - elasticsearch
        environment:
          # address kibana uses to reach elasticsearch (the mounted kibana.yml takes precedence)
          - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
        volumes:
          # mount the kibana config file
          - /data/apps/elk/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
        networks:
          - elk-network

    networks:
      elk-network:
        driver: bridge

  • Start the stack with docker-compose

    # start
    docker-compose up -d

    # stop
    docker-compose down

    # restart a single container
    docker-compose restart logstash
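
    Once the containers are up, a quick sanity check from the host (assuming the default port mappings above):

    # all three containers should show as "Up"
    docker-compose ps
    # Elasticsearch should answer with its cluster banner
    curl http://localhost:9200
    # Kibana listens on 5601 (it can take a minute to finish starting)
    curl -I http://localhost:5601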

Generating the elastic-certificates.p12 file

  • Work inside the es container

    docker exec -it elasticsearch /bin/bash

  • Run the command below to generate elastic-stack-ca.p12. You will be prompted for a file name and a password; just press Enter to skip both.

    ./bin/elasticsearch-certutil ca

  • Run the command below to generate elastic-certificates.p12 (the file we actually need). Again, press Enter at the name and password prompts to skip them.

    ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
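
    Both files end up in /usr/share/elasticsearch, the directory the commands were run from; you can confirm before leaving the container:

    ls -l /usr/share/elasticsearch/*.p12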

  • Copy the file to the host: exit the container first, then run the following on the host.

    docker cp elasticsearch:/usr/share/elasticsearch/elastic-certificates.p12 /data/apps/elk/elasticsearch/cert/
    chmod 755 /data/apps/elk/elasticsearch/cert/elastic-certificates.p12

Enabling xpack on ES

  • Edit elasticsearch.yml and uncomment the following:

    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.type: PKCS12
    xpack.security.transport.ssl.keystore.type: PKCS12

  • Edit docker-compose.yml and uncomment the elastic-certificates.p12 volume mount:

          - /data/apps/elk/elasticsearch/cert/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
  • Restart es

    docker-compose restart elasticsearch

  • At this point you will see that kibana can no longer connect to es.
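
    A quick way to confirm that security is now enforced (sketch; run from the host with the default 9200 mapping):

    # an unauthenticated request should now be rejected with HTTP 401
    curl -i http://localhost:9200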

Setting elasticsearch account passwords

  • Enter the elasticsearch container

    docker exec -it elasticsearch /bin/bash

  • Generate the passwords; choose between automatic generation and setting them manually

    # auto-generate passwords
    ./bin/elasticsearch-setup-passwords auto
    # set passwords interactively
    ./bin/elasticsearch-setup-passwords interactive

  • I went with automatic generation here; make sure to save the generated passwords

    [root@9e4a89f00a05 elasticsearch]# ./bin/elasticsearch-setup-passwords auto
    Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
    The passwords will be randomly generated and printed to the console.
    Please confirm that you would like to continue [y/N]y

    Changed password for user apm_system
    PASSWORD apm_system = <plaintext password shown here>

    Changed password for user kibana
    PASSWORD kibana = <plaintext password shown here>

    Changed password for user logstash_system
    PASSWORD logstash_system = <plaintext password shown here>

    Changed password for user beats_system
    PASSWORD beats_system = <plaintext password shown here>

    Changed password for user remote_monitoring_user
    PASSWORD remote_monitoring_user = <plaintext password shown here>

    Changed password for user elastic
    PASSWORD elastic = IGnVa7HrYCzReBFWOCmX
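
    With the elastic password saved, you can verify authenticated access from the host (substitute the password generated above):

    curl -u elastic:<your elastic password> "http://localhost:9200/_cluster/health?pretty"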

Updating the kibana config with the ES username and password

  • kibana/kibana.yml

    # Elasticsearch username and password
    elasticsearch.username: "kibana"
    elasticsearch.password: "************"

  • Restart kibana for the new config to take effect

    docker-compose restart kibana
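
    If the Kibana login page still reports a connection problem, tailing the container log is the quickest way to see whether the credentials were accepted (sketch):

    docker logs --tail 100 -f kibana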

kibana

  • Log in to kibana with the elastic user

Open http://<your IP>:5601/ to access the Kibana web UI. Click the settings icon on the left to enter the Management screen.

  • It is recommended to put Nginx in front as a layer-7 reverse proxy and access ELK through a domain such as http://elk.xxxx.com:
    server {
        listen 80;
        server_name elk.xxxx.com;
        charset utf-8;
        location = /favicon.ico { access_log off; log_not_found off; }
        location / {
            proxy_pass http://<your IP>:5601;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        access_log  /data/logs/elk.xxxx.com.access.log main;
        error_log   /data/logs/elk.xxxx.com.error.log;
    }
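
    After adding this server block (the "main" log format and the /data/logs directory must already exist), validate and reload nginx; the exact commands depend on how nginx was installed:

    nginx -t && nginx -s reload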

Collecting Nginx logs and backend application logs with filebeat

  • Install filebeat

    curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.2-x86_64.rpm

    rpm -vih filebeat-7.6.2-x86_64.rpm

  • Configure filebeat.yml

    #logging.level: debug
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /data/wwwlogs/*access.log
      json.keys_under_root: true
      json.add_error_key: true
      index: "nginx-access-%{+yyyy.MM.dd}"
      # per-input file handling
      close_inactive: 1m   # close the file handle if the file has not been updated for this long
      close_timeout: 3h
      clean_inactive: 72h
      ignore_older: 70h
      # legacy option names from older filebeat releases, superseded by close_inactive / close_removed:
      #close_older: 30m         # close the handle if the file has not been updated within this window (default 1h)
      #force_close_files: false # close a file when its name changes; only recommended on Windows
    # to collect other logs, add more inputs following the pattern above

    # set this to your host name
    name: your-hostname

    # enable child config files: new files dropped into this directory are loaded
    # automatically, no filebeat restart needed
    filebeat.config.inputs:
      enabled: true
      path: ${path.config}/inputs.d/*.yml
      reload.enabled: true
      reload.period: 10s

    output.elasticsearch:
      # this hostname must resolve to the Elasticsearch host from this machine
      hosts: ["elasticsearch:9200"]
      protocol: "http"
      # once authentication is enabled on ES, filebeat needs credentials too
      username: "elastic"
      password: "************"

    processors:
      - drop_fields:
          fields: ["log.offset", "input", "agent.type", "agent.ephemeral_id", "agent.id", "agent.version", "agent.name", "ecs", "host"]

  • To collect another log source, just add another "- type: log" input section like the one above, either in filebeat.yml or as a child config file under inputs.d (a sketch follows below).
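
    A minimal sketch of such a child config file; the file name and log path are placeholders, adjust them to your backend service (with the rpm install, ${path.config} is normally /etc/filebeat):

    # /etc/filebeat/inputs.d/backend.yml
    - type: log
      enabled: true
      paths:
        - /data/app/logs/*.log
      index: "backend-%{+yyyy.MM.dd}"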

  • Start filebeat

    systemctl start filebeat.service

  • Enable filebeat at boot

    systemctl enable filebeat.service
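
    To verify the setup, filebeat ships two built-in checks, and systemd shows whether the service stayed up:

    # validate filebeat.yml syntax
    filebeat test config
    # check connectivity and credentials against the configured output
    filebeat test output
    systemctl status filebeat.service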
