Starting ELK with docker-compose

1 Add the ELK services to docker-compose

# elasticsearch
    elasticsearch:
      image: elasticsearch:7.17.6
      container_name: elasticsearch
      # note: with network_mode "host" these port mappings are ignored;
      # Elasticsearch listens on its default ports 9200/9300
      ports:
        - "9410:9410"
        - "9420:9420"
      environment:
        # set the cluster name
        cluster.name: elasticsearch
        # start in single-node mode
        discovery.type: single-node
        ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      volumes:
        - /root/docker/elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins
        - /root/docker/elk/elasticsearch/data:/usr/share/elasticsearch/data
        - /root/docker/elk/elasticsearch/logs:/usr/share/elasticsearch/logs
      network_mode: "host"

    kibana:
      image: kibana:7.17.6
      container_name: kibana
      # ignored under network_mode "host"; Kibana listens on its default port 5601
      ports:
        - "9430:9430"
      depends_on:
        # start kibana only after elasticsearch is up
        - elasticsearch
      environment:
        # set the Kibana UI language to Chinese
        I18N_LOCALE: zh-CN
        # public access URL
        # SERVER_PUBLICBASEURL: https://kibana.cloud.com
      volumes:
        - /root/docker/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
      network_mode: "host"

    logstash:
      image: logstash:7.17.6
      container_name: logstash
      # ignored under network_mode "host"; the Logstash monitoring API listens on its default port 9600
      ports:
        - "9440:9440"
      volumes:
        - /root/docker/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
        - /root/docker/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      depends_on:
        - elasticsearch
      network_mode: "host"
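The three service definitions above are assumed to sit under the top-level services: key of an existing docker-compose.yml; the indentation already reflects that. A quick sanity check that the file still parses after adding them:

# validate the compose file and print the merged configuration
docker-compose config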

2 Create the directories

Create the elk directory and its subdirectories under /root/docker so that the host paths used by the volume mounts above exist; see the sketch below.
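A minimal sketch of the host preparation, using the paths from the volume mounts above; the chown and sysctl lines are common requirements for the official Elasticsearch image rather than steps stated in the original:

# create the host directories the volume mounts expect
mkdir -p /root/docker/elk/elasticsearch/{plugins,data,logs}
mkdir -p /root/docker/elk/kibana/config
mkdir -p /root/docker/elk/logstash/{config,pipeline}

# the elasticsearch container runs as uid 1000; make its data/log directories writable
chown -R 1000:1000 /root/docker/elk/elasticsearch

# elasticsearch usually needs a larger mmap count on the host
sysctl -w vm.max_map_count=262144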

3 Copy the configuration files

1 Copy the Kibana config file
/root/docker/elk/kibana/config/kibana.yml

server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://127.0.0.1:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
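Because the containers use host networking, Kibana reaches Elasticsearch at 127.0.0.1:9200. Once Elasticsearch is running, that address can be verified from the host (a simple check, not part of the original steps):

# should return the cluster name and version as JSON
curl http://127.0.0.1:9200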

2 Copy the Logstash config file
/root/docker/elk/logstash/config/logstash.yml

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://127.0.0.1:9200" ]

3 Copy the Logstash pipeline config (MySQL to Elasticsearch sync)
 /root/docker/elk/logstash/pipeline/logstash.conf

input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://192.168.1.250:3306/kintech-cloud-bo?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&autoReconnectForPools=true&noAccessToProcedureBodies=true&useSSL=false"
    jdbc_user => "root"
    jdbc_password => "Helka1234!@#$"
    jdbc_driver_library => "/app/mysql.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    statement => "SELECT * FROM bo_sop_content where update_time>:sql_last_value"
    schedule => "* * * * *"
    use_column_value => true
    #last_run_metadata_path => "/usr/share/logstash/track_time"
    #clean_run => false
    tracking_column_type => "timestamp"
    tracking_column => "update_time"
  }
}

output {
  elasticsearch {
    hosts => "192.168.1.247:9200"
    index => "bo_sop_content"
  }
}
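With schedule => "* * * * *" the jdbc input re-runs the SELECT every minute, and because use_column_value is enabled with update_time as the tracking column, :sql_last_value holds the last update_time seen, so only changed rows are pulled on each run. Note that jdbc_driver_library expects the MySQL driver at /app/mysql.jar inside the container; the docker run command in step 4 mounts it there. After the pipeline has run at least once, the target index can be checked (a sketch, assuming Elasticsearch at 192.168.1.247:9200 as in the output block):

# list indices and confirm bo_sop_content exists
curl "http://192.168.1.247:9200/_cat/indices?v"

# fetch one synced document
curl "http://192.168.1.247:9200/bo_sop_content/_search?size=1&pretty"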

4 Start the containers

#1 Start elasticsearch and kibana together with compose; logstash needs to be started separately
docker-compose up -d elasticsearch kibana

#2 Start es on its own (listens on the default port 9200)
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.17.6

#3 Start kibana on its own (listens on the default port 5601)
docker run -d --name kibana -p 5601:5601 \
-v /root/docker/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:7.17.6

#4 Start logstash (host networking, matching the compose service, so 127.0.0.1:9200 in logstash.yml is reachable)
docker run -d --network host \
-v /root/docker/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-v /root/docker/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml \
-v /root/lib/mysql.jar:/app/mysql.jar --name=logstash logstash:7.17.6
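Once the containers are up, two quick checks (a sketch using the default ports for these versions):

# kibana status API; the UI itself is at http://<host>:5601
curl http://127.0.0.1:5601/api/status

# follow the logstash logs to watch the jdbc pipeline run each minute
docker logs -f logstash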