elasticsearch-7.17.29 cluster walkthrough: Kubernetes and bare-metal deployments

Official documentation

Install Elasticsearch with .zip on Windows | Elasticsearch Guide [7.17] | Elastic

I. Download

1. Latest version

Download Elasticsearch | Elastic

2. Past releases

Past Releases of Elastic Stack Software | Elastic

Client tool: elasticvue

docker run -p 8080:8080 --name elasticvue -d harbor.jettech.com/jettechtools/elasticvue:v1.8.0



Kubernetes manifest for elasticvue:
[root@localhost deployment]# cat  jettech-elasticvue-deployment-prod.yaml 
apiVersion: v1
kind: Service
metadata:
  labels: {name: jettech-elasticvue}
  name: jettech-elasticvue
  namespace: jettech-prod
spec:
  ports:
  - {name: t8080, port: 8080, protocol: TCP, targetPort: t8080}
  selector: {name: jettech-elasticvue}
  #type: NodePort
  type: ClusterIP
  #clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels: {name: jettech-elasticvue}
  name: jettech-elasticvue
  namespace: jettech-prod
spec:
  replicas: 1
  selector:
    matchLabels: {name: jettech-elasticvue}
  template:
    metadata:
      labels: {name: jettech-elasticvue}
      name: jettech-elasticvue
    spec:
      containers:
      - name: jettech-elasticvue
        image: harbor.jettech.com/jettechtools/elasticvue:v1.8.0
        securityContext:
          privileged: true
        ports:
        - {containerPort: 8080, name: t8080, protocol: TCP}
        imagePullPolicy: Always #[Always | Never | IfNotPresent]
      dnsPolicy: ClusterFirstWithHostNet
      restartPolicy: Always #Never

II. Cluster roles and node planning

| Node type | Count | Hardware | Main responsibilities | Node names |
|---|---|---|---|---|
| Master | 3 | 8C / 16 GB | Cluster metadata, master election, cluster-state maintenance | master-01, master-02, master-03 |
| Data | 3 | 16C / 32 GB | Data storage, index shards, search computation | data-01, data-02, data-03 |
| Ingest | 1 | 8C / 16 GB | Data preprocessing, request routing, load balancing | ingest-01 (add ingest-02 etc. as needed) |
| Coordinating (routing) | 1 | 8C / 16 GB, no data disk | Empty role set, coordination only | coordinating-01 |

III. Network planning

| Node type | Node name | IP address | Ports |
|---|---|---|---|
| Master | master-01 | 192.168.1.105 | 9200 (HTTP), 9300 (transport) |
| Master | master-02 | 192.168.1.106 | 9200 (HTTP), 9300 (transport) |
| Master | master-03 | 192.168.1.107 | 9200 (HTTP), 9300 (transport) |
| Data | data-01 | 192.168.1.100 | 9200 (HTTP), 9300 (transport) |
| Data | data-02 | 192.168.1.101 | 9200 (HTTP), 9300 (transport) |
| Data | data-03 | 192.168.1.102 | 9200 (HTTP), 9300 (transport) |
| Ingest | ingest-01 | 192.168.1.108 | 9200 (HTTP), 9300 (transport) |
| Coordinating | coordinating-01 | 192.168.1.109 | 9200 (HTTP), 9300 (transport) |

IV. Directory layout (identical on all nodes)

| Path | Purpose | Owner | Suggested size |
|---|---|---|---|
| /home/es/elasticsearch-7.17.29 | Installation directory | es:es | 10 GB |
| /home/es/data/data | Data directory | es:es | Data nodes: 500 GB+; others: 50 GB+ |
| /home/es/data/logs | Log directory | es:es | 100 GB+ |
| /home/es/elasticsearch-7.17.29/config/ | Configuration directory | es:es | 1 GB |
| /home/es/data/backups | Snapshot/backup directory | es:es | Depends on data volume |

V. JVM configuration (jvm.options)

| Node type | Setting | Notes |
|---|---|---|
| Master | -Xms8g -Xmx8g | Heap at 50% of physical RAM, never above 31 GB |
| Data | -Xms16g -Xmx16g | Heap at 50% of physical RAM, never above 31 GB |
| Ingest | -Xms8g -Xmx8g | Heap at 50% of physical RAM, never above 31 GB |
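Besides editing config/jvm.options directly, 7.x also reads overrides from config/jvm.options.d/, which keeps upgrades cleaner. A minimal sketch for a master node, assuming the install path from the directory plan in section IV:

# Heap override for a master node (8 GB per the table above); files in
# jvm.options.d/ take precedence over config/jvm.options.
cat > /home/es/elasticsearch-7.17.29/config/jvm.options.d/heap.options <<'EOF'
-Xms8g
-Xmx8g
EOF
chown es:es /home/es/elasticsearch-7.17.29/config/jvm.options.d/heap.options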

VI. System tuning

| Item | Setting | File |
|---|---|---|
| File descriptor limit | es soft nofile 65536 / es hard nofile 65536 | /etc/security/limits.conf |
| Memory locking | es soft memlock unlimited / es hard memlock unlimited | /etc/security/limits.conf |
| Max processes | es soft nproc 4096 / es hard nproc 4096 | /etc/security/limits.conf |
| Virtual memory | vm.max_map_count=262144 | /etc/sysctl.conf |
| Swap reduction | vm.swappiness=1 | /etc/sysctl.conf |
| Network tuning | net.core.somaxconn=65535 / net.ipv4.tcp_max_syn_backlog=65535 | /etc/sysctl.conf |
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 4096" >> /etc/security/limits.conf
echo "* hard nproc 4096" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

echo "vm.max_map_count = 262144" >> /etc/sysctl.conf
echo "vm.swappiness=1" >> /etc/sysctl.conf
echo "net.core.somaxconn=65535" >> /etc/sysctl.conf
echo "net.ipv4.tcp_max_syn_backlog=65535" >> /etc/sysctl.conf
sysctl -p
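To confirm the tuning actually took effect (limits.conf changes only apply to new login sessions):

# Kernel settings
sysctl vm.max_map_count vm.swappiness net.core.somaxconn
# Per-user limits, checked from a fresh login shell for the es user
su - es -c 'ulimit -n -u -l'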

VII. Deployment steps

  1. Environment preparation (a condensed sketch follows this list)

    • Install JDK 11 on every node (the recommended version for Elasticsearch 7.x; the tarball also bundles a JDK)
    • Configure hostname resolution (/etc/hosts)
    • Stop the firewall or open the required ports
    • Disable SELinux
  2. Install Elasticsearch

    • Install version 7.17.29 on every node
    • Create the dedicated es user and set ownership
    • Create and chown the directories from the layout above
  3. Deploy configuration files

    • Install the elasticsearch.yml matching each node's role
    • Set the heap in jvm.options
    • Apply the system tuning and re-login (or reboot) for it to take effect
  4. Cluster startup

    • Start the 3 master nodes first
    • Once the master quorum is stable, start the data nodes
    • Start the ingest node last
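A condensed sketch of steps 1-2 on a single node, assuming the IPs, node names, and directory layout defined in sections II-IV (adjust the tarball path to wherever you downloaded it):

# 1. Host resolution (same entries on every node)
cat >> /etc/hosts <<'EOF'
192.168.1.105 master-01
192.168.1.106 master-02
192.168.1.107 master-03
192.168.1.100 data-01
192.168.1.101 data-02
192.168.1.102 data-03
192.168.1.108 ingest-01
192.168.1.109 coordinating-01
EOF

# 2. Dedicated user, directories, and the 7.17.29 tarball
useradd es
mkdir -p /home/es/data/{data,logs,backups}
tar -xzf elasticsearch-7.17.29-linux-x86_64.tar.gz -C /home/es/
chown -R es:es /home/es

# 3. Firewall and SELinux (or keep the firewall and open 9200/9300 instead)
systemctl disable --now firewalld
setenforce 0   # and set SELINUX=disabled in /etc/selinux/config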

VIII. Monitoring and maintenance

  1. Monitoring

    • Deploy Elasticsearch Monitoring to collect cluster metrics
    • Aggregate logs (e.g. ship ES logs with Filebeat)
    • Alert on key indicators (cluster health, disk usage, node liveness, etc.)
  2. Backup strategy (a cron sketch follows this list)

    • Configure a snapshot repository (e.g. NFS shared storage)
    • Schedule snapshots (run nightly in the early morning)
    • Periodically verify that snapshots are restorable
  3. Routine maintenance

    • Watch disk usage; keep at least 20% free
    • Check shard balance regularly
    • Adjust shard settings as the business grows
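A minimal cron sketch for the nightly-snapshot item, assuming the jettech_es_backup_repo repository registered in the command reference later in this post (note that % must be escaped in crontab entries):

# /etc/cron.d/es-snapshot -- daily snapshot at 01:00, named by date
0 1 * * * es curl -s -X PUT "http://192.168.1.100:9200/_snapshot/jettech_es_backup_repo/snapshot_$(date +\%Y\%m\%d)?wait_for_completion=false" -H 'Content-Type: application/json' -d '{"indices":"*","include_global_state":true}'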

This design balances availability, performance, and scalability: 3 master nodes keep the cluster-metadata service stable, 3 data nodes provide ample storage and compute, one ingest node handles preprocessing, and one coordinating node handles request routing. It suits medium-to-large production environments.

VII. Detailed deployment steps

7.1 Master node configuration

  1. master-01 (192.168.1.105) elasticsearch.yml

[es@localhost ~]$ cat elasticsearch-7.17.29/config/elasticsearch.yml
# Cluster name (must be identical on every node)
cluster.name: jettech-elasticsearch-cluster
# Node name (unique per node)
node.name: master-01-192.168.1.105
# Node roles (master-eligible only)
node.roles: [master]
# Network
network.host: 192.168.1.105
http.port: 9200
transport.tcp.port: 9300
# Discovery and cluster formation
discovery.seed_hosts: ["192.168.1.105:9300","192.168.1.106:9300","192.168.1.107:9300"]
# Initial master nodes (used only for the very first cluster bootstrap)
cluster.initial_master_nodes: ["master-01-192.168.1.105","master-02-192.168.1.106","master-03-192.168.1.107"]
# Master election; note: discovery.zen.minimum_master_nodes is deprecated and
# ignored in 7.x (the voting quorum is managed automatically), kept for reference
discovery.zen.minimum_master_nodes: 2  # (master count / 2) + 1, split-brain protection
cluster.fault_detection.leader_check.interval: 10s
# Memory locking (prevents swapping; requires the matching OS limits)
bootstrap.memory_lock: true
# Max concurrent shard recoveries per node
cluster.routing.allocation.node_concurrent_recoveries: 2
# Snapshot repository path
path.repo: ["/home/es/data/backups"]
# Logging
logger.org.elasticsearch.discovery: WARN
# Data directory
path.data: /home/es/data/data
# Log directory
path.logs: /home/es/data/logs
ingest.geoip.downloader.enabled: false
# Security disabled (no authentication)
xpack.security.enabled: false
# CORS (needed for browser-based frontends such as elasticvue)
http.cors.enabled: true
http.cors.allow-origin: "*"

jvm.options (8 GB heap for master nodes, per section V):

-Xms8g
-Xmx8g

Note: discovery.seed_hosts and cluster.initial_master_nodes only need to list the master-eligible nodes; data, ingest, and other node types do not have to appear in them.

  2. master-02 (192.168.1.106) elasticsearch.yml

[es@localhost ~]$ cat elasticsearch-7.17.29/config/elasticsearch.yml
# Cluster name (must be identical on every node)
cluster.name: jettech-elasticsearch-cluster
# Node name (unique per node)
node.name: master-02-192.168.1.106
# Node roles (master-eligible only)
node.roles: [master]
# Network
network.host: 192.168.1.106
http.port: 9200
transport.tcp.port: 9300
# Discovery and cluster formation
discovery.seed_hosts: ["192.168.1.105:9300","192.168.1.106:9300","192.168.1.107:9300"]
# Initial master nodes (used only for the very first cluster bootstrap)
cluster.initial_master_nodes: ["master-01-192.168.1.105","master-02-192.168.1.106","master-03-192.168.1.107"]
# Master election (deprecated and ignored in 7.x, see master-01 above)
discovery.zen.minimum_master_nodes: 2
cluster.fault_detection.leader_check.interval: 10s
# Memory locking (prevents swapping; requires the matching OS limits)
bootstrap.memory_lock: true
# Max concurrent shard recoveries per node
cluster.routing.allocation.node_concurrent_recoveries: 2
# Snapshot repository path
path.repo: ["/home/es/data/backups"]
# Logging
logger.org.elasticsearch.discovery: WARN
# Data directory
path.data: /home/es/data/data
# Log directory
path.logs: /home/es/data/logs
ingest.geoip.downloader.enabled: false
# Security disabled (no authentication)
xpack.security.enabled: false
# CORS (needed for browser-based frontends)
http.cors.enabled: true
http.cors.allow-origin: "*"

3. master-03 (192.168.1.107) elasticsearch.yml

[es@localhost ~]$ cat elasticsearch-7.17.29/config/elasticsearch.yml
# Cluster name (must be identical on every node)
cluster.name: jettech-elasticsearch-cluster
# Node name (unique per node)
node.name: master-03-192.168.1.107
# Node roles (master-eligible only)
node.roles: [master]
# Network
network.host: 192.168.1.107
http.port: 9200
transport.tcp.port: 9300
# Discovery and cluster formation
discovery.seed_hosts: ["192.168.1.105:9300","192.168.1.106:9300","192.168.1.107:9300"]
# Initial master nodes (used only for the very first cluster bootstrap)
cluster.initial_master_nodes: ["master-01-192.168.1.105","master-02-192.168.1.106","master-03-192.168.1.107"]
# Master election (deprecated and ignored in 7.x, see master-01 above)
discovery.zen.minimum_master_nodes: 2
cluster.fault_detection.leader_check.interval: 10s
# Memory locking (prevents swapping; requires the matching OS limits)
bootstrap.memory_lock: true
# Max concurrent shard recoveries per node
cluster.routing.allocation.node_concurrent_recoveries: 2
# Snapshot repository path
path.repo: ["/home/es/data/backups"]
# Logging
logger.org.elasticsearch.discovery: WARN
# Data directory
path.data: /home/es/data/data
# Log directory
path.logs: /home/es/data/logs
ingest.geoip.downloader.enabled: false
# Security disabled (no authentication)
xpack.security.enabled: false
# CORS (needed for browser-based frontends)
http.cors.enabled: true
http.cors.allow-origin: "*"

4. Start each master node and watch its log

[es@localhost ~]$ ./elasticsearch-7.17.29/bin/elasticsearch -d

[es@localhost ~]$ tail -f data/logs/jettech-elasticsearch-cluster.log
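Starting with -d works, but for production you may prefer a systemd unit so nodes restart on failure and survive reboots. A sketch under the paths used above (the unit name is an assumption):

# /etc/systemd/system/elasticsearch.service
[Unit]
Description=Elasticsearch 7.17.29
After=network.target

[Service]
Type=simple
User=es
Group=es
# Matches the limits.conf tuning; LimitMEMLOCK is required for bootstrap.memory_lock
LimitNOFILE=65536
LimitMEMLOCK=infinity
ExecStart=/home/es/elasticsearch-7.17.29/bin/elasticsearch
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with systemctl daemon-reload && systemctl enable --now elasticsearch.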
5. Check node status and cluster info

[root@localhost es]# curl -X GET "http://192.168.1.105:9200/_cat/nodes?v"
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.1.105            2          65   0    0.00    0.04     0.08 m         -      master-01-192.168.1.105
192.168.1.107            2          65   0    0.00    0.05     0.12 m         -      master-03-192.168.1.107
192.168.1.106            3          65   0    0.00    0.04     0.10 m         *      master-02-192.168.1.106

[root@localhost es]# curl -X GET "http://192.168.1.105:9200/?pretty"
{
  "name" : "master-01-192.168.1.105",
  "cluster_name" : "jettech-elasticsearch-cluster",
  "cluster_uuid" : "BkyGmY6DQLiCW2pW_74QYw",
  "version" : {
    "number" : "7.17.29",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "580aff1a0064ce4c93293aaab6fcc55e22c10d1c",
    "build_date" : "2025-06-19T01:37:57.847711500Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.3",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

As shown, there are no data nodes yet; all three nodes currently hold only the master role.

[root@localhost es]# curl -X GET "http://192.168.1.105:9200/_cluster/health?pretty"
{
  "cluster_name" : "jettech-elasticsearch-cluster",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 0,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 3,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 0.0
}
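The red status is expected at this stage: the unassigned shards belong to system indices that cannot be placed until data nodes join. If shards stay unassigned after the data nodes are up, the allocation-explain API reports the reason:

curl -X GET "http://192.168.1.105:9200/_cluster/allocation/explain?pretty"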

7.2 Data node configuration

1. data-01 (192.168.1.100)

[es@localhost ~]$ cat elasticsearch-7.17.29/config/elasticsearch.yml
# Cluster name (must be identical on every node)
cluster.name: jettech-elasticsearch-cluster
# Node name (unique per node)
node.name: data-01-192.168.1.100
# Node roles (data only)
node.roles: [data]
# Network
network.host: 192.168.1.100
http.port: 9200
transport.tcp.port: 9300
# Discovery and cluster formation
discovery.seed_hosts: ["192.168.1.105:9300","192.168.1.106:9300","192.168.1.107:9300"]
# Initial master nodes (used only for the very first cluster bootstrap)
cluster.initial_master_nodes: ["master-01-192.168.1.105","master-02-192.168.1.106","master-03-192.168.1.107"]
# Memory locking (prevents swapping; requires the matching OS limits)
bootstrap.memory_lock: true
# Snapshot repository path
path.repo: ["/home/es/data/backups"]
# Logging
logger.org.elasticsearch.discovery: WARN
# Data directory
path.data: /home/es/data/data
# Log directory
path.logs: /home/es/data/logs
ingest.geoip.downloader.enabled: false
# Security disabled (no authentication)
xpack.security.enabled: false
# CORS (needed for browser-based frontends)
http.cors.enabled: true
http.cors.allow-origin: "*"

2. data-02 (192.168.1.101)

[es@localhost ~]$ cat elasticsearch-7.17.29/config/elasticsearch.yml
# Cluster name (must be identical on every node)
cluster.name: jettech-elasticsearch-cluster
# Node name (unique per node)
node.name: data-02-192.168.1.101
# Node roles (data only)
node.roles: [data]
# Network
network.host: 192.168.1.101
http.port: 9200
transport.tcp.port: 9300
# Discovery and cluster formation
discovery.seed_hosts: ["192.168.1.105:9300","192.168.1.106:9300","192.168.1.107:9300"]
# Initial master nodes (used only for the very first cluster bootstrap)
cluster.initial_master_nodes: ["master-01-192.168.1.105","master-02-192.168.1.106","master-03-192.168.1.107"]
# Memory locking (prevents swapping; requires the matching OS limits)
bootstrap.memory_lock: true
# Snapshot repository path
path.repo: ["/home/es/data/backups"]
# Logging
logger.org.elasticsearch.discovery: WARN
# Data directory
path.data: /home/es/data/data
# Log directory
path.logs: /home/es/data/logs
ingest.geoip.downloader.enabled: false
# Security disabled (no authentication)
xpack.security.enabled: false
# CORS (needed for browser-based frontends)
http.cors.enabled: true
http.cors.allow-origin: "*"

3. data-03 (192.168.1.102)

[es@localhost ~]$ cat elasticsearch-7.17.29/config/elasticsearch.yml
# Cluster name (must be identical on every node)
cluster.name: jettech-elasticsearch-cluster
# Node name (unique per node)
node.name: data-03-192.168.1.102
# Node roles (data only)
node.roles: [data]
# Network
network.host: 192.168.1.102
http.port: 9200
transport.tcp.port: 9300
# Discovery and cluster formation
discovery.seed_hosts: ["192.168.1.105:9300","192.168.1.106:9300","192.168.1.107:9300"]
# Initial master nodes (used only for the very first cluster bootstrap)
cluster.initial_master_nodes: ["master-01-192.168.1.105","master-02-192.168.1.106","master-03-192.168.1.107"]
# Memory locking (prevents swapping; requires the matching OS limits)
bootstrap.memory_lock: true
# Snapshot repository path
path.repo: ["/home/es/data/backups"]
# Logging
logger.org.elasticsearch.discovery: WARN
# Data directory
path.data: /home/es/data/data
# Log directory
path.logs: /home/es/data/logs
ingest.geoip.downloader.enabled: false
# Security disabled (no authentication)
xpack.security.enabled: false
# CORS (needed for browser-based frontends)
http.cors.enabled: true
http.cors.allow-origin: "*"

4. Verify

[root@localhost es]# curl -X GET "http://192.168.1.105:9200/_cat/nodes?v"
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.1.105            4          65   0    0.00    0.01     0.05 m         -      master-01-192.168.1.105
192.168.1.107            4          65   0    0.00    0.01     0.05 m         -      master-03-192.168.1.107
192.168.1.102            4          63   0    0.20    0.36     0.20 d         -      data-03-192.168.1.102
192.168.1.106            5          66   0    0.00    0.01     0.05 m         *      master-02-192.168.1.106
192.168.1.100            2          63   0    0.17    0.38     0.23 d         -      data-01-192.168.1.100
192.168.1.101            4          63   0    0.14    0.32     0.20 d         -      data-02-192.168.1.101

7.3 ingest-01 (192.168.1.108) node configuration

[es@localhost ~]$ cat elasticsearch-7.17.29/config/elasticsearch.yml 
# Cluster name (must be identical on every node)
cluster.name: jettech-elasticsearch-cluster
# Node name (unique per node)
node.name: ingest-01-192.168.1.108
# Node roles (ingest only)
node.roles: [ingest]
# Network
network.host: 192.168.1.108
http.port: 9200
transport.tcp.port: 9300
# Discovery and cluster formation
discovery.seed_hosts: ["192.168.1.105:9300","192.168.1.106:9300","192.168.1.107:9300"]
# Initial master nodes (used only for the very first cluster bootstrap)
cluster.initial_master_nodes: ["master-01-192.168.1.105","master-02-192.168.1.106","master-03-192.168.1.107"]
# Memory locking (prevents swapping; requires the matching OS limits)
bootstrap.memory_lock: true
# Snapshot repository path
path.repo: ["/home/es/data/backups"]
# Logging
logger.org.elasticsearch.discovery: WARN
# Data directory
path.data: /home/es/data/data
# Log directory
path.logs: /home/es/data/logs
ingest.geoip.downloader.enabled: false
# Security disabled (no authentication)
xpack.security.enabled: false
# CORS (needed for browser-based frontends)
http.cors.enabled: true
http.cors.allow-origin: "*"
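A dedicated ingest node only pays off when documents actually pass through a pipeline. A hypothetical example (the pipeline and field names are illustrative, not part of the original setup):

# Define a pipeline that stamps ingest time and normalizes a field
curl -X PUT "http://192.168.1.108:9200/_ingest/pipeline/add-ingest-ts?pretty" -H 'Content-Type: application/json' -d '
{
  "description": "stamp ingest time and lowercase city",
  "processors": [
    { "set":       { "field": "ingested_at", "value": "{{_ingest.timestamp}}" } },
    { "lowercase": { "field": "city", "ignore_missing": true } }
  ]
}'

# Index a document through the pipeline
curl -X PUT "http://192.168.1.108:9200/user/_doc/10?pipeline=add-ingest-ts&pretty" -H 'Content-Type: application/json' -d '
{ "name": "test", "city": "BeiJing" }'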

7.4 coordinating-01 (192.168.1.109) node configuration (request-routing / coordinating-only node)

[es@localhost ~]$ cat elasticsearch-7.17.29/config/elasticsearch.yml 
# Cluster name (must be identical on every node)
cluster.name: jettech-elasticsearch-cluster
# Node name (unique per node)
node.name: coordinating-01-192.168.1.109
# Node roles (empty: coordinating-only node)
node.roles: []
# Network
network.host: 192.168.1.109
http.port: 9200
transport.tcp.port: 9300
# Discovery and cluster formation
discovery.seed_hosts: ["192.168.1.105:9300","192.168.1.106:9300","192.168.1.107:9300"]
# Initial master nodes (used only for the very first cluster bootstrap)
cluster.initial_master_nodes: ["master-01-192.168.1.105","master-02-192.168.1.106","master-03-192.168.1.107"]
# Memory locking (prevents swapping; requires the matching OS limits)
bootstrap.memory_lock: true
# Snapshot repository path
path.repo: ["/home/es/data/backups"]
# Logging
logger.org.elasticsearch.discovery: WARN
# Data directory
path.data: /home/es/data/data
# Log directory
path.logs: /home/es/data/logs
ingest.geoip.downloader.enabled: false
# Security disabled (no authentication)
xpack.security.enabled: false
# CORS (needed for browser-based frontends)
http.cors.enabled: true
http.cors.allow-origin: "*"
http.max_content_length: 1024mb
# Transport-layer tuning (inter-node communication)
transport.tcp.compress: true
transport.tcp.keep_alive: true
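With this node in place, client HTTP traffic should target 192.168.1.109; the coordinating node fans each request out to the data nodes and merges the results. For example:

curl -X GET "http://192.168.1.109:9200/user/_search?pretty" -H 'Content-Type: application/json' -d '
{ "query": { "match_all": {} } }'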

Verify

[root@localhost es]# curl -X GET "http://192.168.1.105:9200/_cat/nodes?v" | sort -r
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   962  100   962    0     0  13227      0 --:--:-- --:--:-- --:--:-- 13361
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.1.109            2          65  32    1.08    0.28     0.13 -         -      coordinating-01-192.168.1.109
192.168.1.108            2          65  17    0.45    0.16     0.10 i         -      ingest-01-192.168.1.108
192.168.1.107            4          66   0    0.00    0.01     0.05 m         -      master-03-192.168.1.107
192.168.1.106            6          66   0    0.00    0.01     0.05 m         *      master-02-192.168.1.106
192.168.1.105            4          66   0    0.01    0.03     0.05 m         -      master-01-192.168.1.105
192.168.1.102            4          63   0    0.00    0.04     0.11 d         -      data-03-192.168.1.102
192.168.1.101            4          63   0    0.05    0.07     0.12 d         -      data-02-192.168.1.101
192.168.1.100            1          63   0    0.00    0.04     0.11 d         -      data-01-192.168.1.100

Subdividing data nodes into tiers

1) hot

[es@localhost ~]$ cat elasticsearch-7.17.29/config/elasticsearch.yml
# Cluster name (must be identical on every node)
cluster.name: jettech-elasticsearch-cluster
# Node name (unique per node)
node.name: data-hot-01-192.168.1.100
# Node roles (data only)
node.roles: [data]
# Custom attribute marking this node as the hot tier
node.attr.data: hot
# Network
network.host: 192.168.1.100
http.port: 9200
transport.tcp.port: 9300
# Discovery and cluster formation
discovery.seed_hosts: ["192.168.1.105:9300","192.168.1.106:9300","192.168.1.107:9300"]
# Initial master nodes (used only for the very first cluster bootstrap)
cluster.initial_master_nodes: ["master-01-192.168.1.105","master-02-192.168.1.106","master-03-192.168.1.107"]
# Memory locking (prevents swapping; requires the matching OS limits)
bootstrap.memory_lock: true
# Snapshot repository path
path.repo: ["/home/es/data/backups"]
# Logging
logger.org.elasticsearch.discovery: WARN
# Data directory
path.data: /home/es/data/data
# Log directory
path.logs: /home/es/data/logs
ingest.geoip.downloader.enabled: false
# Security disabled (no authentication)
xpack.security.enabled: false
# CORS (needed for browser-based frontends)
http.cors.enabled: true
http.cors.allow-origin: "*"
# Hot-tier tuning
indices.memory.index_buffer_size: 40%
thread_pool.write.queue_size: 2000
cluster.routing.allocation.node_concurrent_recoveries: 8
# Tier placement is enforced per index via node.attr.data plus the allocation
# filters in the ILM policy / index template below; avoid setting the
# cluster-wide cluster.routing.allocation.include.data here, as it would
# restrict allocation for the entire cluster, not just this node.

2) warm

[es@localhost ~]$ cat elasticsearch-7.17.29/config/elasticsearch.yml
# Cluster name (must be identical on every node)
cluster.name: jettech-elasticsearch-cluster
# Node name (unique per node)
node.name: data-warm-01-192.168.1.101
# Custom attribute marking this node as the warm tier
node.attr.data: warm
# Node roles (data only)
node.roles: [data]
# Network
network.host: 192.168.1.101
http.port: 9200
transport.tcp.port: 9300
# Discovery and cluster formation
discovery.seed_hosts: ["192.168.1.105:9300","192.168.1.106:9300","192.168.1.107:9300"]
# Initial master nodes (used only for the very first cluster bootstrap)
cluster.initial_master_nodes: ["master-01-192.168.1.105","master-02-192.168.1.106","master-03-192.168.1.107"]
# Memory locking (prevents swapping; requires the matching OS limits)
bootstrap.memory_lock: true
# Snapshot repository path
path.repo: ["/home/es/data/backups"]
# Logging
logger.org.elasticsearch.discovery: WARN
# Data directory
path.data: /home/es/data/data
# Log directory
path.logs: /home/es/data/logs
ingest.geoip.downloader.enabled: false
# Security disabled (no authentication)
xpack.security.enabled: false
# CORS (needed for browser-based frontends)
http.cors.enabled: true
http.cors.allow-origin: "*"
# Warm-tier tuning
indices.memory.index_buffer_size: 20%
indices.queries.cache.size: 25%  # larger query cache for analytical reads
# Tier placement is handled per index (node.attr.data + ILM/template filters);
# see the note on the hot node above.

3) cold

[es@localhost ~]$ cat elasticsearch-7.17.29/config/elasticsearch.yml
# Cluster name (must be identical on every node)
cluster.name: jettech-elasticsearch-cluster
# Node name (unique per node)
node.name: data-cold-01-192.168.1.102
# Custom attribute marking this node as the cold tier
node.attr.data: cold
# Node roles (data only)
node.roles: [data]
# Network
network.host: 192.168.1.102
http.port: 9200
transport.tcp.port: 9300
# Discovery and cluster formation
discovery.seed_hosts: ["192.168.1.105:9300","192.168.1.106:9300","192.168.1.107:9300"]
# Initial master nodes (used only for the very first cluster bootstrap)
cluster.initial_master_nodes: ["master-01-192.168.1.105","master-02-192.168.1.106","master-03-192.168.1.107"]
# Memory locking (prevents swapping; requires the matching OS limits)
bootstrap.memory_lock: true
# Snapshot repository path
path.repo: ["/home/es/data/backups"]
# Logging
logger.org.elasticsearch.discovery: WARN
# Data directory
path.data: /home/es/data/data
# Log directory
path.logs: /home/es/data/logs
ingest.geoip.downloader.enabled: false
# Security disabled (no authentication)
xpack.security.enabled: false
# CORS (needed for browser-based frontends)
http.cors.enabled: true
http.cors.allow-origin: "*"
# Cold-tier tuning
indices.queries.cache.size: 30%
# Note: index.codec is an index-level setting and cannot be placed in
# elasticsearch.yml (the node would refuse to start); set best_compression
# per index or in an index template instead.
# Tier placement is handled per index (node.attr.data + ILM/template filters);
# see the note on the hot node above.

4. ILM policy (ilm-policy.json)

PUT _ilm/policy/logs_lifecycle_policy
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_age": "7d",
            "max_size": "50gb"
          },
          "set_priority": {
            "priority": 100
          }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          "shrink": {
            "number_of_shards": 1
          },
          "forcemerge": {
            "max_num_segments": 1
          },
          "allocate": {
            "require": {
              "data": "warm"
            }
          },
          "set_priority": {
            "priority": 50
          }
        }
      },
      "cold": {
        "min_age": "30d",
        "actions": {
          "allocate": {
            "require": {
              "data": "cold"
            }
          },
          "set_priority": {
            "priority": 10
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

5. Create an index template that applies the ILM policy and pins new indices to the hot tier (index-template.json). JSON does not allow comments, so the hot-tier pinning is expressed by the index.routing.allocation.require.data setting itself:

PUT _index_template/logs_template
{
  "index_patterns": ["logs-*"],
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1,
    "index.lifecycle.name": "logs_lifecycle_policy",
    "index.lifecycle.rollover_alias": "logs-alias",
    "index.routing.allocation.require.data": "hot"  # 新索引优先分配到热节点
  },
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keyword": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ]
  },
  "aliases": {}
}
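Rollover also needs a bootstrap index that owns the write alias named in the template; without it the first rollover fails:

curl -X PUT "http://192.168.1.100:9200/logs-000001?pretty" -H 'Content-Type: application/json' -d '
{
  "aliases": {
    "logs-alias": { "is_write_index": true }
  }
}'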

6. Configuration notes

  1. Node attribute tagging

    • node.attr.data: hot/warm/cold labels each node's tier
    • Index-level allocation filters (set by the ILM policy and index template) keep each index on the matching tier
  2. Per-tier tuning differences

    • Hot nodes: frequent refreshes and a large index buffer, for real-time writes and queries
    • Warm nodes: moderate refresh rate and a larger query cache, for historical analysis
    • Cold nodes: infrequent refreshes and high compression, for archival storage
  3. Data lifecycle management

    • Data younger than 7 days lives on hot nodes
    • Data aged 7-30 days moves to warm nodes, where it is shrunk and force-merged
    • Data older than 30 days moves to cold nodes
    • Data older than 90 days is deleted automatically (adjust to your needs)
  4. Scaling advice

    • In production, run at least 2 nodes per tier for high availability
    • Use SSDs for hot nodes; cold nodes can use HDDs to cut cost
    • Tune the ILM time thresholds to your data volume and access patterns

With this configuration the cluster manages the data lifecycle automatically, keeping query performance high while controlling storage cost.
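To confirm the policy and template are wired up, and to watch indices move through the phases:

# Inspect the policy and the template
curl "http://192.168.1.100:9200/_ilm/policy/logs_lifecycle_policy?pretty"
curl "http://192.168.1.100:9200/_index_template/logs_template?pretty"

# Per-index lifecycle state (current phase, action, and step)
curl "http://192.168.1.100:9200/logs-*/_ilm/explain?pretty"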

Common operations quick reference:

1. Cluster health
curl -X GET "http://192.168.1.100:9200/_cluster/health?pretty"

2. Node list
curl -X GET "http://192.168.1.100:9200/_cat/nodes?v"

3. Basic cluster info
curl -X GET "http://192.168.1.100:9200/?pretty"

4. Create an index
curl -X PUT "http://192.168.1.100:9200/user?pretty" -H "Content-Type: application/json" -d '
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "name": { "type": "text" },
      "age": { "type": "integer" },
      "city": { "type": "keyword" }
    }
  }
}'

5. List and inspect indices
curl -X GET "http://192.168.1.100:9200/_cat/indices?v"
curl -X GET "http://192.168.1.100:9200/user?pretty"
curl -X GET "http://192.168.1.100:9200/_all?pretty"

6. Change the replica count
curl -X PUT "http://192.168.1.100:9200/user/_settings?pretty" -H "Content-Type: application/json" -d '
{
  "number_of_replicas": 2
}'

7. Delete an index
curl -X DELETE "http://192.168.1.100:9200/user?pretty"

8. Create a document
curl -X PUT "http://192.168.1.100:9200/user/_doc/1?pretty" -H "Content-Type: application/json" -d '
{
  "name": "张三",
  "age": 25,
  "city": "北京"
}'

9. Fetch a document
curl -X GET "http://192.168.1.100:9200/user/_doc/1?pretty"

10. Bulk create/update/delete
curl -X POST "http://192.168.1.100:9200/_bulk?pretty" -H "Content-Type: application/json" -d '
{"index": {"_index": "user", "_id": "3"}}
{"name": "王五", "age": 28, "city": "广州"}
{"create": {"_index": "user", "_id": "4"}}
{"name": "赵六", "age": 35, "city": "深圳"}
{"update": {"_index": "user", "_id": "3"}}
{"doc": {"age": 29}}
{"delete": {"_index": "user", "_id": "4"}}
'

11. Search all documents in an index
curl -X GET "http://192.168.1.100:9200/user/_search?pretty" -H "Content-Type: application/json" -d '
{
  "query": {
    "match_all": {}
  }
}'

12. Register a snapshot repository
curl -X PUT "http://192.168.1.100:9200/_snapshot/jettech_es_backup_repo" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "/home/es/data/backups",
    "compress": true
  }
}'

13. Create a snapshot
curl -X PUT "http://192.168.1.100:9200/_snapshot/jettech_es_backup_repo/snapshot_$(date +%Y%m%d)" -H 'Content-Type: application/json' -d'
{
  "indices": "*",
  "ignore_unavailable": true,
  "include_global_state": true
}'

14. List all snapshots
curl "http://192.168.1.100:9200/_snapshot/jettech_es_backup_repo/_all?pretty"

15. Inspect a specific snapshot
curl "http://192.168.1.100:9200/_snapshot/jettech_es_backup_repo/snapshot_20250903?pretty"

16. Snapshot progress
curl "http://192.168.1.100:9200/_snapshot/jettech_es_backup_repo/snapshot_20250903/_status?pretty"

17. Restore indices from a snapshot
curl -X POST "http://192.168.1.100:9200/_snapshot/jettech_es_backup_repo/snapshot_20250903/_restore?pretty"

18. Close conflicting indices first (keeps the existing data on disk)
curl -X POST "http://192.168.1.100:9200/.ds-.logs-deprecation.elasticsearch-default-2025.09.03-000001/_close?pretty"
curl -X POST "http://192.168.1.100:9200/.ds-ilm-history-5-2025.09.03-000001/_close?pretty"

19. Reopen an index
curl -X POST "http://192.168.1.100:9200/.ds-.logs-deprecation.elasticsearch-default-2025.09.03-000001/_open?pretty"

20. View shards
curl -X GET "http://192.168.1.100:9200/_cat/shards?v"

Kubernetes deployment

1. Label and taint the nodes

kubectl label nodes 192.168.1.100 node-role.jettech.cn/elasticsearch-master=true
kubectl label nodes 192.168.1.101 node-role.jettech.cn/elasticsearch-master=true
kubectl label nodes 192.168.1.102 node-role.jettech.cn/elasticsearch-master=true
kubectl label nodes 192.168.1.100 node-role.jettech.cn/elasticsearch-data=true
kubectl label nodes 192.168.1.101 node-role.jettech.cn/elasticsearch-data=true
kubectl label nodes 192.168.1.102 node-role.jettech.cn/elasticsearch-data=true

kubectl taint nodes 192.168.1.100  node-role.jettech.cn/elasticsearch-master=true:NoSchedule
kubectl taint nodes 192.168.1.101  node-role.jettech.cn/elasticsearch-master=true:NoSchedule
kubectl taint nodes 192.168.1.102  node-role.jettech.cn/elasticsearch-master=true:NoSchedule
kubectl taint nodes 192.168.1.100  node-role.jettech.cn/elasticsearch-data=true:NoSchedule
kubectl taint nodes 192.168.1.101  node-role.jettech.cn/elasticsearch-data=true:NoSchedule
kubectl taint nodes 192.168.1.102  node-role.jettech.cn/elasticsearch-data=true:NoSchedule

# A trailing "-" removes a taint or label:
kubectl taint nodes 192.168.1.100 jettech.cn/dedicated=elasticsearch:NoSchedule-
kubectl label nodes 192.168.1.100 jettech.cn/node-role-

# Inspect labels and taints:
kubectl get node 192.168.1.100 --show-labels
kubectl get nodes -o=custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

2. Goal: co-locate the ES masters and data nodes — 6 pods in total, 3 master pods and 3 data pods, paired up and spread evenly across the 3 nodes.

3. Kubernetes YAML manifests

3.1 Master StatefulSet

[root@localhost aa]# cat jettech-elasticsearch-statefulset-master-prod.yaml 
apiVersion: v1
kind: Service
metadata:
  labels: {name: jettech-elasticsearch-master}
  name: jettech-elasticsearch-master
  namespace: jettech-prod
spec:
  ports:
  - {name: t9200, port: 9200, protocol: TCP, targetPort: t9200}
  - {name: t9300, port: 9300, protocol: TCP, targetPort: t9300}
  selector: {name: jettech-elasticsearch-master}
  #clusterIP: None
  type: ClusterIP

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels: {name: jettech-elasticsearch-master}
  name: jettech-elasticsearch-master
  namespace: jettech-prod
spec:
  serviceName: "jettech-elasticsearch-master"
  replicas: 3
  selector:
    matchLabels: {name: jettech-elasticsearch-master}
  template:
    metadata:
      labels: {name: jettech-elasticsearch-master}
      name: jettech-elasticsearch-master
    spec:
      tolerations:
      - key: node-role.jettech.cn/elasticsearch-master
        operator: Equal
        value: "true"
        effect: NoSchedule # PreferNoSchedule | NoSchedule | NoExecute
      - key: node-role.jettech.cn/elasticsearch-data
        operator: Equal
        value: "true"
        effect: NoSchedule # PreferNoSchedule | NoSchedule | NoExecute
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.jettech.cn/elasticsearch-master  #node-role.jettech.cn/elasticsearch-master #kubernetes.io/hostname
                operator: In
                values: ["true"]  #["192.168.1.100", "192.168.1.101", "192.168.1.102"] #["true"]  #["192.168.1.100", "192.168.1.101", "192.168.1.102"] 
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: name
                    operator: In
                    values: ["jettech-elasticsearch-master"]
              topologyKey: "kubernetes.io/hostname"
       # podAffinity:
       #   requiredDuringSchedulingIgnoredDuringExecution:
       #     - labelSelector:
       #         matchExpressions:
       #           - key: name
       #             operator: In
       #             values: ["jettech-elasticsearch-data"]
       #       topologyKey: "kubernetes.io/hostname"
       # podAffinity:
       #   preferredDuringSchedulingIgnoredDuringExecution:
       #   - weight: 100
       #     podAffinityTerm:
       #       labelSelector:
       #         matchExpressions:
       #           - key: name
       #             operator: In
       #             values: ["jettech-elasticsearch-data"]
       #       topologyKey: "kubernetes.io/hostname"
      containers:
      - name: jettech-elasticsearch-master
        image: harbor.jettech.com/jettechtools/elasticsearch:7.17.27
        env:
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cluster.name
          value: "jettech-elasticsearch-cluster"
        - name: discovery.seed_hosts
          value: "jettech-elasticsearch-master-0.jettech-elasticsearch-master.jettech-prod.svc.cluster.local,jettech-elasticsearch-master-1.jettech-elasticsearch-master.jettech-prod.svc.cluster.local,jettech-elasticsearch-master-2.jettech-elasticsearch-master.jettech-prod.svc.cluster.local"
        - name: cluster.initial_master_nodes
          value: "jettech-elasticsearch-master-0,jettech-elasticsearch-master-1,jettech-elasticsearch-master-2"
        - name: xpack.security.enabled
          value: "false"
        - name: node.roles
          #value: "master,ingest"
          value: "master"
        - name: ingest.geoip.downloader.enabled
          value: "false"
        - name: http.max_content_length
          value: 200mb
        - name: ES_JAVA_OPTS
          value: "-Xms13g -Xmx13g"
        - name: path.repo
          value: "/usr/share/elasticsearch/backups"
       # - name: bootstrap.memory_lock
       #   value: "true"
        - name: http.cors.enabled
          value: "true"
        - name: http.cors.allow-origin
          value: "*"
        securityContext:
          privileged: true
        ports:
        - {containerPort: 9200, name: t9200, protocol: TCP}
        - {containerPort: 9300, name: t9300, protocol: TCP}
        resources:
          requests:
            cpu: "4"
            memory: 8G
          limits:
            cpu: "8"
            memory: 16G
        volumeMounts:
        - name: jettech-elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
        - name: jettech-elasticsearch-backups
          mountPath: /usr/share/elasticsearch/backups
        - name: jettech-elasticsearch-logs
          mountPath: /usr/share/elasticsearch/logs
        - name: jettech-host-time
          mountPath: /etc/localtime
        imagePullPolicy: Always
      restartPolicy: Always
      volumes:
      - name: jettech-elasticsearch-data
        hostPath:
          path: /data/jettech/elasticsearch/master/data
          type: DirectoryOrCreate
      - name: jettech-elasticsearch-backups
        hostPath:
          path: /data/jettech/elasticsearch/master/backups
          type: DirectoryOrCreate
      - name: jettech-elasticsearch-logs
        hostPath:
          path: /data/jettech/elasticsearch/master/logs
          type: DirectoryOrCreate
      - name: jettech-host-time
        hostPath:
          path: /etc/localtime

3.2 Data StatefulSet

[root@localhost aa]# cat jettech-elasticsearch-statefulset-data-prod.yaml 
apiVersion: v1
kind: Service
metadata:
  labels: {name: jettech-elasticsearch-data}
  name: jettech-elasticsearch-data
  namespace: jettech-prod
spec:
  ports:
  - {name: t9200, port: 9200, protocol: TCP, targetPort: t9200}
  - {name: t9300, port: 9300, protocol: TCP, targetPort: t9300}
  selector: {name: jettech-elasticsearch-data}
  #clusterIP: None
  type: ClusterIP

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels: {name: jettech-elasticsearch-data}
  name: jettech-elasticsearch-data
  namespace: jettech-prod
spec:
  serviceName: "jettech-elasticsearch-data"
  replicas: 3
  selector:
    matchLabels: {name: jettech-elasticsearch-data}
  template:
    metadata:
      labels: {name: jettech-elasticsearch-data}
      name: jettech-elasticsearch-data
    spec:
      tolerations:
      - key: node-role.jettech.cn/elasticsearch-master
        operator: Equal
        value: "true"
        effect: NoSchedule # PreferNoSchedule | NoSchedule | NoExecute
      - key: node-role.jettech.cn/elasticsearch-data
        operator: Equal
        value: "true"
        effect: NoSchedule # PreferNoSchedule | NoSchedule | NoExecute
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.jettech.cn/elasticsearch-data  #node-role.jettech.cn/elasticsearch-master #kubernetes.io/hostname
                operator: In
                values: ["true"]  #["192.168.1.100", "192.168.1.101", "192.168.1.102"] #["true"]
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: name
                    operator: In
                    values: ["jettech-elasticsearch-data"]
              topologyKey: "kubernetes.io/hostname"
       # podAffinity:
       #   requiredDuringSchedulingIgnoredDuringExecution:
       #     - labelSelector:
       #         matchExpressions:
       #           - key: name
       #             operator: In
       #             values: ["jettech-elasticsearch-master"]
       #       topologyKey: "kubernetes.io/hostname"
       # podAffinity:
       #   preferredDuringSchedulingIgnoredDuringExecution:
       #   - weight: 100
       #     podAffinityTerm:
       #       labelSelector:
       #         matchExpressions:
       #           - key: name
       #             operator: In
       #             values: ["jettech-elasticsearch-data"]
       #       topologyKey: "kubernetes.io/hostname"
      containers:
      - name: jettech-elasticsearch-data
        image: harbor.jettech.com/jettechtools/elasticsearch:7.17.27
        env:
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cluster.name
          value: "jettech-elasticsearch-cluster"
        - name: discovery.seed_hosts
          value: "jettech-elasticsearch-master-0.jettech-elasticsearch-master.jettech-prod.svc.cluster.local,jettech-elasticsearch-master-1.jettech-elasticsearch-master.jettech-prod.svc.cluster.local,jettech-elasticsearch-master-2.jettech-elasticsearch-master.jettech-prod.svc.cluster.local"
        - name: cluster.initial_master_nodes
          value: "jettech-elasticsearch-master-0,jettech-elasticsearch-master-1,jettech-elasticsearch-master-2"
        - name: xpack.security.enabled
          value: "false"
        - name: node.roles
          value: "data"
        - name: ingest.geoip.downloader.enabled
          value: "false"
        - name: http.max_content_length
          value: 200mb
        - name: ES_JAVA_OPTS
          value: "-Xms31g -Xmx31g"
        - name: path.repo
          value: "/usr/share/elasticsearch/backups"
       # - name: bootstrap.memory_lock
       #   value: "true"
        - name: http.cors.enabled
          value: "true"
        - name: http.cors.allow-origin
          value: "*"
        securityContext:
          privileged: true
        ports:
        - {containerPort: 9200, name: t9200, protocol: TCP}
        - {containerPort: 9300, name: t9300, protocol: TCP}
        resources:
          requests:
            cpu: "8"
            memory: 16G
          limits:
            cpu: "16"
            memory: 36G
        volumeMounts:
        - name: jettech-elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
        - name: jettech-elasticsearch-backups
          mountPath: /usr/share/elasticsearch/backups
        - name: jettech-elasticsearch-logs
          mountPath: /usr/share/elasticsearch/logs
        - name: jettech-host-time
          mountPath: /etc/localtime
        imagePullPolicy: Always
      restartPolicy: Always
      volumes:
      - name: jettech-elasticsearch-data
        hostPath:
          path: /data/jettech/elasticsearch/data/data
          type: DirectoryOrCreate
      - name: jettech-elasticsearch-backups
        hostPath:
          path: /data/jettech/elasticsearch/data/backups
          type: DirectoryOrCreate
      - name: jettech-elasticsearch-logs
        hostPath:
          path: /data/jettech/elasticsearch/data/logs
          type: DirectoryOrCreate
      - name: jettech-host-time
        hostPath:
          path: /etc/localtime
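Apply both manifests and check that the cluster forms; the names below match the manifests above, and the query assumes curl is available inside the image:

kubectl apply -f jettech-elasticsearch-statefulset-master-prod.yaml
kubectl apply -f jettech-elasticsearch-statefulset-data-prod.yaml
kubectl -n jettech-prod get pods -o wide

# Query the cluster from inside a master pod
kubectl -n jettech-prod exec jettech-elasticsearch-master-0 -- curl -s "http://localhost:9200/_cat/nodes?v"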

I. Taints: a taint repels all pods from the tainted node unless a pod explicitly tolerates it. Note the direction: the node carries the taint, and pods opt in to tainted nodes via tolerations.

1. Add a taint to a node:

kubectl taint nodes 192.168.1.100  node-role.jettech.cn/elasticsearch-master=true:NoSchedule
2. To let a pod schedule onto tainted nodes, add matching tolerations (a pod may carry several):

     spec:
       tolerations:
       - key: node-role.jettech.cn/elasticsearch-master
         operator: Equal
         value: "true"
         effect: NoSchedule # PreferNoSchedule | NoSchedule | NoExecute
       - key: node-role.jettech.cn/elasticsearch-data
         operator: Equal
         value: "true"
         effect: NoSchedule # PreferNoSchedule | NoSchedule | NoExecute

II. Affinity and anti-affinity

1. Affinity:

1) Nodes have affinity (nodeAffinity)

2) Pods have affinity (podAffinity)

2. Anti-affinity:

1) Only pods have anti-affinity (podAntiAffinity)

2) Nodes have no anti-affinity; they repel pods through taints instead

3. Examples

1. Node affinity: the pod selects nodes whose labels match — a pod-to-node relationship:

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.jettech.cn/elasticsearch-data  #node-role.jettech.cn/elasticsearch-master #kubernetes.io/hostname
                operator: In
                values: ["true"]  #["192.168.1.100", "192.168.1.101", "192.168.1.102"] #["true"]

2. Pod affinity: the pod chooses which pods to be scheduled alongside — a pod-to-pod relationship. Here the current pod would be co-scheduled with jettech-elasticsearch-master:

       # podAffinity:
       #   requiredDuringSchedulingIgnoredDuringExecution:
       #     - labelSelector:
       #         matchExpressions:
       #           - key: name
       #             operator: In
       #             values: ["jettech-elasticsearch-master"]
       #       topologyKey: "kubernetes.io/hostname"

3. Anti-affinity: keeps matching pods from landing on the same node — typically used so that multiple replicas spread across nodes:

        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: name
                    operator: In
                    values: ["jettech-elasticsearch-data"]
              topologyKey: "kubernetes.io/hostname"

requiredDuringSchedulingIgnoredDuringExecution is a hard requirement; preferredDuringSchedulingIgnoredDuringExecution is best-effort.
