Migrating ES data with Logstash and solving rate limiting (and other) problems

Hey folks, I'm V. Today let's have a quick chat about using Logstash to migrate index data from one ES cluster to another, and the problems you'll run into along the way.

What is Logstash

www.elastic.co/guide/en/lo...

If you have no idea what this thing is, go read the official docs yourself.

Downloading Logstash

www.elastic.co/cn/download...

Try to pick the same version number as your ES cluster; otherwise you can't be sure there won't be compatibility issues.

For example, our production ES is 7.10.0, so I picked Logstash 7.10.0.

Running it

Run it directly:

```bash
bin/logstash -f config/es/xxx.conf --path.data=/opt/apps/logstash-7.10.0/datas/xxx -b 100
```

Parameter meanings

-f: path to the pipeline config file

-b: batch size

-w: number of worker threads; usually no need to set it, defaults to the number of CPU cores

--path.data: a directory Logstash has write permission to; used whenever it needs to persist data
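For example, pinning everything explicitly (a sketch; -w 4 is just an illustrative value, not a recommendation):

```bash
# Same job as above, with the batch size and worker count set explicitly
bin/logstash -f config/es/xxx.conf --path.data=/opt/apps/logstash-7.10.0/datas/xxx -b 100 -w 4
```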

The full parameter reference is in the docs:

www.elastic.co/guide/en/lo...

Running in the background

```bash
nohup bin/logstash -f config/es/xxx.conf --path.data=/opt/apps/logstash-7.10.0/datas/xxx -b 100 > /opt/apps/log/xxx.log 2>&1 &
```

If you don't know what nohup does, look it up.
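To check on the background job afterwards, for example:

```bash
# Tail the log to confirm the job is making progress
tail -f /opt/apps/log/xxx.log

# Or check that the process is still alive
ps -ef | grep logstash
```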

The config file

Docs for the upstream elasticsearch input plugin:

www.elastic.co/guide/en/lo...

Docs for the downstream elasticsearch output plugin:

www.elastic.co/guide/en/lo...

After some grinding through the docs, the config file was ready:

```ini
input {
  # upstream
  elasticsearch {
    hosts => "http://es1.es.com:80"
    index => "xxx"
    user => "elastic"
    password => "XXX"
    query => '{ "query": { "query_string": { "query": "*" } } }'
    size => 2000
    scroll => "10m"
    docinfo => true
  }
}

output {
  # downstream
  elasticsearch {
    hosts => "http://es2.es.com:80"
    index => "xxx"
    user => "elastic"
    password => "XXX"
    document_id => "%{[@metadata][_id]}"
  }
}
```

Looks simple, right? Migrating data from one ES cluster to another isn't hard, but you will still hit a few problems along the way.

Problems encountered

The documents specify routing

If you just ram ahead with the config above, you'll hit warning logs like these:

```
[2024-03-04T10:56:51,751][WARN ][logstash.outputs.elasticsearch][[main]>worker6][main][b7552c5d93f7de321e4e8f1e6da7bf8ec4696e8dff2bb087018235182d1f7fe2] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"ded5349e62e678cbf222560e5da90a47", :_index=>"xxx", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x5d3bdb61>], :response=>{"index"=>{"_index"=>"xxx", "_type"=>"_doc", "_id"=>"ded5349e62e678cbf222560e5da90a47", "status"=>400, "error"=>{"type"=>"routing_missing_exception", "reason"=>"routing is required for [xxx]/[_doc]/[ded5349e62e678cbf222560e5da90a47]", "index_uuid"=>"_na_", "index"=>"xxx"}}}}
[2024-03-04T10:56:51,751][WARN ][logstash.outputs.elasticsearch][[main]>worker8][main][b7552c5d93f7de321e4e8f1e6da7bf8ec4696e8dff2bb087018235182d1f7fe2] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"1181a16445b0069dc824fdde48454b57", :_index=>"xxx", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x5a1ba4d6>], :response=>{"index"=>{"_index"=>"xxx", "_type"=>"_doc", "_id"=>"1181a16445b0069dc824fdde48454b57", "status"=>400, "error"=>{"type"=>"routing_missing_exception", "reason"=>"routing is required for [xxx]/[_doc]/[1181a16445b0069dc824fdde48454b57]", "index_uuid"=>"_na_", "index"=>"xxx"}}}}
```

What's going on here?

```
{"type"=>"routing_missing_exception", "reason"=>"routing is required for [xxx]/[_doc]/[ded5349e62e678cbf222560e5da90a47]", "index_uuid"=>"_na_", "index"=>"xxx"}}}
```

So the routing field wasn't being specified on the writes.

Let's look at the index info.
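One quick way to pull it up is the mapping API (a sketch, reusing the example host and credentials from the input config above):

```bash
# Fetch the source index mapping; the _routing block tells you
# whether routing is required on writes
curl -s -u elastic:XXX "http://es1.es.com:80/xxx/_mapping?pretty"
```

The relevant part of the response: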

```json
{
  "xxx" : {
    "aliases" : { },
    "mappings" : {
      "_routing" : {
        "required" : true
      },
      "properties" : {
      }
    },
    "settings" : {
    }
  }
}
```

So that's it: routing must be specified. After a round of config edits, it looks like this:

```ini
input {
  elasticsearch {
    hosts => "http://es1.es.com:80"
    index => "xxx"
    user => "elastic"
    password => "XXX"
    query => '{ "query": { "query_string": { "query": "*" } } }'
    size => 2000
    scroll => "1m"
    docinfo => true
    # fetch _routing in the input
    docinfo_fields => ["_index", "_id", "_type", "_routing"]
  }
}

output {
  elasticsearch {
    hosts => "http://es2.es.com:80"
    index => "xxx"
    user => "elastic"
    password => "XXX"
    document_id => "%{[@metadata][_id]}"
    # set the routing on the output
    routing => "%{[@metadata][_routing]}"
  }
}
```

Here's the catch: if you reuse this template for every index, then whenever the upstream documents carry no routing value, the downstream documents end up with the literal string %{[@metadata][_routing]} as their routing, because the placeholder never resolves. Maddening; this part of Logstash isn't smart at all. Can this be fixed? Hang on until the end and you'll find out.

Strict mappings: the @timestamp and @version fields can't be written

With the problem above solved, the job ran for a while and then hit trouble again:

```
[2024-03-04T11:43:48,372][WARN ][logstash.outputs.elasticsearch][[main]>worker0][main][23eda3c9518e4ba5a787adadf9714d5512c8ad9a9754020744b84ca81fe1bedc] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"110109711637125402", :_index=>"xxx", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x5e156236>], :response=>{"index"=>{"_index"=>"xxx", "_type"=>"_doc", "_id"=>"110109711637125402", "status"=>400, "error"=>{"type"=>"strict_dynamic_mapping_exception", "reason"=>"mapping set to strict, dynamic introduction of [@timestamp] within [_doc] is not allowed"}}}}
[2024-03-04T11:43:48,372][WARN ][logstash.outputs.elasticsearch][[main]>worker0][main][23eda3c9518e4ba5a787adadf9714d5512c8ad9a9754020744b84ca81fe1bedc] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"110109711960916147", :_index=>"xxx", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x75333e01>], :response=>{"index"=>{"_index"=>"xxx", "_type"=>"_doc", "_id"=>"110109711960916147", "status"=>400, "error"=>{"type"=>"strict_dynamic_mapping_exception", "reason"=>"mapping set to strict, dynamic introduction of [@timestamp] within [_doc] is not allowed"}}}}
[2024-03-04T11:43:48,372][WARN ][logstash.outputs.elasticsearch][[main]>worker0][main][23eda3c9518e4ba5a787adadf9714d5512c8ad9a9754020744b84ca81fe1bedc] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"110109712328692950", :_index=>"xxx", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x7405cd45>], :response=>{"index"=>{"_index"=>"xxx", "_type"=>"_doc", "_id"=>"110109712328692950", "status"=>400, "error"=>{"type"=>"strict_dynamic_mapping_exception", "reason"=>"mapping set to strict, dynamic introduction of [@timestamp] within [_doc] is not allowed"}}}}
```

Look at the index structure:

```json
{
  "xxx" : {
    "aliases" : { },
    "mappings" : {
      "dynamic" : "strict",
      "properties" : {
      }
    },
    "settings" : {
      "index" : {
      }
    }
  }
}
```

So the index mapping is set to strict mode and new fields can't be introduced. What now?

Fortunately Logstash supports filters that can remove fields, so let's put one in:

```ini
input {
  elasticsearch {
    hosts => "http://es1.es.com:80"
    index => "merchant_order_rel_pro_v2"
    user => "elastic"
    password => "XXX"
    query => '{ "query": { "query_string": { "query": "*" } } }'
    size => 2000
    scroll => "1m"
    docinfo => true
  }
}
filter {
  mutate {
    # drop the extra fields Logstash adds
    remove_field => ["@version", "@timestamp"]
  }
}
output {
  elasticsearch {
    hosts => "http://es2.es.com:80"
    index => "xxx"
    user => "elastic"
    password => "XXX"
    document_id => "%{[@metadata][_id]}"
  }
}
```

Rate limiting Logstash

Sometimes the writes are too fast and the downstream cluster can't keep up. At first I handled it by tweaking the run parameters, but every tweak means restarting the job, which gets tiring fast.
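That first workaround looked like this (a sketch; the smaller values are illustrative, not recommendations):

```bash
# Stop the job, then rerun it with a smaller batch size and fewer workers
bin/logstash -f config/es/xxx.conf --path.data=/opt/apps/logstash-7.10.0/datas/xxx -b 50 -w 2
```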

I searched the web and couldn't find any existing rate-limiting plugin for Logstash.

I discovered you can invoke a local Ruby script; not knowing Ruby, I had GPT generate a token-bucket script, but the throttling it produced was hard to describe: it did limit the rate, just never to the number you asked for.

With no alternative, I looked into writing a plugin myself, but my Gradle skills were too weak and I couldn't even get the source to compile.

In the end I wrote a Java rate-limiting filter plugin based on Guava's RateLimiter, packaged it as a jar, dropped it straight in, and that solved the problem:

github.com/valsong/log...

How to use logstash-java-rate-limiter

Usage is simple: put the plugin jar into the logstash/logstash-core/lib/jars/ directory.
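For example (the jar file name below is hypothetical; use the actual artifact from the GitHub release):

```bash
# Drop the plugin jar into Logstash's bundled jars directory
cp logstash-java-rate-limiter.jar /opt/apps/logstash-7.10.0/logstash-core/lib/jars/
```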

  • Parameters

| param | type | required | default | example | description |
| --- | --- | --- | --- | --- | --- |
| rate_path | string | no | | /usr/share/logstash/rate.txt | The first line of this file is read as the rate limit; you can change the value in the file at any time |
| count_path | string | no | | /usr/share/logstash/count.txt | The number of events synced so far is recorded to this file |
| count_log_delay_sec | long | no | 30 | 30 | Print the event count to the Logstash log at this fixed interval, in seconds |
  • Set the rate limit in the file:

```bash
echo 5000 > /usr/share/logstash/rate.txt
```
  • Add a filter named java_rate_limit to the job's config file:

```ini
input {
  elasticsearch {
    hosts => "http://xxx-es.xxx.com:9200"
    index => "xxx"
    user => "elastic"
    password => "XXXX"
    query => '{ "query": { "query_string": { "query": "*" } } }'
    size => 2000
    scroll => "10m"
    docinfo => true
    # docinfo_fields => ["_index", "_id", "_type", "_routing"]
  }
}

filter {
  # plugin name
  java_rate_limit {
    # the first line of this file sets the rate limit
    rate_path => "/usr/share/logstash/rate.txt"
    # file used to record the number of events synced so far
    count_path => "/usr/share/logstash/count.txt"
    # print the event count to the log at this interval, in seconds
    count_log_delay_sec => 30
  }
}

output {
  elasticsearch {
    hosts => "yyy-es.yyy.com:9200"
    index => "xxx"
    user => "elastic"
    password => "YYYY"
    document_id => "%{[@metadata][_id]}"
    # document_type => "%{[@metadata][_type]}"
    # routing => "%{[@metadata][_routing]}"
  }
}
```

And that's it, rate limiting works. When you need to adjust the limit, just change the number in the file; it takes effect after a few seconds.
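For example, the two adjustments that produced the rate changes visible in the log below:

```bash
echo 6000 > /usr/share/logstash/rate.txt   # raise the limit to 6000 events/sec
echo 3000 > /usr/share/logstash/rate.txt   # later, lower it to 3000
```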

The effect:

```
[2024-02-01T16:44:41,515][WARN ][org.logstash.plugins.filters.RateLimitFilter][Converge PipelineAction::Create<main>] ### Rate limiter enabled:[true]! ratePath:[/usr/share/logstash/rate.txt].
[2024-02-01T16:44:41,519][WARN ][org.logstash.plugins.filters.RateLimitFilter][rate-limit-0] # Rate changed, set new RateLimiter! lastRate:[0.0] rate:[5000.0] ratePath:[/usr/share/logstash/rate.txt].
[2024-02-01T16:44:41,520][WARN ][org.logstash.plugins.filters.RateLimitFilter][Converge PipelineAction::Create<main>] ### Record event count to file enabled:[true]! countPath:[/usr/share/logstash/count.txt].
[2024-02-01T16:44:50,536][INFO ][org.logstash.plugins.filters.RateLimitFilter][rate-limit-1] Event count:[36500] rate:[5000.0].
[2024-02-01T16:45:00,561][INFO ][org.logstash.plugins.filters.RateLimitFilter][rate-limit-1] Event count:[87000] rate:[5000.0].
[2024-02-01T16:45:10,587][INFO ][org.logstash.plugins.filters.RateLimitFilter][rate-limit-1] Event count:[137000] rate:[5000.0].
[2024-02-01T16:45:11,587][WARN ][org.logstash.plugins.filters.RateLimitFilter][rate-limit-0] # Rate changed, set new RateLimiter! lastRate:[5000.0] rate:[6000.0] ratePath:[/usr/share/logstash/rate.txt].
[2024-02-01T16:45:20,591][INFO ][org.logstash.plugins.filters.RateLimitFilter][rate-limit-0] Event count:[204000] rate:[6000.0].
[2024-02-01T16:45:30,595][INFO ][org.logstash.plugins.filters.RateLimitFilter][rate-limit-0] Event count:[264000] rate:[6000.0].
[2024-02-01T16:45:40,638][INFO ][org.logstash.plugins.filters.RateLimitFilter][rate-limit-0] Event count:[324000] rate:[6000.0].
[2024-02-01T16:45:50,647][INFO ][org.logstash.plugins.filters.RateLimitFilter][rate-limit-0] Event count:[384000] rate:[6000.0].
[2024-02-01T16:46:00,649][WARN ][org.logstash.plugins.filters.RateLimitFilter][rate-limit-1] # Rate changed, set new RateLimiter! lastRate:[6000.0] rate:[3000.0] ratePath:[/usr/share/logstash/rate.txt].
[2024-02-01T16:46:00,651][INFO ][org.logstash.plugins.filters.RateLimitFilter][rate-limit-0] Event count:[444000] rate:[3000.0].
[2024-02-01T16:46:10,655][INFO ][org.logstash.plugins.filters.RateLimitFilter][rate-limit-0] Event count:[482000] rate:[3000.0].
```

Final version of the config file

If you use my plugin, don't want to check the routing case by case for every index, and don't want the @version and @timestamp fields written downstream, then this is the config to use.

Note the if condition in the output: it cannot go inside the elasticsearch plugin block, since Logstash conditionals must wrap plugin blocks rather than live inside them. It took me a whole afternoon to figure that out.

```ini
input {
  elasticsearch {
    hosts => "http://es1.es:80"
    index => "xxx_pro_v1"
    user => "elastic"
    password => "XXXXXX"
    query => '{ "query": { "query_string": { "query": "*" } } }'
    size => 2000
    scroll => "1m"
    docinfo => true
    # fetch _routing in the input
    docinfo_fields => ["_index", "_id", "_type", "_routing"]
  }
}

filter {
  # the rate-limiting plugin; remove this block if you don't use it
  java_rate_limit {
    # path to the file holding the rate limit
    rate_path => "/usr/share/logstash/rate.txt"
  }
  mutate {
    # remove the two fields Logstash adds
    remove_field => ["@version", "@timestamp"]
  }
}

output {
  # only set routing when the document actually has one
  if [@metadata][_routing] {
    elasticsearch {
      hosts => "http://es2.es.com:80"
      index => "xxx_pro_v1"
      user => "elastic"
      password => "XXX"
      document_id => "%{[@metadata][_id]}"
      # ES6 requires document_type
      # document_type => "%{[@metadata][_type]}"
      # set the routing
      routing => "%{[@metadata][_routing]}"
    }
  } else {
    elasticsearch {
      hosts => "http://es2.es.com:80"
      index => "xxx_pro_v1"
      user => "elastic"
      password => "XXX"
      document_id => "%{[@metadata][_id]}"
      # ES6 requires document_type
      # document_type => "%{[@metadata][_type]}"
    }
  }
}
```