ELK: Configuring and Using the Logstash grok and geoip Plugins

This article explains how to use grok and geoip, two commonly used Logstash filter plugins:

1. Using grok to produce structured data

Edit the first-pipeline.conf file and change its contents to the following:

input{
  #stdin{type => stdin}
  file {
    # path of the file(s) to read
    path => ["/tmp/access.log"]
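    # on first read of a new file, start from the beginning (ignored once a sincedb position exists)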
    start_position => "beginning"
  }
}

filter{
  grok{
    match => {"message" => "%{COMBINEDAPACHELOG}" }
  }

}

output{
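  # rubydebug pretty-prints each event to the console, which is convenient for debugging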
  stdout{codec => rubydebug}
}

After starting Logstash with ./logstash -f ../config/first-pipeline.conf, the output is now structured data:

{
        "message" => "140.77.188.102 - - [25/Jun/2022:05:11:33 +0800] \"GET /api/ss/api/v1/login/getBaseUrl HTTP/1.1\" 200 103 \"-\" \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534+ (KHTML, like Gecko) BingPreview/1.0b\"",
       "response" => "200",
           "auth" => "-",
          "bytes" => "103",
       "referrer" => "\"-\"",
           "host" => "nb002",
       "@version" => "1",
          "agent" => "\"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534+ (KHTML, like Gecko) BingPreview/1.0b\"",
     "@timestamp" => 2022-06-26T00:28:24.302Z,
      "timestamp" => "25/Jun/2022:05:11:33 +0800",
          "ident" => "-",
    "httpversion" => "1.1",
           "path" => "/tmp/access.log",
       "clientip" => "140.77.188.102",
           "verb" => "GET",
        "request" => "/api/ss/api/v1/login/getBaseUrl"
}
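
COMBINEDAPACHELOG is one of the grok patterns that ship with Logstash; it matches the standard Apache/Nginx combined access-log format. For log formats without a ready-made pattern, a match can be composed from smaller bundled patterns using the %{PATTERN:field} syntax. A minimal sketch (the line format and the field names client, method, uri, status and duration are hypothetical, not part of the pipeline above):

filter{
  grok{
    # hypothetical line format: "<ip> <method> <uri> <status> <duration>"
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:uri} %{NUMBER:status} %{NUMBER:duration}" }
  }
}

Events that do not match the pattern are tagged with _grokparsefailure, which makes parsing problems easy to spot in the rubydebug output.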

2. Modifying the parsed data with mutate

Edit the first-pipeline.conf file and change its contents to the following:

input{
  #stdin{type => stdin}
  file {
    path => ["/tmp/access.log"]
    start_position => "beginning"
  }
}

filter{
  grok{
    match => {"message" => "%{COMBINEDAPACHELOG}" }
  }
  mutate{
    # rename a field
    rename => {"clientip" => "cip"}
  }
  mutate{
    # remove specific fields
    remove_field => ["timestamp","agent"]
  }
}

output{
  stdout{codec => rubydebug}
}

Restart Logstash with ./logstash -f ../config/first-pipeline.conf, append a new line to /tmp/access.log, and check the output: "clientip" has been renamed to "cip", and the timestamp and agent fields are gone. Nice.

{
           "verb" => "GET",
     "@timestamp" => 2022-06-26T00:48:28.224Z,
       "referrer" => "\"-\"",
           "path" => "/tmp/access.log",
           "auth" => "-",
        "message" => "140.77.188.102 - - [25/Jun/2022:05:11:33 +0800] \"GET /api/ss/api/v1/login/getBaseUrl HTTP/1.1\" 200 103 \"-\" \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534+ (KHTML, like Gecko) BingPreview/1.0b\"",
       "@version" => "1",
          "ident" => "-",
       "response" => "200",
          "bytes" => "103",
        "request" => "/api/ss/api/v1/login/getBaseUrl",
    "httpversion" => "1.1",
           "host" => "nb002",
            "cip" => "140.77.188.102"
}
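
Besides rename and remove_field, mutate offers other field operations. Since grok captures everything as strings, one common follow-up is converting numeric fields to real numbers. A minimal sketch, assuming the bytes and response fields produced by the pipeline above:

filter{
  mutate{
    # convert the string captures produced by grok into integers
    convert => {
      "bytes"    => "integer"
      "response" => "integer"
    }
  }
}

With this in place, the fields appear as numbers in the rubydebug output and can be aggregated numerically downstream.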

3. Using the geoip filter plugin

The geoip filter plugin can be used to enrich the data: it looks up geographic location information for an IP address.

Edit the first-pipeline.conf file and change its contents to the following:

input{
  #stdin{type => stdin}
  file {
    path => ["/tmp/access.log"]
    start_position => "beginning"
  }
}

filter{
  grok{
    match => {"message" => "%{COMBINEDAPACHELOG}" }
  }
  mutate{
    # rename a field
    rename => {"clientip" => "cip"}
  }
  mutate{
    # remove specific fields
    remove_field => ["timestamp","agent"]
  }
  geoip{
    # clientip was renamed to cip above, so use cip here; without the rename this would be clientip
    source => "cip"
  }
}

output{
  stdout{codec => rubydebug}
}

Restart Logstash with ./logstash -f ../config/first-pipeline.conf, append a new line to /tmp/access.log, and check the output: a geoip field has been added, showing geographic information such as region, country, province, and latitude/longitude.

Example with an overseas IP:

{
           "host" => "nb002",
           "auth" => "-",
          "bytes" => "103",
            "cip" => "140.77.188.104",
       "@version" => "1",
        "message" => "140.77.188.104 - - [25/Jun/2022:05:11:33 +0800] \"GET /api/ss/api/v1/login/getBaseUrl HTTP/1.1\" 200 103 \"-\" \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534+ (KHTML, like Gecko) BingPreview/1.0b\"",
           "verb" => "GET",
        "request" => "/api/ss/api/v1/login/getBaseUrl",
       "referrer" => "\"-\"",
       "response" => "200",
          "ident" => "-",
           "path" => "/tmp/access.log",
     "@timestamp" => 2022-06-26T00:58:11.786Z,
          "geoip" => {
	         "country_code3" => "FR",
	             "longitude" => 4.85,
	                    "ip" => "140.77.188.104",
	        "continent_code" => "EU",
	           "region_name" => "Rhône",
	         "country_code2" => "FR",
	              "timezone" => "Europe/Paris",
	          "country_name" => "France",
	           "region_code" => "69",
	              "latitude" => 45.748,
	           "postal_code" => "69007",
	              "location" => {
	            "lat" => 45.748,
	            "lon" => 4.85
        },
             "city_name" => "Lyon"
    },
    "httpversion" => "1.1"
}

Example with a domestic (China) IP:

{
           "host" => "nb002",
           "auth" => "-",
          "bytes" => "103",
            "cip" => "175.30.108.241",
       "@version" => "1",
        "message" => "175.30.108.241 - - [25/Jun/2022:05:11:33 +0800] \"GET /api/ss/api/v1/login/getBaseUrl HTTP/1.1\" 200 103 \"-\" \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534+ (KHTML, like Gecko) BingPreview/1.0b\"",
           "verb" => "GET",
        "request" => "/api/ss/api/v1/login/getBaseUrl",
       "referrer" => "\"-\"",
       "response" => "200",
          "ident" => "-",
           "path" => "/tmp/access.log",
     "@timestamp" => 2022-06-26T01:00:11.972Z,
          "geoip" => {
         "country_code3" => "CN",
             "longitude" => 125.3247,
                    "ip" => "175.30.108.241",
        "continent_code" => "AS",
           "region_name" => "Jilin",
         "country_code2" => "CN",
              "timezone" => "Asia/Shanghai",
          "country_name" => "China",
           "region_code" => "JL",
              "latitude" => 43.88,
              "location" => {
            "lat" => 43.88,
            "lon" => 125.3247
        },
             "city_name" => "Changchun"
    },
    "httpversion" => "1.1"
}
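
By default geoip adds the full set of location fields shown above. If only some of them are needed, the plugin's fields option can restrict what gets added, and wrapping the filter in a conditional skips events where the IP field is missing. A sketch under those assumptions (field names follow the pipeline above; addresses that cannot be resolved are tagged _geoip_lookup_failure instead of being enriched):

filter{
  if [cip] {
    geoip{
      source => "cip"
      # keep only the geo fields we actually need
      fields => ["country_name", "city_name", "location"]
    }
  }
}

The exact field layout can vary between Logstash and geoip plugin versions, so check the output of your own installation before relying on specific field names.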

END
