Production deployment
Integrating Logstash with Spring Boot
- Add the Logstash dependencies
```xml
<!-- Use Logback for logging -->
<!-- https://mvnrepository.com/artifact/ch.qos.logback/logback-classic -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
</dependency>
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
```
- Configure the Logstash appenders
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <appender name="LOGSTASH_SUCCESS" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- Logstash server address -->
        <destination>10.124.30.1:5600</destination>
        <!-- Log output encoding -->
        <encoder charset="UTF-8"
                 class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "message": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    <appender name="LOGSTASH_ERROR" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- Logstash server address -->
        <destination>10.124.30.1:5601</destination>
        <!-- Only ship ERROR-level events through this appender -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
        <!-- Log output encoding -->
        <encoder charset="UTF-8"
                 class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "message": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    <!-- Logback allows only one <root> logger, so both appenders hang off it;
         the ThresholdFilter above keeps INFO events out of the error stream -->
    <root level="INFO">
        <appender-ref ref="LOGSTASH_SUCCESS" />
        <appender-ref ref="LOGSTASH_ERROR" />
    </root>
</configuration>
```
- Update the logging configuration in application.yml
```yaml
# Logging configuration
logging:
  config: classpath:logback-default.xml # the logback config file shown above
```
- Refactor the project
Replace every place that previously used logService to insert documents into ES by hand with Logstash-based collection.
- Write the Logstash pipeline config (it uses a Ruby-like syntax)
- Start Logstash
```shell
# I keep my own config files in a conf directory, separate from the configs that ship with Logstash
nohup ./logstash -f ../conf/structed-collection-log.conf -w 10 -l /opt/module/weather-collect/log/logstash.log &
# Run in the background and write Logstash's own log to log/logstash.log under the project directory
```
Deploy the Spring Boot project with:
```shell
nohup java -jar ....jar >/dev/null 2>&1 &
```
The rest of this post sets up a local environment for a demo.
ES
(setting up a local ES environment)
```shell
# Edit the config file
vim conf/elasticsearch.yml
network.host: 0.0.0.0  # bind to all interfaces so any host can reach it
# Note: when network.host is anything other than localhost/127.0.0.1, Elasticsearch
# treats the node as production and enforces stricter bootstrap checks, which a test
# machine may not satisfy. Two settings usually need adjusting:
# 1: the JVM startup parameters
vim conf/jvm.options
-Xms128m  # adjust to your machine
-Xmx128m
# 2: the maximum number of memory-map areas (VMAs) a process may create
vim /etc/sysctl.conf
vm.max_map_count=655360
sysctl -p  # apply the change
# Start the ES service
su - elsearch
cd bin
./elasticsearch      # or ./elasticsearch -d to run in the background
# Startup errors (environment: CentOS 6)
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
# Fix: as root, add the matching nofile limits for the elsearch user
vi /etc/security/limits.conf
[2]: max number of threads [1024] for user [elsearch] is too low, increase to at least [4096]
# Fix: as root, edit the config file under limits.d
vi /etc/security/limits.d/90-nproc.conf
# change:
*    soft    nproc    1024
# to:
*    soft    nproc    4096
[3]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
# Fix: CentOS 6 does not support SecComp, and ES 5.2.0 defaults bootstrap.system_call_filter to true
vim config/elasticsearch.yml
# add:
bootstrap.system_call_filter: false
```
Logstash
Step 1: tcp input, no filter
```ruby
# tcp-plug.conf
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 5600
  }
}
output {
  elasticsearch {
    hosts => "http://192.168.32.10:9200"
    index => "structed-collection-success"
  }
}
```
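To smoke-test a tcp input like this without running the Spring Boot app, you can push one newline-delimited JSON event over a socket. Below is a minimal Python sketch; the in-process "fake Logstash" server and its ephemeral port are stand-ins (a real setup would connect to the Logstash host on port 5600):

```python
import json
import socket
import threading

received = []

def fake_logstash(server):
    # Stand-in for the Logstash tcp input: accept one connection and
    # read a single newline-delimited JSON event, as the tcp plugin does.
    conn, _ = server.accept()
    with conn, conn.makefile("r", encoding="utf-8") as reader:
        received.append(json.loads(reader.readline()))

server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port; a real setup targets <logstash-host>:5600
server.listen(1)
t = threading.Thread(target=fake_logstash, args=(server,))
t.start()

# What the logback encoder ships: one JSON object per line, with the
# application's own JSON payload nested inside "message".
event = {"message": json.dumps({"dataCount": 123, "productId": "test22222"})}
with socket.create_connection(server.getsockname()) as client:
    client.sendall((json.dumps(event) + "\n").encode("utf-8"))

t.join()
server.close()
print(received[0]["message"])
```

Note that the payload arrives as a JSON string inside the `message` field, which is exactly the nesting the filter steps below deal with.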
Without a filter, documents land in ES in the following shape (note the format of the message field):
```json
{
  "_index": "structed-collection-success",
  "_type": "doc",
  "_id": "8KnuYHIBPM8qiYY-YIOV",
  "_score": 1,
  "_source": {
    "host": "192.168.32.5",
    "message": " {\"dataCount\":123,\"dataSize\":20.5,\"endTime\":1590735336414,\"latestTimestamp\":1590735336414,\"productId\":\"test22222\",\"useTimeLength\":410}",
    "@timestamp": "2020-05-29T06:55:36.414Z",
    "@version": "1",
    "port": 58959
  }
}
```
Step 2: add a filter
```ruby
filter {
  json {
    source => "message"
  }
}
```
### The resulting format
```json
{
  "_index": "structed-collection-success",
  "_type": "doc",
  "_id": "Eqn0YHIBPM8qiYY-YIQA",
  "_score": 1,
  "_source": {
    "host": "192.168.32.5",
    "@timestamp": "2020-05-29T07:02:09.498Z",
    "dataCount": 123,
    "endTime": 1590735729498,
    "productId": "test22222",
    "@version": "1",
    "message": "{\"dataCount\":123,\"dataSize\":20.5,\"endTime\":1590735729498,\"latestTimestamp\":1590735729498,\"productId\":\"test22222\",\"useTimeLength\":410}",
    "dataSize": 20.5,
    "useTimeLength": 410,
    "port": 59474,
    "latestTimestamp": 1590735729498
  }
}
```
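What the json filter does to that message field can be reproduced in a few lines of plain Python; this is a sketch for intuition, not a model of Logstash internals:

```python
import json

# The _source of the un-filtered document: "message" is itself a JSON string
source = {
    "host": "192.168.32.5",
    "message": "{\"dataCount\":123,\"dataSize\":20.5,\"productId\":\"test22222\"}",
}

# Roughly what `filter { json { source => "message" } }` does:
# parse the string and merge the resulting fields into the event
event = {**source, **json.loads(source["message"])}
print(event["dataCount"], event["dataSize"])  # → 123 20.5
```

The original `message` field survives the merge, which is why the next config removes it explicitly.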
The config after stripping out the message field:
```ruby
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 5600
    codec => json
  }
}
filter {
  json {
    source => "message"
    remove_field => ["message","port","host"]
  }
}
output {
  elasticsearch {
    hosts => "http://192.168.32.10:9200"
    index => "structed-collection-success"
  }
}
```
The resulting JSON:
```json
{
  "_index": "structed-collection-success",
  "_type": "doc",
  "_id": "v6kHYXIBPM8qiYY-R4SF",
  "_score": 1,
  "_source": {
    "@timestamp": "2020-05-29T07:22:48.348Z",
    "productId": "test33333",
    "endTime": 1590736968348,
    "dataCount": 123,
    "dataSize": 20.5,
    "latestTimestamp": 1590736968348,
    "@version": "1",
    "useTimeLength": 410
  }
}
```
Common queries in ES-cerebro
ELK usually means ES + Logstash + Kibana; given how ES is used in production here, we went with cerebro instead of Kibana.
term: exact-match query
```json
{
  "query" : {
    "term" : {
      "@timestamp" : "2020-05-29T13:09:19.099Z"
    }
  }
}
{
  "query" : {
    "terms" : {
      "@timestamp" : ["2020-05-29T13:09:19.099Z","2020-05-28T11:21:14.001Z"]
    }
  }
}
```
range query over a time window
```json
{
  "query": {
    "bool": {
      "filter": {
        "range": {
          "invokeTime": {
            "gt": "1592424431470",
            "lt": "1592424431949"
          }
        }
      }
    }
  }
}
```
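The range bounds above are epoch-millisecond timestamps. A small Python helper makes them readable; the field name `invokeTime` and the bound value come from the query above:

```python
from datetime import datetime, timezone

def to_epoch_ms(dt: datetime) -> int:
    # round() avoids float-truncation off-by-one errors in the conversion
    return round(dt.timestamp() * 1000)

# Decode the lower bound used in the range query above
lower = 1592424431470
dt = datetime.fromtimestamp(lower / 1000, tz=timezone.utc)
print(dt.isoformat())

# Round-trip back to milliseconds
assert to_epoch_ms(dt) == lower
```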
Simple match query
```json
{
  "query": {
    "match": {
      "reason": "重复推送"
    }
  }
}
```
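When issuing these from application code, it is convenient to build the query bodies as plain dictionaries and serialize them. A sketch, reusing the field names and bounds from the examples above (the helper functions are hypothetical, not part of any client library):

```python
import json

def range_query(field: str, gt_ms: int, lt_ms: int) -> dict:
    # Same shape as the range example above; ES accepts the bounds as strings
    return {"query": {"bool": {"filter": {"range": {field: {"gt": str(gt_ms), "lt": str(lt_ms)}}}}}}

def match_query(field: str, text: str) -> dict:
    # Same shape as the match example above
    return {"query": {"match": {field: text}}}

body = range_query("invokeTime", 1592424431470, 1592424431949)
print(json.dumps(body))
```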
Appendix
Structured error-log collection config
```ruby
input {
  tcp {
    type => "error"
    mode => "server"
    host => "0.0.0.0"
    port => 5601
    codec => json
  }
  tcp {
    type => "success"
    mode => "server"
    host => "0.0.0.0"
    port => 5600
    codec => json
  }
}
filter {
  json {
    source => "message"
    remove_field => ["message","port","host"]
    target => "doc"
  }
}
output {
  if [type] == "success" {
    elasticsearch {
      hosts => ["http://ip:port"]
      index => "structed-collection-success"
    }
  }
  if [type] == "error" {
    elasticsearch {
      hosts => ["http://ip:port"]
      index => "structed-collection-error"
    }
  }
}
```
Structured log collection config (clustered ES)
```ruby
input {
  tcp {
    type => "error"
    mode => "server"
    host => "0.0.0.0"
    port => 5601
    codec => json
  }
  tcp {
    type => "success"
    mode => "server"
    host => "0.0.0.0"
    port => 5600
    codec => json
  }
}
filter {
  json {
    source => "message"
    remove_field => ["message","port","host"]
    target => "doc"
  }
}
output {
  if [type] == "success" {
    elasticsearch {
      hosts => ["http://ip:port","http://ip:port","http://ip:port"]
      index => "structed-collection-logstash-success"
    }
  }
  if [type] == "error" {
    elasticsearch {
      hosts => ["http://ip:port","http://ip:port","http://ip:port"]
      index => "structed-collection-logstash-error"
    }
  }
}
```
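The conditional routing in the output block can be sketched in a few lines of Python (index names taken from the config above; the function itself is hypothetical, purely for illustration):

```python
def target_index(event: dict) -> str:
    # Mirrors the `if [type] == "success"` / `if [type] == "error"` branches
    routes = {
        "success": "structed-collection-logstash-success",
        "error": "structed-collection-logstash-error",
    }
    return routes[event["type"]]

print(target_index({"type": "success"}))
print(target_index({"type": "error"}))
```

Because each tcp input tags its events with a distinct `type`, a single Logstash pipeline can fan events out to per-severity indexes.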