## 1. Environment preparation
Prometheus download:
https://github.com/prometheus/prometheus/releases/download/v2.52.0-rc.1/prometheus-2.52.0-rc.1.windows-amd64.zip
Grafana download:
https://dl.grafana.com/enterprise/release/grafana-enterprise-10.4.2.windows-amd64.zip
windows_exporter download:
https://github.com/prometheus-community/windows_exporter/releases/download/v0.25.1/windows_exporter-0.25.1-amd64.msi
Flume download:
https://archive.apache.org/dist/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz
JDK (Linux) download:
https://cola-yunos-1305721388.cos.ap-guangzhou.myqcloud.com/20210813/jdk-8u401-linux-x64.tar.gz
MySQL downloads:
https://downloads.mysql.com/archives/get/p/23/file/mysql-community-server-5.7.26-1.el7.x86_64.rpm
https://downloads.mysql.com/archives/get/p/23/file/mysql-community-client-5.7.26-1.el7.x86_64.rpm
https://downloads.mysql.com/archives/get/p/23/file/mysql-community-common-5.7.26-1.el7.x86_64.rpm
https://downloads.mysql.com/archives/get/p/23/file/mysql-community-libs-5.7.26-1.el7.x86_64.rpm
JDBC driver download:
https://downloads.mysql.com/archives/get/p/3/file/mysql-connector-java-5.1.16.tar.gz
IDEA download:
https://www.jetbrains.com/zh-cn/idea/download/download-thanks.html?platform=windows
Maven download:
https://dlcdn.apache.org/maven/maven-3/3.8.8/binaries/apache-maven-3.8.8-bin.zip
## 2. Building the monitoring dashboard
### 1. Unzip Prometheus and run prometheus.exe
### 2. Open the web UI (http://127.0.0.1:9090 by default)
### 3. Configure Windows server monitoring
Double-click windows_exporter-0.25.1-amd64.msi to install it, then open the following address in a browser:
http://10.225.193.16:9182/metrics. If a plain-text metrics page appears, the exporter is running.
### 4. Edit prometheus.yml (in the unzipped Prometheus directory)
Append the following under scrape_configs at the end of the file:
```yaml
  - job_name: "windows"
    file_sd_configs:
      - refresh_interval: 15s
        files:
          - ".\\windows.yml"
```
### 5. Create windows.yml in the same directory
```yaml
- targets: ["127.0.0.1:9182"]
  labels:
    instance: 127.0.0.1
    serverName: 'windows server'
```
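The wiring between the two files can be sketched offline. This is only a sanity check, not part of the setup itself; the file names and the scrape-job name come from the steps above, and the sketch uses a forward-slash path (on Windows the file uses `.\\windows.yml`):

```shell
# Sketch: recreate the two config fragments and check they reference each other.
cat > windows.yml <<'EOF'
- targets: ["127.0.0.1:9182"]
  labels:
    instance: 127.0.0.1
    serverName: 'windows server'
EOF

cat >> prometheus.yml <<'EOF'
  - job_name: "windows"
    file_sd_configs:
      - refresh_interval: 15s
        files:
          - "./windows.yml"
EOF

# The scrape job must point at the file_sd file that defines the targets.
grep -q 'windows.yml' prometheus.yml && grep -q '9182' windows.yml && echo OK
```

If the Prometheus zip includes promtool, `promtool check config prometheus.yml` gives a stricter validation of the final file.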
### 6. Restart Prometheus (close its console window, then double-click prometheus.exe again)
Open http://127.0.0.1:9090/targets?search= in a browser to confirm the new target.
### 7. Unzip Grafana and run grafana-server.exe in the bin directory
### 8. Open the web UI
http://127.0.0.1:3000
The default username and password are both admin
### 9. Add a data source

Add Prometheus (http://127.0.0.1:9090) as a Grafana data source; once it connects, the dashboard setup is complete.
## 3. Flume log aggregation setup
### 1. Create directories
Create the software and module directories under /opt:
```shell
mkdir /opt/module /opt/software
```
### 2. Upload the files with Xftp
### 3. Extract the JDK and Flume archives into /opt/module
```shell
tar -zxf jdk-8u401-linux-x64.tar.gz -C /opt/module/
tar -zxf apache-flume-1.7.0-bin.tar.gz -C /opt/module/
```
### 4. Rename the installation directories
```shell
mv jdk1.8.0_401/ jdk
mv apache-flume-1.7.0-bin/ flume
```
### 5. Configure the environment
#### A) JDK environment
```shell
[root@localhost module]# vi /etc/profile
```
Append the following:
```shell
export JAVA_HOME=/opt/module/jdk
export PATH=$PATH:$JAVA_HOME/bin # add the JDK bin directory to PATH
```

#### B) Flume environment:
```shell
# Rename the Flume configuration template
[root@localhost conf]# mv flume-env.sh.template flume-env.sh
```

```shell
# Edit the configuration file
[root@localhost conf]# vi flume-env.sh
```

### 6. Reload the environment variables
```shell
[root@localhost module]# source /etc/profile
```
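The effect of the profile change can be sketched without touching the real /etc/profile. This is a minimal check, assuming the JDK install path used above; on the real machine, `java -version` should print the 1.8 runtime afterwards:

```shell
# Simulate the profile edit in a throwaway file, then verify PATH picks it up.
# /opt/module/jdk is the install path assumed in the steps above.
cat > /tmp/profile-test.sh <<'EOF'
export JAVA_HOME=/opt/module/jdk
export PATH=$PATH:$JAVA_HOME/bin
EOF
. /tmp/profile-test.sh
echo "$PATH" | grep -q '/opt/module/jdk/bin' && echo PATH-OK
```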

## 4. Ganglia monitoring setup
### 1. Install the required packages
```shell
[root@localhost ~]# yum -y install httpd php
[root@localhost ~]# yum -y install rrdtool perl-rrdtool rrdtool-devel
[root@localhost ~]# yum -y install apr-devel
[root@localhost ~]# yum install -y epel-release
[root@localhost ~]# yum -y install ganglia-gmetad
[root@localhost ~]# yum -y install ganglia-web
[root@localhost ~]# yum install -y ganglia-gmond
```
### 2. Install telnet
```shell
[root@localhost ~]# yum install telnet -y
```

### 3. Edit the configuration files
```shell
[root@localhost ~]# vi /etc/httpd/conf.d/ganglia.conf
```
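The original notes do not show what to change in this file. The stock /etc/httpd/conf.d/ganglia.conf restricts the web UI to localhost; a common edit for a lab setup (an assumption here, adjust to your own security needs) is to open it up:

```
<Location /ganglia>
  # Default is "Require local"; allow other hosts to reach the web UI
  Require all granted
</Location>
```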

```shell
[root@localhost ~]# vi /etc/ganglia/gmetad.conf
```
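In gmetad.conf the key line is data_source. Pointing it at this host, with a cluster name of "master" (an assumption that matches the host and cluster names in the Ganglia URLs later in this guide):

```
data_source "master" 192.168.229.160:8649
```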

```shell
# Edit the hosts file
[root@localhost ~]# vi /etc/hosts
```
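A typical entry maps the VM's IP to its hostname (IP and hostname taken from this guide):

```
192.168.229.160 master
```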

```shell
[root@localhost ~]# vi /etc/ganglia/gmond.conf
```
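In gmond.conf the usual edits are the cluster name and the send/receive channels. The values below are assumptions chosen to match the gmetad data_source above; the default multicast send channel (mcast_join) is replaced with a unicast one:

```
cluster {
  name = "master"           # must match the data_source name in gmetad.conf
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
udp_send_channel {
  # comment out the default mcast_join and send directly to the gmetad host
  host = 192.168.229.160
  port = 8649
  ttl = 1
}
udp_recv_channel {
  port = 8649
}
```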



## 5. Disable SELinux
```shell
[root@localhost ~]# vi /etc/selinux/config
```
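In /etc/selinux/config, set the following line (it takes effect after the reboot in a later step):

```
SELINUX=disabled
```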

### 1. Enable the services at boot
```shell
[root@master ~]# systemctl enable httpd && systemctl enable gmetad && systemctl enable gmond
```

### 2. Disable the firewall
```shell
[root@master ~]# systemctl stop firewalld && systemctl disable firewalld
```
### 3. Grant permissions
```shell
[root@master ~]# chmod -R 777 /var/lib/ganglia
```

### 4. Reboot the VM
```shell
[root@master ~]# reboot
```
### 5. Access Ganglia
http://192.168.229.160/ganglia/

### 6. Modify the Flume configuration
```shell
[root@master ~]# vi /opt/module/flume/conf/flume-env.sh
```
Add the following:
```shell
JAVA_OPTS="-Dflume.monitoring.type=ganglia
-Dflume.monitoring.hosts=192.168.229.160:8649
-Xms100m
-Xmx200m"
# Replace 192.168.229.160 with your gmetad host's IP. Do not put a comment
# inside the quoted string, or it becomes part of the JVM arguments.
```

### 7. Create the job file
Create a job directory under /opt/module/flume.

In that directory, create a file named flume-telnet-logger.conf and edit it as follows:
```shell
[root@master job]# vi flume-telnet-logger.conf
```
```properties
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```
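Before starting the agent, the source-to-channel-to-sink wiring in the file can be sanity-checked with a quick sketch (the property keys below come from the config above; this only checks the text of the file, not Flume's behavior):

```shell
# Recreate the agent definition and assert source and sink are bound to the channel.
cat > flume-telnet-logger.conf <<'EOF'
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444
a1.sinks.k1.type = logger
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
EOF
grep -q 'a1.sources.r1.channels = c1' flume-telnet-logger.conf &&
grep -q 'a1.sinks.k1.channel = c1' flume-telnet-logger.conf && echo WIRED
```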
### 8. Start the Flume agent
```shell
[root@master ~]# cd /opt/module/flume
[root@master flume]# bin/flume-ng agent --conf conf/ --name a1 --conf-file job/flume-telnet-logger.conf -Dflume.root.logger=INFO,console -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=192.168.229.160:8649
```

### 9. Open a second Xshell session and send data
```shell
[root@master ~]# telnet localhost 44444
```

### 10. Check the web monitoring page
http://192.168.229.160/ganglia/?r=hour&cs=&ce=&c=master&h=master&tab=m&vn=&tz=&hide-hf=false&m=load_one&sh=1&z=small&hc=4&host_regex=&max_graphs=0&s=by+name

## 6. Flume log aggregation (MySQL)
### 1. Upload the files with Xftp

### 2. Remove the mariadb dependency

The system ships with a mariadb-libs package that conflicts with MySQL and must be removed. Check the exact installed version first with `rpm -qa | grep mariadb`:
```shell
[root@master ~]# rpm -e mariadb-libs-5.5.65-1.el7.x86_64 --nodeps
```

Removed.
### 3. Install the MySQL server
```shell
[root@master software]# rpm -ivh mysql-community-common-5.7.26-1.el7.x86_64.rpm --nodeps
[root@master software]# rpm -ivh mysql-community-libs-5.7.26-1.el7.x86_64.rpm --nodeps
[root@master software]# rpm -ivh mysql-community-client-5.7.26-1.el7.x86_64.rpm --nodeps
[root@master software]# rpm -ivh mysql-community-server-5.7.26-1.el7.x86_64.rpm --nodeps
```

### 4. Start the MySQL service
```shell
[root@master software]# systemctl start mysqld
```

Started.
### 5. Find the initial MySQL password
```shell
[root@master software]# grep "password" /var/log/mysqld.log
```
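The matching log line has the shape sketched below; the password value here is a made-up example, not a real one. The initial password is the token after the final ": ":

```shell
# Sketch: extract the temporary password from a sample log line.
cat > /tmp/mysqld.log <<'EOF'
2024-01-01T00:00:00.000000Z 1 [Note] A temporary password is generated for root@localhost: Abc123!x
EOF
# The password is the last whitespace-separated field on the line.
grep 'temporary password' /tmp/mysqld.log | awk '{print $NF}'
```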

### 6. Log in to MySQL
```shell
[root@master ~]# mysql -uroot -p
# Enter the initial password, e.g. >q.r#A9(f&4<
```

### 7. Relax the password policy
```sql
set global validate_password_policy=LOW;
set global validate_password_length=4;
```

### 8. Change the password (set to root here)
```sql
alter user 'root'@'localhost' identified by 'root';
```

### 9. Enable remote access
```sql
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;
```

### 10、使用navicat工具远程连接MySQL
确保MySQL服务正在运行,使用navicat进行远程访问:

连接成功
## 7. Building the MySQL source jar
### 1. Install Maven: unzip it on the Windows machine, then add the Aliyun mirror to settings.xml in the conf directory
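The mirror entry goes inside the `<mirrors>` section of settings.xml; the URL below is Aliyun's standard public repository:

```xml
<mirror>
    <id>aliyunmaven</id>
    <mirrorOf>*</mirrorOf>
    <name>Aliyun public repository</name>
    <url>https://maven.aliyun.com/repository/public</url>
</mirror>
```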

### 2. Open IDEA and create a new Maven project

Fill in the project details; if the machine has no JDK installed, install one first.



### 3. Import dependencies
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.example</groupId>
    <artifactId>mysql_sourcde</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.flume</groupId>
            <artifactId>flume-ng-core</artifactId>
            <version>1.6.0</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.16</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.12</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.12</version>
        </dependency>
    </dependencies>
</project>
```
Reload the Maven project

### 4. Create the directory structure


### 5. Create the resources folder


### 6. Create two resource files: jdbc.properties and log4j.properties


### 7. Add the configuration
jdbc.properties:
```properties
dbDriver=com.mysql.jdbc.Driver
dbUrl=jdbc:mysql://192.168.229.160:3306/mysqlsource?useUnicode=true&characterEncoding=utf-8
dbUser=root
dbPassword=root
```
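A quick sketch of how this file is consumed: each line is split on the first `=` (note there must be no whitespace inside dbUrl, or the JDBC URL breaks):

```shell
# Recreate the file and pull one value out of it, the way a properties
# loader would (keys split on '=').
cat > jdbc.properties <<'EOF'
dbDriver=com.mysql.jdbc.Driver
dbUrl=jdbc:mysql://192.168.229.160:3306/mysqlsource?useUnicode=true&characterEncoding=utf-8
dbUser=root
dbPassword=root
EOF
grep '^dbUser=' jdbc.properties | cut -d= -f2
```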
log4j.properties:
```properties
#--------console-----------
log4j.rootLogger=info,myconsole,myfile
log4j.appender.myconsole=org.apache.log4j.ConsoleAppender
log4j.appender.myconsole.layout=org.apache.log4j.SimpleLayout
#log4j.appender.myconsole.layout.ConversionPattern=%d [%t] %-5p [%c] - %m%n
#log4j.rootLogger=error,myfile
log4j.appender.myfile=org.apache.log4j.DailyRollingFileAppender
log4j.appender.myfile.File=/tmp/flume.log
log4j.appender.myfile.layout=org.apache.log4j.PatternLayout
log4j.appender.myfile.layout.ConversionPattern=%d [%t] %-5p [%c] - %m%n
```
### 8. Right-click the java directory and create a package

### 9. Create two classes in the package


SQLSource:
```java
package com.nuit.source;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.PollableSource;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.SimpleEvent;
import org.apache.flume.source.AbstractSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.text.ParseException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
public class SQLSource extends AbstractSource implements Configurable, PollableSource {
    //logger
    private static final Logger LOG = LoggerFactory.getLogger(SQLSource.class);
    //the SQL helper
    private SQLSourceHelper sqlSourceHelper;

    @Override
    public long getBackOffSleepIncrement() {
        return 0;
    }

    @Override
    public long getMaxBackOffSleepInterval() {
        return 0;
    }

    @Override
    public void configure(Context context) {
        try {
            //initialize the helper
            sqlSourceHelper = new SQLSourceHelper(context);
        } catch (ParseException e) {
            e.printStackTrace();
        }
    }

    @Override
    public Status process() throws EventDeliveryException {
        try {
            //query the table
            List<List<Object>> result = sqlSourceHelper.executeQuery();
            //collection of events
            List<Event> events = new ArrayList<>();
            //event header map
            HashMap<String, String> header = new HashMap<>();
            //if rows came back, wrap each one in an event
            if (!result.isEmpty()) {
                List<String> allRows = sqlSourceHelper.getAllRows(result);
                Event event = null;
                for (String row : allRows) {
                    event = new SimpleEvent();
                    event.setBody(row.getBytes());
                    event.setHeaders(header);
                    events.add(event);
                }
                //write the events to the channel
                this.getChannelProcessor().processEventBatch(events);
                //update the offset stored in the status table
                sqlSourceHelper.updateOffset2DB(result.size());
            }
            //wait before the next poll
            Thread.sleep(sqlSourceHelper.getRunQueryDelay());
            return Status.READY;
        } catch (InterruptedException e) {
            LOG.error("Error processing row", e);
            return Status.BACKOFF;
        }
    }

    @Override
    public synchronized void stop() {
        LOG.info("Stopping sql source {} ...", getName());
        try {
            //release resources
            sqlSourceHelper.close();
        } finally {
            super.stop();
        }
    }
}
```
SQLSourceHelper:
```java
package com.nuit.source;
import org.apache.flume.Context;
import org.apache.flume.conf.ConfigurationException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.sql.*;
import java.text.ParseException;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
public class SQLSourceHelper {
    private static final Logger LOG = LoggerFactory.getLogger(SQLSourceHelper.class);
    private int runQueryDelay, //interval between two queries
            startFrom, //starting id
            currentIndex, //current id
            recordSixe = 0, //number of rows returned by each query
            maxRow; //maximum rows per query
    private String table, //table to operate on
            columnsToSelect, //columns requested by the user
            customQuery, //query supplied by the user
            query, //query that gets built
            defaultCharsetResultSet; //result-set charset
    //context, used to read the configuration
    private Context context;

    //default values; can be overridden in the flume job configuration
    private static final int DEFAULT_QUERY_DELAY = 10000;
    private static final int DEFAULT_START_VALUE = 0;
    private static final int DEFAULT_MAX_ROWS = 2000;
    private static final String DEFAULT_COLUMNS_SELECT = "*";
    private static final String DEFAULT_CHARSET_RESULTSET = "UTF-8";
    private static Connection conn = null;
    private static PreparedStatement ps = null;
    private static String connectionURL, connectionUserName, connectionPassword;

    //load static resources
    static {
        Properties p = new Properties();
        try {
            p.load(SQLSourceHelper.class.getClassLoader().getResourceAsStream("jdbc.properties"));
            connectionURL = p.getProperty("dbUrl");
            connectionUserName = p.getProperty("dbUser");
            connectionPassword = p.getProperty("dbPassword");
            Class.forName(p.getProperty("dbDriver"));
        } catch (IOException | ClassNotFoundException e) {
            LOG.error(e.toString());
        }
    }

    //open a JDBC connection
    private static Connection InitConnection(String url, String user, String pw) {
        try {
            Connection conn = DriverManager.getConnection(url, user, pw);
            if (conn == null)
                throw new SQLException();
            return conn;
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return null;
    }

    //constructor
    SQLSourceHelper(Context context) throws ParseException {
        //keep the context
        this.context = context;
        //parameters with defaults: read from the flume job configuration, fall back to defaults
        this.columnsToSelect = context.getString("columns.to.select", DEFAULT_COLUMNS_SELECT);
        this.runQueryDelay = context.getInteger("run.query.delay", DEFAULT_QUERY_DELAY);
        this.startFrom = context.getInteger("start.from", DEFAULT_START_VALUE);
        this.defaultCharsetResultSet = context.getString("default.charset.resultset", DEFAULT_CHARSET_RESULTSET);
        //parameters without defaults: read from the flume job configuration
        this.table = context.getString("table");
        this.customQuery = context.getString("custom.query");
        connectionURL = context.getString("connection.url");
        connectionUserName = context.getString("connection.user");
        connectionPassword = context.getString("connection.password");
        conn = InitConnection(connectionURL, connectionUserName, connectionPassword);
        //validate the configuration; throw if a mandatory parameter without a default is missing
        checkMandatoryProperties();
        //get the current id
        currentIndex = getStatusDBIndex(startFrom);
        //build the query
        query = buildQuery();
    }

    //validate the configuration (table, query and connection parameters)
    private void checkMandatoryProperties() {
        if (table == null) {
            throw new ConfigurationException("property table not set");
        }
        if (connectionURL == null) {
            throw new ConfigurationException("connection.url property not set");
        }
        if (connectionUserName == null) {
            throw new ConfigurationException("connection.user property not set");
        }
        if (connectionPassword == null) {
            throw new ConfigurationException("connection.password property not set");
        }
    }

    //build the sql statement
    private String buildQuery() {
        String sql = "";
        //get the current id
        currentIndex = getStatusDBIndex(startFrom);
        LOG.info(currentIndex + "");
        if (customQuery == null) {
            sql = "SELECT " + columnsToSelect + " FROM " + table;
        } else {
            sql = customQuery;
        }
        StringBuilder execSql = new StringBuilder(sql);
        //use id as the offset
        if (!sql.contains("where")) {
            execSql.append(" where ");
            execSql.append("id").append(">").append(currentIndex);
            return execSql.toString();
        } else {
            int length = execSql.toString().length();
            return execSql.toString().substring(0, length - String.valueOf(currentIndex).length()) + currentIndex;
        }
    }

    //run the query
    List<List<Object>> executeQuery() {
        try {
            //rebuild the sql on every run, because the offset id changes
            customQuery = buildQuery();
            //collection holding the results
            List<List<Object>> results = new ArrayList<>();
            //re-prepare the statement each run, since the query text changes
            ps = conn.prepareStatement(customQuery);
            ResultSet result = ps.executeQuery();
            while (result.next()) {
                //collection holding one row (multiple columns)
List