Apache Doris Installation and Deployment
Versions:
CentOS 7.6
Apache Doris 0.14.0
Build
Pick a suitable release to download; this walkthrough uses version 0.14.0.
I. Building on CentOS
1 Install dependencies
```sh
sudo yum groupinstall 'Development Tools' && sudo yum install maven cmake byacc flex automake libtool bison binutils-devel zip unzip ncurses-devel curl git wget python2 glibc-static libstdc++-static
```
Upgrade GCC to 7.3.0 or later
```sh
# Check the current GCC version
gcc --version
# Install devtoolset-7
yum -y install centos-release-scl
yum -y install devtoolset-7-gcc devtoolset-7-gcc-c++ devtoolset-7-binutils
scl enable devtoolset-7 bash
# Make the GCC switch persistent across shells
echo "source /opt/rh/devtoolset-7/enable" >>/etc/profile
```
1.1 Upgrade cmake
```sh
cd /opt/software
wget http://www.cmake.org/files/v3.16/cmake-3.16.6.tar.gz
tar xf cmake-3.16.6.tar.gz
cd cmake-3.16.6
./bootstrap
make
make install
```
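The bootstrap build installs into /usr/local by default; a quick check that the new cmake is the one being picked up:
```sh
hash -r                        # drop the shell's cached location of the old cmake
cmake --version | head -n1     # expected: cmake version 3.16.6
which cmake                    # expected: /usr/local/bin/cmake
```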
Install Node.js
```sh
wget https://npm.taobao.org/mirrors/node/v10.14.1/node-v10.14.1-linux-x64.tar.gz
tar -xvf node-v10.14.1-linux-x64.tar.gz
mv node-v10.14.1-linux-x64 node
mv node /usr/local/
# Add environment variables
vim /etc/profile
# Append the following two lines
export NODE_HOME=/usr/local/node
export PATH=$NODE_HOME/bin:$PATH
# Reload the profile
source /etc/profile
# Verify the installation
node -v
npm -v
```
1.2 Configure passwordless SSH
Step 1: Generate a key pair on every node
Run the following on all four machines to generate the public/private key pair. After starting the command below, press Enter three times to accept the defaults.
```sh
cd ~
ssh-keygen -t rsa
```
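Alternatively, the key pair can be generated without the interactive prompts (a sketch using standard ssh-keygen options: empty passphrase, default key path):
```sh
# Same result as above, but non-interactive
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
```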
Step 2: Collect the public keys on one node
All four machines copy their public key to the first server; run the following on each of the four servers:
```sh
ssh-copy-id hadoop01
```
Step 3: Distribute the first server's authentication file to the other servers
Copy the authorized_keys file from the first machine to the other machines; run the following on the first machine:
```sh
scp /root/.ssh/authorized_keys hadoop02:/root/.ssh
scp /root/.ssh/authorized_keys hadoop03:/root/.ssh
scp /root/.ssh/authorized_keys hadoop07:/root/.ssh
```
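A quick check from hadoop01 that passwordless login now works to the other nodes:
```sh
# Each command should print the remote hostname without prompting for a password
for host in hadoop02 hadoop03 hadoop07; do
  ssh -o BatchMode=yes "$host" hostname
done
```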
1.3 Install the MySQL client
```sh
yum install mysql -y
```
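Verify the client is available (on CentOS 7 this package pulls in the MariaDB client, which works fine for connecting to the Doris FE):
```sh
mysql --version
```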
2 Build
2.1 Build Doris
```sh
tar -xvf apache-doris-0.14.0-incubating-src.tar.gz
cd apache-doris-0.14.0-incubating-src
mkdir thirdparty/installed/webroot/
```
Building right away fails because the third-party packages DataTables.zip and s2n cannot be downloaded. Change their download sources in thirdparty/vars.sh:
```shell
# thirdparty/vars.sh
# aws-s2n
#AWS_S2N_DOWNLOAD="https://github.com/awslabs/s2n/archive/v0.10.0.tar.gz" #s2n-tls-0.10.0.tar.gz
#AWS_S2N_NAME="s2n-0.10.0.tar.gz"
#AWS_S2N_SOURCE="s2n-0.10.0"
#AWS_S2N_MD5SUM="9b3b39803b7090c2bd937f9cc73bc03f"
AWS_S2N_DOWNLOAD="https://github.com/awslabs/s2n/archive/v0.10.0.tar.gz" #s2n-tls-0.10.0.tar.gz
AWS_S2N_NAME="s2n-tls-0.10.0.tar.gz"
AWS_S2N_SOURCE="s2n-tls-0.10.0"
AWS_S2N_MD5SUM="345aa5d2f9e82347bb3e568c22104d0e"
# datatables, bootstrap 3 and jQuery 3
#DATATABLES_DOWNLOAD="https://datatables.net/download/builder?bs-3.3.7/jq-3.3.1/dt-1.10.23"
#DATATABLES_NAME="DataTables.zip"
#DATATABLES_SOURCE="DataTables-1.10.23"
# DATATABLES_MD5SUM="f7f18a9f39d692ec33b5536bff617232"
DATATABLES_DOWNLOAD="https://datatables.net/download/builder?dt/dt-1.11.3"
DATATABLES_NAME="DataTables.zip"
DATATABLES_SOURCE="DataTables"
DATATABLES_MD5SUM="ebb908d3b6ffff355fbbe59f685f4785"
```
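Optionally, once build.sh has downloaded the third-party sources, the fetched archives can be checked against the MD5 values configured above (assuming the default download directory thirdparty/src used by the third-party build scripts):
```sh
# Compare the output with AWS_S2N_MD5SUM and DATATABLES_MD5SUM from thirdparty/vars.sh
md5sum thirdparty/src/s2n-tls-0.10.0.tar.gz thirdparty/src/DataTables.zip
```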
Run the build:
```sh
sh build.sh
```
The first build fails with an error that /opt/software/apache-doris-0.14.0-incubating-src/thirdparty/installed/webroot/* does not exist.
Copy /opt/software/apache-doris-0.14.0-incubating-src/webroot into /opt/software/apache-doris-0.14.0-incubating-src/thirdparty/installed/, then delete the output directory and rebuild.
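A minimal sketch of that copy-and-clean step, using the paths above (the target directory is the one created by the earlier mkdir):
```sh
cd /opt/software/apache-doris-0.14.0-incubating-src
# Put the webroot files where the build expects them
cp -r webroot/* thirdparty/installed/webroot/
# Remove the partial build output before rebuilding
rm -rf output
```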
```sh
sh build.sh
```
After the build finishes, a new output directory appears, containing:
```shell
[root@hadoop02 apache-doris-0.14.0-incubating-src]# cd output/
[root@hadoop02 output]# ll
total 12
drwxr-xr-x 6 root root 4096 Nov 15 16:21 be
drwxr-xr-x 7 root root 4096 Nov 15 16:21 fe
drwxr-xr-x 4 root root 4096 Nov 15 16:21 udf
```
This completes the Doris build.
2.2 Build the Broker
The source root contains an fs_brokers directory:
```sh
cd /opt/software/apache-doris-0.14.0-incubating-src/fs_brokers/apache_hdfs_broker
sh build.sh
```
After the command completes, an output directory appears here as well, containing an apache_hdfs_broker directory.
This completes the Broker build.
II. Deployment
Cluster plan
| hadoop01 | hadoop02 | hadoop03 | hadoop07 |
| --- | --- | --- | --- |
| FE (Leader) | FE (Follower) | FE (Follower) | FE (Observer) |
| BE | BE | BE | BE |
| Broker | Broker | Broker | Broker |
1 Deploy
First copy the build output to its target location. All four machines need a copy, and the services on each machine have to be started individually.
```shell
mv /opt/software/apache-doris-0.14.0-incubating-src/output/ /opt/module/doris14/
cd /opt/module/doris14/
```
FE
1. Configure the metadata storage path
```sh
cd fe
vi conf/fe.conf
###################################### fe.conf changes #########################################
http_port = 8300
rpc_port = 9200
query_port = 9300
edit_log_port = 9100
priority_networks = 172.16.184.12*/24   # adjust to the IP of the host being deployed
meta_dir = /opt/module/doris14/fe/doris-meta
###############################################################################
# 2. Create the metadata storage directory
mkdir /opt/module/doris14/fe/doris-meta
# 3. Start the FE
sh bin/start_fe.sh --daemon
# 4. Check the process
jps
3764 Jps
2940 PaloFe
# BE
# 1. Configure the data storage paths
cd ../be
vi conf/be.conf
###################################### be.conf changes #########################################
be_port = 9600
be_rpc_port = 9700
webserver_port = 8400
heartbeat_service_port = 9500
brpc_port = 8600
priority_networks = 172.16.184.12*/24   # adjust to the IP of the host being deployed
storage_root_path=/opt/module/doris14/be/doris_storage1;/opt/module/doris14/be/doris_storage2
###############################################################################
# 2. Create the data storage directories
mkdir /opt/module/doris14/be/doris_storage1
mkdir /opt/module/doris14/be/doris_storage2
# Register the BEs
# 1. Connect to the Doris FE with the MySQL client
mysql -hhadoop01 -P 9300 -uroot
# 2. Register the BE nodes
ALTER SYSTEM ADD BACKEND "hadoop01:9500";
ALTER SYSTEM ADD BACKEND "hadoop02:9500";
ALTER SYSTEM ADD BACKEND "hadoop03:9500";
ALTER SYSTEM ADD BACKEND "hadoop07:9500";
# Start the BEs
# 1. Copy the deployment to the other nodes
scp -r /opt/module/doris14 hadoop02:/opt/module
scp -r /opt/module/doris14 hadoop03:/opt/module
scp -r /opt/module/doris14 hadoop07:/opt/module
cd /opt/module/doris14/be
# 2. Start (run on every BE node)
sh bin/start_be.sh --daemon
# Check BE status: run the following in the MySQL session used to register the BEs
SHOW PROC '/backends';
# Each BE should show Alive = true:
+-----------+-----------------+----------------+-----------------+---------------+--------+----------+----------+---------------------+---------------+-------+----------------------+-----------------------+-----------+------------------+---------------+---------------+---------+----------------+--------+---------+----------------------------------------+
| BackendId | Cluster | IP | HostName | HeartbeatPort | BePort | HttpPort | BrpcPort | LastStartTime | LastHeartbeat | Alive | SystemDecommissioned | ClusterDecommissioned | TabletNum | DataUsedCapacity | AvailCapacity | TotalCapacity | UsedPct | MaxDiskUsedPct | ErrMsg | Version | Status |
+-----------+-----------------+----------------+-----------------+---------------+--------+----------+----------+---------------------+---------------+-------+----------------------+-----------------------+-----------+------------------+---------------+---------------+---------+----------------+--------+---------+----------------------------------------+
| 10004 | default_cluster | 172.16.184.125 | hadoop01.xt.com | 9500 | 9600 | 8400 | 8600 | 2021-11-16 15:08:35 | NULL | true | false | false | 0 | .000 | 339.147 GB | 393.471 GB | 13.81 % | 13.81 % | | | {"lastSuccessReportTabletsTime":"N/A"} |
| 10002 | default_cluster | 172.16.184.126 | hadoop02.xt.com | 9500 | 9600 | 8400 | 8600 | 2021-11-16 15:12:41 | NULL | true | false | false | 0 | .000 | 277.982 GB | 393.471 GB | 29.35 % | 29.35 % | | | {"lastSuccessReportTabletsTime":"N/A"} |
| 10003 | default_cluster | 172.16.184.127 | hadoop03.xt.com | 9500 | 9600 | 8400 | 8600 | 2021-11-16 15:13:14 | NULL | true | false | false | 0 | .000 | 327.646 GB | 393.471 GB | 16.73 % | 16.73 % | | | {"lastSuccessReportTabletsTime":"N/A"} |
| 11001 | default_cluster | 172.16.184.152 | hadoop07.xt.com | 9500 | 9600 | 8400 | 8600 | NULL | NULL | true | false | false | 0 | .000 | 1.000 B | .000 | 0.00 % | 0.00 % | | | {"lastSuccessReportTabletsTime":"N/A"} |
+-----------+-----------------+----------------+-----------------+---------------+--------+----------+----------+---------------------+---------------+-------+----------------------+-----------------------+-----------+------------------+---------------+---------------+---------+----------------+--------+---------+----------------------------------------+
```
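The same check can be run non-interactively from any node with the MySQL client (a sketch assuming the client from step 1.3 and the query_port 9300 configured above):
```sh
# Print HostName/Alive pairs for every registered BE; all four should report Alive: true
mysql -hhadoop01 -P9300 -uroot -e "SHOW PROC '/backends'\G" | grep -E 'HostName|Alive'
```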
2 Configure high availability
FE high availability is achieved by scaling the FE role out to three or more nodes.
```sh
# Log in to the master FE with the MySQL client
mysql -hhadoop01 -P 9300 -uroot
SHOW PROC '/frontends';
# Add the FE nodes
ALTER SYSTEM ADD FOLLOWER "hadoop02:9100";
ALTER SYSTEM ADD FOLLOWER "hadoop03:9100";
ALTER SYSTEM ADD OBSERVER "hadoop07:9100";
```
On hadoop02, run the following under /opt/module/doris14/fe:
```sh
./bin/start_fe.sh --helper hadoop01:9100 --daemon
```
On hadoop03, run the following under /opt/module/doris14/fe:
```sh
./bin/start_fe.sh --helper hadoop02:9100 --daemon
```
On hadoop07, run the following under /opt/module/doris14/fe:
```sh
./bin/start_fe.sh --helper hadoop01:9100 --daemon
```
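Since passwordless SSH was configured in step 1.2, the three first-time starts can also be issued from hadoop01 in one place (a sketch mirroring the helper choices above; --helper must point at an FE that is already part of the cluster):
```sh
# First start of each newly added FE, driven from hadoop01 over SSH
ssh hadoop02 "cd /opt/module/doris14/fe && ./bin/start_fe.sh --helper hadoop01:9100 --daemon"
ssh hadoop03 "cd /opt/module/doris14/fe && ./bin/start_fe.sh --helper hadoop02:9100 --daemon"
ssh hadoop07 "cd /opt/module/doris14/fe && ./bin/start_fe.sh --helper hadoop01:9100 --daemon"
```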
Connect to any running FE with the MySQL client and execute SHOW PROC '/frontends'; to see the FEs that have joined the cluster and their roles.
```sh
mysql -hhadoop01 -P 9300 -uroot
SHOW PROC '/frontends';
+-----------------------------------+----------------+-----------------+-------------+----------+-----------+---------+----------+----------+------------+------+-------+-------------------+---------------+----------+--------+---------+
| Name | IP | HostName | EditLogPort | HttpPort | QueryPort | RpcPort | Role | IsMaster | ClusterId | Join | Alive | ReplayedJournalId | LastHeartbeat | IsHelper | ErrMsg | Version |
+-----------------------------------+----------------+-----------------+-------------+----------+-----------+---------+----------+----------+------------+------+-------+-------------------+---------------+----------+--------+---------+
| 172.16.184.127_9100_1637052142079 | 172.16.184.127 | hadoop03.xt.com | 9100 | 8300 | 9300 | 9200 | FOLLOWER | false | 1427440867 | true | true | 3133 | NULL | true | | NULL |
| 172.16.184.125_9100_1637046027642 | 172.16.184.125 | hadoop01.xt.com | 9100 | 8300 | 9300 | 9200 | FOLLOWER | true | 1427440867 | true | true | 3133 | NULL | true | | NULL |
| 172.16.184.152_9100_1637052129601 | 172.16.184.152 | hadoop07.xt.com | 9100 | 8300 | 9300 | 9200 | OBSERVER | false | 1427440867 | true | true | 3133 | NULL | false | | NULL |
| 172.16.184.126_9100_1637049113284 | 172.16.184.126 | hadoop02.xt.com | 9100 | 8300 | 9300 | 9200 | FOLLOWER | false | 1427440867 | true | true | 3135 | NULL | true | | NULL |
+-----------------------------------+----------------+-----------------+-------------+----------+-----------+---------+----------+----------+------------+------+-------+-------------------+---------------+----------+--------+---------+
```
3 Broker
- Copy the corresponding Broker directory from the output directory of the fs_broker build to every node where it needs to be deployed. It is recommended to keep it at the same level as the BE or FE directory.
- In this setup, the compiled Broker is copied to /opt/module/doris14/ on every host.
- Modify the Broker configuration
The configuration files under broker/conf can be adjusted as needed. Here, core-site.xml and hdfs-site.xml are taken from the /etc/hadoop/conf.cloudera.hdfs directory on hadoop04 and placed into broker/conf (see the sketch below).
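A sketch of that copy; the broker path /opt/module/doris14/apache_hdfs_broker is an assumption based on where the build output was placed above:
```sh
# Assumed Broker deployment directory on each node
BROKER_CONF=/opt/module/doris14/apache_hdfs_broker/conf
# Pull the HDFS client configuration from hadoop04 into the Broker conf directory
scp hadoop04:/etc/hadoop/conf.cloudera.hdfs/core-site.xml "$BROKER_CONF"/
scp hadoop04:/etc/hadoop/conf.cloudera.hdfs/hdfs-site.xml "$BROKER_CONF"/
```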
- Start the Broker
Start the Broker on every Broker node:
```sh
sh bin/start_broker.sh --daemon
```
- Add the Brokers
To let the Doris FEs and BEs know which nodes the Brokers run on, add the Broker node list with a SQL command.
Connect to a running FE with the MySQL client and execute:
```sql
ALTER SYSTEM ADD BROKER broker_name "host1:port1","host2:port2",...;
```
Here host is the IP of the node the Broker runs on, and port is the broker_ipc_port from the Broker configuration file.
```sql
ALTER SYSTEM ADD BROKER broker_name "hadoop01:8000","hadoop02:8000","hadoop03:8000","hadoop07:8000";
```
- Check Broker status
Connect to any running FE with the MySQL client and run the following to check the Broker status:
```sql
SHOW PROC "/brokers";
```
This completes the full Doris installation.