Detailed steps for deploying a ClickHouse cluster on Ubuntu 22.04
1. Environment Preparation
OS version: Ubuntu 22.04.5 LTS
node1: 10.0.0.41
node2: 10.0.0.42
node3: 10.0.0.43
2. ClickHouse Overview (from the official documentation)
ClickHouse® is an open-source, column-oriented database management system for online analytical processing (OLAP) that can analyze huge volumes of data in real time.
Key characteristics:
1. Extreme performance: columnar storage, high compression ratios, multi-threaded execution, and vectorized processing give ClickHouse sub-second query latency over billions of rows.
2. Efficient storage: data is stored column by column and can be compressed with a choice of codecs, keeping disk usage low even for very large datasets.
3. Rich table engines: the MergeTree family is used most often and supports partitioning, primary/sorting keys, background merges, TTL policies, and replication (ReplicatedMergeTree); see the sketch after this list.
4. Distributed architecture: ClickHouse scales horizontally; sharding and replication combine into highly available, high-throughput clusters.
5. Multiple interfaces: native TCP protocol, HTTP API, JDBC/ODBC, and SDKs for many languages (Python, Go, Java, and more).
6. Typical use cases recommended by the official documentation:
log analysis
business reporting
monitoring and metrics systems
clickstream analysis
BI analytics
real-time data warehouses
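To make point 3 concrete, below is a minimal sketch of a MergeTree table that combines a partition key, a sorting key, and a TTL. The table name, columns, and the 30-day TTL are illustrative assumptions rather than part of the original deployment; the statement can be run through clickhouse-client once the cluster built in the following sections is up.
# Hypothetical example: daily-partitioned access logs that expire 30 days after the event.
root@node-exporter41:~# clickhouse-client --host 10.0.0.41 --query "
CREATE TABLE IF NOT EXISTS access_log
(
    event_time DateTime,
    user_id    UInt64,
    url        String
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(event_time)  -- one partition per day
ORDER BY (user_id, event_time)       -- the sorting (primary) key
TTL event_time + INTERVAL 30 DAY     -- rows are removed 30 days after event_time
"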
3. Deploy the ZooKeeper Cluster
Why deploy a ZooKeeper cluster?
ClickHouse's multi-replica replication mechanism (ReplicatedMergeTree) relies on ZooKeeper to store metadata, coordinate between replicas, manage task queues, and guarantee consistency.
Without ZooKeeper you cannot use ReplicatedMergeTree, and key features such as automatic synchronization, multi-replica failover, high availability, and distributed DDL become unavailable.
Single-node ZooKeeper deployment
1. Download ZooKeeper
wget https://dlcdn.apache.org/zookeeper/zookeeper-3.8.4/apache-zookeeper-3.8.4-bin.tar.gz
Optional internal mirror (LAN only):
root@node-exporter41:~# wget http://192.168.17.253/Resources/ElasticStack/softwares/Zookeeper/apache-zookeeper-3.8.4-bin.tar.gz
2. Extract the package
root@node-exporter41:~# tar xf apache-zookeeper-3.8.4-bin.tar.gz -C /usr/local/
root@node-exporter41:~#
3. Configure environment variables (JAVA_HOME below points at the JDK bundled with an existing Elasticsearch install; point it at whichever JDK is present on your nodes)
root@node-exporter41:~# cat /etc/profile.d/zk.sh
#!/bin/bash
export ZK_HOME=/usr/local/apache-zookeeper-3.8.4-bin
export JAVA_HOME=/usr/share/elasticsearch/jdk
export PATH=$PATH:${ZK_HOME}/bin:${JAVA_HOME}/bin
root@node-exporter41:~#
root@node-exporter41:~# source /etc/profile.d/zk.sh
root@node-exporter41:~#
root@node-exporter41:~# java --version
openjdk 22.0.2 2024-07-16
OpenJDK Runtime Environment (build 22.0.2+9-70)
OpenJDK 64-Bit Server VM (build 22.0.2+9-70, mixed mode, sharing)
root@node-exporter41:~#
4. Prepare the configuration file
root@node-exporter41:~# cp /usr/local/apache-zookeeper-3.8.4-bin/conf/zoo{_sample,}.cfg
root@node-exporter41:~#
root@node-exporter41:~# ll /usr/local/apache-zookeeper-3.8.4-bin/conf/zoo*
-rw-r--r-- 1 root root 1183 Jun 26 09:07 /usr/local/apache-zookeeper-3.8.4-bin/conf/zoo.cfg
-rw-r--r-- 1 yinzhengjie yinzhengjie 1183 Feb 13 2024 /usr/local/apache-zookeeper-3.8.4-bin/conf/zoo_sample.cfg
root@node-exporter41:~#
5. Start the ZooKeeper service
root@node-exporter41:~# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.8.4-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
root@node-exporter41:~#
6. Check the ZooKeeper status
root@node-exporter41:~# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.8.4-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: standalone
root@node-exporter41:~#
7. Log in and test
root@node-exporter41:~# zkCli.sh -server 10.0.0.41:2181
...
[zk: 10.0.0.41:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: 10.0.0.41:2181(CONNECTED) 1]
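Besides zkCli.sh, ZooKeeper can be probed with its four-letter-word commands over the client port; srvr is on the default 4lw whitelist, so the check below should work out of the box. This is a sketch that assumes netcat is available (apt install netcat-openbsd if it is not):
root@node-exporter41:~# echo srvr | nc 10.0.0.41 2181   # prints the version, latency stats and Mode (standalone/leader/follower)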
Basic ZooKeeper usage
1. Create a znode
[zk: 10.0.0.41:2181(CONNECTED) 1] create /school oldboyedu
Created /school
[zk: 10.0.0.41:2181(CONNECTED) 2]
2. List znodes
[zk: 10.0.0.41:2181(CONNECTED) 2] ls /
[school, zookeeper]
[zk: 10.0.0.41:2181(CONNECTED) 3]
3. Read a znode's data
[zk: 10.0.0.41:2181(CONNECTED) 3] get /school
oldboyedu
[zk: 10.0.0.41:2181(CONNECTED) 4]
4. Update a znode's data
[zk: 10.0.0.41:2181(CONNECTED) 4] set /school laonanhai
[zk: 10.0.0.41:2181(CONNECTED) 5]
[zk: 10.0.0.41:2181(CONNECTED) 5] get /school
laonanhai
[zk: 10.0.0.41:2181(CONNECTED) 6]
5. Delete a znode
[zk: 10.0.0.41:2181(CONNECTED) 6] delete /school
[zk: 10.0.0.41:2181(CONNECTED) 7]
[zk: 10.0.0.41:2181(CONNECTED) 7] ls /
[zookeeper]
[zk: 10.0.0.41:2181(CONNECTED) 8]
6. Create nested znodes
[zk: 10.0.0.41:2181(CONNECTED) 10] create /school oldboyedu
Created /school
[zk: 10.0.0.41:2181(CONNECTED) 11]
[zk: 10.0.0.41:2181(CONNECTED) 11] create /school/class linux99
Created /school/class
[zk: 10.0.0.41:2181(CONNECTED) 12]
[zk: 10.0.0.41:2181(CONNECTED) 12] create /school/address https://www.oldboyedu.com
Created /school/address
[zk: 10.0.0.41:2181(CONNECTED) 13]
[zk: 10.0.0.41:2181(CONNECTED) 13] ls /
[school, zookeeper]
[zk: 10.0.0.41:2181(CONNECTED) 14]
[zk: 10.0.0.41:2181(CONNECTED) 14] ls /school
[address, class]
[zk: 10.0.0.41:2181(CONNECTED) 15]
[zk: 10.0.0.41:2181(CONNECTED) 15] get /school/address
https://www.oldboyedu.com
[zk: 10.0.0.41:2181(CONNECTED) 16]
[zk: 10.0.0.41:2181(CONNECTED) 16] get /school/class
linux99
[zk: 10.0.0.41:2181(CONNECTED) 17]
7. Delete znodes recursively (a plain delete refuses non-empty nodes, as shown below; use deleteall)
[zk: 10.0.0.41:2181(CONNECTED) 19] ls /school
[address, class]
[zk: 10.0.0.41:2181(CONNECTED) 20]
[zk: 10.0.0.41:2181(CONNECTED) 20] delete /school/class
[zk: 10.0.0.41:2181(CONNECTED) 21]
[zk: 10.0.0.41:2181(CONNECTED) 21] ls /school
[address]
[zk: 10.0.0.41:2181(CONNECTED) 22]
[zk: 10.0.0.41:2181(CONNECTED) 22] delete /school
Node not empty: /school
[zk: 10.0.0.41:2181(CONNECTED) 23]
[zk: 10.0.0.41:2181(CONNECTED) 23] ls /school
[address]
[zk: 10.0.0.41:2181(CONNECTED) 24]
[zk: 10.0.0.41:2181(CONNECTED) 24] deleteall /school
[zk: 10.0.0.41:2181(CONNECTED) 25]
[zk: 10.0.0.41:2181(CONNECTED) 25] ls /school
Node does not exist: /school
[zk: 10.0.0.41:2181(CONNECTED) 26]
[zk: 10.0.0.41:2181(CONNECTED) 26] ls /
[zookeeper]
[zk: 10.0.0.41:2181(CONNECTED) 27]
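Two more zkCli.sh features are worth knowing before moving on to the cluster: ephemeral znodes, which are removed automatically when the creating session closes, and watches, which deliver a one-shot notification when a znode changes. A minimal sketch with illustrative znode names:
1) Ephemeral znode (useful for locks and liveness checks):
[zk: 10.0.0.41:2181(CONNECTED) 0] create -e /lock owner-1
2) Watch a znode with -w; a later `set /school newvalue` from another session prints a NodeDataChanged WatchedEvent here:
[zk: 10.0.0.41:2181(CONNECTED) 1] get -w /school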
ZooKeeper cluster deployment
1. Choosing the number of cluster nodes:
ZooKeeper stays available only while a majority (quorum) of servers is up, so deploy an odd number of nodes: a 3-node ensemble tolerates 1 failure, a 5-node ensemble tolerates 2.
Reference:
https://zookeeper.apache.org/doc/current/zookeeperOver.html
2. Hands-on deployment
2.1 Stop the standalone service
root@node-exporter41:~# zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.8.4-bin/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
root@node-exporter41:~#
root@node-exporter41:~# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.8.4-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Error contacting service. It is probably not running.
root@node-exporter41:~#
2.2 Modify the ZooKeeper configuration file
root@node-exporter41:~# egrep -v "^#|^$" /usr/local/apache-zookeeper-3.8.4-bin/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.11=10.0.0.41:2888:3888
server.22=10.0.0.42:2888:3888
server.33=10.0.0.43:2888:3888
root@node-exporter41:~#
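If you prefer not to edit the file by hand, the same configuration can be written in one step with a heredoc (the contents exactly mirror the file shown above). In each server.<id>=host:2888:3888 line, 2888 is the follower-to-leader data port and 3888 is the leader-election port:
root@node-exporter41:~# cat > /usr/local/apache-zookeeper-3.8.4-bin/conf/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.11=10.0.0.41:2888:3888
server.22=10.0.0.42:2888:3888
server.33=10.0.0.43:2888:3888
EOF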
2.3 Sync the program and the environment variable file to the other cluster nodes
root@node-exporter41:~# scp -r /usr/local/apache-zookeeper-3.8.4-bin/ 10.0.0.42:/usr/local
root@node-exporter41:~# scp -r /usr/local/apache-zookeeper-3.8.4-bin/ 10.0.0.43:/usr/local
root@node-exporter41:~# scp /etc/profile.d/zk.sh 10.0.0.42:/etc/profile.d/
root@node-exporter41:~# scp /etc/profile.d/zk.sh 10.0.0.43:/etc/profile.d/
2.4 Prepare the data directory on all nodes (each node's myid must match its server.<id> entry in zoo.cfg!)
root@node-exporter41:~# mkdir /var/lib/zookeeper && echo 11 > /var/lib/zookeeper/myid
root@node-exporter42:~# mkdir /var/lib/zookeeper && echo 22 > /var/lib/zookeeper/myid
root@node-exporter43:~# mkdir /var/lib/zookeeper && echo 33 > /var/lib/zookeeper/myid
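A quick sanity check that every node got the myid matching its server.<id> entry; this sketch assumes root SSH access between the nodes, as the scp step in 2.3 already implies:
root@node-exporter41:~# for ip in 10.0.0.41 10.0.0.42 10.0.0.43; do echo -n "$ip -> "; ssh root@$ip cat /var/lib/zookeeper/myid; done
# Expected output: 11, 22 and 33, matching server.11/22/33 in zoo.cfg.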
2.5 Start the ZooKeeper cluster
root@node-exporter41:~# source /etc/profile.d/zk.sh && zkServer.sh start
root@node-exporter42:~# source /etc/profile.d/zk.sh && zkServer.sh start
root@node-exporter43:~# source /etc/profile.d/zk.sh && zkServer.sh start
2.6 Check the cluster status
root@node-exporter41:~# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.8.4-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
root@node-exporter41:~#
root@node-exporter41:~# ss -ntl | egrep "2181|2888|3888"
LISTEN 0 50 *:2181 *:*
LISTEN 0 50 [::ffff:10.0.0.41]:3888 *:*
root@node-exporter41:~#
root@node-exporter42:~# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.8.4-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
root@node-exporter42:~#
root@node-exporter42:~# ss -ntl | egrep "2181|2888|3888"
LISTEN 0 50 *:2181 *:*
LISTEN 0 50 [::ffff:10.0.0.42]:3888 *:*
LISTEN 0 50 [::ffff:10.0.0.42]:2888 *:*
root@node-exporter42:~#
root@node-exporter43:~# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.8.4-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
root@node-exporter43:~#
root@node-exporter43:~# ss -ntl | egrep "2181|2888|3888"
LISTEN 0 50 [::ffff:10.0.0.43]:3888 *:*
LISTEN 0 50 *:2181 *:*
root@node-exporter43:~#
2.7 Client connection test
root@node-exporter41:~# zkCli.sh -server 10.0.0.41:2181,10.0.0.42:2181,10.0.0.43:2181
...
[zk: 10.0.0.41:2181,10.0.0.42:2181,10.0.0.43:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: 10.0.0.41:2181,10.0.0.42:2181,10.0.0.43:2181(CONNECTED) 1]
[zk: 10.0.0.41:2181,10.0.0.42:2181,10.0.0.43:2181(CONNECTED) 1] create /school oldboyedu
Created /school
[zk: 10.0.0.41:2181,10.0.0.42:2181,10.0.0.43:2181(CONNECTED) 2]
[zk: 10.0.0.41:2181,10.0.0.42:2181,10.0.0.43:2181(CONNECTED) 2] ls /
[school, zookeeper]
[zk: 10.0.0.41:2181,10.0.0.42:2181,10.0.0.43:2181(CONNECTED) 3]
3. Verify ZooKeeper cluster high availability
3.1 Open two terminals connected to different nodes and write test data from each
root@node-exporter41:~# zkCli.sh -server 10.0.0.42:2181
...
root@node-exporter42:~# zkCli.sh -server 10.0.0.43:2181
...
3.2 Stop the leader node and check whether the cluster remains available
root@node-exporter41:~# zkCli.sh -server 10.0.0.41:2181,10.0.0.42:2181,10.0.0.43:2181
...
3.3 Stop the leader's service and watch the failover (node 43 was the leader; after stopping it, node 42 takes over as leader and the client sessions keep working)
root@node-exporter43:~# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.8.4-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
root@node-exporter43:~# ss -ntl | egrep "2181|2888|3888"
LISTEN 0 50 [::ffff:10.0.0.43]:2888 *:*
LISTEN 0 50 [::ffff:10.0.0.43]:3888 *:*
LISTEN 0 50 *:2181 *:*
root@node-exporter43:~#
root@node-exporter43:~# zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.8.4-bin/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
root@node-exporter43:~#
root@node-exporter43:~# ss -ntl | egrep "2181|2888|3888"
root@node-exporter43:~#
root@node-exporter42:~# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.8.4-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
root@node-exporter42:~# ss -ntl | egrep "2181|2888|3888"
LISTEN 0 50 [::ffff:10.0.0.42]:3888 *:*
LISTEN 0 50 *:2181 *:*
LISTEN 0 50 [::ffff:10.0.0.42]:2888 *:*
root@node-exporter42:~#
root@node-exporter41:~# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.8.4-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
root@node-exporter41:~#
root@node-exporter41:~# ss -ntl | egrep "2181|2888|3888"
LISTEN 0 50 *:2181 *:*
LISTEN 0 50 [::ffff:10.0.0.41]:3888 *:*
root@node-exporter41:~#
4. Install and Deploy the ClickHouse Cluster
4.1 Method 1: install via apt (can be slow, since the repository is hosted abroad)
4.1.1 Update the system
root@node-exporter41:~# apt update
root@node-exporter41:~# apt upgrade
4.1.2 Set up the GPG signing key for the official ClickHouse repository
root@node-exporter41:~# mkdir -p /etc/apt/keyrings
root@node-exporter41:~# curl https://packages.clickhouse.com/rpm/lts/repodata/repomd.xml.key | tee /etc/apt/keyrings/clickhouse.asc
# These steps give APT the GPG signing key of the official ClickHouse repository, so packages downloaded from it can be verified as authentic and untampered.
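Optionally, inspect the downloaded key before trusting it; gpg may need to be installed first (apt install gnupg), and --show-keys only lists the key without importing it:
root@node-exporter41:~# gpg --show-keys /etc/apt/keyrings/clickhouse.asc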
4.1.3 Add the official ClickHouse repository
root@node-exporter41:~# ARCH=$(dpkg --print-architecture)
root@node-exporter41:~# echo "deb [signed-by=/etc/apt/keyrings/clickhouse.asc arch=${ARCH}] https://packages.clickhouse.com/deb stable main" | sudo tee /etc/apt/sources.list.d/clickhouse.list
4.1.4 Refresh the package index:
root@node-exporter41:~# apt update
4.1.5 Install ClickHouse:
root@node-exporter41:~# apt install clickhouse-server clickhouse-client -y
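During installation the Debian package normally prompts for a password for the default user; the examples later in this guide assume it was set to 123. Repeat 4.1.1 through 4.1.5 on nodes 42 and 43 as well, since every cluster member needs the server installed. A quick sanity check afterwards:
root@node-exporter41:~# apt-cache policy clickhouse-server   # shows the candidate version coming from packages.clickhouse.com
root@node-exporter41:~# clickhouse-client --version          # confirms the client binary is installed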
4.2 Method 2: install from the standalone binary (you have to write the systemd unit file yourself)
1. Upload the standalone clickhouse binary to the node, then run its built-in installer
root@node-exporter41:~# ./clickhouse install both
2. Write the systemd unit file
root@node-exporter41:~# cat /lib/systemd/system/clickhouse-server.service
[Unit]
Description=ClickHouse Server (analytic DBMS for big data)
Requires=network-online.target
After=time-sync.target network-online.target
Wants=time-sync.target
[Service]
Type=notify
User=clickhouse
Group=clickhouse
Restart=always
RestartSec=30
TimeoutStopSec=infinity
Environment=CLICKHOUSE_WATCHDOG_NO_FORWARD=1
TimeoutStartSec=0
RuntimeDirectory=%p
ExecStart=/usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=%t/%p/%p.pid
EnvironmentFile=-/etc/default/%p
EnvironmentFile=-/etc/default/clickhouse
LimitCORE=infinity
LimitNOFILE=500000
CapabilityBoundingSet=CAP_NET_ADMIN CAP_IPC_LOCK CAP_SYS_NICE CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_ADMIN CAP_IPC_LOCK CAP_SYS_NICE CAP_NET_BIND_SERVICE
[Install]
WantedBy=multi-user.target
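After writing the unit file by hand, reload systemd so it sees the new unit (the service itself is started later, in 4.3.4, once the configuration has been adjusted):
root@node-exporter41:~# systemctl daemon-reload
root@node-exporter41:~# systemctl status clickhouse-server --no-pager   # should show the unit as loaded (inactive until started)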
4.3 Modify the ClickHouse server configuration file
4.3.1 Node 41
root@node-exporter41:~# vim /etc/clickhouse-server/config.xml
289 <listen_host>0.0.0.0</listen_host>
----------------------------------------------------------------------------------------
931 <remote_servers>
932 <my_cluster>
933 <shard>
934 <replica>
935 <host>10.0.0.41</host>
936 <port>9000</port>
937 </replica>
938 <replica>
939 <host>10.0.0.42</host>
940 <port>9000</port>
941 </replica>
942 <replica>
943 <host>10.0.0.43</host>
944 <port>9000</port>
945 </replica>
946 </shard>
947 </my_cluster>
948 </remote_servers>
----------------------------------------------------------------------------------------
991 <zookeeper>
992 <node>
993 <host>10.0.0.41</host>
994 <port>2181</port>
995 </node>
996 <node>
997 <host>10.0.0.42</host>
998 <port>2181</port>
999 </node>
1000 <node>
1001 <host>10.0.0.43</host>
1002 <port>2181</port>
1003 </node>
1004 </zookeeper>
----------------------------------------------------------------------------------------
1014 <macros>
1015 <shard>01</shard>
1016 <replica>node01</replica> # on nodes 42 and 43, change this to node02 / node03
1017 </macros>
----------------------------------------------------------------------------------------
242 <interserver_http_host>10.0.0.41</interserver_http_host> # on nodes 42 and 43, change this to 10.0.0.42 / 10.0.0.43
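The layout above is a single shard with three replicas, named through the macros. Instead of editing config.xml in place, the same settings can also be kept in a drop-in file, since ClickHouse merges every XML file under /etc/clickhouse-server/config.d/ into the main configuration, which makes the per-node differences easier to manage. A minimal sketch for node 41 (the file name cluster.xml is an arbitrary choice; nodes 42 and 43 would only change the replica macro and interserver_http_host):
root@node-exporter41:~# cat > /etc/clickhouse-server/config.d/cluster.xml <<'EOF'
<clickhouse>
    <listen_host>0.0.0.0</listen_host>
    <interserver_http_host>10.0.0.41</interserver_http_host>
    <remote_servers>
        <my_cluster>
            <shard>
                <replica><host>10.0.0.41</host><port>9000</port></replica>
                <replica><host>10.0.0.42</host><port>9000</port></replica>
                <replica><host>10.0.0.43</host><port>9000</port></replica>
            </shard>
        </my_cluster>
    </remote_servers>
    <zookeeper>
        <node><host>10.0.0.41</host><port>2181</port></node>
        <node><host>10.0.0.42</host><port>2181</port></node>
        <node><host>10.0.0.43</host><port>2181</port></node>
    </zookeeper>
    <macros>
        <shard>01</shard>
        <replica>node01</replica>
    </macros>
</clickhouse>
EOF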
4.3.2 Node 42
The config.xml on node 42 is identical to node 41 (same listen_host, remote_servers and zookeeper sections); only the replica macro and the interserver address differ:
root@node-exporter42:~# vim /etc/clickhouse-server/config.xml
1014 <macros>
1015 <shard>01</shard>
1016 <replica>node02</replica>
1017 </macros>
----------------------------------------------------------------------------------------
242 <interserver_http_host>10.0.0.42</interserver_http_host>
4.3.3 Node 43
Likewise, node 43 only changes the replica macro and the interserver address:
root@node-exporter43:~# vim /etc/clickhouse-server/config.xml
1014 <macros>
1015 <shard>01</shard>
1016 <replica>node03</replica>
1017 </macros>
----------------------------------------------------------------------------------------
242 <interserver_http_host>10.0.0.43</interserver_http_host>
4.3.4 Start the ClickHouse service on all three nodes
root@node-exporter41:~# systemctl start clickhouse-server.service
root@node-exporter41:~# systemctl enable clickhouse-server.service
root@node-exporter42:~# systemctl start clickhouse-server.service
root@node-exporter42:~# systemctl enable clickhouse-server.service
root@node-exporter43:~# systemctl start clickhouse-server.service
root@node-exporter43:~# systemctl enable clickhouse-server.service
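A quick check that each node is listening on the expected ports (9000 native TCP, 8123 HTTP, 9009 interserver replication), in the same style as the ZooKeeper port checks above:
root@node-exporter41:~# ss -ntl | egrep "9000|8123|9009"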
4.3.5 Verify the ClickHouse cluster
root@node-exporter41:~# clickhouse-client --host 10.0.0.43 --query "SELECT * FROM system.clusters;" --user default --password 123
my_cluster 1 1 0 1 10.0.0.41 10.0.0.41 9000 0 default 0 0 \N \N \N \N
my_cluster 1 1 0 2 10.0.0.42 10.0.0.42 9000 0 default 0 0 \N \N \N \N
my_cluster 1 1 0 3 10.0.0.43 10.0.0.43 9000 1 default 0 0 \N \N \N \N
# All three nodes appear in system.clusters, which confirms the cluster is configured correctly.
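To confirm that replication works end to end, the sketch below creates a ReplicatedMergeTree table on every replica through distributed DDL, writes a row on node 41, and reads it back from node 43. The table name rep_test and the ZooKeeper path are illustrative assumptions; {shard} and {replica} are expanded from the macros configured in section 4.3:
# Create the table on all replicas at once (ON CLUSTER uses ZooKeeper for distributed DDL).
root@node-exporter41:~# clickhouse-client --host 10.0.0.41 --user default --password 123 --query "
CREATE TABLE rep_test ON CLUSTER my_cluster
(
    id  UInt64,
    msg String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/rep_test', '{replica}')
ORDER BY id
"
# Insert a row on node 41 ...
root@node-exporter41:~# clickhouse-client --host 10.0.0.41 --user default --password 123 --query "INSERT INTO rep_test VALUES (1, 'hello')"
# ... and the same row should be readable from node 43 shortly after.
root@node-exporter41:~# clickhouse-client --host 10.0.0.43 --user default --password 123 --query "SELECT * FROM rep_test"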