ClickHouse cluster setup

Prepare three machines, 192.168.20.7, 192.168.20.8 and 192.168.20.10, for the ClickHouse cluster. The cluster built here is a three-replica one: every piece of data is stored on all three machines, so the cluster exists purely for disaster recovery.

1. Steps on 192.168.20.7

In the ClickHouse config.d directory (typically /etc/clickhouse-server/config.d), create a file named cluster.xml with the following content:

<clickhouse>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>1</server_id>
        <log_storage_path>/data/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/data/clickhouse/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>192.168.20.7</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>192.168.20.8</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>192.168.20.10</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
    <remote_servers>
        <default>
            <shard>
                <internal_replication>1</internal_replication>
                <replica>
                    <host>192.168.20.7</host>
                    <port>8124</port>
                </replica>
                <replica>
                    <host>192.168.20.8</host>
                    <port>8124</port>
                </replica>
                <replica>
                    <host>192.168.20.10</host>
                    <port>8124</port>
                </replica>
            </shard>
        </default>
    </remote_servers>
    <macros>
        <shard>1</shard>
        <replica>192.168.20.7</replica>
    </macros>
    <interserver_http_host>192.168.20.7</interserver_http_host>
</clickhouse>
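The keeper_server block makes this node one member of a three-node ClickHouse Keeper ensemble (client port 9181, Raft port 9234), while remote_servers describes the default cluster the replicas belong to. One assumption worth checking: for the ReplicatedMergeTree table in step 5, the server also needs a zookeeper section pointing at the three Keeper nodes on port 9181, unless that is already defined elsewhere in your configuration. Once all three nodes are configured and started (steps 2-4), a quick sanity check from any node, using the standard system.zookeeper table:

-- Lists the root znodes visible through the Keeper ensemble;
-- an error here usually means the server cannot reach Keeper.
SELECT name
FROM system.zookeeper
WHERE path = '/';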

2. Steps on 192.168.20.8

Create cluster.xml in the config.d directory on this node as well:

<clickhouse>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>2</server_id>
        <log_storage_path>/data/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/data/clickhouse/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>192.168.20.7</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>192.168.20.8</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>192.168.20.10</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
    <remote_servers>
        <default>
            <shard>
                <internal_replication>1</internal_replication>
                <replica>
                    <host>192.168.20.7</host>
                    <port>8124</port>
                </replica>
                <replica>
                    <host>192.168.20.8</host>
                    <port>8124</port>
                </replica>
                <replica>
                    <host>192.168.20.10</host>
                    <port>8124</port>
                </replica>
            </shard>
        </default>
    </remote_servers>
    <macros>
        <shard>1</shard>
        <replica>192.168.20.8</replica>
    </macros>
    <interserver_http_host>192.168.20.8</interserver_http_host>
</clickhouse>

Compared with the first node, only three settings change: server_id, macros.replica and interserver_http_host.

3. Steps on 192.168.20.10

Create cluster.xml in the config.d directory on this node as well:

<clickhouse>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>3</server_id>
        <log_storage_path>/data/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/data/clickhouse/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>192.168.20.7</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>192.168.20.8</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>192.168.20.10</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
    <remote_servers>
        <default>
            <shard>
                <internal_replication>1</internal_replication>
                <replica>
                    <host>192.168.20.7</host>
                    <port>8124</port>
                </replica>
                <replica>
                    <host>192.168.20.8</host>
                    <port>8124</port>
                </replica>
                <replica>
                    <host>192.168.20.10</host>
                    <port>8124</port>
                </replica>
            </shard>
        </default>
    </remote_servers>
    <macros>
        <shard>1</shard>
        <replica>192.168.20.10</replica>
    </macros>
    <interserver_http_host>192.168.20.10</interserver_http_host>
</clickhouse>

Again, only server_id, macros.replica and interserver_http_host differ from the first node.
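After the services are started in the next step, it is worth confirming that every node picked up its own macros, since {shard} and {replica} are substituted into the ReplicatedMergeTree path in step 5. A minimal check, assuming the configuration above:

-- Run on each node: shard should be 1 everywhere,
-- and replica should be that node's own IP address.
SELECT macro, substitution
FROM system.macros;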

4. Start the services

On each of the three machines, run systemctl start clickhouse-server to start the ClickHouse service.

Once the services are up, log in to ClickHouse and run select * from system.clusters; — if the three nodes are listed, the cluster has been set up successfully.
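A narrower query over the same system.clusters table makes the three replicas easier to read; the column names below are standard, nothing specific to this setup:

-- Expect three rows for the 'default' cluster, replica_num 1 through 3,
-- with is_local = 1 on the row that matches the current node.
SELECT cluster, shard_num, replica_num, host_name, port, is_local
FROM system.clusters
WHERE cluster = 'default';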

5. Create the table

On a ClickHouse cluster the table has to be created on every node separately (make sure the test database exists on each node first). The table definition is as follows:

CREATE TABLE test.test1
(
    `id` Int64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/test/test1', '{replica}')
ORDER BY id
SETTINGS index_granularity = 8192;
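As an aside, because remote_servers defines a cluster named default, the same table could in principle be created once with ON CLUSTER instead of logging in to every node; the sketch below is an alternative to the per-node DDL above, not what this walkthrough does:

-- Hypothetical one-shot variant: ClickHouse fans the DDL out to every
-- replica listed in the 'default' cluster.
CREATE TABLE test.test1 ON CLUSTER default
(
    `id` Int64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/test/test1', '{replica}')
ORDER BY id
SETTINGS index_granularity = 8192;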

Once the table has been created, a row inserted on any one node becomes visible on the other nodes.
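A minimal end-to-end check of replication, with an illustrative value rather than anything from the original setup:

-- On 192.168.20.7:
INSERT INTO test.test1 VALUES (1);

-- On 192.168.20.8 or 192.168.20.10, the row should show up shortly afterwards:
SELECT count() FROM test.test1;

-- Replication health for the table: active_replicas should equal total_replicas (3).
SELECT database, `table`, is_leader, total_replicas, active_replicas
FROM system.replicas
WHERE database = 'test' AND `table` = 'test1';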
