Deploying with Docker Compose
RocketMQ deployment modes and their characteristics
- **Single-master mode**
  There is only one master node; if it goes down, the whole service becomes unavailable. Not suitable for production, but fine for personal study.
- **Multi-master mode**
  Unlike Kafka, RocketMQ has no master election. In a RocketMQ cluster, a machine is either a master or a slave, fixed in its initial configuration; there is no dynamic election as in Kafka, so high availability is achieved by configuring multiple master nodes.
  Several master nodes form a cluster, and a single master crashing or restarting has no impact on the application.
  **Pros:** highly available, and the highest performance of all modes.
  **Cons:** a small number of messages may be lost (depending on configuration). While a machine is down or restarting, its unconsumed messages cannot be consumed until it recovers, which hurts message latency.
  **Note:** synchronous flush prevents message loss. Also, a topic's queues should be spread across all master nodes in the cluster rather than placed on a single master; otherwise, that node going down will affect every application subscribed to the topic.
- **Multi-master multi-slave mode, asynchronous replication**
  On top of the multi-master mode, each master node has at least one corresponding slave.
  Masters are readable and writable; slaves are read-only, similar to MySQL's primary/standby setup.
  **Pros:** when a master goes down, consumers can still read messages from its slave, so message latency is unaffected and performance is almost the same as multi-master.
  **Cons:** with asynchronous replication, messages may be lost.
- **Multi-master multi-slave mode, synchronous replication**
  Same as the asynchronous-replication mode above, except for how data is synchronized between master and slave.
  **Pros:** synchronous double-write guarantees no data loss.
  **Cons:** the RT of a single send is slightly longer, and performance is roughly 10% lower than asynchronous replication.
**Flush strategy:** synchronous vs. asynchronous flush (i.e., whether a node persists its own data synchronously or asynchronously).
Note: for full data reliability, use synchronous flush together with synchronous double-write, at the cost of lower performance than the other combinations.
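As a reference, the reliable combination described above maps to two keys in broker.conf (a sketch; both keys also appear in the full configuration files below):

```properties
# Master waits for the slave's copy before acking the producer (synchronous double-write)
brokerRole = SYNC_MASTER
# Flush to disk before acking, instead of flushing in the background
flushDiskType = SYNC_FLUSH
```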
Single-master deployment
Deployment list
Component | Ports | Count |
---|---|---|
nameserver | 9876 | 1 |
master | 10911, 10912 | 1 |
rocketmq-console | 9001 | 1 |
Configuration file
broker.conf
```properties
brokerClusterName = rocketmq-cluster
brokerName = broker-strand
brokerId = 0
# IP the broker listens on (the IP of the broker's host machine)
brokerIP1 = 172.16.15.220
# Hour of day at which expired files are deleted, once per day (04 = 4 a.m.)
deleteWhen = 04
# How long messages are kept on disk, in hours
fileReservedTime = 48
# Role of this broker
brokerRole = ASYNC_MASTER
# Disk flush mode
flushDiskType = ASYNC_FLUSH
# Name server address
namesrvAddr = 172.16.15.220:9876
autoCreateTopicEnable = true
# Listen port for client connections; must match the ports mapping in docker-compose.yml below
listenPort = 10911
# haListenPort is used by the HA service; it defaults to listenPort + 1
```
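To make the haListenPort rule concrete, a small shell sketch that parses `listenPort` out of a broker.conf and derives the default HA port (the temp file and parsing are illustrative, not a RocketMQ tool):

```shell
# Write a minimal broker.conf to a temp file for demonstration
conf=$(mktemp)
cat > "$conf" <<'EOF'
listenPort = 10911
EOF
# Extract the listenPort value, stripping spaces around '='
listen_port=$(awk -F'=' '/^listenPort/ {gsub(/ /,"",$2); print $2}' "$conf")
# haListenPort defaults to listenPort + 1
ha_port=$((listen_port + 1))
echo "listenPort=$listen_port haListenPort=$ha_port"  # prints: listenPort=10911 haListenPort=10912
rm -f "$conf"
```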
Docker Compose deployment file

```yaml
version: '3.5'
services:
  nameserver-stand:
    image: apacherocketmq/rocketmq:4.6.0
    container_name: nameserver-stand
    ports:
      - 9876:9876
    environment:
      TZ: Asia/Shanghai
      JAVA_OPTS: "-Duser.home=/opt"
      JAVA_OPT_EXT: "-server -Xms128m -Xmx256m"
    volumes:
      - ./nameserver/logs:/home/rocketmq/logs
      - ./nameserver/store:/home/rocketmq/store
    command: sh mqnamesrv
    networks:
      rmq:
        aliases:
          - nameserver
  broker-stand:
    image: apacherocketmq/rocketmq:4.6.0
    container_name: broker-stand
    user: root:root
    ports:
      - 10911:10911
      - 10912:10912
    depends_on:
      - nameserver-stand
    volumes:
      - ./broker/logs:/root/logs
      - ./broker/store:/root/store
      - ./broker/conf/broker.conf:/opt/rocketmq-4.6.0/conf/broker.conf
    environment:
      TZ: Asia/Shanghai
      NAMESRV_ADDR: "nameserver:9876"
      JAVA_OPTS: "-Duser.home=/opt"
      JAVA_OPT_EXT: "-server -Xms128m -Xmx256m"
    command: sh mqbroker -c /opt/rocketmq-4.6.0/conf/broker.conf autoCreateTopicEnable=true
    links:
      - nameserver-stand:nameserver
    networks:
      rmq:
        aliases:
          - rmqbroker
  rmqconsole-stand:
    image: styletang/rocketmq-console-ng
    container_name: rmqconsole-stand
    ports:
      - 9001:9001
    environment:
      TZ: Asia/Shanghai
      JAVA_OPTS: "-Duser.home=/opt -Drocketmq.namesrv.addr=172.16.15.220:9876 -Dcom.rocketmq.sendMessageWithVIPChannel=false -Dserver.port=9001"
      JAVA_OPT_EXT: "-server -Xms128m -Xmx256m"
    networks:
      rmq:
        aliases:
          - rmqconsole
networks:
  rmq:
    name: rmq
    driver: bridge
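Before running `docker-compose up -d`, the host directories behind the volume mounts should exist, and `broker.conf` must exist as a file (otherwise Docker creates a directory at the bind-mount path). A minimal prep sketch, assuming it runs from the directory containing docker-compose.yml (`BASE_DIR` is an illustrative override, not required):

```shell
# Base directory defaults to the current directory (where docker-compose.yml lives)
base=${BASE_DIR:-.}
# Directories mounted by the nameserver service
mkdir -p "$base"/nameserver/logs "$base"/nameserver/store
# Directories mounted by the broker service
mkdir -p "$base"/broker/logs "$base"/broker/store "$base"/broker/conf
# broker.conf must be a regular file before the bind mount
[ -f "$base"/broker/conf/broker.conf ] || touch "$base"/broker/conf/broker.conf
```

With the layout in place, `docker-compose up -d` starts the stack and the console becomes reachable on port 9001.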
Multi-master multi-slave deployment, asynchronous replication
Deployment list
Component | Role | Ports | Count |
---|---|---|---|
nameserver | registry | 9876, 9877 | 2 |
master-a | master node a | 10911, 10912 | 1 |
slave-1-a | slave of master a | 10921, 10922 | 1 |
master-b | master node b | 10931, 10932 | 1 |
slave-1-b | slave of master b | 10941, 10942 | 1 |
Configuration files
When building a cluster, pay attention to the following configuration keys:
- brokerName: distinguishes the masters; a master and its slaves share the same name
- brokerId: distinguishes master from slave; 0 means master, >0 means slave
- brokerRole: controls how data is replicated between master and slave, and marks whether the broker is a slave:
  ASYNC_MASTER = asynchronous replication; SYNC_MASTER = synchronous replication; a slave must be set to SLAVE
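As a quick illustration of the brokerId convention above, a tiny shell helper (`role_of` is hypothetical, not part of RocketMQ):

```shell
# 0 means master, any value greater than 0 means slave
role_of() {
  if [ "$1" -eq 0 ]; then echo MASTER; else echo SLAVE; fi
}
role_of 0   # prints MASTER
role_of 1   # prints SLAVE
```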
- **master-a.conf**

```properties
brokerClusterName = rocketmq-cluster
brokerName = broker-a
brokerId = 0
# IP the broker listens on (the IP of the broker's host machine)
brokerIP1 = 172.16.15.220
deleteWhen = 04
fileReservedTime = 48
# Replication mode between master and slave: ASYNC_MASTER = asynchronous replication, SYNC_MASTER = synchronous replication, SLAVE = slave node
brokerRole = ASYNC_MASTER
flushDiskType = ASYNC_FLUSH
# Name server addresses
namesrvAddr = 172.16.15.220:9876;172.16.15.220:9877
autoCreateTopicEnable = true
# Listen port for client connections; must match the ports mapping in docker-compose.yml below
listenPort = 10911
# haListenPort is used by the HA service; it defaults to listenPort + 1
```
- **slave-1-a.conf**

```properties
brokerClusterName = rocketmq-cluster
brokerName = broker-a
brokerId = 1
# IP the broker listens on (the IP of the broker's host machine)
brokerIP1 = 172.16.15.220
deleteWhen = 04
fileReservedTime = 48
brokerRole = SLAVE
flushDiskType = ASYNC_FLUSH
# Name server addresses
namesrvAddr = 172.16.15.220:9876;172.16.15.220:9877
autoCreateTopicEnable = true
# Listen port for client connections; must match the ports mapping in docker-compose.yml below
listenPort = 10921
# haListenPort is used by the HA service; it defaults to listenPort + 1
```
- **master-b.conf**

```properties
brokerClusterName = rocketmq-cluster
brokerName = broker-b
brokerId = 0
# IP the broker listens on (the IP of the broker's host machine)
brokerIP1 = 172.16.15.220
deleteWhen = 04
fileReservedTime = 48
# Replication mode between master and slave: ASYNC_MASTER = asynchronous replication, SYNC_MASTER = synchronous replication, SLAVE = slave node
brokerRole = ASYNC_MASTER
flushDiskType = ASYNC_FLUSH
# Name server addresses
namesrvAddr = 172.16.15.220:9876;172.16.15.220:9877
autoCreateTopicEnable = true
# Listen port for client connections; must match the ports mapping in docker-compose.yml below
listenPort = 10931
# haListenPort is used by the HA service; it defaults to listenPort + 1
```
- **slave-1-b.conf**

```properties
brokerClusterName = rocketmq-cluster
brokerName = broker-b
brokerId = 1
# IP the broker listens on (the IP of the broker's host machine)
brokerIP1 = 172.16.15.220
deleteWhen = 04
fileReservedTime = 48
brokerRole = SLAVE
flushDiskType = ASYNC_FLUSH
# Name server addresses
namesrvAddr = 172.16.15.220:9876;172.16.15.220:9877
autoCreateTopicEnable = true
# Listen port for client connections; must match the ports mapping in docker-compose.yml below
listenPort = 10941
# haListenPort is used by the HA service; it defaults to listenPort + 1
```
Docker Compose deployment file (reference)

```yaml
version: '3.5'
services:
  nameserver-a:
    image: apacherocketmq/rocketmq:4.6.0
    container_name: nameserver-a
    ports:
      - 9876:9876
    environment:
      TZ: Asia/Shanghai
      JAVA_OPTS: "-Duser.home=/opt"
      JAVA_OPT_EXT: "-server -Xms128m -Xmx256m"
    volumes:
      - ./nameserver-a/logs:/home/rocketmq/logs
      - ./nameserver-a/store:/home/rocketmq/store
    command: sh mqnamesrv
    networks:
      rmq-cluster:
        aliases:
          - nameserver-a
  nameserver-b:
    image: apacherocketmq/rocketmq:4.6.0
    container_name: nameserver-b
    ports:
      - 9877:9876
    volumes:
      - ./nameserver-b/logs:/home/rocketmq/logs
      - ./nameserver-b/store:/home/rocketmq/store
    command: sh mqnamesrv
    environment:
      TZ: Asia/Shanghai
      JAVA_OPTS: "-Duser.home=/opt"
      JAVA_OPT_EXT: "-server -Xms128m -Xmx256m"
    networks:
      rmq-cluster:
        aliases:
          - nameserver-b
  broker-a:
    image: apacherocketmq/rocketmq:4.6.0
    container_name: broker-a
    user: root:root
    ports:
      - 10911:10911
      - 10912:10912
    volumes:
      - ./broker-a/logs:/root/logs
      - ./broker-a/store:/root/store
      - ./broker-a/conf/broker-a.conf:/opt/rocketmq-4.6.0/conf/broker.conf
    environment:
      TZ: Asia/Shanghai
      NAMESRV_ADDR: "nameserver-a:9876;nameserver-b:9876"
      JAVA_OPTS: "-Duser.home=/opt"
      JAVA_OPT_EXT: "-server -Xms128m -Xmx256m"
    command: sh mqbroker -c /opt/rocketmq-4.6.0/conf/broker.conf autoCreateTopicEnable=true
    links:
      - nameserver-a:nameserver-a
      - nameserver-b:nameserver-b
    networks:
      rmq-cluster:
        aliases:
          - broker-a
  slave-1-a:
    image: apacherocketmq/rocketmq:4.6.0
    container_name: slave-1-a
    user: root:root
    ports:
      - 10921:10921
      - 10922:10922
    volumes:
      - ./slave-1-a/logs:/root/logs
      - ./slave-1-a/store:/root/store
      - ./slave-1-a/conf/slave-1-a.conf:/opt/rocketmq-4.6.0/conf/broker.conf
    environment:
      TZ: Asia/Shanghai
      NAMESRV_ADDR: "nameserver-a:9876;nameserver-b:9876"
      JAVA_OPTS: "-Duser.home=/opt"
      JAVA_OPT_EXT: "-server -Xms128m -Xmx256m"
    command: sh mqbroker -c /opt/rocketmq-4.6.0/conf/broker.conf autoCreateTopicEnable=true
    links:
      - nameserver-a:nameserver-a
      - nameserver-b:nameserver-b
    networks:
      rmq-cluster:
        aliases:
          - slave-1-a
  broker-b:
    image: apacherocketmq/rocketmq:4.6.0
    container_name: broker-b
    user: root:root
    ports:
      - 10931:10931
      - 10932:10932
    volumes:
      - ./broker-b/logs:/root/logs
      - ./broker-b/store:/root/store
      - ./broker-b/conf/broker-b.conf:/opt/rocketmq-4.6.0/conf/broker.conf
    environment:
      TZ: Asia/Shanghai
      NAMESRV_ADDR: "nameserver-a:9876;nameserver-b:9876"
      JAVA_OPTS: "-Duser.home=/opt"
      JAVA_OPT_EXT: "-server -Xms256m -Xmx512m -Xmn128m"
    command: sh mqbroker -c /opt/rocketmq-4.6.0/conf/broker.conf autoCreateTopicEnable=true
    links:
      - nameserver-a:nameserver-a
      - nameserver-b:nameserver-b
    networks:
      rmq-cluster:
        aliases:
          - broker-b
  slave-1-b:
    image: apacherocketmq/rocketmq:4.6.0
    container_name: slave-1-b
    user: root:root
    ports:
      - 10941:10941
      - 10942:10942
    volumes:
      - ./slave-1-b/logs:/root/logs
      - ./slave-1-b/store:/root/store
      - ./slave-1-b/conf/slave-1-b.conf:/opt/rocketmq-4.6.0/conf/broker.conf
    environment:
      TZ: Asia/Shanghai
      NAMESRV_ADDR: "nameserver-a:9876;nameserver-b:9876"
      JAVA_OPTS: "-Duser.home=/opt"
      JAVA_OPT_EXT: "-server -Xms256m -Xmx512m -Xmn128m"
    command: sh mqbroker -c /opt/rocketmq-4.6.0/conf/broker.conf autoCreateTopicEnable=true
    links:
      - nameserver-a:nameserver-a
      - nameserver-b:nameserver-b
    networks:
      rmq-cluster:
        aliases:
          - slave-1-b
  rmqconsole:
    image: styletang/rocketmq-console-ng
    container_name: rmqconsole
    ports:
      - 9001:9001
    environment:
      TZ: Asia/Shanghai
      JAVA_OPTS: "-Duser.home=/opt -Drocketmq.namesrv.addr=172.16.15.220:9876;172.16.15.220:9877 -Dcom.rocketmq.sendMessageWithVIPChannel=false -Dserver.port=9001"
      JAVA_OPT_EXT: "-server -Xms128m -Xmx256m"
    networks:
      rmq-cluster:
        aliases:
          - rmqconsole
networks:
  rmq-cluster:
    name: rmq-cluster
    driver: bridge
Deployment issues
When starting the broker with Docker, startup fails with exit code 253.
Solution:
- Add the user key so the container runs as root
- Change the mapped container store path to /root/store
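The two fixes correspond to this compose fragment (the same keys already used in the files above):

```yaml
broker-stand:
  user: root:root                    # run the broker process as root
  volumes:
    - ./broker/store:/root/store     # map the store directory under /root
```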