Once you adopt a message queue, your messages only create value after they are actually sent, and if you need a message queue at all, the volume is presumably already substantial. So with that much data in flight, what do you do when sends become painfully slow?
- First, don't just sit there waiting; in the real world nothing gets ten thousand years
- Second, it's fine to retry a few times, but banging your head against the wall forever is rude; there's a dead-letter queue (DLQ) for a reason, so use it
- Finally, monitoring is essential: people are not perpetual-motion machines and cannot run 24*7; forget ten thousand years, anyone who survives even one on-call cycle is still young 👴
| Message queue | Main cause of slow sends | Key configs | Mitigation |
|---|---|---|---|
| Kafka | Waiting for ISR acks times out | acks, delivery.timeout.ms | Fall back to acks=1 + dead-letter queue |
| RabbitMQ | Publisher confirm blocks | confirm timeout, flow control | Async confirms + batched sends |
| RocketMQ | Broker disk writes slow / sync flush | flushDiskType, sendLatencyFaultEnable | Switch to async flush + fault avoidance |
| Pulsar | BookKeeper write latency | bookkeeperAckQuorum, sendTimeoutMs | Tune quorum + client timeout |
Health checks are covered below, but a few pointers first: all checks should use a dedicated topic, don't probe too often, set alert severity levels according to the business, and consider wiring the metrics into Prometheus.
1. Kafka: stuck waiting for replica acks
Root cause (Kafka-specific)
- With acks=all (depends on your business, friend), the producer must wait for every ISR replica to persist the write
- If a follower falls behind (high disk IO, network jitter, etc.), the ISR shrinks → min.insync.replicas is no longer satisfied → the producer blocks until the timeout fires
📌 Key point: Kafka's "slowness" is, at its core, synchronous waiting imposed by strong-consistency semantics.
Correct handling (Kafka 2.1+)
props.put("delivery.timeout.ms", 120_000); // total send timeout incl. retries; not available before Kafka 2.1
props.put("request.timeout.ms", 30_000); // per-request timeout
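These two values are not independent: the producer requires delivery.timeout.ms to be at least linger.ms + request.timeout.ms, since one delivery may span several request attempts. A tiny illustrative checker (my own helper, not a Kafka API):

```python
def validate_producer_timeouts(delivery_timeout_ms, request_timeout_ms, linger_ms=0):
    """Mirror the producer-side constraint: the total delivery budget must
    cover at least one full request attempt plus any linger delay."""
    if delivery_timeout_ms < linger_ms + request_timeout_ms:
        raise ValueError(
            f"delivery.timeout.ms={delivery_timeout_ms} must be >= "
            f"linger.ms + request.timeout.ms = {linger_ms + request_timeout_ms}"
        )
    # Rough upper bound on how many request attempts fit in the budget
    return delivery_timeout_ms // request_timeout_ms
```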
Step 2: dead-letter queue
try {
producer.send(record).get(); // synchronous send
} catch (ExecutionException e) {
if (e.getCause() instanceof TimeoutException) {
dlqProducer.send(buildDlqRecord(record)); // divert to the DLQ
}
}
Let the business decide
| Scenario | Config | Risk |
|---|---|---|
| Financial transactions | acks=all + timeout alerting | Messages may be lost (given up after timeout) |
| Logs / monitoring | acks=1 | Broker crashes before replicas sync → message loss |
| Availability first | acks=1 + async replication monitoring | Accept brief inconsistency |
Health check
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import org.apache.kafka.clients.producer.*;
import java.util.Properties;
import java.util.concurrent.*;
/**
 * Kafka producer health checker
 *
 * Design principles:
 * 1. Use acks=1 (not all) so slow replica sync doesn't cause false alarms
 * 2. Always set delivery.timeout.ms (key config since Kafka 2.1)
 * 3. Call Future.get(timeout) explicitly so the thread can never block forever
 *
 * Without a timeout, send() can hang the whole application thread during a network partition
 */
public class KafkaHealthChecker {
private final Producer<String, String> producer;
private final String topic;
private final MeterRegistry meterRegistry;
/**
 * Constructor: initialize the Kafka producer
 *
 * @param bootstrapServers Kafka cluster addresses (e.g. "kafka1:9092,kafka2:9092")
 * @param topic topic used for health checks (a dedicated one is recommended, to avoid polluting business data)
 * @param registry Micrometer registry (exposes Prometheus metrics)
 */
public KafkaHealthChecker(String bootstrapServers, String topic, MeterRegistry registry) {
this.topic = topic;
this.meterRegistry = registry;
Properties props = new Properties();
props.put("bootstrap.servers", bootstrapServers);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// [Key] Use acks=1 for health checks so slow ISR sync doesn't cause false negatives
// Business producers may use acks=all, but a health check must return quickly
props.put("acks", "1");
// [Required] Introduced in Kafka 2.1+: total send timeout, including retries
// Without it, the producer may retry indefinitely on network problems
props.put("delivery.timeout.ms", 5000);
// Per-request timeout (should be < delivery.timeout.ms)
props.put("request.timeout.ms", 3000);
// [Safety] Disable retries (a health check only needs one attempt)
props.put("retries", 0);
this.producer = new KafkaProducer<>(props);
}
/**
 * Run the health check: send a probe message and wait for the response
 *
 * @return HealthResult with health status and details
 */
public HealthResult check() {
// Start timing (for the latency metric)
Timer.Sample sample = Timer.start(meterRegistry);
// Build a unique probe message (easy to trace)
String msg = "health-check-" + System.currentTimeMillis();
try {
// [Key] Pass an explicit timeout (in seconds):
// Future.get() without a timeout can block forever!
RecordMetadata meta = producer.send(
new ProducerRecord<>(topic, "health", msg)
).get(5, TimeUnit.SECONDS);
// Report success latency
sample.stop(meterRegistry.timer("kafka.producer.send.latency"));
return HealthResult.healthy("Sent to partition " + meta.partition());
} catch (TimeoutException e) {
// Timeout: network issue or slow broker
sample.stop(meterRegistry.timer("kafka.producer.send.failure"));
meterRegistry.counter("kafka.producer.send.errors").increment();
return HealthResult.unhealthy("Send timeout: " + e.getMessage());
} catch (ExecutionException e) {
// Execution error: e.g. LeaderNotAvailable, NotEnoughReplicas
sample.stop(meterRegistry.timer("kafka.producer.send.failure"));
meterRegistry.counter("kafka.producer.send.errors").increment();
return HealthResult.unhealthy("Send failed: " + e.getCause().getMessage());
} catch (InterruptedException e) {
// Thread interrupted (e.g. application shutdown)
Thread.currentThread().interrupt();
return HealthResult.unhealthy("Interrupted");
}
}
/**
 * Wrapper for the health check result
 */
public static class HealthResult {
public final boolean healthy; // healthy or not
public final String message; // details (for logs/alerts)
private HealthResult(boolean healthy, String msg) {
this.healthy = healthy;
this.message = msg;
}
public static HealthResult healthy(String msg) {
return new HealthResult(true, msg);
}
public static HealthResult unhealthy(String msg) {
return new HealthResult(false, msg);
}
}
}
// Initialization
MeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
KafkaHealthChecker checker = new KafkaHealthChecker("kafka:9092", "health-check-topic", registry);
// Periodic check
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
scheduler.scheduleAtFixedRate(() -> {
HealthResult result = checker.check();
if (!result.healthy) {
// Fire an alert (e.g. write a log, call a webhook)
System.err.println("KAFKA UNHEALTHY: " + result.message);
}
}, 0, 30, TimeUnit.SECONDS);
2. RabbitMQ: stuck on publisher confirms or flow control
- Publisher confirm mode: once enabled, the producer waits for the broker to return basic.ack
- If the broker is busy (disk full, memory pressure), acks lag → the producer blocks
- Flow control: when memory/disk limits are exceeded, RabbitMQ actively stops accepting new messages
- TCP-level backpressure → the send() system call blocks
📌 Key point: RabbitMQ's "slowness" is deliberate throttling triggered by resource overload.
Solutions
1) Async confirms + timeout
channel.confirmSelect();
channel.addConfirmListener(
(seq, mult) -> {/* ack */},
(seq, mult) -> {/* nack → DLQ */}
);
// Returns right after publishing; no waiting
channel.basicPublish(...);
2) Batch, to amortize confirm overhead
for (int i = 0; i < 1000; i++) {
channel.basicPublish(...);
}
channel.waitForConfirms(5000); // wait up to 5s for all acks
3) Monitor flow control
RabbitMQ Management API: mem_used and disk_free in /api/nodes
Alert when flow_control = true
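As a sketch of that monitoring step: poll /api/nodes and flag nodes in alarm state or low on disk. The field names (mem_alarm, disk_free_alarm, disk_free) follow the Management API's node objects; the URL, credentials, and helper names here are placeholders:

```python
import json
import urllib.request

def pick_unhealthy_nodes(nodes, disk_free_limit=2 * 1024**3):
    """Given the JSON list from /api/nodes, return names of nodes to alert on."""
    bad = []
    for n in nodes:
        if n.get("mem_alarm") or n.get("disk_free_alarm"):
            bad.append(n["name"])           # broker already throttling publishers
        elif n.get("disk_free", disk_free_limit + 1) <= disk_free_limit:
            bad.append(n["name"])           # about to hit the disk watermark
    return bad

def fetch_nodes(base_url="http://localhost:15672", user="guest", password="guest"):
    """Fetch /api/nodes (placeholder URL/credentials; wire in your own)."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, base_url, user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
    with opener.open(base_url + "/api/nodes", timeout=5) as resp:
        return json.loads(resp.read())
```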
4) Prevention
# rabbitmq.conf: raise the thresholds
vm_memory_high_watermark.relative = 0.8
disk_free_limit.absolute = 2GB
For low-value traffic, skip persistence: MessageProperties.NON_PERSISTENT
Health check
"""
RabbitMQ Producer 健康检查器
设计原则:
1. 使用 non-persistent 消息(避免触发磁盘刷盘)
2. 开启 confirm 模式但异步等待(避免阻塞)
3. 显式设置 basic_publish 的 mandatory=True(检测路由失败)
!!!若不用 confirm:无法知道消息是否真正入队
!!! 若用持久化:磁盘慢会导致健康检查误报
"""
import pika
import time
import json
from prometheus_client import Counter, Histogram, start_http_server
# Prometheus metric definitions
SEND_LATENCY = Histogram(
'rabbitmq_producer_send_latency_seconds',
'Time spent sending a health check message'
)
SEND_ERRORS = Counter(
'rabbitmq_producer_send_errors_total',
'Total number of send errors'
)
class RabbitMQHealthChecker:
    """
    RabbitMQ health checker
    :param url: AMQP connection URL (e.g. "amqp://user:pass@host:5672/vhost")
    :param exchange: exchange name (a dedicated health-exchange is recommended)
    :param routing_key: routing key (make sure a queue is bound to it)
    """
    def __init__(self, url: str, exchange: str, routing_key: str):
        self.url = url
        self.exchange = exchange
        self.routing_key = routing_key
        self.connection = None
        self.channel = None
    def _connect(self):
        """
        Establish the RabbitMQ connection (idempotent)
        Note: pika.BlockingConnection raises on connection failure;
        the outer check() catches it and records the error metric
        """
        # Reuse an existing live connection
        if self.connection and not self.connection.is_closed:
            return
        # Create a new connection
        params = pika.URLParameters(self.url)
        self.connection = pika.BlockingConnection(params)
        self.channel = self.connection.channel()
        # [Key] Enable publisher confirm mode
        # Without it there is no way to confirm the message was enqueued
        self.channel.confirm_delivery()
    def check(self) -> dict:
        """
        Run the health check
        :return: dict with keys 'healthy' (bool) and 'message' (str)
        """
        start = time.time()
        try:
            # Connect (may raise)
            self._connect()
            # Build the probe message
            msg = f"health-{int(time.time())}"
            # [Key configs]
            # delivery_mode=1 → non-persistent (memory only, no disk write)
            # mandatory=True → on routing failure (no matching queue), the broker returns basic.return
            self.channel.basic_publish(
                exchange=self.exchange,
                routing_key=self.routing_key,
                body=msg,
                properties=pika.BasicProperties(delivery_mode=1),
                mandatory=True
            )
            # Compute and report latency
            latency = time.time() - start
            SEND_LATENCY.observe(latency)
            return {"healthy": True, "message": f"Latency: {latency:.3f}s"}
        except pika.exceptions.UnroutableError as e:
            # Raised when mandatory=True and no queue matches
            SEND_ERRORS.inc()
            return {"healthy": False, "message": f"Message unroutable: {e}"}
        except Exception as e:
            # Everything else: connection failure, confirm timeout, etc.
            SEND_ERRORS.inc()
            return {"healthy": False, "message": f"Send failed: {str(e)}"}
    def close(self):
        """Close the connection (call on application exit)"""
        if self.connection and not self.connection.is_closed:
            self.connection.close()
3. RocketMQ: stuck on sync flush or a failing broker
- Sync flush (SYNC_FLUSH): the producer waits for the message to hit disk before returning
- A slow disk → TPS craters
- Failing brokers are not isolated: fault avoidance is off by default, so the producer keeps sending to slow nodes
📌 Key point: RocketMQ's "slowness" usually stems from an overly conservative storage policy.
Remedies
1) Async flush
// broker.conf
flushDiskType = ASYNC_FLUSH
2) Fault avoidance
DefaultMQProducer producer = new DefaultMQProducer();
producer.setSendLatencyFaultEnable(true); // enable
producer.setNotAvailableDuration(new long[]{0, 0, 30000, 60000}); // skip the broker for 30s after a fault
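For intuition: fault avoidance maps each observed send latency to a "don't pick this broker" duration via two parallel arrays, scanning the thresholds from high to low. A Python sketch modeled loosely on RocketMQ's MQFaultStrategy (the array values here are illustrative, not authoritative defaults):

```python
LATENCY_MAX_MS = [50, 100, 550, 1000]
NOT_AVAILABLE_MS = [0, 0, 30_000, 60_000]

def not_available_duration(latency_ms):
    """Scan thresholds from largest to smallest; the first one the observed
    latency reaches decides how long the broker is avoided."""
    for i in range(len(LATENCY_MAX_MS) - 1, -1, -1):
        if latency_ms >= LATENCY_MAX_MS[i]:
            return NOT_AVAILABLE_MS[i]
    return 0  # fast enough: broker stays in rotation
```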
3) Timeout
producer.setSendMsgTimeout(4000); // default is 3s; raise it if the business allows
4) Prevention
1. Monitor putMessageDistributeTime (write-latency distribution)
2. Master/slave deployment: at least 1 master + 1 slave, no single point of failure
Health check
/**
 * RocketMQ producer health checker
 *
 * Design principles:
 * 1. Enable sendLatencyFaultEnable (fault avoidance)
 * 2. Set sendMsgTimeout (guard against slow broker responses)
 * 3. Use a dedicated topic (don't disturb business traffic)
 *
 * Without fault avoidance, the producer keeps sending to a slow broker and TPS craters
 */
import org.apache.rocketmq.client.exception.MQClientException;
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.remoting.exception.RemotingException;
public class RocketMQHealthChecker {
private final DefaultMQProducer producer;
private final String topic;
/**
 * Constructor
 *
 * @param namesrvAddr NameServer addresses (e.g. "ns1:9876;ns2:9876")
 * @param topic dedicated health check topic
 */
public RocketMQHealthChecker(String namesrvAddr, String topic) {
this.topic = topic;
this.producer = new DefaultMQProducer("HealthCheckerGroup");
producer.setNamesrvAddr(namesrvAddr);
// [Key] Send timeout in milliseconds
// Defaults to 3000ms, which may not be enough when broker disks are slow
producer.setSendMsgTimeout(3000);
// [Required] Enable fault avoidance
// When a broker responds slowly, skip it for a while automatically
producer.setSendLatencyFaultEnable(true);
// Optional: customize the avoidance windows (in milliseconds)
// producer.setLatencyMax(new long[]{50L, 100L, 200L, 500L, 1000L, 2000L, 5000L});
// producer.setNotAvailableDuration(new long[]{0L, 0L, 30000L, 60000L, 120000L, 180000L, 600000L});
try {
producer.start();
} catch (MQClientException e) {
throw new RuntimeException("Failed to start RocketMQ producer", e);
}
}
/**
 * Run the health check
 *
 * @return HealthResult
 */
public HealthResult check() {
try {
long start = System.currentTimeMillis();
// Build the probe message
Message msg = new Message(
topic,
"HealthTag",
("health-" + start).getBytes()
);
// [Synchronous send] a health check only needs one attempt,
// and an async send here would make the exception harder to capture
producer.send(msg);
long latency = System.currentTimeMillis() - start;
return HealthResult.healthy("Latency: " + latency + "ms");
} catch (Exception e) {
// Catch everything: BrokerNotAvailable, Timeout, Remoting errors, etc.
return HealthResult.unhealthy("Send failed: " + e.getMessage());
}
}
/**
 * Shut down the producer (call on application exit)
 */
public void shutdown() {
producer.shutdown();
}
public static class HealthResult {
public final boolean healthy;
public final String message;
private HealthResult(boolean healthy, String msg) {
this.healthy = healthy; this.message = msg;
}
public static HealthResult healthy(String msg) { return new HealthResult(true, msg); }
public static HealthResult unhealthy(String msg) { return new HealthResult(false, msg); }
}
}
4. Pulsar: stuck on BookKeeper writes
- Pulsar separates serving from storage: brokers only handle routing; data is written to BookKeeper
- If most nodes in the BookKeeper ensemble respond slowly (disk, network), writes time out
📌 Key point: Pulsar's "slowness" is a storage-layer (BookKeeper) bottleneck.
Remedies
1) Tune the BookKeeper quorum
// client.conf
bookkeeperAckQuorum=2 // default 2 (with a 3-node ensemble)
bookkeeperEnsembleSize=3
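Why the quorum matters for latency: an entry is written to the write quorum of bookies, but the client is acked as soon as the ack quorum responds, so the slack between the two is how many slow bookies a single write can ride out. A toy calculation (my own framing, not a Pulsar API):

```python
def tolerable_slow_bookies(write_quorum, ack_quorum):
    """Each entry goes to write_quorum bookies but completes after
    ack_quorum acks, so the rest may lag without stalling the write."""
    if not 1 <= ack_quorum <= write_quorum:
        raise ValueError("need 1 <= ack_quorum <= write_quorum")
    return write_quorum - ack_quorum
```

With a write quorum of 3 and an ack quorum of 2, one laggard per write is tolerated; lowering the ack quorum trades durability guarantees for latency.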
2) Timeout
producerBuilder.sendTimeout(30, TimeUnit.SECONDS);
3) Monitoring
Metrics: bookie_write_bytes, journal_sync_time
Alert when journal_sync_time > 10ms
4) 💉 Prevention
Deploy BookKeeper on SSDs
Put the journal and the ledger storage on separate disks
Health check
"""
Pulsar Producer 健康检查器
设计原则:
1. 设置 send_timeout_millis(防 BookKeeper 写入慢)
2. 使用 block_if_queue_full=True(避免内存溢出)
3. 专用 Topic(避免污染业务)
若不设 send_timeout:BookKeeper ensemble 响应慢会导致 send() 永久阻塞
"""
import pulsar
from prometheus_client import Histogram, Counter
import time
# Prometheus metrics
SEND_LATENCY = Histogram(
'pulsar_producer_send_latency_seconds',
'Time spent sending a health check message'
)
SEND_ERRORS = Counter(
'pulsar_producer_send_errors_total',
'Total number of send errors'
)
class PulsarHealthChecker:
    """
    Pulsar health checker
    :param service_url: Pulsar service URL (e.g. "pulsar://broker:6650")
    :param topic: dedicated health check topic (e.g. "persistent://public/default/health")
    """
    def __init__(self, service_url: str, topic: str):
        # Create the Pulsar client
        self.client = pulsar.Client(service_url)
        # [Key configs]
        # send_timeout_millis: send timeout in ms; the default 30000 (30s) is far too long!
        # block_if_queue_full: block (rather than drop) when the internal queue is full, preventing OOM
        self.producer = self.client.create_producer(
            topic,
            send_timeout_millis=5000,  # 5s timeout (tune per business)
            block_if_queue_full=True
        )
    def check(self) -> dict:
        """
        Run the health check
        :return: dict with 'healthy' and 'message'
        """
        start = time.time()
        try:
            # Build the probe message
            msg = f"health-{int(time.time())}"
            # [Synchronous send] send() blocks and relies on send_timeout_millis to bail out
            self.producer.send(msg.encode('utf-8'))
            # Report latency
            latency = time.time() - start
            SEND_LATENCY.observe(latency)
            return {"healthy": True, "message": f"Latency: {latency:.3f}s"}
        except Exception as e:
            # Catch everything: Timeout, ConnectionError, Schema errors, etc.
            SEND_ERRORS.inc()
            return {"healthy": False, "message": f"Send failed: {str(e)}"}
    def close(self):
        """Release resources"""
        self.producer.close()
        self.client.close()
Wrap-up
| Dimension | Kafka | RabbitMQ | RocketMQ | Pulsar |
|---|---|---|---|---|
| Root cause of slowness | Waiting for replica acks | Flow control / confirms | Sync flush | BookKeeper latency |
| Core fix | Relax acks + timeout | Async confirms | Async flush + fault avoidance | Tune quorum |
| DLQ required? | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
| Monitoring focus | UnderReplicatedPartitions | flow_control | putMessageDistributeTime | journal_sync_time |
"Kafka is waiting for its brothers to nod,
RabbitMQ got stopped at the door by the butler,
RocketMQ writes its characters a little too neatly,
Pulsar's parcel hub is backed up.
But every MQ is telling you the same thing, friend: don't make me wait forever. Set a timeout, route to the DLQ, leave yourself a way out!"
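And if a client gives you no timeout knob at all, you can still enforce that closing line yourself: run the blocking send on a worker thread and bound the wait. A generic sketch, where `send` is a stand-in for any client's blocking send:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

_pool = ThreadPoolExecutor(max_workers=4)

def send_with_deadline(send, msg, timeout_s=5.0):
    """Never wait forever: bound any blocking send with a hard deadline.
    Note: on timeout the underlying send may still complete later, so
    pair this with idempotent consumers or de-duplication."""
    future = _pool.submit(send, msg)
    try:
        return future.result(timeout=timeout_s)
    except FutureTimeout:
        future.cancel()  # best effort; the worker thread may keep running
        raise TimeoutError(f"send of {msg!r} exceeded {timeout_s}s deadline")
```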