Guaranteeing That WeChat Bulk Messages Are Neither Duplicated Nor Lost with Kafka Exactly-Once Semantics
Reliability Requirements of the WeChat Bulk-Send Scenario
In enterprise services, when WeChat bulk-send tasks (such as notifications or marketing messages) are triggered asynchronously through Kafka, two guarantees are required:
- No loss: every pending message is processed at least once;
- No duplication: even if the consumer restarts or retries, each message is ultimately sent only once.
Kafka has supported Exactly-Once Semantics (EOS) since version 0.11; combining the idempotent producer, transactions, and atomic consumer-offset commits meets both requirements.
Enabling Producer Idempotence and Transactions
Configure the KafkaProducer for transactions:
java
package wlkankan.cn.kafka.config;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import java.util.HashMap;
import java.util.Map;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public DefaultKafkaProducerFactory<String, String> kafkaProducerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Idempotence is required for transactions (it is implied once a transactional id is set)
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);

        DefaultKafkaProducerFactory<String, String> factory =
                new DefaultKafkaProducerFactory<>(props);
        // In Spring Kafka, setting a transaction id prefix is what enables transactions;
        // a unique suffix is appended per producer instance, so there is no need to also
        // put transactional.id into the raw properties
        factory.setTransactionIdPrefix("wecom-tx-");
        return factory;
    }
}

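Both the sending service and the consumer configuration below inject a KafkaTemplate, and the @Transactional producer path needs a KafkaTransactionManager; neither bean is shown above. A minimal sketch (the class name and wiring are my assumptions, not part of the original) could be:
java
package wlkankan.cn.kafka.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.transaction.KafkaTransactionManager;

@Configuration
public class KafkaTemplateConfig {

    // Template backed by the transactional producer factory defined above
    @Bean
    public KafkaTemplate<String, String> kafkaTemplate(
            DefaultKafkaProducerFactory<String, String> kafkaProducerFactory) {
        return new KafkaTemplate<>(kafkaProducerFactory);
    }

    // Lets @Transactional producer methods (e.g. MessageQueueService) run inside a Kafka transaction
    @Bean
    public KafkaTransactionManager<String, String> kafkaTransactionManager(
            DefaultKafkaProducerFactory<String, String> kafkaProducerFactory) {
        return new KafkaTransactionManager<>(kafkaProducerFactory);
    }
}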
Defining the Message Entity and the Sending Service
Encapsulate a WeChat bulk-send task:
java
package wlkankan.cn.model;

public class WeComMessage {
    private String msgId;    // globally unique ID, used for idempotency
    private String content;
    private String receiver;
    // getters and setters omitted for brevity
}
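The sending service and the listener below call a JsonUtil helper that the original does not show. A minimal sketch based on Jackson's ObjectMapper (the package location and implementation are assumptions) might look like:
java
package wlkankan.cn.util;

import com.fasterxml.jackson.databind.ObjectMapper;

// Assumed helper: a thin wrapper around Jackson's ObjectMapper
public final class JsonUtil {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    private JsonUtil() {
    }

    public static String toJson(Object value) {
        try {
            return MAPPER.writeValueAsString(value);
        } catch (Exception e) {
            throw new IllegalStateException("Failed to serialize to JSON", e);
        }
    }

    public static <T> T fromJson(String json, Class<T> type) {
        try {
            return MAPPER.readValue(json, type);
        } catch (Exception e) {
            throw new IllegalStateException("Failed to parse JSON", e);
        }
    }
}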
Send messages to wecom.send.queue within a transaction:
java
package wlkankan.cn.service;

import wlkankan.cn.model.WeComMessage;
import wlkankan.cn.util.JsonUtil; // assumed location of the project's JSON helper
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MessageQueueService {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public MessageQueueService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Runs inside a Kafka transaction, bound to the KafkaTransactionManager bean
    @Transactional
    public void enqueueWeComMessage(WeComMessage msg) {
        // Serialize the message body to JSON
        String payload = JsonUtil.toJson(msg);
        // Use msgId as the record key so records with the same id stay ordered within a partition
        kafkaTemplate.send("wecom.send.queue", msg.getMsgId(), payload);
    }
}
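The transactional producer pays off when a whole bulk-send task must be enqueued atomically: either every record is visible to read_committed consumers, or none is. The sketch below uses KafkaTemplate.executeInTransaction; the BulkEnqueueService class and its batch loop are illustrative additions, not part of the original design:
java
package wlkankan.cn.service;

import wlkankan.cn.model.WeComMessage;
import wlkankan.cn.util.JsonUtil; // assumed location of the project's JSON helper
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import java.util.List;

@Service
public class BulkEnqueueService {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public BulkEnqueueService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // If any send fails, the whole Kafka transaction is aborted and consumers running
    // with read_committed never see a partial batch
    public void enqueueBatch(List<WeComMessage> batch) {
        kafkaTemplate.executeInTransaction(ops -> {
            for (WeComMessage msg : batch) {
                ops.send("wecom.send.queue", msg.getMsgId(), JsonUtil.toJson(msg));
            }
            return null;
        });
    }
}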
Consumer Side: Transactional Listening and Idempotent Sending
Configure the consumer to disable auto-commit and read only committed records; offsets are then committed through the Kafka transaction rather than separately:
java
package wlkankan.cn.kafka.config;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.transaction.KafkaTransactionManager;
import java.util.HashMap;
import java.util.Map;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "wecom-sender-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Auto-commit must be disabled; offsets are committed via the Kafka transaction
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        // Read only records from committed transactions (required for end-to-end exactly-once)
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory,
            KafkaTemplate<String, String> kafkaTemplate) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Key step for exactly-once: run each listener invocation inside a Kafka transaction.
        // The container sends the consumed offsets to that transaction, so no manual
        // acknowledgment mode is needed.
        factory.getContainerProperties().setTransactionManager(
                new KafkaTransactionManager<>(kafkaTemplate.getProducerFactory()));
        return factory;
    }
}
An Idempotent WeChat Send Client
Even with Kafka's exactly-once guarantees, business-level idempotency is still required, because the WeChat API call is an external side effect that a Kafka transaction cannot roll back:
java
package wlkankan.cn.wx.client;

import wlkankan.cn.dao.SentMessageRecordDao;
import wlkankan.cn.model.WeComMessage;
import org.springframework.stereotype.Component;

@Component
public class IdempotentWeComClient {

    private final SentMessageRecordDao recordDao;
    private final RawWeComApiClient apiClient;

    public IdempotentWeComClient(SentMessageRecordDao recordDao, RawWeComApiClient apiClient) {
        this.recordDao = recordDao;
        this.apiClient = apiClient;
    }

    public void send(WeComMessage msg) {
        // First check the database to see whether this message was already sent
        if (recordDao.existsByMsgId(msg.getMsgId())) {
            return; // already sent, skip
        }
        // Call the WeChat API
        apiClient.sendMessage(msg.getReceiver(), msg.getContent());
        // Record the successful send; the primary key on msg_id rejects concurrent duplicates
        recordDao.insert(msg.getMsgId(), System.currentTimeMillis());
    }
}
A Transactional Consumer Listener
Handle "consume + WeChat send + offset commit" in a single listener invocation that runs inside one Kafka transaction:
java
package wlkankan.cn.listener;

import wlkankan.cn.model.WeComMessage;
import wlkankan.cn.util.JsonUtil; // assumed location of the project's JSON helper
import wlkankan.cn.wx.client.IdempotentWeComClient;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class WeComMessageListener {

    private final IdempotentWeComClient weComClient;

    public WeComMessageListener(IdempotentWeComClient weComClient) {
        this.weComClient = weComClient;
    }

    // The listener container already runs this method inside a Kafka transaction
    // (see kafkaListenerContainerFactory), so no extra @Transactional is needed here
    @KafkaListener(topics = "wecom.send.queue")
    public void onMessage(String payload) {
        WeComMessage msg = JsonUtil.fromJson(payload, WeComMessage.class);
        // Perform the send inside the Kafka transaction context
        weComClient.send(msg);
        // If an exception is thrown here, the transaction is aborted, the offset is not
        // committed, and the record is redelivered
    }
}
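One practical detail: a record that keeps throwing will be redelivered indefinitely after each aborted transaction. Spring Kafka's after-rollback processor can cap retries and route the record to a dead-letter topic. The snippet below sketches the extra wiring inside the kafkaListenerContainerFactory bean shown earlier; the retry count is illustrative, and DeadLetterPublishingRecoverer publishes to the source topic name plus a ".DLT" suffix by default:
java
// Additional imports for KafkaConsumerConfig:
// import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
// import org.springframework.kafka.listener.DefaultAfterRollbackProcessor;
// import org.springframework.util.backoff.FixedBackOff;

// Inside kafkaListenerContainerFactory(...), after setConsumerFactory(...):
// retry a failed record twice more, then publish it to wecom.send.queue.DLT
DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate);
factory.setAfterRollbackProcessor(
        new DefaultAfterRollbackProcessor<>(recoverer, new FixedBackOff(1000L, 2L)));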
Database Table Supporting Idempotency
sql
CREATE TABLE sent_message_record (
    msg_id  VARCHAR(64) PRIMARY KEY,
    sent_at BIGINT NOT NULL
);
-- The primary key on msg_id naturally prevents duplicate inserts
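The SentMessageRecordDao used by IdempotentWeComClient is referenced but never shown. A minimal sketch backed by Spring's JdbcTemplate and the table above (the class shape and SQL statements are my assumptions) could be:
java
package wlkankan.cn.dao;

import org.springframework.dao.DuplicateKeyException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

@Repository
public class SentMessageRecordDao {

    private final JdbcTemplate jdbcTemplate;

    public SentMessageRecordDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public boolean existsByMsgId(String msgId) {
        Integer count = jdbcTemplate.queryForObject(
                "SELECT COUNT(*) FROM sent_message_record WHERE msg_id = ?", Integer.class, msgId);
        return count != null && count > 0;
    }

    public void insert(String msgId, long sentAt) {
        try {
            jdbcTemplate.update(
                    "INSERT INTO sent_message_record (msg_id, sent_at) VALUES (?, ?)", msgId, sentAt);
        } catch (DuplicateKeyException e) {
            // Another consumer instance recorded the same msgId first; treat it as already sent
        }
    }
}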
Through three layers of protection (transactional production, transactional consumption, and business-level idempotency), WeChat bulk messages achieve effectively exactly-once delivery even under Kafka cluster failures, consumer restarts, and network jitter. All core components live under the wlkankan.cn package, in keeping with enterprise-grade design standards for highly reliable messaging systems.