Design Document: Private Message Sending, Read-Status Sync, and Message History Caching for a Social Platform (Spring Boot + RabbitMQ + Redis + MySQL)

Table of Contents

    • 1. Why This Stack? Working Backwards from Business Pain Points
    • 2. Overall Architecture: The Full Chain from Send to Read
    • 3. Core Feature Implementation: Code Plus Hard-Won Experience
      • 3.1 Step 1: Message Sending (Guaranteeing No Lost Messages)
        • 3.1.1 RabbitMQ Configuration
        • 3.1.2 Message Sending Service
        • 3.1.3 Message Consumer Service (Persisting to MySQL)
      • 3.2 Step 2: Read-Status Sync (Redis + MySQL Dual-Write Consistency)
        • 3.2.1 Read-Status Update Service
        • 3.2.2 Read-Status Consumer Service
      • 3.3 Step 3: Message History Queries (Cache Strategy Optimization)
        • 3.3.1 Message Query Service
      • 3.4 Database Design
        • 3.4.1 Message Table
        • 3.4.2 Conversation Table
    • 4. Performance Optimization and Disaster Recovery
      • 4.1 Performance Optimization Strategies
        • 4.1.1 Batch Operations
        • 4.1.2 Message Preloading
      • 4.2 Disaster Recovery
        • 4.2.1 RabbitMQ Failure Degradation
        • 4.2.2 Redis Failure Degradation
      • 4.3 Monitoring and Alerting
        • 4.3.1 Queue Backlog Monitoring
        • 4.3.2 Send Success-Rate Monitoring
    • 5. Summary and Best Practices
      • 5.1 Key Takeaways
      • 5.2 Production Notes
      • 5.3 Extended Design and Implementation

1. Why This Stack? Working Backwards from Business Pain Points

Before building a private messaging system, be clear about what hurts most on the business side. Eight years of experience tell me users have exactly three core expectations for DMs: no lost messages, real-time status, and fast retrieval. In technical terms: reliability, consistency, and performance.

Let's first compare the mainstream technology combinations:

| Option | Strengths | Weaknesses | Best Fit |
| --- | --- | --- | --- |
| MySQL only | Simple to implement | Slow writes under high concurrency; lock contention on read-status updates | Small platforms under 100K DAU |
| Redis + MySQL | Fast reads, easy status updates | Sending blocks synchronously; prone to collapse at peak load | Medium traffic with infrequent sends |
| Kafka + MySQL | Extremely high throughput | Heavy architecture; read-status sync is awkward | Weibo-style "DM broadcast" scenarios |
| RabbitMQ + Redis + MySQL | Reliable async delivery, real-time status, read/write separation | More components, slightly higher ops cost | Social-platform DMs (core needs: reliability + real-time + fast queries) |

We ultimately chose Spring Boot + RabbitMQ + Redis + MySQL precisely because it addresses all three core pain points:

  • RabbitMQ: ensures messages are not lost (publisher confirms + dead-letter queue), and sends asynchronously without blocking the request thread
  • Redis: millisecond-level read-status updates (Hash structure) and caching of recent messages (List + ZSet)
  • MySQL: durable storage of message history with support for complex queries (by time range, by user, etc.)
  • Spring Boot: wires the components together quickly with minimal boilerplate

You might ask, "Why not use Kafka instead of RabbitMQ?" In a DM scenario, messages are mostly point-to-point, so Kafka's broadcast-oriented strengths go unused. RabbitMQ's exchange types (Direct/Topic) map more naturally onto DM routing rules, and its per-message acknowledgment is more flexible.

2. Overall Architecture: The Full Chain from Send to Read

Here is the architecture flow, to give you an intuitive picture of the whole pipeline:

User A sends a DM → Spring Boot application → permission check
    ↓
Generate message ID (snowflake algorithm) → publish to RabbitMQ (direct exchange for DMs)
    ↓
Route to queue (user B's DM queue) ← write Redis cache first (unread-message List + read-status Hash)
    ↓
Message consumer → persist to MySQL (message table) → update Redis counter (increment unread count)
    ↓
User B opens the DM → read from cache → mark as read (update Redis Hash + publish read receipt to RabbitMQ)
    ↓
Asynchronously update MySQL
    ↓
Load message history → check Redis first (cache hit) / on miss → query MySQL + backfill Redis

The flow breaks down into three core chains:

  1. Send chain: client → API → permission check → async write to RabbitMQ → temporary storage in Redis → persistence in MySQL
  2. Read-status sync chain: read message → mark read in Redis → async notification via RabbitMQ → update MySQL
  3. History query chain: check the Redis cache first → query MySQL on a miss → backfill the results into Redis

The benefits of this design:

  • Sending goes through RabbitMQ asynchronously, so users never wait on persistence
  • Read status hits Redis first for real-time feedback, then syncs to MySQL asynchronously
  • Hot data (recent messages) lives in Redis, cold data in MySQL, balancing performance against storage cost
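The flow above generates message IDs with a snowflake algorithm. The article never shows the `SnowflakeIdGenerator` it calls later, so here is a minimal illustrative sketch (class and constants are assumptions, not the production implementation): a snowflake ID packs a millisecond timestamp, a worker ID, and a per-millisecond sequence into one 64-bit long, so IDs are unique per node and roughly time-ordered.

```java
public class SnowflakeSketch {
    private static final long EPOCH = 1700000000000L; // custom epoch (assumed value)
    private final long workerId;  // 10 bits: which node generated the ID
    private long lastTimestamp = -1L;
    private long sequence = 0L;   // 12 bits: counter within a single millisecond

    public SnowflakeSketch(long workerId) {
        if (workerId < 0 || workerId > 1023) {
            throw new IllegalArgumentException("workerId out of range");
        }
        this.workerId = workerId;
    }

    // Layout: 41 bits timestamp | 10 bits worker | 12 bits sequence
    public synchronized long nextId() {
        long now = System.currentTimeMillis();
        if (now == lastTimestamp) {
            sequence = (sequence + 1) & 0xFFF; // wrap within the same millisecond
            if (sequence == 0) {
                // sequence exhausted for this millisecond: spin until the next one
                while (now <= lastTimestamp) now = System.currentTimeMillis();
            }
        } else {
            sequence = 0;
        }
        lastTimestamp = now;
        return ((now - EPOCH) << 22) | (workerId << 12) | sequence;
    }

    public static void main(String[] args) {
        SnowflakeSketch gen = new SnowflakeSketch(1);
        long a = gen.nextId();
        long b = gen.nextId();
        System.out.println(a < b); // true: IDs are strictly increasing on one node
    }
}
```

Because the timestamp occupies the high bits, sorting messages by ID also sorts them by send time, which is why the design can use the ID as both primary key and ordering key.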

3. Core Feature Implementation: Code Plus Hard-Won Experience

3.1 Step 1: Message Sending (Guaranteeing No Lost Messages)

The cardinal sin of a DM system is losing messages: user A sends one, user B never receives it, and trust is gone. We address this with RabbitMQ publisher confirms plus a dead-letter queue.

3.1.1 RabbitMQ Configuration
Java:
@Configuration
@Slf4j
public class RabbitMqConfig {
    // DM exchange (direct type, point-to-point routing)
    public static final String MSG_EXCHANGE = "private_msg_exchange";
    // Dead-letter exchange
    public static final String DEAD_LETTER_EXCHANGE = "msg_dead_letter_exchange";

    @Autowired
    private MessageRetryService messageRetryService;
    @Autowired
    private AmqpAdmin amqpAdmin;

    // Declare the DM exchange
    @Bean
    public DirectExchange msgExchange() {
        // durable, not auto-delete
        return ExchangeBuilder.directExchange(MSG_EXCHANGE).durable(true).build();
    }

    @Bean
    public FanoutExchange deadLetterExchange() {
        // Fanout rather than direct: a direct exchange matches routing keys literally,
        // so a "dead_*" binding would never match "dead_10086". Fanout collects all
        // dead letters into one queue regardless of routing key.
        return ExchangeBuilder.fanoutExchange(DEAD_LETTER_EXCHANGE).durable(true).build();
    }

    // Build a per-user DM queue (queue name: private_msg_queue_{userId})
    public static Queue buildUserMsgQueue(Long userId) {
        // Dead-letter arguments
        Map<String, Object> args = new HashMap<>();
        // Message TTL: 10 minutes (10*60*1000 ms)
        args.put("x-message-ttl", 600000);
        // Dead-letter exchange
        args.put("x-dead-letter-exchange", DEAD_LETTER_EXCHANGE);
        // Dead-letter routing key (carries the user ID)
        args.put("x-dead-letter-routing-key", "dead_" + userId);

        // e.g. private_msg_queue_10086
        return QueueBuilder.durable("private_msg_queue_" + userId)
                .withArguments(args)
                .build();
    }

    // Per-user queues cannot be @Bean methods (Spring cannot supply a userId parameter
    // at startup), so declare them at runtime instead, e.g. on a user's first login
    public void declareUserQueue(Long userId) {
        Queue queue = buildUserMsgQueue(userId);
        amqpAdmin.declareQueue(queue);
        // Bind the queue to the DM exchange, routing key = receiver's user ID
        amqpAdmin.declareBinding(BindingBuilder.bind(queue)
                .to(msgExchange())
                .with(String.valueOf(userId)));
    }

    // Dead-letter queue (collects messages that expired or failed consumption)
    @Bean
    public Queue deadLetterQueue() {
        return QueueBuilder.durable("msg_dead_letter_queue").build();
    }

    // Bind the dead-letter queue (fanout: no routing key needed)
    @Bean
    public Binding bindDeadLetterQueue() {
        return BindingBuilder.bind(deadLetterQueue()).to(deadLetterExchange());
    }

    // Publisher-confirm configuration
    // (also requires spring.rabbitmq.publisher-confirm-type=correlated and
    // spring.rabbitmq.publisher-returns=true on the connection factory)
    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        // mandatory=true is required for the returns callback to fire
        rabbitTemplate.setMandatory(true);
        // Confirm callback: did the message reach the exchange?
        rabbitTemplate.setConfirmCallback((correlationData, ack, cause) -> {
            if (!ack) {
                log.error("Message failed to reach the exchange, cause: {}", cause);
                // On failure: record it for a later retry
                if (correlationData != null) {
                    String msgId = correlationData.getId();
                    messageRetryService.recordFailedMsg(msgId, cause);
                }
            }
        });
        // Returns callback: did the message reach a queue?
        rabbitTemplate.setReturnsCallback(returnedMessage -> {
            log.error("Message was not routed to any queue, body: {}, reason: {}",
                    new String(returnedMessage.getMessage().getBody()),
                    returnedMessage.getReplyText());
            // Handle the same way as above
        });
        return rabbitTemplate;
    }
}
3.1.2 Message Sending Service
Java:
@Service
@Slf4j
public class MessageSendService {
    @Autowired
    private RabbitTemplate rabbitTemplate;
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private MessageMapper messageMapper;

    // Redis key for the unread-message list (one List per user); %s is the receiver ID
    private static final String UNREAD_MSG_KEY = "private:msg:unread:%s";

    /**
     * Send a private message.
     * No @Transactional here: the happy path only touches Redis and MQ,
     * neither of which joins a local database transaction.
     */
    public void sendMessage(MessageDTO dto) {
        // 1. Generate a unique message ID (snowflake algorithm)
        Long msgId = SnowflakeIdGenerator.generateId();
        // 2. Build the message entity
        Message message = new Message();
        message.setId(msgId);
        message.setSenderId(dto.getSenderId());
        message.setReceiverId(dto.getReceiverId());
        message.setContent(dto.getContent());
        message.setSendTime(LocalDateTime.now());
        message.setReadStatus(0); // 0 = unread, 1 = read

        try {
            // 3. Write Redis first (so the receiver sees the new message immediately)
            String unreadKey = String.format(UNREAD_MSG_KEY, dto.getReceiverId());
            redisTemplate.opsForList().leftPush(unreadKey, message);
            // TTL: 7 days (messages unread after 7 days are still persisted in MySQL)
            redisTemplate.expire(unreadKey, 7, TimeUnit.DAYS);

            // 4. Publish to RabbitMQ (async persistence into MySQL)
            // The correlationId lets the confirm callback identify this message
            CorrelationData correlationData = new CorrelationData(msgId.toString());
            // Routing key = receiver ID, so the message lands in the right queue
            rabbitTemplate.convertAndSend(
                    RabbitMqConfig.MSG_EXCHANGE,
                    String.valueOf(dto.getReceiverId()),
                    message,
                    correlationData
            );

            log.info("Message sent, msgId: {}", msgId);
        } catch (Exception e) {
            log.error("Message send failed, falling back to a direct MySQL write", e);
            // Worst case (Redis and MQ both down): write MySQL directly so nothing is lost.
            // Since the fallback insert succeeds, do NOT rethrow here; telling the user
            // to retry after the message was already saved would create duplicates.
            messageMapper.insert(message);
        }
    }
}
3.1.3 Message Consumer Service (Persisting to MySQL)
Java:
@Component
@Slf4j
public class MessageConsumer {
    @Autowired
    private MessageMapper messageMapper;
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    // Redis key for the unread-message counter; %s is the receiver ID
    private static final String UNREAD_COUNT_KEY = "private:msg:unread:count:%s";

    /**
     * Consume a message and persist it to MySQL.
     * Demo only: the queue name is hardcoded for user 10086. In production you would
     * register listeners dynamically (or use a shared queue) rather than one
     * annotation per user.
     */
    @RabbitListener(queuesToDeclare = @Queue(
            value = "private_msg_queue_10086",
            durable = "true"
    ))
    public void consumeMessage(Message message, Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) long tag) throws IOException {
        try {
            // 1. Persist to MySQL
            messageMapper.insert(message);

            // 2. Bump the unread counter
            String countKey = String.format(UNREAD_COUNT_KEY, message.getReceiverId());
            redisTemplate.opsForValue().increment(countKey);

            // 3. Ack manually
            channel.basicAck(tag, false);
            log.info("Message persisted, msgId: {}", message.getId());
        } catch (Exception e) {
            log.error("Message consumption failed, msgId: {}", message.getId(), e);
            // On failure: requeue up to 3 times, then reject into the dead-letter queue
            if (incrementRetryCount(message) < 3) {
                channel.basicNack(tag, false, true);
            } else {
                channel.basicNack(tag, false, false);
            }
        }
    }

    private long incrementRetryCount(Message message) {
        // Track retries in Redis. Note: the counter must be incremented here;
        // merely reading it (as the original did) would retry forever.
        String key = "msg:retry:" + message.getId();
        Long count = redisTemplate.opsForValue().increment(key);
        redisTemplate.expire(key, 1, TimeUnit.HOURS);
        return count == null ? 0 : count;
    }
}
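To make the redelivery policy above concrete, here is a broker-free sketch (plain Java; in-memory deques stand in for the RabbitMQ queue and the DLQ, a map stands in for the Redis retry counter, and all names are illustrative) of the "requeue up to 3 times, then dead-letter" behavior:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

public class RetryPolicyDemo {
    /**
     * Simulates a consumer that always fails: requeue while the retry counter
     * stays below maxRetries, then reject into the dead-letter queue.
     * Returns the number of delivery attempts the message went through.
     */
    static int attemptsUntilDeadLetter(int maxRetries) {
        ArrayDeque<String> queue = new ArrayDeque<>();      // stands in for the DM queue
        ArrayDeque<String> deadLetter = new ArrayDeque<>(); // stands in for the DLQ
        Map<String, Integer> retries = new HashMap<>();     // stands in for the Redis counter
        queue.add("msg-1");

        while (!queue.isEmpty()) {
            String msg = queue.poll();
            int n = retries.merge(msg, 1, Integer::sum); // count this delivery attempt
            if (n < maxRetries) {
                queue.add(msg);      // basicNack(requeue=true)
            } else {
                deadLetter.add(msg); // basicNack(requeue=false): routed to the DLX
            }
        }
        return retries.get("msg-1");
    }

    public static void main(String[] args) {
        System.out.println(attemptsUntilDeadLetter(3)); // 3 attempts, then dead-lettered
    }
}
```

The key property: the loop always terminates, because every failing message eventually crosses the retry cap and leaves the work queue for the DLQ.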

3.2 Step 2: Read-Status Sync (Redis + MySQL Dual-Write Consistency)

Read status is the highest-frequency write in a DM system: every message a user opens triggers a status update. Writing straight to MySQL would create heavy lock contention under concurrency, so we update Redis first and sync to MySQL asynchronously.
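Section 3.2's pattern (update the cache immediately, persist asynchronously) is a classic write-behind. A broker-free sketch, with a map standing in for the Redis Hash, a queue standing in for RabbitMQ, and a second map standing in for MySQL (all names illustrative):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

public class WriteBehindDemo {
    final Map<Long, Long> cache = new HashMap<>(); // "Redis": msgId -> readTime
    final Map<Long, Long> db = new HashMap<>();    // "MySQL": msgId -> readTime
    final Queue<Long> mq = new ArrayDeque<>();     // "RabbitMQ": pending msgIds

    // Fast path: update the cache and enqueue; the caller returns immediately
    void markAsRead(long msgId, long readTime) {
        cache.put(msgId, readTime);
        mq.add(msgId);
    }

    // Async path: a consumer drains the queue into the durable store
    void drain() {
        Long msgId;
        while ((msgId = mq.poll()) != null) {
            db.put(msgId, cache.get(msgId));
        }
    }

    public static void main(String[] args) {
        WriteBehindDemo demo = new WriteBehindDemo();
        demo.markAsRead(42L, 1000L);
        System.out.println(demo.db.containsKey(42L));   // false: the DB lags the cache
        demo.drain();
        System.out.println(demo.cache.equals(demo.db)); // true: eventually consistent
    }
}
```

The window between `markAsRead` and `drain` is exactly the "brief divergence" the production notes in section 5.2 warn about: reads served from the cache are fresh, while MySQL catches up asynchronously.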

3.2.1 Read-Status Update Service
Java:
@Service
@Slf4j
public class MessageReadService {
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private RabbitTemplate rabbitTemplate;

    // Redis key for read status (Hash: key=userId, field=msgId, value=readTime); %s is the receiver ID
    private static final String READ_STATUS_KEY = "private:msg:read:%s";
    // Exchange for read-status updates
    private static final String READ_UPDATE_EXCHANGE = "msg_read_update_exchange";

    /**
     * Mark a single message as read.
     */
    public void markAsRead(Long userId, Long msgId) {
        try {
            // 1. Update Redis immediately (real-time feedback)
            String readKey = String.format(READ_STATUS_KEY, userId);
            redisTemplate.opsForHash().put(readKey, msgId.toString(), System.currentTimeMillis());

            // 2. Decrement the unread counter
            String countKey = String.format("private:msg:unread:count:%s", userId);
            redisTemplate.opsForValue().decrement(countKey);

            // 3. Publish to RabbitMQ (async MySQL update)
            ReadStatusDTO dto = new ReadStatusDTO(userId, msgId, LocalDateTime.now());
            rabbitTemplate.convertAndSend(READ_UPDATE_EXCHANGE, "read", dto);

            log.info("Message marked as read, userId: {}, msgId: {}", userId, msgId);
        } catch (Exception e) {
            log.error("Failed to mark message as read", e);
            throw new BusinessException("Operation failed, please retry");
        }
    }

    /**
     * Mark a batch of messages as read.
     */
    public void batchMarkAsRead(Long userId, List<Long> msgIds) {
        try {
            String readKey = String.format(READ_STATUS_KEY, userId);
            Map<String, Object> batchData = new HashMap<>();
            long currentTime = System.currentTimeMillis();

            for (Long msgId : msgIds) {
                batchData.put(msgId.toString(), currentTime);
            }

            // Batch-update Redis
            redisTemplate.opsForHash().putAll(readKey, batchData);

            // Decrement the unread counter by the batch size (negative delta)
            String countKey = String.format("private:msg:unread:count:%s", userId);
            redisTemplate.opsForValue().increment(countKey, -msgIds.size());

            // Publish one batched event to MQ
            ReadStatusBatchDTO dto = new ReadStatusBatchDTO(userId, msgIds, LocalDateTime.now());
            rabbitTemplate.convertAndSend(READ_UPDATE_EXCHANGE, "read.batch", dto);

            log.info("Batch mark-as-read done, userId: {}, count: {}", userId, msgIds.size());
        } catch (Exception e) {
            log.error("Batch mark-as-read failed", e);
            throw new BusinessException("Operation failed, please retry");
        }
    }
}
3.2.2 Read-Status Consumer Service
Java:
@Component
@Slf4j
public class ReadStatusConsumer {
    @Autowired
    private MessageMapper messageMapper;

    /**
     * Consume single read-status updates.
     * The binding is declared here so the queue actually receives what the producer
     * publishes to msg_read_update_exchange with key "read" (declaring only the
     * queue, as the original did, would leave it bound to nothing).
     */
    @RabbitListener(bindings = @QueueBinding(
            value = @Queue(value = "msg_read_update_queue", durable = "true"),
            exchange = @Exchange(value = "msg_read_update_exchange", type = ExchangeTypes.TOPIC),
            key = "read"
    ))
    public void consumeReadStatus(ReadStatusDTO dto) {
        try {
            // Update the read status in MySQL
            messageMapper.updateReadStatus(dto.getUserId(), dto.getMsgId(), dto.getReadTime());
            log.info("Read status synced to MySQL, userId: {}, msgId: {}", dto.getUserId(), dto.getMsgId());
        } catch (Exception e) {
            log.error("Read-status sync failed", e);
            // Retry on failure (ideally combined with a dead-letter queue)
        }
    }

    /**
     * Consume batched read-status updates.
     */
    @RabbitListener(bindings = @QueueBinding(
            value = @Queue(value = "msg_read_update_batch_queue", durable = "true"),
            exchange = @Exchange(value = "msg_read_update_exchange", type = ExchangeTypes.TOPIC),
            key = "read.batch"
    ))
    public void consumeReadStatusBatch(ReadStatusBatchDTO dto) {
        try {
            // Batch-update MySQL
            messageMapper.batchUpdateReadStatus(dto.getUserId(), dto.getMsgIds(), dto.getReadTime());
            log.info("Batched read status synced to MySQL, userId: {}, count: {}", dto.getUserId(), dto.getMsgIds().size());
        } catch (Exception e) {
            log.error("Batched read-status sync failed", e);
        }
    }
}

3.3 Step 3: Message History Queries (Cache Strategy Optimization)

When a user opens a chat history, how do we keep queries both fast and correct? The answer: hot data in Redis, cold data in MySQL.
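Two small pieces of arithmetic recur in the query service below: normalizing the cache key so both participants of a conversation map to the same entry, and converting a 1-based (pageNo, pageSize) pair into the inclusive index bounds a ZREVRANGE expects. A sketch (method names are illustrative):

```java
public class QueryMathDemo {
    // Order the two user IDs so A->B and B->A share one cache key
    static String conversationKey(long userId, long targetUserId) {
        return String.format("private:msg:recent:%d:%d",
                Math.min(userId, targetUserId), Math.max(userId, targetUserId));
    }

    // 1-based page -> inclusive [start, end] indices for a reverse range query
    static long[] pageBounds(int pageNo, int pageSize) {
        long start = (long) (pageNo - 1) * pageSize;
        return new long[]{start, start + pageSize - 1};
    }

    public static void main(String[] args) {
        System.out.println(conversationKey(10086, 42)); // private:msg:recent:42:10086
        System.out.println(conversationKey(42, 10086)); // same key from either side
        long[] b = pageBounds(2, 20);
        System.out.println(b[0] + "," + b[1]);          // 20,39
    }
}
```

Without the min/max normalization, the two directions of a conversation would populate two separate caches, halving the hit rate and doubling memory.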

3.3.1 Message Query Service
Java:
@Service
@Slf4j
public class MessageQueryService {
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private MessageMapper messageMapper;

    // Recent-message cache (ZSet, ordered by timestamp); the two %s are the smaller and larger user ID
    private static final String RECENT_MSG_KEY = "private:msg:recent:%s:%s";

    /**
     * Query chat history (paginated).
     */
    public List<Message> queryMessages(Long userId, Long targetUserId, int pageNo, int pageSize) {
        // Normalize the key so both participants share one cache entry
        String cacheKey = String.format(RECENT_MSG_KEY,
                Math.min(userId, targetUserId),
                Math.max(userId, targetUserId));

        try {
            // 1. Check the Redis cache first (holds the most recent 100 messages)
            Set<Object> cachedMsgs = redisTemplate.opsForZSet().reverseRange(
                    cacheKey,
                    (pageNo - 1) * pageSize,
                    pageNo * pageSize - 1
            );

            if (cachedMsgs != null && !cachedMsgs.isEmpty()) {
                log.info("Cache hit, userId: {}, targetUserId: {}", userId, targetUserId);
                return cachedMsgs.stream()
                        .map(obj -> (Message) obj)
                        .collect(Collectors.toList());
            }

            // 2. Cache miss: query MySQL
            log.info("Cache miss, querying MySQL, userId: {}, targetUserId: {}", userId, targetUserId);
            List<Message> messages = messageMapper.queryMessagesByUsers(
                    userId,
                    targetUserId,
                    (pageNo - 1) * pageSize,
                    pageSize
            );

            // 3. Backfill the Redis cache
            if (!messages.isEmpty()) {
                // RedisTemplate's ZSet add takes a set of TypedTuples
                // (the original passed parallel value/score collections, which is not a real API)
                Set<ZSetOperations.TypedTuple<Object>> tuples = new HashSet<>();
                for (Message msg : messages) {
                    tuples.add(ZSetOperations.TypedTuple.of(
                            (Object) msg, (double) msg.getSendTime().toEpochSecond(ZoneOffset.UTC)));
                }
                redisTemplate.opsForZSet().add(cacheKey, tuples);
                // Keep only the most recent 100 entries
                redisTemplate.opsForZSet().removeRange(cacheKey, 0, -101);
                // TTL: 1 day
                redisTemplate.expire(cacheKey, 1, TimeUnit.DAYS);
            }

            return messages;
        } catch (Exception e) {
            log.error("Message query failed", e);
            // Degrade: query MySQL directly
            return messageMapper.queryMessagesByUsers(userId, targetUserId, (pageNo - 1) * pageSize, pageSize);
        }
    }

    /**
     * Get the unread-message count.
     */
    public Integer getUnreadCount(Long userId) {
        String countKey = String.format("private:msg:unread:count:%s", userId);
        Object count = redisTemplate.opsForValue().get(countKey);
        // The counter may deserialize as Integer or Long depending on the serializer
        return count == null ? 0 : ((Number) count).intValue();
    }

    /**
     * Get the conversation list (recent contacts).
     */
    public List<ConversationVO> getConversations(Long userId) {
        // Try the Redis conversation list first (most recent 20 conversations)
        String conversationKey = "private:msg:conversation:" + userId;
        Set<Object> conversations = redisTemplate.opsForZSet().reverseRange(conversationKey, 0, 19);

        if (conversations != null && !conversations.isEmpty()) {
            return conversations.stream()
                    .map(obj -> (ConversationVO) obj)
                    .collect(Collectors.toList());
        }

        // Cache miss: query MySQL
        List<ConversationVO> conversationList = messageMapper.queryConversations(userId);

        // Backfill the cache
        if (!conversationList.isEmpty()) {
            Set<ZSetOperations.TypedTuple<Object>> tuples = new HashSet<>();
            for (ConversationVO vo : conversationList) {
                tuples.add(ZSetOperations.TypedTuple.of(
                        (Object) vo, (double) vo.getLastMessageTime().toEpochSecond(ZoneOffset.UTC)));
            }
            redisTemplate.opsForZSet().add(conversationKey, tuples);
            redisTemplate.expire(conversationKey, 1, TimeUnit.DAYS);
        }

        return conversationList;
    }
}

3.4 Database Design

3.4.1 Message Table
SQL:
CREATE TABLE `private_message` (
    `id` BIGINT NOT NULL COMMENT 'Message ID (snowflake)',
    `sender_id` BIGINT NOT NULL COMMENT 'Sender ID',
    `receiver_id` BIGINT NOT NULL COMMENT 'Receiver ID',
    `content` TEXT NOT NULL COMMENT 'Message body',
    `message_type` TINYINT DEFAULT 1 COMMENT 'Type: 1=text, 2=image, 3=voice, 4=video',
    `send_time` DATETIME NOT NULL COMMENT 'Send time',
    `read_status` TINYINT DEFAULT 0 COMMENT 'Read status: 0=unread, 1=read',
    `read_time` DATETIME DEFAULT NULL COMMENT 'Read time',
    `deleted_by_sender` TINYINT DEFAULT 0 COMMENT 'Deleted by sender: 0=no, 1=yes',
    `deleted_by_receiver` TINYINT DEFAULT 0 COMMENT 'Deleted by receiver: 0=no, 1=yes',
    `create_time` DATETIME DEFAULT CURRENT_TIMESTAMP,
    `update_time` DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    PRIMARY KEY (`id`),
    KEY `idx_sender_receiver` (`sender_id`, `receiver_id`, `send_time`),
    KEY `idx_receiver_read` (`receiver_id`, `read_status`, `send_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='Private message table';
3.4.2 Conversation Table
SQL:
CREATE TABLE `private_conversation` (
    `id` BIGINT NOT NULL AUTO_INCREMENT,
    `user_id` BIGINT NOT NULL COMMENT 'User ID',
    `target_user_id` BIGINT NOT NULL COMMENT 'Counterpart user ID',
    `last_message_id` BIGINT DEFAULT NULL COMMENT 'ID of the latest message',
    `last_message_content` VARCHAR(500) DEFAULT NULL COMMENT 'Content of the latest message',
    `last_message_time` DATETIME DEFAULT NULL COMMENT 'Time of the latest message',
    `unread_count` INT DEFAULT 0 COMMENT 'Unread-message count',
    `create_time` DATETIME DEFAULT CURRENT_TIMESTAMP,
    `update_time` DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    PRIMARY KEY (`id`),
    UNIQUE KEY `uk_user_target` (`user_id`, `target_user_id`),
    KEY `idx_user_time` (`user_id`, `last_message_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='Private conversation table';

4. Performance Optimization and Disaster Recovery

4.1 Performance Optimization Strategies

4.1.1 Batch Operations
Java:
@Service
public class MessageBatchService {
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    /**
     * Push a batch of messages into Redis in a single pipelined round-trip.
     */
    public void batchPushMessages(List<Message> messages) {
        redisTemplate.executePipelined(new SessionCallback<Object>() {
            @Override
            public Object execute(RedisOperations operations) throws DataAccessException {
                for (Message msg : messages) {
                    String key = String.format("private:msg:unread:%s", msg.getReceiverId());
                    operations.opsForList().leftPush(key, msg);
                }
                // Pipelined callbacks must return null; the template collects the results
                return null;
            }
        });
    }
}
4.1.2 Message Preloading
Java:
@Service
@Slf4j
public class MessagePreloadService {
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private MessageMapper messageMapper;

    /**
     * Preload active users' recent chat history into Redis.
     */
    @Scheduled(cron = "0 */30 * * * ?") // every 30 minutes
    public void preloadRecentMessages() {
        // Fetch the active-user list (users who logged in within the last hour)
        List<Long> activeUserIds = getActiveUsers();

        for (Long userId : activeUserIds) {
            try {
                // Look up the user's recent conversations
                List<ConversationVO> conversations = messageMapper.queryConversations(userId);

                for (ConversationVO conversation : conversations) {
                    String cacheKey = String.format("private:msg:recent:%s:%s",
                            Math.min(userId, conversation.getTargetUserId()),
                            Math.max(userId, conversation.getTargetUserId()));

                    // Preload the most recent 100 messages
                    List<Message> messages = messageMapper.queryMessagesByUsers(
                            userId, conversation.getTargetUserId(), 0, 100);

                    if (!messages.isEmpty()) {
                        // Same TypedTuple form as in MessageQueryService
                        Set<ZSetOperations.TypedTuple<Object>> tuples = new HashSet<>();
                        for (Message msg : messages) {
                            tuples.add(ZSetOperations.TypedTuple.of(
                                    (Object) msg, (double) msg.getSendTime().toEpochSecond(ZoneOffset.UTC)));
                        }
                        redisTemplate.opsForZSet().add(cacheKey, tuples);
                        redisTemplate.expire(cacheKey, 1, TimeUnit.DAYS);
                    }
                }
            } catch (Exception e) {
                log.error("Preloading messages failed, userId: {}", userId, e);
            }
        }
    }

    private List<Long> getActiveUsers() {
        // Fetch the active-user list from Redis or the database
        return new ArrayList<>();
    }
}

4.2 Disaster Recovery

4.2.1 RabbitMQ Failure Degradation
Java:
@Service
@Slf4j
public class MessageSendServiceWithFallback {
    @Autowired
    private RabbitTemplate rabbitTemplate;
    @Autowired
    private MessageMapper messageMapper;

    /**
     * Send with a degradation path.
     */
    public void sendMessageWithFallback(MessageDTO dto) {
        try {
            // Try RabbitMQ first
            rabbitTemplate.convertAndSend(
                    RabbitMqConfig.MSG_EXCHANGE,
                    String.valueOf(dto.getReceiverId()),
                    dto
            );
        } catch (Exception e) {
            log.error("RabbitMQ send failed, degrading to a synchronous MySQL write", e);
            // Degrade: write MySQL directly
            Message message = buildMessage(dto);
            messageMapper.insert(message);
        }
    }

    private Message buildMessage(MessageDTO dto) {
        Message message = new Message();
        message.setId(SnowflakeIdGenerator.generateId());
        message.setSenderId(dto.getSenderId());
        message.setReceiverId(dto.getReceiverId());
        message.setContent(dto.getContent());
        message.setSendTime(LocalDateTime.now());
        message.setReadStatus(0);
        return message;
    }
}
4.2.2 Redis Failure Degradation
Java:
@Service
@Slf4j
public class MessageQueryServiceWithFallback {
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private MessageMapper messageMapper;

    /**
     * Query with a degradation path.
     */
    public List<Message> queryMessagesWithFallback(Long userId, Long targetUserId, int pageNo, int pageSize) {
        try {
            // Try Redis first
            return queryFromCache(userId, targetUserId, pageNo, pageSize);
        } catch (Exception e) {
            log.error("Redis query failed, degrading to a direct MySQL query", e);
            // Degrade: query MySQL directly
            return messageMapper.queryMessagesByUsers(
                    userId, targetUserId, (pageNo - 1) * pageSize, pageSize);
        }
    }

    private List<Message> queryFromCache(Long userId, Long targetUserId, int pageNo, int pageSize) {
        // Redis lookup logic (see MessageQueryService)
        return new ArrayList<>();
    }
}

4.3 Monitoring and Alerting

4.3.1 Queue Backlog Monitoring
Java:
@Component
@Slf4j
public class MessageQueueMonitor {
    @Autowired
    private RabbitAdmin rabbitAdmin;

    /**
     * Watch queue depth for backlogs.
     */
    @Scheduled(fixedRate = 60000) // every minute
    public void monitorQueueDepth() {
        try {
            Properties queueProperties = rabbitAdmin.getQueueProperties("msg_dead_letter_queue");
            if (queueProperties != null) {
                Integer messageCount = (Integer) queueProperties.get(RabbitAdmin.QUEUE_MESSAGE_COUNT);
                if (messageCount != null && messageCount > 1000) {
                    log.warn("Dead-letter queue backlog, count: {}", messageCount);
                    // Fire an alert (DingTalk, email, etc.)
                    sendAlert("Dead-letter queue backlog", messageCount);
                }
            }
        } catch (Exception e) {
            log.error("Queue monitoring failed", e);
        }
    }

    private void sendAlert(String title, int count) {
        // Alerting logic
    }
}
4.3.2 Send Success-Rate Monitoring
Java:
@Aspect
@Component
@Slf4j
public class MessageSendMonitorAspect {
    private static final AtomicLong SUCCESS_COUNT = new AtomicLong(0);
    private static final AtomicLong FAIL_COUNT = new AtomicLong(0);

    @Around("execution(* com.example.service.MessageSendService.sendMessage(..))")
    public Object monitorSendMessage(ProceedingJoinPoint joinPoint) throws Throwable {
        long startTime = System.currentTimeMillis();
        try {
            Object result = joinPoint.proceed();
            SUCCESS_COUNT.incrementAndGet();
            long duration = System.currentTimeMillis() - startTime;
            log.info("Message sent, took {} ms", duration);
            return result;
        } catch (Exception e) {
            FAIL_COUNT.incrementAndGet();
            log.error("Message send failed", e);
            throw e;
        }
    }

    @Scheduled(fixedRate = 300000) // report every 5 minutes
    public void reportStatistics() {
        long success = SUCCESS_COUNT.getAndSet(0);
        long fail = FAIL_COUNT.getAndSet(0);
        double successRate = success + fail == 0 ? 100.0 : (success * 100.0) / (success + fail);

        log.info("Send stats - success: {}, failed: {}, success rate: {}%", success, fail, successRate);

        if (successRate < 95.0) {
            sendAlert("Message send success rate too low", successRate);
        }
    }

    private void sendAlert(String title, double rate) {
        // Alerting logic
    }
}
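One edge case in the aspect's success-rate formula is worth pinning down: when no messages were sent in the window, 0/0 is defined as 100% so an idle period never fires the alert. A tiny self-contained sketch of the same formula (class name illustrative):

```java
public class SuccessRateDemo {
    // Mirrors the aspect's formula, including the empty-window convention
    static double successRate(long success, long fail) {
        return success + fail == 0 ? 100.0 : (success * 100.0) / (success + fail);
    }

    public static void main(String[] args) {
        System.out.println(successRate(0, 0));  // 100.0 (empty window: stays quiet)
        System.out.println(successRate(95, 5)); // 95.0 (right at the alert threshold)
        System.out.println(successRate(19, 1)); // 95.0 (same rate at lower volume)
    }
}
```

Note that at low volume a single failure moves the rate a lot (19 successes plus 1 failure already sits at the 95% threshold), so in practice you may also want a minimum-sample guard before alerting.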

5. Summary and Best Practices

5.1 Key Takeaways

  1. Message reliability

    • Use RabbitMQ publisher confirms to verify messages reach the exchange
    • Use the returns callback to verify messages are routed to a queue
    • Configure a dead-letter queue for failed messages
    • In the worst case, degrade to synchronous MySQL writes
  2. Status-sync optimization

    • Update Redis first for real-time feedback
    • Sync to MySQL asynchronously via RabbitMQ for eventual consistency
    • Store read status in a Hash to save space
    • Batch operations to cut network round-trips
  3. Query performance

    • Keep hot data in Redis (recent messages, conversation list)
    • Use ZSet ordering by timestamp to support pagination
    • Shard MySQL (tables/databases) for massive history volumes
    • Preload active users' chat history
  4. Disaster recovery and degradation

    • On RabbitMQ failure, degrade to synchronous MySQL writes
    • On Redis failure, query MySQL directly
    • Add monitoring and alerting on critical operations
    • Check the dead-letter queue regularly

5.2 Production Notes

  1. Message idempotency

    • Consumers must be idempotent to survive redelivery
    • Use the message ID as the uniqueness key
    • Record processed message IDs in Redis or the database
  2. Message ordering

    • If ordering matters, use the same routing key so messages share one queue
    • Or restrict the queue to a single consumer
    • A Redis List preserves insertion order by nature
  3. Data consistency

    • Redis and MySQL may diverge briefly
    • A scheduled reconciliation job can detect and repair drift
    • Truly critical writes (e.g. billing) should go straight to MySQL
  4. Load testing

    • Stress-test before launch to size the system
    • Watch RabbitMQ throughput and latency
    • Watch Redis memory usage and response times
    • Watch MySQL's concurrent write capacity
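The idempotency note above can be sketched without a broker: an in-memory set stands in for the "processed message IDs" record (in production this would be a Redis `SET key value NX EX ttl` or a unique database constraint; every name here is illustrative):

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentConsumerDemo {
    private final Set<Long> processed = new HashSet<>(); // stands in for Redis SETNX / unique index
    int inserts = 0;                                     // stands in for MySQL insert count

    /** Returns true if the message was handled, false if skipped as a duplicate. */
    boolean consume(long msgId) {
        if (!processed.add(msgId)) {
            return false; // already seen: this is a redelivery, skip the side effects
        }
        inserts++;        // safe to persist exactly once
        return true;
    }

    public static void main(String[] args) {
        IdempotentConsumerDemo c = new IdempotentConsumerDemo();
        System.out.println(c.consume(1001L)); // true: first delivery
        System.out.println(c.consume(1001L)); // false: redelivery deduplicated
        System.out.println(c.inserts);        // 1
    }
}
```

The check-then-record must be atomic under concurrency, which is exactly what SETNX or a unique index gives you and a plain HashSet does not; the sketch only shows the control flow.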

5.3 Extended Design and Implementation

5.3.1 Message Push (Real-Time Notification over WebSocket)

Integrate WebSocket to push new messages in real time, so users are notified the moment a message arrives.

WebSocket Configuration
Java:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        // Enable the simple in-memory broker for "/topic" and "/queue" destinations
        config.enableSimpleBroker("/topic", "/queue");
        // Prefix for messages sent from clients
        config.setApplicationDestinationPrefixes("/app");
        // Prefix for point-to-point (per-user) destinations
        config.setUserDestinationPrefix("/user");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // Register the STOMP endpoint
        registry.addEndpoint("/ws-message")
                .setAllowedOriginPatterns("*")
                .withSockJS(); // enable the SockJS fallback
    }

    /**
     * Configure the message converter for custom-object serialization.
     */
    @Override
    public boolean configureMessageConverters(List<MessageConverter> messageConverters) {
        MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
        converter.setObjectMapper(new ObjectMapper());
        messageConverters.add(converter);
        // Returning false keeps the default converters alongside ours
        return false;
    }
}
WebSocket Message Push Service
Java:
@Service
@Slf4j
public class WebSocketMessagePushService {
    @Autowired
    private SimpMessagingTemplate messagingTemplate;

    /**
     * Push a new message to a specific user.
     */
    public void pushNewMessage(Long userId, Message message) {
        try {
            // Build the push payload
            MessagePushDTO pushDTO = new MessagePushDTO();
            pushDTO.setType("NEW_MESSAGE");
            pushDTO.setData(message);
            pushDTO.setTimestamp(System.currentTimeMillis());

            // Send to the user's private queue
            messagingTemplate.convertAndSendToUser(
                    userId.toString(),
                    "/queue/messages",
                    pushDTO
            );

            log.info("WebSocket push OK, userId: {}, msgId: {}", userId, message.getId());
        } catch (Exception e) {
            log.error("WebSocket push failed", e);
        }
    }

    /**
     * Push a read-status update.
     */
    public void pushReadStatusUpdate(Long userId, List<Long> msgIds) {
        try {
            ReadStatusUpdateDTO updateDTO = new ReadStatusUpdateDTO();
            updateDTO.setType("READ_STATUS_UPDATE");
            updateDTO.setMsgIds(msgIds);
            updateDTO.setTimestamp(System.currentTimeMillis());

            messagingTemplate.convertAndSendToUser(
                    userId.toString(),
                    "/queue/read-status",
                    updateDTO
            );

            log.info("Read-status push OK, userId: {}, count: {}", userId, msgIds.size());
        } catch (Exception e) {
            log.error("Read-status push failed", e);
        }
    }

    /**
     * Push an unread-count change.
     */
    public void pushUnreadCountChange(Long userId, Integer unreadCount) {
        try {
            UnreadCountDTO countDTO = new UnreadCountDTO();
            countDTO.setType("UNREAD_COUNT_CHANGE");
            countDTO.setUnreadCount(unreadCount);
            countDTO.setTimestamp(System.currentTimeMillis());

            messagingTemplate.convertAndSendToUser(
                    userId.toString(),
                    "/queue/unread-count",
                    countDTO
            );

            log.info("Unread-count push OK, userId: {}, count: {}", userId, unreadCount);
        } catch (Exception e) {
            log.error("Unread-count push failed", e);
        }
    }
}
Wiring WebSocket into the Message Consumer
Java:
@Component
@Slf4j
public class MessageConsumerWithWebSocket {
    @Autowired
    private MessageMapper messageMapper;
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private WebSocketMessagePushService webSocketPushService;
    
    @RabbitListener(queuesToDeclare = @Queue(
            value = "#{T(com.example.config.RabbitMqConfig).getUserMsgQueue(10086).getName()}",
            durable = "true"
    ))
    public void consumeMessage(Message message, Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) long tag) throws IOException {
        try {
            // 1. Persist to MySQL
            messageMapper.insert(message);
            
            // 2. Update the unread-message counter
            String countKey = String.format("private:msg:unread:count:%s", message.getReceiverId());
            Long newCount = redisTemplate.opsForValue().increment(countKey);
            
            // 3. Push the new message over WebSocket
            webSocketPushService.pushNewMessage(message.getReceiverId(), message);
            
            // 4. Push the unread-count change
            webSocketPushService.pushUnreadCountChange(message.getReceiverId(), newCount.intValue());
            
            // 5. Manually ack the message
            channel.basicAck(tag, false);
            log.info("Message persisted and pushed, msgId:{}", message.getId());
        } catch (Exception e) {
            log.error("Message consumption failed, msgId:{}", message.getId(), e);
            // requeue=false: route the message to the dead-letter exchange instead
            // of requeueing forever (requeue=true loops on a poison message)
            channel.basicNack(tag, false, false);
        }
    }
}
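One detail the consumer above glosses over: RabbitMQ redelivers messages after a nack or a lost connection, so the MySQL insert and the counter increment should be idempotent. A minimal sketch of the dedup check, with illustrative names; a real deployment would use Redis `SET NX` with a TTL keyed by msgId rather than an in-memory set:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of consumer idempotency: only the first delivery of a msgId is
// processed; redeliveries are detected and skipped.
public class IdempotentConsumer {
    private final Set<Long> processed = ConcurrentHashMap.newKeySet();

    /** Returns true only the first time a given msgId is seen. */
    public boolean tryMarkProcessed(long msgId) {
        return processed.add(msgId);
    }

    public static void main(String[] args) {
        IdempotentConsumer c = new IdempotentConsumer();
        System.out.println(c.tryMarkProcessed(1001L)); // true: first delivery, do the insert
        System.out.println(c.tryMarkProcessed(1001L)); // false: redelivery, ack and skip
    }
}
```

In the listener, a `false` result would mean "ack without inserting", which keeps the unread counter from double-counting on redelivery.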
Offline message push service (JPush integration)
java
@Service
@Slf4j
public class OfflineMessagePushService {
    @Value("${jpush.app-key}")
    private String appKey;
    
    @Value("${jpush.master-secret}")
    private String masterSecret;
    
    // isUserOnline() below reads Redis, so the template must be injected
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    
    private JPushClient jpushClient;
    
    @PostConstruct
    public void init() {
        jpushClient = new JPushClient(masterSecret, appKey);
    }
    
    /**
     * Check whether the user is currently online
     */
    public boolean isUserOnline(Long userId) {
        String onlineKey = "user:online:" + userId;
        return Boolean.TRUE.equals(redisTemplate.hasKey(onlineKey));
    }
    
    /**
     * Push an offline message
     */
    public void pushOfflineMessage(Message message) {
        try {
            // Check whether the receiver is online
            if (isUserOnline(message.getReceiverId())) {
                log.info("User online, skipping offline push, userId:{}", message.getReceiverId());
                return;
            }
            
            // Build the push content
            String alias = "user_" + message.getReceiverId();
            String title = "New message";
            String content = buildPushContent(message);
            
            // Build the push payload
            PushPayload payload = PushPayload.newBuilder()
                    .setPlatform(Platform.all())
                    .setAudience(Audience.alias(alias))
                    .setNotification(Notification.newBuilder()
                            .setAlert(content)
                            .addPlatformNotification(IosNotification.newBuilder()
                                    .setAlert(content)
                                    .setBadge("+1")
                                    .setSound("default")
                                    .addExtra("msgId", message.getId().toString())
                                    .addExtra("senderId", message.getSenderId().toString())
                                    .build())
                            .addPlatformNotification(AndroidNotification.newBuilder()
                                    .setAlert(content)
                                    .setTitle(title)
                                    .addExtra("msgId", message.getId().toString())
                                    .addExtra("senderId", message.getSenderId().toString())
                                    .build())
                            .build())
                    .setOptions(Options.newBuilder()
                            .setApnsProduction(true) // production APNs environment
                            .build())
                    .build();
            
            // Send the push
            PushResult result = jpushClient.sendPush(payload);
            log.info("Offline push sent, userId:{}, msgId:{}, result:{}", 
                    message.getReceiverId(), message.getId(), result);
        } catch (APIConnectionException | APIRequestException e) {
            log.error("Offline push failed", e);
        }
    }
    
    /**
     * Build push content (truncated and filtered)
     */
    private String buildPushContent(Message message) {
        String content = message.getContent();
        // Truncate overly long content
        if (content.length() > 50) {
            content = content.substring(0, 50) + "...";
        }
        // Sensitive-word filtering, if required
        content = filterSensitiveWords(content);
        return content;
    }
    
    private String filterSensitiveWords(String content) {
        // Sensitive-word filtering logic goes here
        return content;
    }
}
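One caveat in `buildPushContent`: `String.substring` counts UTF-16 code units, so cutting at index 50 can split an emoji's surrogate pair and produce a broken character in the notification. A code-point-aware truncation sketch (class and method names are illustrative):

```java
// Truncates by Unicode code points rather than UTF-16 units, so a cut never
// lands in the middle of a surrogate pair (e.g. an emoji).
public class PushContentTruncator {
    public static String truncate(String content, int maxCodePoints) {
        if (content.codePointCount(0, content.length()) <= maxCodePoints) {
            return content;
        }
        // Find the UTF-16 index that corresponds to maxCodePoints code points
        int end = content.offsetByCodePoints(0, maxCodePoints);
        return content.substring(0, end) + "...";
    }

    public static void main(String[] args) {
        System.out.println(truncate("hello", 3)); // hel...
        System.out.println(truncate("hi", 3));    // hi
    }
}
```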
5.3.2 Message type extension: supporting multimedia messages

Extend the message types to support images, audio, and video.

File upload service (OSS integration)
java
@Service
@Slf4j
public class OssFileUploadService {
    @Value("${aliyun.oss.endpoint}")
    private String endpoint;
    
    @Value("${aliyun.oss.access-key-id}")
    private String accessKeyId;
    
    @Value("${aliyun.oss.access-key-secret}")
    private String accessKeySecret;
    
    @Value("${aliyun.oss.bucket-name}")
    private String bucketName;
    
    private OSS ossClient;
    
    @PostConstruct
    public void init() {
        ossClient = new OSSClientBuilder().build(endpoint, accessKeyId, accessKeySecret);
    }
    
    /**
     * Upload an image
     */
    public String uploadImage(MultipartFile file, Long userId) throws IOException {
        // Validate the content type
        String contentType = file.getContentType();
        if (!isValidImageType(contentType)) {
            throw new BusinessException("Unsupported image format");
        }
        
        // Validate the file size (10MB max)
        if (file.getSize() > 10 * 1024 * 1024) {
            throw new BusinessException("Image must not exceed 10MB");
        }
        
        // Generate the object path
        String fileName = generateFileName(userId, "image", getFileExtension(file.getOriginalFilename()));
        String objectKey = "private-message/images/" + fileName;
        
        // Upload the file
        ossClient.putObject(bucketName, objectKey, file.getInputStream());
        
        // Return the file URL
        String url = "https://" + bucketName + "." + endpoint + "/" + objectKey;
        log.info("Image uploaded, userId:{}, url:{}", userId, url);
        return url;
    }
    
    /**
     * Upload an audio clip
     */
    public String uploadAudio(MultipartFile file, Long userId) throws IOException {
        // Validate the content type
        String contentType = file.getContentType();
        if (!isValidAudioType(contentType)) {
            throw new BusinessException("Unsupported audio format");
        }
        
        // Validate the file size (5MB max)
        if (file.getSize() > 5 * 1024 * 1024) {
            throw new BusinessException("Audio must not exceed 5MB");
        }
        
        // Generate the object path
        String fileName = generateFileName(userId, "audio", getFileExtension(file.getOriginalFilename()));
        String objectKey = "private-message/audios/" + fileName;
        
        // Upload the file
        ossClient.putObject(bucketName, objectKey, file.getInputStream());
        
        // Return the file URL
        String url = "https://" + bucketName + "." + endpoint + "/" + objectKey;
        log.info("Audio uploaded, userId:{}, url:{}", userId, url);
        return url;
    }
    
    /**
     * Upload a video
     */
    public String uploadVideo(MultipartFile file, Long userId) throws IOException {
        // Validate the content type
        String contentType = file.getContentType();
        if (!isValidVideoType(contentType)) {
            throw new BusinessException("Unsupported video format");
        }
        
        // Validate the file size (100MB max)
        if (file.getSize() > 100 * 1024 * 1024) {
            throw new BusinessException("Video must not exceed 100MB");
        }
        
        // Generate the object path
        String fileName = generateFileName(userId, "video", getFileExtension(file.getOriginalFilename()));
        String objectKey = "private-message/videos/" + fileName;
        
        // Upload the file
        ossClient.putObject(bucketName, objectKey, file.getInputStream());
        
        // Return the file URL
        String url = "https://" + bucketName + "." + endpoint + "/" + objectKey;
        log.info("Video uploaded, userId:{}, url:{}", userId, url);
        return url;
    }
    
    /**
     * Generate a file name (snowflake ID + timestamp)
     */
    private String generateFileName(Long userId, String type, String extension) {
        return userId + "_" + type + "_" + 
               System.currentTimeMillis() + "_" + 
               SnowflakeIdGenerator.generateId() + "." + extension;
    }
    
    private String getFileExtension(String fileName) {
        if (fileName == null || !fileName.contains(".")) {
            return "";
        }
        return fileName.substring(fileName.lastIndexOf(".") + 1);
    }
    
    private boolean isValidImageType(String contentType) {
        return contentType != null && 
               (contentType.equals("image/jpeg") || 
                contentType.equals("image/png") || 
                contentType.equals("image/gif") ||
                contentType.equals("image/webp"));
    }
    
    private boolean isValidAudioType(String contentType) {
        return contentType != null && 
               (contentType.equals("audio/mpeg") || 
                contentType.equals("audio/mp4") || 
                contentType.equals("audio/amr"));
    }
    
    private boolean isValidVideoType(String contentType) {
        return contentType != null && 
               (contentType.equals("video/mp4") || 
                contentType.equals("video/avi") || 
                contentType.equals("video/quicktime"));
    }
}
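The three `isValidXType` checks can be collapsed into a single table-driven whitelist, which keeps the allowed MIME types in one place. A minimal sketch (the class name and map layout are illustrative; the MIME lists mirror the code above):

```java
import java.util.Map;
import java.util.Set;

// Table-driven content-type whitelist: one lookup replaces the three
// per-category validation methods.
public class MediaTypeWhitelist {
    private static final Map<String, Set<String>> ALLOWED = Map.of(
            "image", Set.of("image/jpeg", "image/png", "image/gif", "image/webp"),
            "audio", Set.of("audio/mpeg", "audio/mp4", "audio/amr"),
            "video", Set.of("video/mp4", "video/avi", "video/quicktime"));

    public static boolean isAllowed(String category, String contentType) {
        return contentType != null
                && ALLOWED.getOrDefault(category, Set.of()).contains(contentType);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("image", "image/png")); // true
        System.out.println(isAllowed("image", "image/bmp")); // false
    }
}
```

Adding a new format then becomes a one-line change to the map instead of a new branch in a validation method.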
Multimedia message sending service
java
@Service
@Slf4j
public class MultiMediaMessageService {
    @Autowired
    private OssFileUploadService ossFileUploadService;
    @Autowired
    private MessageSendService messageSendService;
    @Autowired
    private MessageMapper messageMapper;
    
    /**
     * Send an image message
     */
    @Transactional
    public void sendImageMessage(Long senderId, Long receiverId, MultipartFile imageFile) {
        try {
            // 1. Upload the image to OSS
            String imageUrl = ossFileUploadService.uploadImage(imageFile, senderId);
            
            // 2. Build the message
            MessageDTO dto = new MessageDTO();
            dto.setSenderId(senderId);
            dto.setReceiverId(receiverId);
            dto.setContent(imageUrl);
            dto.setMessageType(MessageType.IMAGE.getCode()); // 2 = image
            
            // 3. Collect media metadata (dimensions, size, etc.)
            MessageMedia media = new MessageMedia();
            media.setUrl(imageUrl);
            media.setFileSize(imageFile.getSize());
            media.setFileType("image");
            media.setOriginalName(imageFile.getOriginalFilename());
            
            // 4. Send the message (sendMessage is expected to assign the
            //    generated messageId back onto the DTO)
            messageSendService.sendMessage(dto);
            
            // 5. Save the media record
            Long msgId = dto.getMessageId();
            media.setMessageId(msgId);
            messageMapper.insertMessageMedia(media);
            
            log.info("Image message sent, msgId:{}, imageUrl:{}", msgId, imageUrl);
        } catch (IOException e) {
            log.error("Failed to send image message", e);
            throw new BusinessException("Image send failed, please retry");
        }
    }
    
    /**
     * Send an audio message
     */
    @Transactional
    public void sendAudioMessage(Long senderId, Long receiverId, MultipartFile audioFile, Integer duration) {
        try {
            // 1. Upload the audio to OSS
            String audioUrl = ossFileUploadService.uploadAudio(audioFile, senderId);
            
            // 2. Build the message
            MessageDTO dto = new MessageDTO();
            dto.setSenderId(senderId);
            dto.setReceiverId(receiverId);
            dto.setContent(audioUrl);
            dto.setMessageType(MessageType.AUDIO.getCode()); // 3 = audio
            
            // 3. Collect media metadata
            MessageMedia media = new MessageMedia();
            media.setUrl(audioUrl);
            media.setFileSize(audioFile.getSize());
            media.setFileType("audio");
            media.setDuration(duration);
            
            // 4. Send the message
            messageSendService.sendMessage(dto);
            
            // 5. Save the media record
            Long msgId = dto.getMessageId();
            media.setMessageId(msgId);
            messageMapper.insertMessageMedia(media);
            
            log.info("Audio message sent, msgId:{}, audioUrl:{}, duration:{}", msgId, audioUrl, duration);
        } catch (IOException e) {
            log.error("Failed to send audio message", e);
            throw new BusinessException("Audio send failed, please retry");
        }
    }
    
    /**
     * Send a video message
     */
    @Transactional
    public void sendVideoMessage(Long senderId, Long receiverId, MultipartFile videoFile, 
                                  MultipartFile thumbnail, Integer duration) {
        try {
            // 1. Upload the video to OSS
            String videoUrl = ossFileUploadService.uploadVideo(videoFile, senderId);
            
            // 2. Upload the thumbnail
            String thumbnailUrl = null;
            if (thumbnail != null) {
                thumbnailUrl = ossFileUploadService.uploadImage(thumbnail, senderId);
            }
            
            // 3. Build the message
            MessageDTO dto = new MessageDTO();
            dto.setSenderId(senderId);
            dto.setReceiverId(receiverId);
            dto.setContent(videoUrl);
            dto.setMessageType(MessageType.VIDEO.getCode()); // 4 = video
            
            // 4. Collect media metadata
            MessageMedia media = new MessageMedia();
            media.setUrl(videoUrl);
            media.setThumbnailUrl(thumbnailUrl);
            media.setFileSize(videoFile.getSize());
            media.setFileType("video");
            media.setDuration(duration);
            
            // 5. Send the message
            messageSendService.sendMessage(dto);
            
            // 6. Save the media record
            Long msgId = dto.getMessageId();
            media.setMessageId(msgId);
            messageMapper.insertMessageMedia(media);
            
            log.info("Video message sent, msgId:{}, videoUrl:{}", msgId, videoUrl);
        } catch (IOException e) {
            log.error("Failed to send video message", e);
            throw new BusinessException("Video send failed, please retry");
        }
    }
}
Message media table design
sql
CREATE TABLE `message_media` (
    `id` BIGINT NOT NULL AUTO_INCREMENT,
    `message_id` BIGINT NOT NULL COMMENT 'Message ID',
    `file_type` VARCHAR(20) NOT NULL COMMENT 'File type: image/audio/video',
    `url` VARCHAR(500) NOT NULL COMMENT 'File URL',
    `thumbnail_url` VARCHAR(500) DEFAULT NULL COMMENT 'Thumbnail URL (video)',
    `file_size` BIGINT DEFAULT NULL COMMENT 'File size in bytes',
    `duration` INT DEFAULT NULL COMMENT 'Duration in seconds (audio/video)',
    `width` INT DEFAULT NULL COMMENT 'Width (image/video)',
    `height` INT DEFAULT NULL COMMENT 'Height (image/video)',
    `original_name` VARCHAR(200) DEFAULT NULL COMMENT 'Original file name',
    `create_time` DATETIME DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (`id`),
    KEY `idx_message_id` (`message_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='Message media table';
5.3.3 Message security: encryption and moderation
Message content encryption
java
@Service
@Slf4j
public class MessageEncryptionService {
    @Value("${message.encryption.key}")
    private String encryptionKey; // must be exactly 16/24/32 bytes for AES-128/192/256

    private static final int GCM_IV_LENGTH = 12;
    private static final int GCM_TAG_BITS = 128;

    /**
     * Encrypt message content with AES-GCM. ECB mode is deliberately avoided:
     * it maps identical plaintext blocks to identical ciphertext and carries
     * no integrity check, while GCM gives a per-message IV plus an auth tag.
     */
    public String encryptContent(String content) {
        try {
            SecretKeySpec secretKey = new SecretKeySpec(encryptionKey.getBytes(StandardCharsets.UTF_8), "AES");
            byte[] iv = new byte[GCM_IV_LENGTH];
            new SecureRandom().nextBytes(iv); // fresh random IV per message
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, secretKey, new GCMParameterSpec(GCM_TAG_BITS, iv));
            byte[] encrypted = cipher.doFinal(content.getBytes(StandardCharsets.UTF_8));
            // Prepend the IV so decryption can recover it
            ByteBuffer buffer = ByteBuffer.allocate(GCM_IV_LENGTH + encrypted.length);
            buffer.put(iv).put(encrypted);
            return Base64.getEncoder().encodeToString(buffer.array());
        } catch (Exception e) {
            log.error("Message encryption failed", e);
            throw new BusinessException("Message processing failed");
        }
    }

    /**
     * Decrypt message content
     */
    public String decryptContent(String encryptedContent) {
        try {
            byte[] data = Base64.getDecoder().decode(encryptedContent);
            SecretKeySpec secretKey = new SecretKeySpec(encryptionKey.getBytes(StandardCharsets.UTF_8), "AES");
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, secretKey,
                    new GCMParameterSpec(GCM_TAG_BITS, Arrays.copyOfRange(data, 0, GCM_IV_LENGTH)));
            byte[] decrypted = cipher.doFinal(Arrays.copyOfRange(data, GCM_IV_LENGTH, data.length));
            return new String(decrypted, StandardCharsets.UTF_8);
        } catch (Exception e) {
            log.error("Message decryption failed", e);
            throw new BusinessException("Message processing failed");
        }
    }
}
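A self-contained round-trip demo of AES-GCM, the mode generally preferred over ECB for message content because every message gets a random IV and an integrity tag. The key literal here is a demo placeholder; a real key must be 16/24/32 bytes and come from secure configuration:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// AES-GCM round trip: 12-byte random IV prepended to the ciphertext,
// 128-bit authentication tag appended by the cipher itself.
public class AesGcmDemo {
    private static final int IV_LEN = 12;
    private static final int TAG_BITS = 128;

    public static String encrypt(String plain, byte[] key) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = cipher.doFinal(plain.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(
                ByteBuffer.allocate(IV_LEN + ct.length).put(iv).put(ct).array());
    }

    public static String decrypt(String encoded, byte[] key) throws Exception {
        byte[] data = Base64.getDecoder().decode(encoded);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(TAG_BITS, Arrays.copyOfRange(data, 0, IV_LEN)));
        byte[] pt = cipher.doFinal(Arrays.copyOfRange(data, IV_LEN, data.length));
        return new String(pt, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8); // 16-byte demo key
        String cipherText = encrypt("hello", key);
        System.out.println(decrypt(cipherText, key)); // hello
    }
}
```

Because the IV is random, two encryptions of the same plaintext produce different ciphertexts, and tampering with the ciphertext makes `doFinal` throw instead of returning garbage.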
Sensitive word filtering service
java
@Service
@Slf4j
public class SensitiveWordFilterService {
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    
    private static final String SENSITIVE_WORDS_KEY = "system:sensitive:words";
    
    /**
     * Initialize the sensitive-word library (load from the database into Redis)
     */
    @PostConstruct
    public void initSensitiveWords() {
        // Load sensitive words from the database
        List<String> sensitiveWords = loadSensitiveWordsFromDB();
        if (!sensitiveWords.isEmpty()) { // SADD with zero members is an error
            redisTemplate.opsForSet().add(SENSITIVE_WORDS_KEY, sensitiveWords.toArray());
        }
        log.info("Sensitive-word library initialized, {} words", sensitiveWords.size());
    }
    
    /**
     * Check whether the content contains a sensitive word
     */
    public boolean containsSensitiveWord(String content) {
        if (content == null || content.isEmpty()) {
            return false;
        }
        
        // Fetch the word set from Redis
        Set<Object> sensitiveWords = redisTemplate.opsForSet().members(SENSITIVE_WORDS_KEY);
        if (sensitiveWords == null || sensitiveWords.isEmpty()) {
            return false;
        }
        
        // Linear scan: fine for a small library, but O(words x length) overall
        for (Object word : sensitiveWords) {
            if (content.contains(word.toString())) {
                log.warn("Content contains sensitive word: {}", word);
                return true;
            }
        }
        
        return false;
    }
    
    /**
     * Filter sensitive words (replace each with *)
     */
    public String filterSensitiveWords(String content) {
        if (content == null || content.isEmpty()) {
            return content;
        }
        
        Set<Object> sensitiveWords = redisTemplate.opsForSet().members(SENSITIVE_WORDS_KEY);
        if (sensitiveWords == null || sensitiveWords.isEmpty()) {
            return content;
        }
        
        String filteredContent = content;
        for (Object word : sensitiveWords) {
            String wordStr = word.toString();
            if (filteredContent.contains(wordStr)) {
                String replacement = "*".repeat(wordStr.length());
                filteredContent = filteredContent.replace(wordStr, replacement);
            }
        }
        
        return filteredContent;
    }
    
    private List<String> loadSensitiveWordsFromDB() {
        // Load sensitive words from the database (stubbed here)
        return new ArrayList<>();
    }
}
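The `contains()` loops above re-walk the entire word set for every message, which degrades as the library grows. A trie cuts this to a single left-to-right pass over the content. A minimal sketch with illustrative names; a production filter would typically go further to full Aho-Corasick with failure links:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Trie-based word masking: one forward pass, longest match wins, matched
// spans are replaced with '*' of the same length.
public class TrieWordFilter {
    private static final class Node {
        final Map<Character, Node> next = new HashMap<>();
        boolean terminal; // true if a word ends at this node
    }

    private final Node root = new Node();

    public TrieWordFilter(List<String> words) {
        for (String w : words) {
            Node n = root;
            for (char c : w.toCharArray()) {
                n = n.next.computeIfAbsent(c, k -> new Node());
            }
            n.terminal = true;
        }
    }

    public String filter(String text) {
        char[] chars = text.toCharArray();
        for (int i = 0; i < chars.length; i++) {
            Node n = root;
            int matchEnd = -1;
            for (int j = i; j < chars.length; j++) {
                n = n.next.get(chars[j]);
                if (n == null) break;
                if (n.terminal) matchEnd = j; // remember the longest match
            }
            if (matchEnd >= 0) {
                for (int k = i; k <= matchEnd; k++) chars[k] = '*';
                i = matchEnd; // skip past the masked word
            }
        }
        return new String(chars);
    }

    public static void main(String[] args) {
        TrieWordFilter f = new TrieWordFilter(List.of("bad", "badge"));
        System.out.println(f.filter("a bad badge")); // a *** *****
    }
}
```

The trie would be rebuilt (or incrementally updated) whenever the word library in Redis changes, instead of being consulted word-by-word per message.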
Message moderation service (third-party content moderation)
java
@Service
@Slf4j
public class MessageAuditService {
    @Autowired
    private RestTemplate restTemplate;
    
    @Value("${aliyun.content.audit.access-key}")
    private String accessKey;
    
    @Value("${aliyun.content.audit.secret-key}")
    private String secretKey;
    
    /**
     * Moderate text content. The endpoint and request shape here are
     * illustrative; the real Aliyun Content Moderation API requires signed
     * requests via its SDK.
     */
    public AuditResult auditTextContent(String content) {
        try {
            // Build the moderation request
            Map<String, Object> request = new HashMap<>();
            request.put("content", content);
            request.put("accessKey", accessKey);
            
            // Call the moderation endpoint
            String response = restTemplate.postForObject(
                    "https://green.aliyuncs.com/text/scan",
                    request,
                    String.class
            );
            
            // Parse the result
            AuditResult result = parseAuditResponse(response);
            log.info("Text moderation finished, result: {}", result);
            return result;
        } catch (Exception e) {
            log.error("Text moderation failed", e);
            // Fail closed for text: block the message when the moderation
            // service is unavailable
            return new AuditResult(true, "Moderation service unavailable, please retry later");
        }
    }
    
    /**
     * Moderate image content
     */
    public AuditResult auditImageContent(String imageUrl) {
        try {
            Map<String, Object> request = new HashMap<>();
            request.put("imageUrl", imageUrl);
            request.put("accessKey", accessKey);
            
            String response = restTemplate.postForObject(
                    "https://green.aliyuncs.com/image/scan",
                    request,
                    String.class
            );
            
            AuditResult result = parseAuditResponse(response);
            log.info("Image moderation finished, result: {}", result);
            return result;
        } catch (Exception e) {
            log.error("Image moderation failed", e);
            // Fail open for images: let the message through on moderation errors
            return new AuditResult(false, null);
        }
    }
    
    private AuditResult parseAuditResponse(String response) {
        // Parse the moderation response (stubbed here)
        return new AuditResult(false, null);
    }
    
    @Data
    @AllArgsConstructor
    public static class AuditResult {
        private boolean blocked; // whether the content is blocked
        private String reason;   // block reason
    }
}
Periodic cleanup of deleted messages
java
@Component
@Slf4j
public class MessageCleanupScheduler {
    @Autowired
    private MessageMapper messageMapper;
    // Deleting OSS objects needs the client and bucket directly; this assumes
    // the OSS client is registered as a Spring bean
    @Autowired
    private OSS ossClient;
    @Value("${aliyun.oss.bucket-name}")
    private String bucketName;
    
    /**
     * Clean up messages deleted by both sides (runs daily at 2 a.m.)
     */
    @Scheduled(cron = "0 0 2 * * ?")
    public void cleanupDeletedMessages() {
        log.info("Starting deleted-message cleanup");
        try {
            // 1. Find messages deleted by both sides more than 30 days ago
            LocalDateTime beforeDate = LocalDateTime.now().minusDays(30);
            List<Message> deletedMessages = messageMapper.queryBothDeletedMessages(beforeDate);
            
            if (deletedMessages.isEmpty()) {
                log.info("No messages to clean up");
                return;
            }
            
            // 2. Delete associated media files
            for (Message message : deletedMessages) {
                if (message.getMessageType() != MessageType.TEXT.getCode()) {
                    // Look up the media record
                    MessageMedia media = messageMapper.queryMessageMedia(message.getId());
                    if (media != null) {
                        // Delete the OSS objects
                        deleteOssFile(media.getUrl());
                        if (media.getThumbnailUrl() != null) {
                            deleteOssFile(media.getThumbnailUrl());
                        }
                    }
                }
            }
            
            // 3. Delete the database rows
            List<Long> msgIds = deletedMessages.stream()
                    .map(Message::getId)
                    .collect(Collectors.toList());
            messageMapper.batchPhysicalDelete(msgIds);
            
            log.info("Cleanup finished, {} messages removed", msgIds.size());
        } catch (Exception e) {
            log.error("Message cleanup failed", e);
        }
    }
    
    private void deleteOssFile(String fileUrl) {
        try {
            // Extract the objectKey from the URL
            String objectKey = extractObjectKeyFromUrl(fileUrl);
            // Delete the OSS object
            ossClient.deleteObject(bucketName, objectKey);
            log.info("OSS object deleted: {}", objectKey);
        } catch (Exception e) {
            log.error("OSS object deletion failed: {}", fileUrl, e);
        }
    }
    
    private String extractObjectKeyFromUrl(String url) {
        // Extract the objectKey from the full URL, e.g.
        // https://bucket.oss-cn-hangzhou.aliyuncs.com/private-message/images/xxx.jpg
        // -> private-message/images/xxx.jpg
        String marker = ".aliyuncs.com/";
        int index = url.indexOf(marker);
        if (index > 0) {
            return url.substring(index + marker.length());
        }
        return url;
    }
}
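Extracting the object key with offset arithmetic against a fixed host suffix is brittle if the endpoint ever changes (custom domain, different region style). Parsing the URL and taking its path avoids the suffix assumption entirely; a sketch (class and method names illustrative):

```java
import java.net.URI;

// Host-agnostic objectKey extraction: the OSS objectKey is simply the URL
// path without its leading slash, whatever the endpoint host looks like.
public class OssUrlParser {
    public static String objectKeyOf(String url) {
        String path = URI.create(url).getPath(); // "/private-message/images/x.jpg"
        return path.startsWith("/") ? path.substring(1) : path;
    }

    public static void main(String[] args) {
        System.out.println(objectKeyOf(
                "https://bucket.oss-cn-hangzhou.aliyuncs.com/private-message/images/x.jpg"));
        // private-message/images/x.jpg
    }
}
```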
5.3.4 Distributed deployment: high-availability architecture
MySQL table partitioning strategy
sql
-- Message table partitioned by month
CREATE TABLE `private_message` (
    `id` BIGINT NOT NULL COMMENT 'Message ID (snowflake)',
    `sender_id` BIGINT NOT NULL COMMENT 'Sender ID',
    `receiver_id` BIGINT NOT NULL COMMENT 'Receiver ID',
    `content` TEXT NOT NULL COMMENT 'Message content',
    `message_type` TINYINT DEFAULT 1 COMMENT 'Message type: 1-text, 2-image, 3-audio, 4-video',
    `send_time` DATETIME NOT NULL COMMENT 'Send time',
    `read_status` TINYINT DEFAULT 0 COMMENT 'Read status: 0-unread, 1-read',
    `read_time` DATETIME DEFAULT NULL COMMENT 'Read time',
    `deleted_by_sender` TINYINT DEFAULT 0 COMMENT 'Deleted by sender: 0-no, 1-yes',
    `deleted_by_receiver` TINYINT DEFAULT 0 COMMENT 'Deleted by receiver: 0-no, 1-yes',
    `create_time` DATETIME DEFAULT CURRENT_TIMESTAMP,
    `update_time` DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    PRIMARY KEY (`id`, `send_time`),
    KEY `idx_sender_receiver` (`sender_id`, `receiver_id`, `send_time`),
    KEY `idx_receiver_read` (`receiver_id`, `read_status`, `send_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='Private message table'
PARTITION BY RANGE (TO_DAYS(`send_time`)) (
    PARTITION p202601 VALUES LESS THAN (TO_DAYS('2026-02-01')),
    PARTITION p202602 VALUES LESS THAN (TO_DAYS('2026-03-01')),
    PARTITION p202603 VALUES LESS THAN (TO_DAYS('2026-04-01')),
    PARTITION p202604 VALUES LESS THAN (TO_DAYS('2026-05-01')),
    PARTITION p202605 VALUES LESS THAN (TO_DAYS('2026-06-01')),
    PARTITION p202606 VALUES LESS THAN (TO_DAYS('2026-07-01')),
    PARTITION p202607 VALUES LESS THAN (TO_DAYS('2026-08-01')),
    PARTITION p202608 VALUES LESS THAN (TO_DAYS('2026-09-01')),
    PARTITION p202609 VALUES LESS THAN (TO_DAYS('2026-10-01')),
    PARTITION p202610 VALUES LESS THAN (TO_DAYS('2026-11-01')),
    PARTITION p202611 VALUES LESS THAN (TO_DAYS('2026-12-01')),
    PARTITION p202612 VALUES LESS THAN (TO_DAYS('2027-01-01')),
    PARTITION p_future VALUES LESS THAN MAXVALUE
);

-- Stored procedure that creates next month's partition if it does not exist
DELIMITER $$
CREATE PROCEDURE create_partition_if_not_exists()
BEGIN
    DECLARE next_month_start DATE;
    -- Prefixed v_ so the variable does not shadow the
    -- information_schema.partitions.partition_name column below
    -- (a variable named partition_name would make the WHERE clause
    -- compare the variable to itself, which is always true)
    DECLARE v_partition_name VARCHAR(20);
    DECLARE partition_exists INT;
    
    -- First day of next month
    SET next_month_start = DATE_ADD(LAST_DAY(CURDATE()), INTERVAL 1 DAY);
    SET v_partition_name = CONCAT('p', DATE_FORMAT(next_month_start, '%Y%m'));
    
    -- Check whether the partition already exists
    SELECT COUNT(*) INTO partition_exists
    FROM information_schema.partitions
    WHERE table_schema = DATABASE()
    AND table_name = 'private_message'
    AND partition_name = v_partition_name;
    
    -- Create it if missing
    IF partition_exists = 0 THEN
        SET @sql = CONCAT(
            'ALTER TABLE private_message REORGANIZE PARTITION p_future INTO (',
            'PARTITION ', v_partition_name, ' VALUES LESS THAN (TO_DAYS(''', 
            DATE_ADD(next_month_start, INTERVAL 1 MONTH), ''')),',
            'PARTITION p_future VALUES LESS THAN MAXVALUE)'
        );
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
    END IF;
END$$
DELIMITER ;

-- Scheduled event: create next month's partition on the 1st of each month
CREATE EVENT IF NOT EXISTS create_monthly_partition
ON SCHEDULE EVERY 1 MONTH
STARTS CONCAT(DATE_FORMAT(DATE_ADD(CURDATE(), INTERVAL 1 MONTH), '%Y-%m'), '-01 00:00:00')
DO CALL create_partition_if_not_exists();
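The partition-naming rule the procedure implements ("p" plus the yyyyMM of the first day of next month) is easy to sanity-check outside MySQL; a small sketch (names illustrative):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Mirrors the stored procedure's naming rule: next month's partition is
// "p" + yyyyMM of the first day of the following month.
public class PartitionNamer {
    public static String nextPartitionName(LocalDate today) {
        LocalDate firstOfNextMonth = today.withDayOfMonth(1).plusMonths(1);
        return "p" + firstOfNextMonth.format(DateTimeFormatter.ofPattern("yyyyMM"));
    }

    public static void main(String[] args) {
        System.out.println(nextPartitionName(LocalDate.of(2026, 1, 15)));  // p202602
        System.out.println(nextPartitionName(LocalDate.of(2026, 12, 31))); // p202701
    }
}
```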
RabbitMQ cluster configuration
yaml
# application-cluster.yml
spring:
  rabbitmq:
    addresses: rabbitmq-node1:5672,rabbitmq-node2:5672,rabbitmq-node3:5672
    username: admin
    password: ${RABBITMQ_PASSWORD}
    virtual-host: /
    connection-timeout: 15000
    # Publisher confirms
    publisher-confirm-type: correlated
    publisher-returns: true
    # Consumer settings
    listener:
      simple:
        acknowledge-mode: manual
        prefetch: 10
        retry:
          enabled: true
          max-attempts: 3
          initial-interval: 1000
          multiplier: 2
          max-interval: 10000
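The listener retry settings above imply an exponential backoff between attempts. A small helper makes the resulting delay sequence explicit (illustrative; it mirrors the arithmetic of Spring Retry's exponential backoff policy):

```java
import java.util.Arrays;

// Computes the delays implied by initial-interval / multiplier / max-interval
// for a given max-attempts: each delay doubles (multiplier 2) and is capped.
public class RetryBackoff {
    public static long[] delays(long initial, double multiplier, long max, int maxAttempts) {
        long[] out = new long[maxAttempts - 1]; // one delay between each pair of attempts
        long d = initial;
        for (int i = 0; i < out.length; i++) {
            out[i] = Math.min(d, max);
            d = (long) (d * multiplier);
        }
        return out;
    }

    public static void main(String[] args) {
        // initial-interval 1000 ms, multiplier 2, max-interval 10000 ms, 3 attempts
        System.out.println(Arrays.toString(delays(1000, 2, 10000, 3))); // [1000, 2000]
    }
}
```

With `max-attempts: 3` the consumer therefore waits 1 s, then 2 s, before the message is handed to the recovery path (dead-letter routing when configured).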
Redis Sentinel configuration
yaml
# application-cluster.yml
spring:
  redis:
    sentinel:
      master: mymaster
      nodes:
        - redis-sentinel-1:26379
        - redis-sentinel-2:26379
        - redis-sentinel-3:26379
    password: ${REDIS_PASSWORD}
    database: 0
    timeout: 3000
    lettuce:
      pool:
        max-active: 100
        max-idle: 50
        min-idle: 10
        max-wait: 1000
      shutdown-timeout: 5000
MySQL read/write splitting configuration (ShardingSphere)
yaml
# application-cluster.yml
spring:
  shardingsphere:
    datasource:
      names: master,slave1,slave2
      master:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://mysql-master:3306/ai_story_app?useSSL=false
        username: root
        password: ${MYSQL_PASSWORD}
      slave1:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://mysql-slave1:3306/ai_story_app?useSSL=false
        username: root
        password: ${MYSQL_PASSWORD}
      slave2:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://mysql-slave2:3306/ai_story_app?useSSL=false
        username: root
        password: ${MYSQL_PASSWORD}
    rules:
      readwrite-splitting:
        data-sources:
          private_message_ds:
            type: Static
            props:
              write-data-source-name: master
              read-data-source-names: slave1,slave2
              load-balancer-name: round_robin
        load-balancers:
          round_robin:
            type: ROUND_ROBIN
Load balancer configuration (Nginx)
nginx
# nginx.conf
upstream api_servers {
    # Load balancing: weighted round-robin
    server app-server-1:8080 weight=3;
    server app-server-2:8080 weight=2;
    server app-server-3:8080 weight=1;
    
    # Connection pooling to the upstreams; note that open-source Nginx only
    # performs passive health checks (active checks need a third-party module)
    keepalive 32;
}

upstream websocket_servers {
    # WebSocket long connections need ip_hash for session stickiness
    ip_hash;
    server app-server-1:8080;
    server app-server-2:8080;
    server app-server-3:8080;
}

server {
    listen 80;
    server_name api.example.com;
    
    # HTTP API requests
    location /api/ {
        proxy_pass http://api_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        # Timeouts
        proxy_connect_timeout 5s;
        proxy_send_timeout 10s;
        proxy_read_timeout 10s;
    }
    
    # WebSocket requests
    location /ws-message {
        proxy_pass http://websocket_servers;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        # Long idle timeouts for established WebSocket connections; connection
        # establishment itself should still fail fast
        proxy_connect_timeout 5s;
        proxy_send_timeout 1h;
        proxy_read_timeout 1h;
    }
}
