Day 32 of Learning Java - Performance Optimization and Architecture Design

Learning Objectives

Master performance optimization methods, gain a deeper understanding of distributed system design, become proficient with message queues and caching, learn monitoring and operations techniques, and study domain-driven design and event-driven architecture.


1. Performance Optimization in Depth

1.1 Code-Level Optimization

Code optimization in practice:

java
// 字符串优化
@Service
@Slf4j
public class StringOptimizationService {
    
    // 使用StringBuilder替代字符串拼接
    public String buildStringWithBuilder(List<String> items) {
        StringBuilder sb = new StringBuilder();
        for (String item : items) {
            sb.append(item).append(",");
        }
        return sb.toString();
    }
    
    // 预分配StringBuilder容量
    public String buildStringWithCapacity(List<String> items) {
        int estimatedSize = items.size() * 10; // 估算每个字符串平均长度
        StringBuilder sb = new StringBuilder(estimatedSize);
        for (String item : items) {
            sb.append(item).append(",");
        }
        return sb.toString();
    }
    
    // 使用StringUtils.join(Apache Commons)
    public String joinStrings(List<String> items) {
        return StringUtils.join(items, ",");
    }
    
    // Anti-example: avoid string concatenation inside a loop
    public String badStringConcatenation(List<String> items) {
        String result = "";
        for (String item : items) {
            result += item + ","; // poor performance: each iteration allocates a new String
        }
        return result;
    }
}

// 集合优化
@Service
@Slf4j
public class CollectionOptimizationService {
    
    // 预分配ArrayList容量
    public List<String> createListWithCapacity(int expectedSize) {
        return new ArrayList<>(expectedSize);
    }
    
    // 使用合适的集合类型
    public void useAppropriateCollection() {
        // 需要去重,使用Set
        Set<String> uniqueItems = new HashSet<>();
        
        // 需要保持顺序,使用LinkedHashSet
        Set<String> orderedUniqueItems = new LinkedHashSet<>();
        
        // 需要排序,使用TreeSet
        Set<String> sortedItems = new TreeSet<>();
        
        // Frequent random access: use ArrayList
        List<String> randomAccessList = new ArrayList<>();
        
        // Frequent insertion/removal at the ends or through an iterator: LinkedList
        // (for most workloads ArrayList or ArrayDeque is still faster in practice)
        List<String> frequentModifyList = new LinkedList<>();
    }
    
    // 使用Stream API优化集合操作
    public List<String> optimizeWithStream(List<String> items) {
        return items.stream()
            .filter(item -> item != null && !item.isEmpty())
            .map(String::toUpperCase)
            .distinct()
            .sorted()
            .collect(Collectors.toList());
    }
    
    // 并行流处理大数据集
    public List<String> parallelStreamProcess(List<String> items) {
        return items.parallelStream()
            .filter(item -> item.length() > 5)
            .map(String::toUpperCase)
            .collect(Collectors.toList());
    }
}

// 对象创建优化
@Service
@Slf4j
public class ObjectCreationOptimizationService {
    
    // 对象池模式
    private final ThreadLocal<SimpleDateFormat> dateFormatPool = 
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));
    
    public String formatDate(Date date) {
        return dateFormatPool.get().format(date);
    }
    
    // 使用不可变对象
    @Immutable
    public static class ImmutableValue {
        private final String value;
        private final int number;
        
        public ImmutableValue(String value, int number) {
            this.value = value;
            this.number = number;
        }
        
        public String getValue() {
            return value;
        }
        
        public int getNumber() {
            return number;
        }
    }
    
    // 避免不必要的对象创建
    public boolean checkString(String str) {
        // 错误:每次都创建新对象
        // return str.equals(new String("test"));
        
        // 正确:使用字面量
        return "test".equals(str);
    }
}
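
Claims like "StringBuilder beats += in a loop" are best confirmed with a microbenchmark rather than taken on faith. Below is a minimal JMH sketch, assuming the JMH core and annotation-processor dependencies are on the classpath; the class name and item count are illustrative.

java
// Minimal JMH sketch comparing loop concatenation with StringBuilder
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class StringConcatBenchmark {
    
    private List<String> items;
    
    @Setup
    public void setUp() {
        items = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) {
            items.add("item-" + i);
        }
    }
    
    @Benchmark
    public String concatWithPlus() {
        String result = "";
        for (String item : items) {
            result += item + ","; // allocates a new String on every iteration
        }
        return result;
    }
    
    @Benchmark
    public String concatWithBuilder() {
        StringBuilder sb = new StringBuilder(items.size() * 10);
        for (String item : items) {
            sb.append(item).append(',');
        }
        return sb.toString();
    }
}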

1.2 JVM Tuning in Practice

JVM parameter tuning:

java
// JVM监控与调优服务
@Service
@Slf4j
public class JVMTuningService {
    
    // 获取JVM参数
    public JVMParameters getJVMParameters() {
        RuntimeMXBean runtimeBean = ManagementFactory.getRuntimeMXBean();
        List<String> jvmArgs = runtimeBean.getInputArguments();
        
        JVMParameters params = new JVMParameters();
        params.setJvmArgs(jvmArgs);
        params.setVmName(runtimeBean.getVmName());
        params.setVmVersion(runtimeBean.getVmVersion());
        params.setStartTime(runtimeBean.getStartTime());
        params.setUptime(runtimeBean.getUptime());
        
        return params;
    }
    
    // 堆内存调优建议
    public HeapTuningAdvice getHeapTuningAdvice() {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heapUsage = memoryBean.getHeapMemoryUsage();
        
        HeapTuningAdvice advice = new HeapTuningAdvice();
        
        long used = heapUsage.getUsed();
        long max = heapUsage.getMax();
        double usageRate = (double) used / max * 100;
        
        if (usageRate > 80) {
            advice.setSuggestion("堆内存使用率过高,建议增加堆内存");
            advice.setRecommendedXmx("建议设置 -Xmx4g -Xms4g");
        } else if (usageRate < 30) {
            advice.setSuggestion("堆内存使用率较低,可以适当减少堆内存");
            advice.setRecommendedXmx("建议设置 -Xmx2g -Xms2g");
        } else {
            advice.setSuggestion("堆内存使用率正常");
            advice.setRecommendedXmx("当前配置合理");
        }
        
        return advice;
    }
    
    // GC调优建议
    public GCTuningAdvice getGCTuningAdvice() {
        List<GarbageCollectorMXBean> gcBeans = ManagementFactory.getGarbageCollectorMXBeans();
        GCTuningAdvice advice = new GCTuningAdvice();
        
        long totalGCTime = 0;
        long totalGCCount = 0;
        
        for (GarbageCollectorMXBean gcBean : gcBeans) {
            totalGCTime += gcBean.getCollectionTime();
            totalGCCount += gcBean.getCollectionCount();
        }
        
        if (totalGCTime > 10000) { // cumulative GC time since JVM start exceeds 10 seconds
            advice.setSuggestion("GC时间过长,建议优化");
            advice.setRecommendedGC("-XX:+UseG1GC -XX:MaxGCPauseMillis=200");
        } else if (totalGCCount > 1000) {
            advice.setSuggestion("GC频率过高,建议增加堆内存或优化代码");
            advice.setRecommendedGC("-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=45");
        } else {
            advice.setSuggestion("GC表现正常");
            advice.setRecommendedGC("当前配置合理");
        }
        
        return advice;
    }
    
    // Thread monitoring (the DTO is named ThreadStats to avoid clashing with java.lang.management.ThreadInfo)
    public ThreadStats getThreadStats() {
        ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();
        ThreadStats info = new ThreadStats();
        
        info.setThreadCount(threadBean.getThreadCount());
        info.setPeakThreadCount(threadBean.getPeakThreadCount());
        info.setTotalStartedThreadCount(threadBean.getTotalStartedThreadCount());
        info.setDaemonThreadCount(threadBean.getDaemonThreadCount());
        
        return info;
    }
}

// JVM参数
@Data
public class JVMParameters {
    private List<String> jvmArgs;
    private String vmName;
    private String vmVersion;
    private long startTime;
    private long uptime;
}

// 堆内存调优建议
@Data
public class HeapTuningAdvice {
    private String suggestion;
    private String recommendedXmx;
}

// GC调优建议
@Data
public class GCTuningAdvice {
    private String suggestion;
    private String recommendedGC;
}

// Thread statistics
@Data
public class ThreadStats {
    private int threadCount;
    private int peakThreadCount;
    private long totalStartedThreadCount;
    private int daemonThreadCount;
}
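
One simple way to put JVMTuningService to work is to log its advice once at application startup. A minimal sketch, assuming a Spring Boot context in which the service is a bean; the class name is illustrative.

java
// Hypothetical startup check that logs the tuning advice collected by JVMTuningService
@Component
@Slf4j
public class JvmStartupReport implements CommandLineRunner {
    
    @Autowired
    private JVMTuningService jvmTuningService;
    
    @Override
    public void run(String... args) {
        HeapTuningAdvice heap = jvmTuningService.getHeapTuningAdvice();
        GCTuningAdvice gc = jvmTuningService.getGCTuningAdvice();
        ThreadStats threads = jvmTuningService.getThreadStats();
        
        log.info("Heap advice: {} ({})", heap.getSuggestion(), heap.getRecommendedXmx());
        log.info("GC advice: {} ({})", gc.getSuggestion(), gc.getRecommendedGC());
        log.info("Threads: live={}, peak={}, daemon={}",
                 threads.getThreadCount(), threads.getPeakThreadCount(), threads.getDaemonThreadCount());
    }
}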

1.3 Database Optimization

Database query optimization:

java
// 数据库优化服务
@Service
@Slf4j
public class DatabaseOptimizationService {
    
    @Autowired
    private OrderMapper orderMapper;
    
    // 使用索引优化查询
    public List<Order> findOrdersByUserId(Long userId) {
        // 确保user_id字段有索引
        return orderMapper.selectByUserId(userId);
    }
    
    // Paginated query; note that LIMIT/OFFSET still reads and discards the skipped rows,
    // so deep pages get slower - prefer keyset (seek) pagination for large offsets
    public PageResult<Order> findOrdersWithPagination(PageRequest request) {
        int offset = (request.getPageNum() - 1) * request.getPageSize();
        return orderMapper.selectWithPagination(offset, request.getPageSize());
    }
    
    // 批量操作优化
    @Transactional
    public void batchInsertOrders(List<Order> orders) {
        // 使用批量插入,减少数据库交互次数
        orderMapper.batchInsert(orders);
    }
    
    // 避免N+1查询问题
    public List<OrderDTO> findOrdersWithDetails(Long userId) {
        // 使用JOIN查询,一次性获取所有数据
        return orderMapper.selectOrdersWithDetails(userId);
    }
    
    // 使用连接池优化
    @Configuration
    public static class DataSourceConfig {
        
        @Bean
        public DataSource dataSource() {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:mysql://localhost:3306/test");
            config.setUsername("root");
            config.setPassword("password");
            
            // 连接池优化参数
            config.setMaximumPoolSize(20); // 最大连接数
            config.setMinimumIdle(5); // 最小空闲连接
            config.setConnectionTimeout(30000); // 连接超时
            config.setIdleTimeout(600000); // 空闲超时
            config.setMaxLifetime(1800000); // 连接最大生命周期
            
            return new HikariDataSource(config);
        }
    }
    
    // 查询缓存
    @Cacheable(value = "orders", key = "#userId")
    public List<Order> findOrdersCached(Long userId) {
        return orderMapper.selectByUserId(userId);
    }
}

// 分页请求
@Data
public class PageRequest {
    private Integer pageNum = 1;
    private Integer pageSize = 10;
}

// 分页结果
@Data
public class PageResult<T> {
    private List<T> data;
    private Long total;
    private Integer pageNum;
    private Integer pageSize;
}
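
As noted in findOrdersWithPagination, OFFSET pagination slows down on deep pages because the skipped rows are still read. A keyset (seek) query avoids that by continuing from the last seen id. The MyBatis sketch below is illustrative only; the table and column names are assumptions.

java
// Illustrative keyset (seek) pagination query that could be added to OrderMapper
// (table and column names are assumptions)
@Mapper
public interface OrderMapper {
    
    // Instead of an OFFSET, the caller passes the last id it has already seen;
    // the query walks the primary-key index for exactly pageSize rows regardless of page depth
    @Select("SELECT * FROM t_order WHERE user_id = #{userId} AND id > #{lastId} " +
            "ORDER BY id LIMIT #{pageSize}")
    List<Order> selectPageAfterId(@Param("userId") Long userId,
                                  @Param("lastId") Long lastId,
                                  @Param("pageSize") int pageSize);
}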

2. Advanced Distributed Systems Topics

2.1 Distributed Transactions in Depth

Distributed transactions with Seata:

java
// Seata分布式事务服务
@Service
@Slf4j
public class DistributedTransactionService {
    
    @Autowired
    private OrderService orderService;
    
    @Autowired
    private InventoryService inventoryService;
    
    @Autowired
    private AccountService accountService;
    
    // 使用@GlobalTransactional注解
    @GlobalTransactional(rollbackFor = Exception.class)
    public void createOrderWithDistributedTransaction(CreateOrderRequest request) {
        // 1. 创建订单
        Order order = orderService.createOrder(request);
        
        // 2. 扣减库存
        inventoryService.deductInventory(request.getProductId(), request.getQuantity());
        
        // 3. 扣减账户余额
        accountService.deductBalance(request.getUserId(), order.getAmount());
        
        // 如果任何一步失败,所有操作都会回滚
    }
    
    // TCC-pattern implementation
    @Service
    @Slf4j
    public static class OrderTCCService {
        
        @Autowired
        private OrderMapper orderMapper;
        
        // Try phase: reserve the resource
        public void tryCreateOrder(CreateOrderRequest request) {
            // Pre-create the order in PENDING (to-be-confirmed) state
            Order order = new Order();
            order.setStatus(OrderStatus.PENDING);
            orderMapper.insert(order);
        }
        
        // Confirm phase: commit the reservation
        public void confirmCreateOrder(Long orderId) {
            // Confirm the order (switch its status to CONFIRMED)
            Order order = orderMapper.selectById(orderId);
            order.setStatus(OrderStatus.CONFIRMED);
            orderMapper.updateById(order);
        }
        
        // Cancel phase: release the reservation
        public void cancelCreateOrder(Long orderId) {
            // Cancel the order (delete it or mark it as cancelled)
            orderMapper.deleteById(orderId);
        }
    }
    
    // Saga-pattern implementation
    @Service
    @Slf4j
    public static class OrderSagaService {
        
        @Autowired
        private OrderService orderService;
        
        // Forward action
        public void createOrderStep(CreateOrderRequest request) {
            orderService.createOrder(request);
        }
        
        // Compensating action
        public void cancelOrderStep(Long orderId) {
            orderService.cancelOrder(orderId);
        }
    }
}

// Order status enum
public enum OrderStatus {
    PENDING,    // awaiting confirmation
    CONFIRMED,  // confirmed
    COMPLETED,  // completed (referenced by Order.cancel() in the DDD example below)
    CANCELLED   // cancelled
}
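
Under Seata, the Try/Confirm/Cancel methods are usually declared on an annotated interface so the framework can invoke the second phase automatically. Below is a minimal sketch assuming Seata's TCC annotations (@LocalTCC, @TwoPhaseBusinessAction) are available; the action and parameter names are illustrative.

java
// Sketch of a Seata TCC participant interface (assumes io.seata.rm.tcc.api on the classpath)
@LocalTCC
public interface OrderTccAction {
    
    // Try phase: Seata records the action and later calls commit/rollback with the same context
    @TwoPhaseBusinessAction(name = "orderTccAction", commitMethod = "commit", rollbackMethod = "rollback")
    boolean prepare(BusinessActionContext context,
                    @BusinessActionContextParameter(paramName = "orderId") Long orderId);
    
    // Confirm phase
    boolean commit(BusinessActionContext context);
    
    // Cancel phase
    boolean rollback(BusinessActionContext context);
}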

2.2 Implementing Distributed Locks

Redis-based distributed locks:

java
// Redis分布式锁服务
@Service
@Slf4j
public class DistributedLockService {
    
    @Autowired
    private StringRedisTemplate redisTemplate;
    
    private static final String LOCK_PREFIX = "lock:";
    private static final long DEFAULT_EXPIRE_TIME = 30; // 默认30秒
    
    // Acquire the lock; the caller supplies the value so it can later release only the lock it owns
    public boolean tryLock(String key, String value, long expireTime) {
        String lockKey = LOCK_PREFIX + key;
        
        Boolean result = redisTemplate.opsForValue()
            .setIfAbsent(lockKey, value, expireTime, TimeUnit.SECONDS);
        
        return Boolean.TRUE.equals(result);
    }
    
    // 释放锁(使用Lua脚本保证原子性)
    public void releaseLock(String key, String value) {
        String lockKey = LOCK_PREFIX + key;
        String luaScript = 
            "if redis.call('get', KEYS[1]) == ARGV[1] then " +
            "return redis.call('del', KEYS[1]) " +
            "else return 0 end";
        
        DefaultRedisScript<Long> script = new DefaultRedisScript<>();
        script.setScriptText(luaScript);
        script.setResultType(Long.class);
        
        redisTemplate.execute(script, Collections.singletonList(lockKey), value);
    }
    
    // 可重入锁实现
    public boolean tryReentrantLock(String key, String threadId, long expireTime) {
        String lockKey = LOCK_PREFIX + key;
        String lockValue = lockKey + ":" + threadId;
        
        // 检查是否已持有锁
        String existingValue = redisTemplate.opsForValue().get(lockKey);
        if (lockValue.equals(existingValue)) {
            // 重入,增加计数
            redisTemplate.opsForValue().increment(lockKey + ":count");
            return true;
        }
        
        // 尝试获取锁
        Boolean result = redisTemplate.opsForValue()
            .setIfAbsent(lockKey, lockValue, expireTime, TimeUnit.SECONDS);
        
        if (Boolean.TRUE.equals(result)) {
            redisTemplate.opsForValue().set(lockKey + ":count", "1", expireTime, TimeUnit.SECONDS);
            return true;
        }
        
        return false;
    }
    
    // Business method guarded by the distributed lock
    public void processWithLock(String businessKey) {
        String lockValue = UUID.randomUUID().toString();
        
        // Only release the lock if it was actually acquired, and release it with the
        // same value that was set, so another client's lock is never deleted
        if (!tryLock(businessKey, lockValue, DEFAULT_EXPIRE_TIME)) {
            throw new RuntimeException("Failed to acquire lock");
        }
        
        try {
            // Execute the business logic
            doBusinessLogic(businessKey);
        } finally {
            releaseLock(businessKey, lockValue);
        }
    }
    
    private void doBusinessLogic(String businessKey) {
        // 业务逻辑
        log.info("执行业务逻辑: {}", businessKey);
    }
    
    // Redisson分布式锁(推荐使用)
    @Autowired(required = false)
    private RedissonClient redissonClient;
    
    public void processWithRedissonLock(String businessKey) {
        RLock lock = redissonClient.getLock(LOCK_PREFIX + businessKey);
        
        try {
            // 尝试获取锁,最多等待10秒,锁定后30秒自动释放
            if (lock.tryLock(10, 30, TimeUnit.SECONDS)) {
                try {
                    // 执行业务逻辑
                    doBusinessLogic(businessKey);
                } finally {
                    lock.unlock();
                }
            } else {
                throw new RuntimeException("获取锁失败");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException("获取锁被中断", e);
        }
    }
}
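
A typical use of such a lock is a scheduled job that must not run on more than one instance at the same time. A minimal sketch using the Redisson client configured above; the job name and cron expression are illustrative.

java
// Hypothetical scheduled job guarded by a Redisson lock so only one instance runs it at a time
@Component
@Slf4j
public class SettlementJob {
    
    @Autowired
    private RedissonClient redissonClient;
    
    @Scheduled(cron = "0 0 * * * *") // illustrative: at the top of every hour
    public void settle() {
        RLock lock = redissonClient.getLock("lock:settlement-job");
        // Skip this run entirely if another instance already holds the lock
        if (!lock.tryLock()) {
            log.info("Settlement already running elsewhere, skipping");
            return;
        }
        try {
            // ... settlement logic ...
        } finally {
            lock.unlock();
        }
    }
}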

2.3 Distributed ID Generation

Distributed ID generators:

java
// 雪花算法ID生成器
@Component
@Slf4j
public class SnowflakeIdGenerator {
    
    // 起始时间戳(2020-01-01)
    private static final long START_TIMESTAMP = 1577836800000L;
    
    // 机器ID占用的位数
    private static final long WORKER_ID_BITS = 5L;
    
    // 数据中心ID占用的位数
    private static final long DATACENTER_ID_BITS = 5L;
    
    // 序列号占用的位数
    private static final long SEQUENCE_BITS = 12L;
    
    // 机器ID最大值
    private static final long MAX_WORKER_ID = ~(-1L << WORKER_ID_BITS);
    
    // 数据中心ID最大值
    private static final long MAX_DATACENTER_ID = ~(-1L << DATACENTER_ID_BITS);
    
    // 序列号最大值
    private static final long MAX_SEQUENCE = ~(-1L << SEQUENCE_BITS);
    
    // 机器ID左移位数
    private static final long WORKER_ID_SHIFT = SEQUENCE_BITS;
    
    // 数据中心ID左移位数
    private static final long DATACENTER_ID_SHIFT = SEQUENCE_BITS + WORKER_ID_BITS;
    
    // 时间戳左移位数
    private static final long TIMESTAMP_SHIFT = SEQUENCE_BITS + WORKER_ID_BITS + DATACENTER_ID_BITS;
    
    private final long workerId;
    private final long datacenterId;
    private long sequence = 0L;
    private long lastTimestamp = -1L;
    
    public SnowflakeIdGenerator(@Value("${snowflake.worker-id:1}") long workerId,
                                @Value("${snowflake.datacenter-id:1}") long datacenterId) {
        if (workerId > MAX_WORKER_ID || workerId < 0) {
            throw new IllegalArgumentException("workerId must be between 0 and " + MAX_WORKER_ID);
        }
        if (datacenterId > MAX_DATACENTER_ID || datacenterId < 0) {
            throw new IllegalArgumentException("datacenterId must be between 0 and " + MAX_DATACENTER_ID);
        }
        this.workerId = workerId;
        this.datacenterId = datacenterId;
    }
    
    public synchronized long nextId() {
        long timestamp = System.currentTimeMillis();
        
        // 如果当前时间小于上一次ID生成的时间戳,说明系统时钟回退
        if (timestamp < lastTimestamp) {
            throw new RuntimeException("系统时钟回退,拒绝生成ID");
        }
        
        // 如果是同一毫秒内生成的,则进行毫秒内序列
        if (timestamp == lastTimestamp) {
            sequence = (sequence + 1) & MAX_SEQUENCE;
            // 毫秒内序列溢出
            if (sequence == 0) {
                // 阻塞到下一个毫秒,获得新的时间戳
                timestamp = tilNextMillis(lastTimestamp);
            }
        } else {
            // 时间戳改变,毫秒内序列重置
            sequence = 0L;
        }
        
        // 上次生成ID的时间戳
        lastTimestamp = timestamp;
        
        // 移位并通过或运算拼到一起组成64位的ID
        return ((timestamp - START_TIMESTAMP) << TIMESTAMP_SHIFT)
            | (datacenterId << DATACENTER_ID_SHIFT)
            | (workerId << WORKER_ID_SHIFT)
            | sequence;
    }
    
    private long tilNextMillis(long lastTimestamp) {
        long timestamp = System.currentTimeMillis();
        while (timestamp <= lastTimestamp) {
            timestamp = System.currentTimeMillis();
        }
        return timestamp;
    }
}

// Redis分布式ID生成器
@Service
@Slf4j
public class RedisIdGenerator {
    
    @Autowired
    private StringRedisTemplate redisTemplate;
    
    private static final String ID_KEY_PREFIX = "id:generator:";
    
    // Redis-based distributed ID generation
    public Long generateId(String businessKey) {
        String key = ID_KEY_PREFIX + businessKey;
        Long id = redisTemplate.opsForValue().increment(key);
        
        // Expire the counter so keys do not accumulate forever; note that once the key expires
        // the sequence restarts, so this suits per-day sequence numbers rather than globally unique IDs
        redisTemplate.expire(key, 1, TimeUnit.DAYS);
        
        return id;
    }
    
    // 带步长的ID生成
    public Long generateIdWithStep(String businessKey, int step) {
        String key = ID_KEY_PREFIX + businessKey;
        Long id = redisTemplate.opsForValue().increment(key, step);
        
        redisTemplate.expire(key, 1, TimeUnit.DAYS);
        
        return id;
    }
}

// ID生成服务
@Service
@Slf4j
public class IdGenerationService {
    
    @Autowired
    private SnowflakeIdGenerator snowflakeIdGenerator;
    
    @Autowired
    private RedisIdGenerator redisIdGenerator;
    
    // 生成订单ID
    public Long generateOrderId() {
        return snowflakeIdGenerator.nextId();
    }
    
    // 生成用户ID
    public Long generateUserId() {
        return snowflakeIdGenerator.nextId();
    }
    
    // 生成序列号
    public Long generateSequenceNumber(String businessType) {
        return redisIdGenerator.generateId(businessType);
    }
}
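
generateIdWithStep can also be used to grab a whole range of IDs in one Redis round trip and hand them out from memory, which cuts the per-ID network cost. A minimal single-node sketch under those assumptions (no persistence of unused ranges; names are illustrative):

java
// Sketch of a segment-style allocator: fetch a range of IDs from Redis, serve them locally
@Service
public class SegmentIdAllocator {
    
    private static final int STEP = 1000;
    
    @Autowired
    private RedisIdGenerator redisIdGenerator;
    
    private long current = 0;
    private long max = 0;
    
    public synchronized long nextId(String businessKey) {
        if (current >= max) {
            // increment(key, STEP) returns the new upper bound of the allocated range
            max = redisIdGenerator.generateIdWithStep(businessKey, STEP);
            current = max - STEP;
        }
        return ++current;
    }
}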

3. Message Queues in Depth

3.1 Advanced RabbitMQ

Advanced RabbitMQ features:

java
// RabbitMQ配置
@Configuration
@Slf4j
public class RabbitMQConfig {
    
    // 订单交换机
    public static final String ORDER_EXCHANGE = "order.exchange";
    
    // 订单队列
    public static final String ORDER_QUEUE = "order.queue";
    
    // 订单路由键
    public static final String ORDER_ROUTING_KEY = "order.create";
    
    // 死信交换机
    public static final String DLX_EXCHANGE = "dlx.exchange";
    
    // 死信队列
    public static final String DLX_QUEUE = "dlx.queue";
    
    // 声明交换机
    @Bean
    public TopicExchange orderExchange() {
        return new TopicExchange(ORDER_EXCHANGE, true, false);
    }
    
    // 声明队列(带死信队列)
    @Bean
    public Queue orderQueue() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", DLX_EXCHANGE);
        args.put("x-dead-letter-routing-key", "dlx.order");
        args.put("x-message-ttl", 60000); // 消息TTL 60秒
        return QueueBuilder.durable(ORDER_QUEUE).withArguments(args).build();
    }
    
    // 绑定队列和交换机
    @Bean
    public Binding orderBinding() {
        return BindingBuilder
            .bind(orderQueue())
            .to(orderExchange())
            .with(ORDER_ROUTING_KEY);
    }
    
    // 死信交换机
    @Bean
    public DirectExchange dlxExchange() {
        return new DirectExchange(DLX_EXCHANGE, true, false);
    }
    
    // 死信队列
    @Bean
    public Queue dlxQueue() {
        return QueueBuilder.durable(DLX_QUEUE).build();
    }
    
    // 绑定死信队列
    @Bean
    public Binding dlxBinding() {
        return BindingBuilder
            .bind(dlxQueue())
            .to(dlxExchange())
            .with("dlx.order");
    }
    
    // 配置消息转换器
    @Bean
    public MessageConverter messageConverter() {
        return new Jackson2JsonMessageConverter();
    }
    
    // Configure RabbitTemplate
    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        template.setMessageConverter(messageConverter());
        // Returned-message callbacks only fire for unroutable messages when mandatory is true
        template.setMandatory(true);
        
        // Publisher confirm callback (publisher confirms must be enabled on the connection factory)
        template.setConfirmCallback((correlationData, ack, cause) -> {
            if (ack) {
                log.info("Message confirmed by broker: {}", correlationData);
            } else {
                log.error("Message not confirmed: {}, cause: {}", correlationData, cause);
            }
        });
        
        // Return callback for messages the broker could not route
        template.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
            log.error("Message returned: exchange={}, routingKey={}, replyCode={}, replyText={}",
                     exchange, routingKey, replyCode, replyText);
        });
        
        return template;
    }
}

// RabbitMQ生产者
@Service
@Slf4j
public class RabbitMQProducer {
    
    @Autowired
    private RabbitTemplate rabbitTemplate;
    
    // 发送消息
    public void sendOrderMessage(OrderMessage message) {
        rabbitTemplate.convertAndSend(
            RabbitMQConfig.ORDER_EXCHANGE,
            RabbitMQConfig.ORDER_ROUTING_KEY,
            message,
            new CorrelationData(UUID.randomUUID().toString())
        );
        log.info("发送订单消息: {}", message);
    }
    
    // Delayed message (per-message TTL + dead-letter queue; note the TTL is only honored
    // once the message reaches the head of the queue)
    public void sendDelayedMessage(OrderMessage message, long delayMillis) {
        rabbitTemplate.convertAndSend(
            RabbitMQConfig.ORDER_EXCHANGE,
            RabbitMQConfig.ORDER_ROUTING_KEY,
            message,
            msg -> {
                msg.getMessageProperties().setExpiration(String.valueOf(delayMillis));
                return msg;
            }
        );
    }
}

// RabbitMQ消费者
@Component
@Slf4j
public class RabbitMQConsumer {
    
    // 监听订单队列
    @RabbitListener(queues = RabbitMQConfig.ORDER_QUEUE)
    public void handleOrderMessage(OrderMessage message, Channel channel, 
                                  @Header(AmqpHeaders.DELIVERY_TAG) long deliveryTag) {
        try {
            log.info("收到订单消息: {}", message);
            
            // 处理业务逻辑
            processOrder(message);
            
            // 手动确认
            channel.basicAck(deliveryTag, false);
        } catch (Exception e) {
            log.error("处理订单消息失败", e);
            try {
                // 拒绝消息,重新入队
                channel.basicNack(deliveryTag, false, true);
            } catch (IOException ioException) {
                log.error("拒绝消息失败", ioException);
            }
        }
    }
    
    // 监听死信队列
    @RabbitListener(queues = RabbitMQConfig.DLX_QUEUE)
    public void handleDeadLetterMessage(OrderMessage message) {
        log.warn("收到死信消息: {}", message);
        // 处理死信消息(如记录日志、告警等)
    }
    
    private void processOrder(OrderMessage message) {
        // 业务处理逻辑
    }
}

// 订单消息
@Data
public class OrderMessage {
    private Long orderId;
    private Long userId;
    private BigDecimal amount;
    private Date createTime;
}
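
Because a nack with requeue (or a redelivery after a consumer crash) means the same order message can arrive more than once, the consumer side should be idempotent. A minimal sketch using a Redis SETNX marker; the key prefix and TTL are illustrative.

java
// Sketch of an idempotency guard: the first consumer to mark the orderId wins, duplicates are skipped
@Component
@Slf4j
public class OrderMessageDeduplicator {
    
    @Autowired
    private StringRedisTemplate redisTemplate;
    
    // Returns true only if this orderId has not been processed yet
    public boolean markIfFirst(Long orderId) {
        Boolean first = redisTemplate.opsForValue()
            .setIfAbsent("mq:processed:order:" + orderId, "1", 24, TimeUnit.HOURS);
        return Boolean.TRUE.equals(first);
    }
}

Inside handleOrderMessage, the business logic would then run only when markIfFirst(message.getOrderId()) returns true, and the message would be acked either way, so duplicates are dropped instead of being redelivered forever.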

3.2 Advanced Kafka

Kafka producers and consumers:

java
// Kafka配置
@Configuration
@Slf4j
public class KafkaConfig {
    
    // 生产者配置
    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        configProps.put(ProducerConfig.ACKS_CONFIG, "all"); // 等待所有副本确认
        configProps.put(ProducerConfig.RETRIES_CONFIG, 3); // 重试次数
        configProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // 幂等性
        return new DefaultKafkaProducerFactory<>(configProps);
    }
    
    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
    
    // 消费者配置
    @Bean
    public ConsumerFactory<String, Object> consumerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        configProps.put(ConsumerConfig.GROUP_ID_CONFIG, "order-group");
        configProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        configProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        configProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        configProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false); // 手动提交
        return new DefaultKafkaConsumerFactory<>(configProps);
    }
    
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Object> factory = 
            new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(3); // number of concurrent consumers
        // Manual acknowledgment, matching ENABLE_AUTO_COMMIT=false and the Acknowledgment parameter in the listeners
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        return factory;
    }
}

// Kafka生产者
@Service
@Slf4j
public class KafkaProducer {
    
    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;
    
    // 发送消息
    public void sendMessage(String topic, Object message) {
        kafkaTemplate.send(topic, message);
        log.info("发送消息到主题 {}: {}", topic, message);
    }
    
    // 发送带key的消息
    public void sendMessageWithKey(String topic, String key, Object message) {
        kafkaTemplate.send(topic, key, message);
    }
    
    // 发送到指定分区
    public void sendToPartition(String topic, int partition, Object message) {
        kafkaTemplate.send(topic, partition, null, message);
    }
    
    // 异步发送带回调
    public void sendMessageAsync(String topic, Object message) {
        ListenableFuture<SendResult<String, Object>> future = 
            kafkaTemplate.send(topic, message);
        
        future.addCallback(
            result -> log.info("消息发送成功: topic={}, offset={}", 
                             topic, result.getRecordMetadata().offset()),
            failure -> log.error("消息发送失败: topic={}", topic, failure)
        );
    }
}

// Kafka消费者
@Component
@Slf4j
public class KafkaConsumer {
    
    // 监听订单主题
    @KafkaListener(topics = "order-topic", groupId = "order-group")
    public void consumeOrderMessage(OrderMessage message,
                                    @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                                    @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                                    @Header(KafkaHeaders.OFFSET) long offset,
                                    Acknowledgment acknowledgment) {
        try {
            log.info("收到消息: topic={}, partition={}, offset={}, message={}", 
                    topic, partition, offset, message);
            
            // 处理业务逻辑
            processOrder(message);
            
            // 手动提交偏移量
            acknowledgment.acknowledge();
        } catch (Exception e) {
            log.error("处理消息失败", e);
            // 不确认,消息会重新消费
        }
    }
    
    // Batch consumption (needs a batch-enabled container factory; see the sketch after this listing)
    @KafkaListener(topics = "order-topic", groupId = "order-group-batch", 
                   containerFactory = "batchKafkaListenerContainerFactory")
    public void consumeBatch(List<OrderMessage> messages, Acknowledgment acknowledgment) {
        try {
            log.info("批量收到 {} 条消息", messages.size());
            
            for (OrderMessage message : messages) {
                processOrder(message);
            }
            
            acknowledgment.acknowledge();
        } catch (Exception e) {
            log.error("批量处理消息失败", e);
        }
    }
    
    private void processOrder(OrderMessage message) {
        // 业务处理逻辑
    }
}
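
The batch listener above needs a container factory with batch mode switched on. A minimal sketch of such a factory, reusing the same consumer configuration as KafkaConfig; the bean name batchKafkaListenerContainerFactory matches the listener's containerFactory attribute.

java
// Sketch of a batch-enabled container factory (would live inside KafkaConfig)
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> batchKafkaListenerContainerFactory(
        ConsumerFactory<String, Object> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setBatchListener(true); // deliver records to the listener as a List
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}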

4. Caching Strategies in Depth

4.1 Advanced Redis

Redis caching strategies:

java
// Redis缓存服务
@Service
@Slf4j
public class RedisCacheService {
    
    @Autowired
    private StringRedisTemplate redisTemplate;
    
    // 缓存穿透防护:使用布隆过滤器
    public String getWithBloomFilter(String key) {
        // 先检查布隆过滤器
        if (!bloomFilter.mightContain(key)) {
            return null;
        }
        
        // 从缓存获取
        String value = redisTemplate.opsForValue().get(key);
        
        // 缓存穿透:如果缓存中没有,查询数据库
        if (value == null) {
            value = queryFromDatabase(key);
            if (value != null) {
                // 将数据加入缓存和布隆过滤器
                redisTemplate.opsForValue().set(key, value, 1, TimeUnit.HOURS);
                bloomFilter.put(key);
            } else {
                // 缓存空值,防止缓存穿透
                redisTemplate.opsForValue().set(key, "", 5, TimeUnit.MINUTES);
            }
        }
        
        return value;
    }
    
    // 缓存击穿防护:使用分布式锁
    public String getWithLock(String key) {
        String value = redisTemplate.opsForValue().get(key);
        
        if (value == null) {
            // 获取分布式锁
            String lockKey = "lock:" + key;
            String lockValue = UUID.randomUUID().toString();
            
            try {
                if (tryLock(lockKey, lockValue, 10)) {
                    // 双重检查
                    value = redisTemplate.opsForValue().get(key);
                    if (value == null) {
                        // 查询数据库
                        value = queryFromDatabase(key);
                        if (value != null) {
                            redisTemplate.opsForValue().set(key, value, 1, TimeUnit.HOURS);
                        }
                    }
                } else {
                    // 获取锁失败,等待一段时间后重试
                    Thread.sleep(100);
                    return getWithLock(key);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                releaseLock(lockKey, lockValue);
            }
        }
        
        return value;
    }
    
    // 缓存雪崩防护:随机过期时间
    public void setWithRandomExpire(String key, String value) {
        // 基础过期时间 + 随机时间(0-300秒)
        long expireTime = 3600 + new Random().nextInt(300);
        redisTemplate.opsForValue().set(key, value, expireTime, TimeUnit.SECONDS);
    }
    
    // 缓存预热
    public void warmUpCache() {
        List<String> hotKeys = getHotKeys();
        for (String key : hotKeys) {
            String value = queryFromDatabase(key);
            if (value != null) {
                redisTemplate.opsForValue().set(key, value, 1, TimeUnit.HOURS);
            }
        }
    }
    
    // 缓存更新策略:先更新数据库,再删除缓存
    public void updateCache(String key, String value) {
        // 1. 更新数据库
        updateDatabase(key, value);
        
        // 2. 删除缓存
        redisTemplate.delete(key);
    }
    
    // 缓存更新策略:先删除缓存,再更新数据库
    public void updateCache2(String key, String value) {
        // 1. 删除缓存
        redisTemplate.delete(key);
        
        // 2. 更新数据库
        updateDatabase(key, value);
    }
    
    // Redis发布订阅
    public void publish(String channel, String message) {
        redisTemplate.convertAndSend(channel, message);
    }
    
    // Redis Lua脚本:原子操作
    public Long incrementWithLimit(String key, long limit) {
        String luaScript = 
            "local current = redis.call('get', KEYS[1]) " +
            "if current == false then " +
            "  current = 0 " +
            "end " +
            "if tonumber(current) < tonumber(ARGV[1]) then " +
            "  return redis.call('incr', KEYS[1]) " +
            "else " +
            "  return -1 " +
            "end";
        
        DefaultRedisScript<Long> script = new DefaultRedisScript<>();
        script.setScriptText(luaScript);
        script.setResultType(Long.class);
        
        return redisTemplate.execute(script, Collections.singletonList(key), String.valueOf(limit));
    }
    
    private boolean tryLock(String lockKey, String lockValue, long expireTime) {
        Boolean result = redisTemplate.opsForValue()
            .setIfAbsent(lockKey, lockValue, expireTime, TimeUnit.SECONDS);
        return Boolean.TRUE.equals(result);
    }
    
    private void releaseLock(String lockKey, String lockValue) {
        String luaScript = 
            "if redis.call('get', KEYS[1]) == ARGV[1] then " +
            "  return redis.call('del', KEYS[1]) " +
            "else return 0 end";
        
        DefaultRedisScript<Long> script = new DefaultRedisScript<>();
        script.setScriptText(luaScript);
        script.setResultType(Long.class);
        
        redisTemplate.execute(script, Collections.singletonList(lockKey), lockValue);
    }
    
    private String queryFromDatabase(String key) {
        // 查询数据库逻辑
        return null;
    }
    
    private void updateDatabase(String key, String value) {
        // 更新数据库逻辑
    }
    
    private List<String> getHotKeys() {
        // 获取热点key
        return new ArrayList<>();
    }
    
    // 布隆过滤器(需要引入依赖)
    private BloomFilter<String> bloomFilter = BloomFilter.create(
        Funnels.stringFunnel(Charset.defaultCharset()),
        1000000, // 预期插入数量
        0.01     // 误判率
    );
}
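
The "update the database, then delete the cache" strategy above can still leave a stale value if a concurrent reader repopulates the cache between the two steps. A common mitigation is to delete the cache a second time after a short delay. A minimal sketch; the 500 ms delay and class name are illustrative.

java
// Sketch of delayed double delete: delete the cache again shortly after the database update
@Service
@Slf4j
public class DelayedDoubleDeleteService {
    
    @Autowired
    private StringRedisTemplate redisTemplate;
    
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    
    public void updateWithDoubleDelete(String key, Runnable databaseUpdate) {
        // 1. Delete the cache so readers fall through to the database
        redisTemplate.delete(key);
        // 2. Update the database
        databaseUpdate.run();
        // 3. Delete again after a short delay to evict any stale value a concurrent reader cached meanwhile
        scheduler.schedule(() -> redisTemplate.delete(key), 500, TimeUnit.MILLISECONDS);
    }
}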

5. Monitoring and Operations

5.1 Application Performance Monitoring (APM)

APM monitoring implementation:

java
// 性能监控切面
@Aspect
@Component
@Slf4j
public class PerformanceMonitorAspect {
    
    @Around("execution(* com.example.service.*.*(..))")
    public Object monitorPerformance(ProceedingJoinPoint joinPoint) throws Throwable {
        long startTime = System.currentTimeMillis();
        String methodName = joinPoint.getSignature().toShortString();
        
        try {
            Object result = joinPoint.proceed();
            long executionTime = System.currentTimeMillis() - startTime;
            
            // 记录性能指标
            recordPerformanceMetrics(methodName, executionTime, true);
            
            return result;
        } catch (Exception e) {
            long executionTime = System.currentTimeMillis() - startTime;
            recordPerformanceMetrics(methodName, executionTime, false);
            throw e;
        }
    }
    
    private void recordPerformanceMetrics(String methodName, long executionTime, boolean success) {
        // 发送到监控系统(如Prometheus、InfluxDB等)
        log.info("方法: {}, 执行时间: {}ms, 成功: {}", methodName, executionTime, success);
    }
}

// 自定义指标收集
@Component
@Slf4j
public class CustomMetricsCollector {
    
    private final MeterRegistry meterRegistry;
    
    public CustomMetricsCollector(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }
    
    // 记录计数器
    public void incrementCounter(String name, String... tags) {
        Counter.builder(name)
            .tags(tags)
            .register(meterRegistry)
            .increment();
    }
    
    // 记录计时器
    public void recordTimer(String name, long duration, TimeUnit unit) {
        Timer.builder(name)
            .register(meterRegistry)
            .record(duration, unit);
    }
    
    private final ConcurrentHashMap<String, AtomicLong> gaugeValues = new ConcurrentHashMap<>();
    
    // Record a gauge; the gauge is registered once against a mutable holder so that
    // later calls update the reported value instead of re-registering a fixed lambda
    public void recordGauge(String name, long value) {
        gaugeValues.computeIfAbsent(name, k -> meterRegistry.gauge(k, new AtomicLong()))
            .set(value);
    }
}
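
Instead of only logging, the aspect can push the same measurements into Micrometer so they reach Prometheus or another backend. A minimal sketch that recordPerformanceMetrics could delegate to; the metric and class names are illustrative.

java
// Sketch: record method timings and outcomes into Micrometer instead of just logging them
@Component
public class MethodMetricsRecorder {
    
    private final MeterRegistry meterRegistry;
    
    public MethodMetricsRecorder(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }
    
    public void record(String methodName, long executionTimeMillis, boolean success) {
        Timer.builder("service.method.duration")
            .tag("method", methodName)
            .tag("outcome", success ? "success" : "error")
            .register(meterRegistry)
            .record(executionTimeMillis, TimeUnit.MILLISECONDS);
    }
}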

5.2 Distributed Tracing

Distributed trace instrumentation:

java
// 链路追踪服务
@Service
@Slf4j
public class TraceService {
    
    // 创建Span
    public void createSpan(String operationName) {
        Tracer tracer = GlobalTracer.get();
        Span span = tracer.buildSpan(operationName).start();
        
        try (Scope scope = tracer.activateSpan(span)) {
            // 业务逻辑
            doBusinessLogic();
        } finally {
            span.finish();
        }
    }
    
    // Add tags
    public void addTags(Span span, Map<String, String> tags) {
        tags.forEach(span::setTag);
    }
    
    // Add a structured log entry (the OpenTracing Span API logs a map of fields)
    public void addLog(Span span, String event, Object payload) {
        span.log(Map.of("event", event, "payload", payload));
    }
    
    private void doBusinessLogic() {
        // 业务逻辑
    }
}
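
Individual spans only become a trace once they are linked. Below is a minimal sketch of starting a child span under the currently active one, using the same OpenTracing API as above; the operation and tag names are illustrative.

java
// Sketch: create a child span under the currently active span (could be added to TraceService)
public void callDownstream() {
    Tracer tracer = GlobalTracer.get();
    Span parent = tracer.activeSpan();
    
    Span child = tracer.buildSpan("call-inventory-service")
        .asChildOf(parent)              // explicit parent; a null parent simply means a new root span
        .withTag("component", "http-client")
        .start();
    try (Scope scope = tracer.activateSpan(child)) {
        // ... perform the downstream call ...
    } finally {
        child.finish();
    }
}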

6. Architecture Design Patterns

6.1 Domain-Driven Design (DDD)

DDD implementation:

java
// Entity (aggregate root)
@Entity
public class Order {
    @Id
    private Long id;
    private Long userId;
    private OrderStatus status;
    
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private List<OrderItem> items = new ArrayList<>();
    
    // Domain behavior
    public void confirm() {
        if (this.status != OrderStatus.PENDING) {
            throw new IllegalStateException("Only a pending order can be confirmed");
        }
        this.status = OrderStatus.CONFIRMED;
    }
    
    public void cancel() {
        if (this.status == OrderStatus.COMPLETED) {
            throw new IllegalStateException("A completed order cannot be cancelled");
        }
        this.status = OrderStatus.CANCELLED;
    }
    
    public void addItem(OrderItem item) {
        this.items.add(item);
    }
    
    public void removeItem(OrderItem item) {
        this.items.remove(item);
    }
    
    public BigDecimal getTotalAmount() {
        return items.stream()
            .map(OrderItem::getAmount)
            .reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}

// 值对象
@Embeddable
public class Address {
    private String province;
    private String city;
    private String district;
    private String detail;
    
    public Address(String province, String city, String district, String detail) {
        this.province = province;
        this.city = city;
        this.district = district;
        this.detail = detail;
    }
}

// 领域服务
@Service
public class OrderDomainService {
    
    public void transferOrder(Order fromOrder, Order toOrder, OrderItem item) {
        // 领域逻辑
        fromOrder.removeItem(item);
        toOrder.addItem(item);
    }
}

// 仓储接口
public interface OrderRepository {
    Order findById(Long id);
    void save(Order order);
    List<Order> findByUserId(Long userId);
}

// 应用服务
@Service
@Transactional
public class OrderApplicationService {
    
    @Autowired
    private OrderRepository orderRepository;
    
    @Autowired
    private OrderDomainService orderDomainService;
    
    public void createOrder(CreateOrderCommand command) {
        Order order = new Order();
        order.setUserId(command.getUserId());
        // 设置其他属性
        
        orderRepository.save(order);
    }
}
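
OrderRepository above is a pure domain-layer interface; in the infrastructure layer it is commonly adapted onto Spring Data JPA. A minimal sketch, where OrderJpaRepository is a hypothetical Spring Data interface.

java
// Sketch of an infrastructure adapter implementing the domain repository with Spring Data JPA
interface OrderJpaRepository extends JpaRepository<Order, Long> {
    List<Order> findByUserId(Long userId);
}

@Repository
public class JpaOrderRepository implements OrderRepository {
    
    @Autowired
    private OrderJpaRepository jpaRepository;
    
    @Override
    public Order findById(Long id) {
        return jpaRepository.findById(id).orElse(null);
    }
    
    @Override
    public void save(Order order) {
        jpaRepository.save(order);
    }
    
    @Override
    public List<Order> findByUserId(Long userId) {
        return jpaRepository.findByUserId(userId);
    }
}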

6.2 The CQRS Pattern

CQRS implementation:

java
// Command
@Data
public class CreateOrderCommand {
    private Long userId;
    private List<OrderItemDTO> items;
}

// Command handler
@Service
@Transactional
public class CreateOrderCommandHandler {
    
    @Autowired
    private OrderRepository orderRepository;
    
    @Autowired
    private ApplicationEventPublisher eventPublisher;
    
    public void handle(CreateOrderCommand command) {
        Order order = new Order();
        order.setUserId(command.getUserId());
        // Order creation logic
        
        orderRepository.save(order);
        
        // Publish the domain event
        eventPublisher.publishEvent(new OrderCreatedEvent(order.getId()));
    }
}

// Query
@Data
public class GetOrderQuery {
    private Long orderId;
}

// 查询处理器
@Service
public class GetOrderQueryHandler {
    
    @Autowired
    private OrderReadRepository orderReadRepository;
    
    public OrderDTO handle(GetOrderQuery query) {
        return orderReadRepository.findById(query.getOrderId());
    }
}
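
What keeps CQRS consistent is the piece that carries changes from the write model to the read model. Below is a minimal sketch of a read-model updater reacting to the OrderCreatedEvent published by the command handler; the event getter, the OrderDTO fields, and the OrderReadRepository.save method are assumptions.

java
// Sketch: update the read model whenever the write side publishes OrderCreatedEvent
@Component
@Slf4j
public class OrderReadModelUpdater {
    
    @Autowired
    private OrderRepository orderRepository;         // write-side source of truth
    
    @Autowired
    private OrderReadRepository orderReadRepository; // denormalized read store
    
    @EventListener
    public void on(OrderCreatedEvent event) {
        Order order = orderRepository.findById(event.getOrderId());
        if (order == null) {
            log.warn("Order {} not found while updating the read model", event.getOrderId());
            return;
        }
        OrderDTO dto = new OrderDTO();
        dto.setOrderId(order.getId());
        // ... copy the remaining fields the read side needs ...
        orderReadRepository.save(dto);
    }
}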