Atlas Mapper 教程系列 (8/10):性能优化与最佳实践

🎯 学习目标

通过本篇教程,你将学会:

  • 掌握 Atlas Mapper 的性能优化技巧
  • 理解内存管理和缓存策略
  • 学会生产环境的监控和调优方法
  • 掌握最佳实践和设计模式

📋 概念讲解:性能优化架构

性能优化层次

(原文此处为层次架构图,要点如下)

  • 编译时优化:代码生成优化(减少反射调用)、类型检查优化(编译期错误检测)、模板优化(生成高效代码)
  • 运行时优化:缓存策略(避免重复计算)、对象池化(减少 GC 压力)、资源管理(合理使用资源)、批量处理(提高吞吐量)、异步处理(提升响应速度)
  • 架构优化:分层设计、职责分离

性能瓶颈分析

(原文此处为决策流程图:先判断性能问题的类型,再选择对应的优化手段)

  • CPU 密集型 → 代码优化:减少计算复杂度、优化算法逻辑、并行处理
  • 内存密集型 → 内存优化:对象复用、内存池化、垃圾回收优化
  • IO 密集型 → IO 优化:批量操作、异步处理、缓存策略

各类优化手段最终共同指向性能提升。

🔧 实现步骤:性能优化详解

步骤 1:编译时优化

优化 Mapper 配置
java
/**
 * 高性能 Mapper 配置
 */
@Mapper(
    componentModel = "spring",
    
    // 🔥 性能优化配置
    unmappedTargetPolicy = ReportingPolicy.IGNORE,     // 减少运行时检查
    unmappedSourcePolicy = ReportingPolicy.IGNORE,     // 减少运行时检查
    
    // 🔥 代码生成优化
    suppressGeneratorTimestamp = true,                 // 减少生成代码大小
    suppressGeneratorVersionComment = true,            // 减少生成代码大小
    
    // 🔥 集合映射优化
    collectionMappingStrategy = CollectionMappingStrategy.ACCESSOR_ONLY,
    
    // 🔥 空值处理优化
    nullValueMappingStrategy = NullValueMappingStrategy.RETURN_NULL,
    nullValueCheckStrategy = NullValueCheckStrategy.ON_IMPLICIT_CONVERSION
)
public interface OptimizedUserMapper {
    
    /**
     * 🔥 基础映射 - 最高性能
     */
    UserDto toDto(User user);
    
    /**
     * 🔥 批量映射 - 优化集合处理
     */
    List<UserDto> toDtoList(List<User> users);
    
    /**
     * 🔥 条件映射 - 避免不必要的转换
     */
    @Condition("user != null && user.getId() != null")
    UserDto toDtoIfValid(User user);
    
    /**
     * 🔥 浅层映射 - 避免深度递归
     */
    @Mapping(target = "orders", ignore = true)
    @Mapping(target = "addresses", ignore = true)
    UserDto toShallowDto(User user);
}
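
下面是一个示意性的单元测试(假设使用 JUnit 5,生成的实现类名为 OptimizedUserMapperImpl,且 UserDto 含有 orders、addresses 字段),用于验证浅层映射确实跳过了集合属性:
java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

class OptimizedUserMapperShallowTest {

    // 直接实例化编译期生成的实现类(类名为假设,实际以编译产物为准)
    private final OptimizedUserMapper mapper = new OptimizedUserMapperImpl();

    @Test
    void shallowDtoShouldIgnoreCollections() {
        User user = new User();
        user.setId(1L);
        user.setName("测试用户");

        UserDto dto = mapper.toShallowDto(user);

        assertEquals(1L, dto.getId());
        // orders / addresses 被 @Mapping(ignore = true) 跳过,应保持为 null
        assertNull(dto.getOrders());
        assertNull(dto.getAddresses());
    }
}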
生成代码优化分析
java
/**
 * 编译期生成的实现代码(示例,标注了各优化点)
 */
public class UserMapperImpl implements UserMapper {
    
    @Override
    public UserDto toDto(User user) {
        if (user == null) {
            return null;
        }
        
        UserDto userDto = new UserDto();
        
        // 🔥 优化点1:直接字段访问,避免反射
        userDto.setId(user.getId());
        userDto.setName(user.getName());
        userDto.setEmail(user.getEmail());
        
        // 🔥 优化点2:内联类型转换,避免方法调用
        if (user.getCreatedAt() != null) {
            userDto.setCreatedAt(user.getCreatedAt().format(DateTimeFormatter.ISO_LOCAL_DATE_TIME));
        }
        
        // 🔥 优化点3:条件检查优化
        if (user.getStatus() != null) {
            userDto.setStatusDesc(mapStatus(user.getStatus()));
        }
        
        return userDto;
    }
    
    // 🔥 优化点4:私有方法内联
    private String mapStatus(Integer status) {
        switch (status) {
            case 0: return "待激活";
            case 1: return "已激活";
            case 2: return "已禁用";
            default: return "未知";
        }
    }
}
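
作为对照,下面是一个基于反射的属性拷贝示意(这里借用 Spring 的 BeanUtils,仅用于说明差异,并不是 Atlas Mapper 的实现方式):编译期生成的代码直接调用 getter/setter,避免了这类运行时自省与类型匹配的开销。
java
import org.springframework.beans.BeanUtils;

public class ReflectionCopyExample {

    /**
     * 运行时反射拷贝:每次调用都要经过属性自省与类型匹配,
     * 而生成代码在编译期就已经确定了字段访问路径。
     */
    public UserDto copyByReflection(User user) {
        if (user == null) {
            return null;
        }
        UserDto dto = new UserDto();
        BeanUtils.copyProperties(user, dto);  // 仅拷贝同名同类型属性,复杂转换仍需手写
        return dto;
    }
}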

步骤 2:运行时性能优化

对象池化和复用
java
/**
 * 高性能映射服务 - 使用对象池
 */
@Service
public class HighPerformanceMappingService {
    
    private final UserMapper userMapper;
    
    // 🔥 对象池 - 复用 DTO 对象
    private final ObjectPool<UserDto> userDtoPool;
    
    // 🔥 线程本地缓存 - 避免线程竞争
    private final ThreadLocal<List<UserDto>> threadLocalDtoList;
    
    public HighPerformanceMappingService(UserMapper userMapper) {
        this.userMapper = userMapper;
        
        // 初始化对象池
        this.userDtoPool = new GenericObjectPool<>(new UserDtoFactory(), 
                createPoolConfig());
        
        // 初始化线程本地缓存
        this.threadLocalDtoList = ThreadLocal.withInitial(() -> new ArrayList<>(100));
    }
    
    /**
     * 🔥 高性能单对象映射
     */
    public UserDto mapUserWithPool(User user) {
        if (user == null) {
            return null;
        }
        
        UserDto dto = null;
        try {
            // 从对象池获取 DTO
            dto = userDtoPool.borrowObject();
            
            // 重置 DTO 状态
            resetUserDto(dto);
            
            // 手动映射(避免框架开销)
            dto.setId(user.getId());
            dto.setName(user.getName());
            dto.setEmail(user.getEmail());
            dto.setCreatedAt(formatDateTime(user.getCreatedAt()));
            
            return dto;
            
        } catch (Exception e) {
            // 如果对象池出错,回退到普通映射
            return userMapper.toDto(user);
        } finally {
            // 注意:不要在这里归还对象,由调用方负责
        }
    }
    
    /**
     * 🔥 高性能批量映射
     */
    public List<UserDto> mapUsersWithOptimization(List<User> users) {
        if (users == null || users.isEmpty()) {
            return Collections.emptyList();
        }
        
        // 使用线程本地列表避免重复创建
        List<UserDto> dtoList = threadLocalDtoList.get();
        dtoList.clear();
        
        // 预分配容量
        if (dtoList instanceof ArrayList) {
            ((ArrayList<UserDto>) dtoList).ensureCapacity(users.size());
        }
        
        // 批量映射
        for (User user : users) {
            if (user != null) {
                UserDto dto = mapUserWithPool(user);
                if (dto != null) {
                    dtoList.add(dto);
                }
            }
        }
        
        // 返回副本,保持线程本地列表可复用
        return new ArrayList<>(dtoList);
    }
    
    /**
     * 🔥 并行映射 - 适用于大数据集
     */
    public List<UserDto> mapUsersInParallel(List<User> users) {
        if (users == null || users.isEmpty()) {
            return Collections.emptyList();
        }
        
        // 小数据集直接使用串行处理
        if (users.size() < 1000) {
            return userMapper.toDtoList(users);
        }
        
        // 大数据集使用并行流
        return users.parallelStream()
                .filter(Objects::nonNull)
                .map(userMapper::toDto)
                .filter(Objects::nonNull)
                .collect(Collectors.toList());
    }
    
    /**
     * 🔥 分批处理 - 控制内存使用
     */
    public List<UserDto> mapUsersInBatches(List<User> users, int batchSize) {
        if (users == null || users.isEmpty()) {
            return Collections.emptyList();
        }
        
        List<UserDto> result = new ArrayList<>(users.size());
        
        for (int i = 0; i < users.size(); i += batchSize) {
            int endIndex = Math.min(i + batchSize, users.size());
            List<User> batch = users.subList(i, endIndex);
            
            List<UserDto> batchResult = userMapper.toDtoList(batch);
            result.addAll(batchResult);
            
            // 🔥 批次间垃圾回收提示(跳过 i == 0 的首个批次)
            if (i > 0 && i % (batchSize * 10) == 0) {
                System.gc();  // 显式 GC 代价较高,仅在确有必要时使用
            }
        }
        
        return result;
    }
    
    // 辅助方法
    private GenericObjectPoolConfig<UserDto> createPoolConfig() {
        GenericObjectPoolConfig<UserDto> config = new GenericObjectPoolConfig<>();
        config.setMaxTotal(100);           // 最大对象数
        config.setMaxIdle(50);             // 最大空闲对象数
        config.setMinIdle(10);             // 最小空闲对象数
        config.setTestOnBorrow(false);     // 关闭借用时测试
        config.setTestOnReturn(false);     // 关闭归还时测试
        return config;
    }
    
    private void resetUserDto(UserDto dto) {
        dto.setId(null);
        dto.setName(null);
        dto.setEmail(null);
        dto.setCreatedAt(null);
    }
    
    private String formatDateTime(LocalDateTime dateTime) {
        return dateTime != null ? dateTime.format(DateTimeFormatter.ISO_LOCAL_DATE_TIME) : null;
    }
    
    /**
     * 对象工厂
     */
    private static class UserDtoFactory extends BasePooledObjectFactory<UserDto> {
        @Override
        public UserDto create() {
            return new UserDto();
        }
        
        @Override
        public PooledObject<UserDto> wrap(UserDto dto) {
            return new DefaultPooledObject<>(dto);
        }
    }
}
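
上面的 mapUserWithPool 把归还对象的责任留给了调用方。下面是一个示意性的调用方写法(假设服务额外暴露了 returnUserDto 方法,内部调用 userDtoPool.returnObject):
java
public class PooledDtoUsageExample {

    private final HighPerformanceMappingService mappingService;

    public PooledDtoUsageExample(HighPerformanceMappingService mappingService) {
        this.mappingService = mappingService;
    }

    public String renderUser(User user) {
        UserDto dto = mappingService.mapUserWithPool(user);
        if (dto == null) {
            return "";
        }
        try {
            // 在 DTO 的生命周期内使用它
            return dto.getName() + " <" + dto.getEmail() + ">";
        } finally {
            // 使用完毕立即归还,避免池对象泄漏
            mappingService.returnUserDto(dto);  // 假设的方法:内部调用 userDtoPool.returnObject(dto)
        }
    }
}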
缓存策略实现
java
/**
 * 智能缓存映射服务
 */
@Service
public class CachedMappingService {
    
    private final UserMapper userMapper;
    
    // 🔥 多级缓存策略
    private final Cache<String, UserDto> l1Cache;           // L1: 本地缓存
    private final org.springframework.cache.Cache l2Cache;  // L2: 分布式缓存(Spring Cache 抽象,与上面的 Caffeine Cache 类型不同)
    
    // 🔥 映射结果缓存
    private final LoadingCache<User, UserDto> mappingCache;
    
    // 🔥 批量映射缓存
    private final Cache<String, List<UserDto>> batchCache;
    
    public CachedMappingService(UserMapper userMapper, CacheManager cacheManager) {
        this.userMapper = userMapper;
        
        // 初始化本地缓存
        this.l1Cache = Caffeine.newBuilder()
                .maximumSize(1000)
                .expireAfterWrite(Duration.ofMinutes(10))
                .recordStats()
                .build();
        
        // 初始化分布式缓存
        this.l2Cache = cacheManager.getCache("user-mapping");
        
        // 初始化映射缓存
        this.mappingCache = Caffeine.newBuilder()
                .maximumSize(500)
                .expireAfterWrite(Duration.ofMinutes(5))
                .build(this::doMapping);
        
        // 初始化批量缓存
        this.batchCache = Caffeine.newBuilder()
                .maximumSize(100)
                .expireAfterWrite(Duration.ofMinutes(3))
                .build();
    }
    
    /**
     * 🔥 带缓存的单对象映射
     */
    public UserDto mapUserWithCache(User user) {
        if (user == null || user.getId() == null) {
            return userMapper.toDto(user);
        }
        
        String cacheKey = "user:" + user.getId() + ":" + user.getUpdatedAt().toEpochSecond(ZoneOffset.UTC);
        
        // L1 缓存查找
        UserDto cached = l1Cache.getIfPresent(cacheKey);
        if (cached != null) {
            return cached;
        }
        
        // L2 缓存查找
        cached = l2Cache.get(cacheKey, UserDto.class);
        if (cached != null) {
            l1Cache.put(cacheKey, cached);  // 回填 L1 缓存
            return cached;
        }
        
        // 执行映射
        UserDto dto = userMapper.toDto(user);
        
        // 写入缓存
        if (dto != null) {
            l1Cache.put(cacheKey, dto);
            l2Cache.put(cacheKey, dto);
        }
        
        return dto;
    }
    
    /**
     * 🔥 智能批量映射缓存
     */
    public List<UserDto> mapUsersWithSmartCache(List<User> users) {
        if (users == null || users.isEmpty()) {
            return Collections.emptyList();
        }
        
        // 生成批量缓存键
        String batchKey = generateBatchKey(users);
        
        // 检查批量缓存
        List<UserDto> cachedResult = batchCache.getIfPresent(batchKey);
        if (cachedResult != null) {
            return cachedResult;
        }
        
        // 分离缓存命中和未命中的对象(这里只查 L1 缓存,未命中的留给后面的批量映射;
        // 若直接调用 mapUserWithCache 会逐个触发映射,下方批量分支将永远不会执行)
        List<UserDto> result = new ArrayList<>(users.size());
        List<User> uncachedUsers = new ArrayList<>();
        
        for (int i = 0; i < users.size(); i++) {
            User user = users.get(i);
            if (user == null) {
                result.add(null);
                continue;
            }
            
            String cacheKey = "user:" + user.getId() + ":" + user.getUpdatedAt().toEpochSecond(ZoneOffset.UTC);
            UserDto cached = l1Cache.getIfPresent(cacheKey);
            if (cached != null) {
                result.add(cached);
            } else {
                result.add(null);  // 占位符,稍后由批量映射结果填充
                uncachedUsers.add(user);
            }
        }
        
        // 批量映射未缓存的对象
        if (!uncachedUsers.isEmpty()) {
            List<UserDto> uncachedResults = userMapper.toDtoList(uncachedUsers);
            
            // 填充结果并更新缓存
            int uncachedIndex = 0;
            for (int i = 0; i < result.size(); i++) {
                if (users.get(i) != null && result.get(i) == null && uncachedIndex < uncachedResults.size()) {
                    UserDto dto = uncachedResults.get(uncachedIndex++);
                    result.set(i, dto);
                    
                    // 更新单对象缓存
                    User user = users.get(i);
                    if (user != null && dto != null) {
                        String cacheKey = "user:" + user.getId() + ":" + user.getUpdatedAt().toEpochSecond(ZoneOffset.UTC);
                        l1Cache.put(cacheKey, dto);
                    }
                }
            }
        }
        
        // 缓存批量结果
        batchCache.put(batchKey, result);
        
        return result;
    }
    
    /**
     * 🔥 预热缓存
     */
    @EventListener(ApplicationReadyEvent.class)
    public void warmUpCache() {
        CompletableFuture.runAsync(() -> {
            try {
                // 预加载热点数据
                List<User> hotUsers = loadHotUsers();
                mapUsersWithSmartCache(hotUsers);
                
                log.info("缓存预热完成,预加载 {} 个用户", hotUsers.size());
            } catch (Exception e) {
                log.warn("缓存预热失败", e);
            }
        });
    }
    
    /**
     * 🔥 缓存统计和监控
     */
    @Scheduled(fixedRate = 60000)  // 每分钟执行一次
    public void reportCacheStats() {
        CacheStats l1Stats = l1Cache.stats();
        
        log.info("L1缓存统计 - 命中率: {:.2f}%, 请求数: {}, 命中数: {}, 未命中数: {}, 驱逐数: {}",
                l1Stats.hitRate() * 100,
                l1Stats.requestCount(),
                l1Stats.hitCount(),
                l1Stats.missCount(),
                l1Stats.evictionCount());
        
        // 发送监控指标
        sendMetrics("cache.l1.hit_rate", l1Stats.hitRate());
        sendMetrics("cache.l1.request_count", l1Stats.requestCount());
    }
    
    // 辅助方法
    private UserDto doMapping(User user) {
        return userMapper.toDto(user);
    }
    
    private String generateBatchKey(List<User> users) {
        return users.stream()
                .filter(Objects::nonNull)
                .map(user -> user.getId() + ":" + user.getUpdatedAt().toEpochSecond(ZoneOffset.UTC))
                .collect(Collectors.joining(",", "batch:", ""));
    }
    
    private List<User> loadHotUsers() {
        // 实现热点用户加载逻辑
        return Collections.emptyList();
    }
    
    private void sendMetrics(String name, double value) {
        // 发送监控指标到监控系统
    }
}
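
上面构造缓存键时直接调用了 user.getUpdatedAt(),如果该字段可能为空会抛出 NPE,而且同样的拼接逻辑出现了多处。下面是一个可以抽取出来的空值安全键构造方法草稿(buildCacheKey 为假设的辅助方法名):
java
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public final class CacheKeys {

    private CacheKeys() {
    }

    /**
     * 空值安全的缓存键:updatedAt 为空时用 0 占位,避免 NPE;
     * 数据更新后会生成新的键,旧键随过期策略自然淘汰。
     */
    public static String buildCacheKey(User user) {
        LocalDateTime updatedAt = user.getUpdatedAt();
        long version = updatedAt != null ? updatedAt.toEpochSecond(ZoneOffset.UTC) : 0L;
        return "user:" + user.getId() + ":" + version;
    }
}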

步骤 3:内存优化策略

内存使用分析和优化
java
/**
 * 内存优化的映射服务
 */
@Service
public class MemoryOptimizedMappingService {
    
    private final UserMapper userMapper;
    
    // 🔥 内存监控
    private final MemoryMXBean memoryBean;
    private final GarbageCollectorMXBean gcBean;
    
    // 🔥 对象大小估算器(示意:JDK 并未内置此类,可使用第三方库或按字段粗略估算)
    private final ObjectSizeCalculator sizeCalculator;
    
    public MemoryOptimizedMappingService(UserMapper userMapper) {
        this.userMapper = userMapper;
        this.memoryBean = ManagementFactory.getMemoryMXBean();
        this.gcBean = ManagementFactory.getGarbageCollectorMXBeans().get(0);  // 示意:仅取第一个收集器,生产中应汇总全部
        this.sizeCalculator = new ObjectSizeCalculator();
    }
    
    /**
     * 🔥 内存感知的批量映射
     */
    public List<UserDto> mapUsersWithMemoryControl(List<User> users, long maxMemoryMB) {
        if (users == null || users.isEmpty()) {
            return Collections.emptyList();
        }
        
        long maxMemoryBytes = maxMemoryMB * 1024 * 1024;
        long currentMemoryUsage = getCurrentMemoryUsage();
        
        // 如果内存使用已经很高,触发 GC
        if (currentMemoryUsage > maxMemoryBytes * 0.8) {
            System.gc();
            currentMemoryUsage = getCurrentMemoryUsage();
        }
        
        // 估算单个对象的内存使用
        User sampleUser = users.get(0);
        UserDto sampleDto = userMapper.toDto(sampleUser);
        long singleObjectSize = sizeCalculator.calculateObjectSize(sampleDto);
        
        // 计算安全的批次大小
        long availableMemory = maxMemoryBytes - currentMemoryUsage;
        int safeBatchSize = (int) Math.min(users.size(), availableMemory / singleObjectSize / 2);
        
        if (safeBatchSize < 100) {
            safeBatchSize = 100;  // 最小批次大小
        }
        
        log.info("内存控制映射 - 总对象数: {}, 批次大小: {}, 预估单对象大小: {} bytes", 
                users.size(), safeBatchSize, singleObjectSize);
        
        // 分批处理
        return mapUsersInBatchesWithMemoryMonitoring(users, safeBatchSize);
    }
    
    /**
     * 🔥 带内存监控的分批映射
     */
    private List<UserDto> mapUsersInBatchesWithMemoryMonitoring(List<User> users, int batchSize) {
        List<UserDto> result = new ArrayList<>();
        
        for (int i = 0; i < users.size(); i += batchSize) {
            int endIndex = Math.min(i + batchSize, users.size());
            List<User> batch = users.subList(i, endIndex);
            
            // 监控内存使用
            long memoryBefore = getCurrentMemoryUsage();
            
            // 执行批次映射
            List<UserDto> batchResult = userMapper.toDtoList(batch);
            result.addAll(batchResult);
            
            long memoryAfter = getCurrentMemoryUsage();
            long memoryUsed = memoryAfter - memoryBefore;
            
            log.debug("批次 {}-{} 完成,内存使用: {} MB", i, endIndex, memoryUsed / 1024 / 1024);
            
            // 如果内存使用过高,触发 GC
            if (memoryUsed > 50 * 1024 * 1024) {  // 50MB
                System.gc();
                Thread.yield();  // 让 GC 线程有机会运行
            }
        }
        
        return result;
    }
    
    /**
     * 🔥 流式映射 - 减少内存峰值
     */
    public Stream<UserDto> mapUsersAsStream(List<User> users) {
        return users.stream()
                .filter(Objects::nonNull)
                .map(user -> {
                    try {
                        return userMapper.toDto(user);
                    } catch (Exception e) {
                        log.warn("映射用户失败: {}", user.getId(), e);
                        return null;
                    }
                })
                .filter(Objects::nonNull);
    }
    
    /**
     * 🔥 懒加载映射 - 按需映射
     */
    public Iterator<UserDto> mapUsersLazily(List<User> users) {
        return new Iterator<UserDto>() {
            private int index = 0;
            
            @Override
            public boolean hasNext() {
                return index < users.size();
            }
            
            @Override
            public UserDto next() {
                if (!hasNext()) {
                    throw new NoSuchElementException();
                }
                
                User user = users.get(index++);
                return user != null ? userMapper.toDto(user) : null;
            }
        };
    }
    
    /**
     * 🔥 内存使用报告
     */
    public MemoryUsageReport generateMemoryReport() {
        MemoryUsage heapUsage = memoryBean.getHeapMemoryUsage();
        MemoryUsage nonHeapUsage = memoryBean.getNonHeapMemoryUsage();
        
        return MemoryUsageReport.builder()
                .heapUsed(heapUsage.getUsed())
                .heapMax(heapUsage.getMax())
                .heapCommitted(heapUsage.getCommitted())
                .nonHeapUsed(nonHeapUsage.getUsed())
                .nonHeapMax(nonHeapUsage.getMax())
                .gcCollectionCount(gcBean.getCollectionCount())
                .gcCollectionTime(gcBean.getCollectionTime())
                .build();
    }
    
    // 辅助方法
    private long getCurrentMemoryUsage() {
        return memoryBean.getHeapMemoryUsage().getUsed();
    }
    
    /**
     * 内存使用报告
     */
    @Data
    @Builder
    public static class MemoryUsageReport {
        private long heapUsed;
        private long heapMax;
        private long heapCommitted;
        private long nonHeapUsed;
        private long nonHeapMax;
        private long gcCollectionCount;
        private long gcCollectionTime;
        
        public double getHeapUsagePercentage() {
            return heapMax > 0 ? (double) heapUsed / heapMax * 100 : 0;
        }
    }
}
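
流式映射只有在消费端也保持流式时才能真正降低内存峰值。下面是一个示意性的消费方式:边映射边写出,而不是先收集成 List(writeAsJsonLine 为假设的序列化方法):
java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.io.Writer;
import java.util.List;

public class StreamingExportExample {

    private final MemoryOptimizedMappingService mappingService;

    public StreamingExportExample(MemoryOptimizedMappingService mappingService) {
        this.mappingService = mappingService;
    }

    /**
     * 边映射边写出:任意时刻内存中只有当前正在处理的少量 DTO,
     * 而不是整个结果列表。
     */
    public void exportUsers(List<User> users, Writer out) {
        mappingService.mapUsersAsStream(users).forEach(dto -> {
            try {
                out.write(writeAsJsonLine(dto));  // 假设的输出方法,实际可用 Jackson 序列化为一行 JSON
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
    }

    private String writeAsJsonLine(UserDto dto) {
        // 示意实现:实际项目中可替换为 objectMapper.writeValueAsString(dto) + "\n"
        return dto.getId() + "," + dto.getName() + "\n";
    }
}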

💻 示例代码:生产环境最佳实践

示例 1:性能监控和指标收集

java
/**
 * 性能监控服务
 */
@Service
public class MappingPerformanceMonitor {
    
    private final MeterRegistry meterRegistry;
    private final Timer mappingTimer;
    private final Counter mappingCounter;
    private final Counter errorCounter;
    private final Gauge memoryGauge;
    
    // 🔥 性能统计
    private final AtomicLong totalMappings = new AtomicLong(0);
    private final AtomicLong totalErrors = new AtomicLong(0);
    private final AtomicReference<Double> averageExecutionTime = new AtomicReference<>(0.0);
    
    // 🔥 性能历史记录(CircularFifoQueue 非线程安全,并发写入时需自行同步或换用并发队列)
    private final CircularFifoQueue<PerformanceRecord> performanceHistory;
    
    public MappingPerformanceMonitor(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
        
        // 初始化指标
        this.mappingTimer = Timer.builder("atlas.mapper.execution.time")
                .description("Atlas Mapper execution time")
                .register(meterRegistry);
        
        this.mappingCounter = Counter.builder("atlas.mapper.executions")
                .description("Atlas Mapper execution count")
                .register(meterRegistry);
        
        this.errorCounter = Counter.builder("atlas.mapper.errors")
                .description("Atlas Mapper error count")
                .register(meterRegistry);
        
        this.memoryGauge = Gauge.builder("atlas.mapper.memory.usage", this, MappingPerformanceMonitor::getCurrentMemoryUsage)
                .description("Atlas Mapper memory usage")
                .register(meterRegistry);
        
        // 初始化性能历史
        this.performanceHistory = new CircularFifoQueue<>(1000);
    }
    
    /**
     * 🔥 监控映射执行
     */
    public <T> T monitorMapping(String mapperName, String methodName, Supplier<T> mappingOperation) {
        Timer.Sample sample = Timer.start(meterRegistry);
        long startTime = System.nanoTime();
        
        try {
            T result = mappingOperation.get();
            
            // 记录成功指标
            recordSuccess(mapperName, methodName, startTime);
            
            return result;
            
        } catch (Exception e) {
            // 记录错误指标
            recordError(mapperName, methodName, e, startTime);
            throw e;
            
        } finally {
            sample.stop(mappingTimer);
            mappingCounter.increment();
        }
    }
    
    /**
     * 🔥 批量映射性能监控
     */
    public <T> List<T> monitorBatchMapping(String mapperName, String methodName, 
                                          List<?> sourceList, Supplier<List<T>> mappingOperation) {
        
        int batchSize = sourceList != null ? sourceList.size() : 0;
        Timer.Sample sample = Timer.start(meterRegistry);
        long startTime = System.nanoTime();
        
        try {
            List<T> result = mappingOperation.get();
            
            // 记录批量成功指标
            recordBatchSuccess(mapperName, methodName, batchSize, startTime);
            
            return result;
            
        } catch (Exception e) {
            // 记录批量错误指标
            recordBatchError(mapperName, methodName, batchSize, e, startTime);
            throw e;
            
        } finally {
            sample.stop(mappingTimer);
            mappingCounter.increment(batchSize);
        }
    }
    
    /**
     * 🔥 性能分析报告
     */
    public PerformanceAnalysisReport generatePerformanceReport() {
        List<PerformanceRecord> recentRecords = new ArrayList<>(performanceHistory);
        
        if (recentRecords.isEmpty()) {
            return PerformanceAnalysisReport.empty();
        }
        
        // 计算统计指标
        DoubleSummaryStatistics timeStats = recentRecords.stream()
                .mapToDouble(PerformanceRecord::getExecutionTimeMs)
                .summaryStatistics();
        
        Map<String, Long> mapperUsage = recentRecords.stream()
                .collect(Collectors.groupingBy(
                        PerformanceRecord::getMapperName,
                        Collectors.counting()
                ));
        
        Map<String, Double> mapperAvgTime = recentRecords.stream()
                .collect(Collectors.groupingBy(
                        PerformanceRecord::getMapperName,
                        Collectors.averagingDouble(PerformanceRecord::getExecutionTimeMs)
                ));
        
        // 识别性能瓶颈
        List<String> performanceBottlenecks = identifyBottlenecks(recentRecords);
        
        return PerformanceAnalysisReport.builder()
                .totalMappings(totalMappings.get())
                .totalErrors(totalErrors.get())
                .averageExecutionTime(timeStats.getAverage())
                .minExecutionTime(timeStats.getMin())
                .maxExecutionTime(timeStats.getMax())
                .mapperUsageCount(mapperUsage)
                .mapperAverageTime(mapperAvgTime)
                .performanceBottlenecks(performanceBottlenecks)
                .errorRate(calculateErrorRate())
                .throughput(calculateThroughput())
                .build();
    }
    
    /**
     * 🔥 性能告警检查
     */
    @Scheduled(fixedRate = 30000)  // 每30秒检查一次
    public void checkPerformanceAlerts() {
        PerformanceAnalysisReport report = generatePerformanceReport();
        
        // 检查错误率告警
        if (report.getErrorRate() > 0.05) {  // 错误率超过 5%
            sendAlert("高错误率告警", "映射错误率达到 " + String.format("%.2f%%", report.getErrorRate() * 100));
        }
        
        // 检查平均执行时间告警
        if (report.getAverageExecutionTime() > 100) {  // 平均执行时间超过 100ms
            sendAlert("性能告警", "平均映射时间达到 " + String.format("%.2f ms", report.getAverageExecutionTime()));
        }
        
        // 检查内存使用告警
        double memoryUsage = getCurrentMemoryUsage();
        if (memoryUsage > 0.8) {  // 内存使用超过 80%
            sendAlert("内存告警", "内存使用率达到 " + String.format("%.2f%%", memoryUsage * 100));
        }
    }
    
    // 私有方法
    private void recordSuccess(String mapperName, String methodName, long startTime) {
        long executionTime = System.nanoTime() - startTime;
        double executionTimeMs = executionTime / 1_000_000.0;
        
        PerformanceRecord record = PerformanceRecord.builder()
                .mapperName(mapperName)
                .methodName(methodName)
                .executionTimeMs(executionTimeMs)
                .success(true)
                .timestamp(Instant.now())
                .build();
        
        performanceHistory.add(record);
        totalMappings.incrementAndGet();
        updateAverageExecutionTime(executionTimeMs);
    }
    
    private void recordError(String mapperName, String methodName, Exception error, long startTime) {
        long executionTime = System.nanoTime() - startTime;
        double executionTimeMs = executionTime / 1_000_000.0;
        
        PerformanceRecord record = PerformanceRecord.builder()
                .mapperName(mapperName)
                .methodName(methodName)
                .executionTimeMs(executionTimeMs)
                .success(false)
                .errorMessage(error.getMessage())
                .timestamp(Instant.now())
                .build();
        
        performanceHistory.add(record);
        totalErrors.incrementAndGet();
        
        // 记录错误指标(Micrometer 的 Counter.increment 不接受标签参数,
        // 带标签的计数需通过 MeterRegistry 另行注册)
        errorCounter.increment();
        meterRegistry.counter("atlas.mapper.errors.detail",
                "mapper", mapperName,
                "method", methodName,
                "error", error.getClass().getSimpleName())
                .increment();
    }
    
    private void recordBatchSuccess(String mapperName, String methodName, int batchSize, long startTime) {
        long executionTime = System.nanoTime() - startTime;
        double executionTimeMs = executionTime / 1_000_000.0;
        
        PerformanceRecord record = PerformanceRecord.builder()
                .mapperName(mapperName)
                .methodName(methodName)
                .executionTimeMs(executionTimeMs)
                .batchSize(batchSize)
                .success(true)
                .timestamp(Instant.now())
                .build();
        
        performanceHistory.add(record);
        totalMappings.addAndGet(batchSize);
        updateAverageExecutionTime(executionTimeMs);
        
        // 记录批量处理指标
        Timer.builder("atlas.mapper.batch.execution.time")
                .tag("mapper", mapperName)
                .tag("method", methodName)
                .register(meterRegistry)
                .record(executionTime, TimeUnit.NANOSECONDS);
        
        Gauge.builder("atlas.mapper.batch.size")
                .tag("mapper", mapperName)
                .tag("method", methodName)
                .register(meterRegistry, batchSize, size -> size);
    }
    
    private void recordBatchError(String mapperName, String methodName, int batchSize, Exception error, long startTime) {
        recordError(mapperName, methodName, error, startTime);
        
        // 额外记录批量错误
        Counter.builder("atlas.mapper.batch.errors")
                .tag("mapper", mapperName)
                .tag("method", methodName)
                .tag("batch_size", String.valueOf(batchSize))
                .register(meterRegistry)
                .increment();
    }
    
    private void updateAverageExecutionTime(double executionTimeMs) {
        averageExecutionTime.updateAndGet(current -> {
            long count = totalMappings.get();
            return count > 1 ? (current * (count - 1) + executionTimeMs) / count : executionTimeMs;
        });
    }
    
    private List<String> identifyBottlenecks(List<PerformanceRecord> records) {
        List<String> bottlenecks = new ArrayList<>();
        
        // 识别慢映射器
        Map<String, Double> avgTimes = records.stream()
                .collect(Collectors.groupingBy(
                        PerformanceRecord::getMapperName,
                        Collectors.averagingDouble(PerformanceRecord::getExecutionTimeMs)
                ));
        
        avgTimes.entrySet().stream()
                .filter(entry -> entry.getValue() > 50)  // 超过 50ms
                .forEach(entry -> bottlenecks.add("慢映射器: " + entry.getKey() + " (平均 " + 
                        String.format("%.2f ms", entry.getValue()) + ")"));
        
        return bottlenecks;
    }
    
    private double calculateErrorRate() {
        long total = totalMappings.get();
        long errors = totalErrors.get();
        return total > 0 ? (double) errors / total : 0.0;
    }
    
    private double calculateThroughput() {
        // 计算最近1分钟的吞吐量
        Instant oneMinuteAgo = Instant.now().minus(Duration.ofMinutes(1));
        
        long recentMappings = performanceHistory.stream()
                .filter(record -> record.getTimestamp().isAfter(oneMinuteAgo))
                .mapToLong(record -> record.getBatchSize() > 0 ? record.getBatchSize() : 1)
                .sum();
        
        return recentMappings / 60.0;  // 每秒映射数
    }
    
    private double getCurrentMemoryUsage() {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heapUsage = memoryBean.getHeapMemoryUsage();
        return (double) heapUsage.getUsed() / heapUsage.getMax();
    }
    
    private void sendAlert(String title, String message) {
        // 实现告警发送逻辑(邮件、短信、钉钉等)
        log.warn("性能告警 - {}: {}", title, message);
    }
    
    /**
     * 性能记录
     */
    @Data
    @Builder
    public static class PerformanceRecord {
        private String mapperName;
        private String methodName;
        private double executionTimeMs;
        private int batchSize;
        private boolean success;
        private String errorMessage;
        private Instant timestamp;
    }
    
    /**
     * 性能分析报告
     */
    @Data
    @Builder
    public static class PerformanceAnalysisReport {
        private long totalMappings;
        private long totalErrors;
        private double averageExecutionTime;
        private double minExecutionTime;
        private double maxExecutionTime;
        private Map<String, Long> mapperUsageCount;
        private Map<String, Double> mapperAverageTime;
        private List<String> performanceBottlenecks;
        private double errorRate;
        private double throughput;
        
        public static PerformanceAnalysisReport empty() {
            return PerformanceAnalysisReport.builder()
                    .totalMappings(0)
                    .totalErrors(0)
                    .averageExecutionTime(0.0)
                    .minExecutionTime(0.0)
                    .maxExecutionTime(0.0)
                    .mapperUsageCount(Collections.emptyMap())
                    .mapperAverageTime(Collections.emptyMap())
                    .performanceBottlenecks(Collections.emptyList())
                    .errorRate(0.0)
                    .throughput(0.0)
                    .build();
        }
    }
}
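
监控器的典型用法是在映射调用处用 monitorMapping / monitorBatchMapping 包一层。下面是一个示意性的接入方式(UserQueryService 为假设的业务类):
java
import java.util.List;

import org.springframework.stereotype.Service;

@Service
public class UserQueryService {

    private final UserMapper userMapper;
    private final MappingPerformanceMonitor performanceMonitor;

    public UserQueryService(UserMapper userMapper, MappingPerformanceMonitor performanceMonitor) {
        this.userMapper = userMapper;
        this.performanceMonitor = performanceMonitor;
    }

    public UserDto loadUser(User user) {
        // 单对象映射:由监控器记录耗时与成功/失败计数
        return performanceMonitor.monitorMapping("UserMapper", "toDto",
                () -> userMapper.toDto(user));
    }

    public List<UserDto> loadUsers(List<User> users) {
        // 批量映射:额外记录批次大小相关指标
        return performanceMonitor.monitorBatchMapping("UserMapper", "toDtoList",
                users, () -> userMapper.toDtoList(users));
    }
}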

示例 2:配置优化和调优

java
/**
 * 生产环境配置优化
 */
@Configuration
@EnableConfigurationProperties(AtlasMapperOptimizationProperties.class)
public class AtlasMapperOptimizationConfiguration {
    
    /**
     * 🔥 高性能映射器配置
     */
    @Bean
    @Primary
    public AtlasMapperConfiguration optimizedMapperConfiguration(
            AtlasMapperOptimizationProperties properties) {
        
        AtlasMapperConfiguration config = new AtlasMapperConfiguration();
        
        // 性能优化配置
        config.setUnmappedTargetPolicy(ReportingPolicy.IGNORE);
        config.setSuppressGeneratorTimestamp(true);
        config.setSuppressGeneratorVersionComment(true);
        
        // 集合映射优化
        config.setCollectionMappingStrategy(CollectionMappingStrategy.ACCESSOR_ONLY);
        
        // 空值处理优化
        config.setNullValueMappingStrategy(NullValueMappingStrategy.RETURN_NULL);
        config.setNullValueCheckStrategy(NullValueCheckStrategy.ON_IMPLICIT_CONVERSION);
        
        // 代码生成优化
        config.setBuilderPattern(properties.isUseBuilderPattern());
        config.setInjectionStrategy(InjectionStrategy.FIELD);
        
        return config;
    }
    
    /**
     * 🔥 线程池配置 - 用于并行映射
     */
    @Bean("mappingExecutor")
    public ThreadPoolTaskExecutor mappingExecutor(AtlasMapperOptimizationProperties properties) {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        
        // 核心线程数 = CPU 核心数
        executor.setCorePoolSize(Runtime.getRuntime().availableProcessors());
        
        // 最大线程数 = CPU 核心数 * 2
        executor.setMaxPoolSize(Runtime.getRuntime().availableProcessors() * 2);
        
        // 队列容量
        executor.setQueueCapacity(properties.getThreadPoolQueueCapacity());
        
        // 线程名前缀
        executor.setThreadNamePrefix("atlas-mapper-");
        
        // 拒绝策略:调用者运行
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        
        // 线程空闲时间
        executor.setKeepAliveSeconds(60);
        
        // 允许核心线程超时
        executor.setAllowCoreThreadTimeOut(true);
        
        executor.initialize();
        return executor;
    }
    
    /**
     * 🔥 缓存配置
     */
    @Bean
    public CacheManager optimizedCacheManager(AtlasMapperOptimizationProperties properties) {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager();
        
        Caffeine<Object, Object> caffeine = Caffeine.newBuilder()
                .maximumSize(properties.getCacheMaxSize())
                .expireAfterWrite(Duration.ofMinutes(properties.getCacheExpireMinutes()))
                .expireAfterAccess(Duration.ofMinutes(properties.getCacheAccessExpireMinutes()))
                .recordStats();
        
        cacheManager.setCaffeine(caffeine);
        return cacheManager;
    }
    
    /**
     * 🔥 对象池配置
     */
    @Bean
    public GenericObjectPoolConfig<Object> objectPoolConfig(AtlasMapperOptimizationProperties properties) {
        GenericObjectPoolConfig<Object> config = new GenericObjectPoolConfig<>();
        
        config.setMaxTotal(properties.getObjectPoolMaxTotal());
        config.setMaxIdle(properties.getObjectPoolMaxIdle());
        config.setMinIdle(properties.getObjectPoolMinIdle());
        
        // 性能优化配置
        config.setTestOnBorrow(false);
        config.setTestOnReturn(false);
        config.setTestWhileIdle(true);
        config.setTestOnCreate(false);
        
        // 驱逐策略
        config.setTimeBetweenEvictionRunsMillis(Duration.ofMinutes(5).toMillis());
        config.setMinEvictableIdleTimeMillis(Duration.ofMinutes(10).toMillis());
        
        return config;
    }
    
    /**
     * 🔥 JVM 优化建议
     */
    @EventListener(ApplicationReadyEvent.class)
    public void printJvmOptimizationSuggestions() {
        Runtime runtime = Runtime.getRuntime();
        long maxMemory = runtime.maxMemory();
        long totalMemory = runtime.totalMemory();
        long freeMemory = runtime.freeMemory();
        
        log.info("=== Atlas Mapper JVM 优化建议 ===");
        log.info("最大内存: {} MB", maxMemory / 1024 / 1024);
        log.info("总内存: {} MB", totalMemory / 1024 / 1024);
        log.info("空闲内存: {} MB", freeMemory / 1024 / 1024);
        
        // 内存建议
        if (maxMemory < 1024 * 1024 * 1024) {  // 小于 1GB
            log.warn("建议增加堆内存大小: -Xmx2g");
        }
        
        // GC 建议(java.vm.name 并不包含 GC 信息,这里通过 GC MXBean 的名称判断是否使用 G1)
        boolean usingG1 = ManagementFactory.getGarbageCollectorMXBeans().stream()
                .anyMatch(bean -> bean.getName().contains("G1"));
        if (!usingG1) {
            log.info("建议使用 G1 垃圾收集器: -XX:+UseG1GC");
        }
        
        // 其他 JVM 参数建议(以 JDK 8 为例;JDK 11+ 已移除 UseCGroupMemoryLimitForHeap,GC 日志改用 -Xlog:gc*)
        log.info("推荐 JVM 参数:");
        log.info("  -XX:+UseG1GC");
        log.info("  -XX:MaxGCPauseMillis=200");
        log.info("  -XX:+UnlockExperimentalVMOptions");
        log.info("  -XX:+UseCGroupMemoryLimitForHeap");
        log.info("  -XX:+PrintGCDetails");
        log.info("  -XX:+PrintGCTimeStamps");
        log.info("================================");
    }
}

/**
 * 优化配置属性
 */
@ConfigurationProperties(prefix = "atlas.mapper.optimization")
@Data
public class AtlasMapperOptimizationProperties {
    
    /**
     * 是否使用建造者模式
     */
    private boolean useBuilderPattern = false;
    
    /**
     * 线程池队列容量
     */
    private int threadPoolQueueCapacity = 1000;
    
    /**
     * 缓存最大大小
     */
    private long cacheMaxSize = 10000;
    
    /**
     * 缓存过期时间(分钟)
     */
    private int cacheExpireMinutes = 30;
    
    /**
     * 缓存访问过期时间(分钟)
     */
    private int cacheAccessExpireMinutes = 10;
    
    /**
     * 对象池最大总数
     */
    private int objectPoolMaxTotal = 100;
    
    /**
     * 对象池最大空闲数
     */
    private int objectPoolMaxIdle = 50;
    
    /**
     * 对象池最小空闲数
     */
    private int objectPoolMinIdle = 10;
    
    /**
     * 是否启用性能监控
     */
    private boolean enablePerformanceMonitoring = true;
    
    /**
     * 是否启用内存优化
     */
    private boolean enableMemoryOptimization = true;
    
    /**
     * 批量处理阈值
     */
    private int batchProcessingThreshold = 1000;
    
    /**
     * 并行处理阈值
     */
    private int parallelProcessingThreshold = 5000;
}
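
配置中声明的 mappingExecutor 线程池可以这样接入并行映射:按批次切分后交给线程池并发执行,再合并结果。以下为示意(ParallelMappingService 为假设的封装类):
java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Service;

@Service
public class ParallelMappingService {

    private final UserMapper userMapper;
    private final ThreadPoolTaskExecutor mappingExecutor;

    public ParallelMappingService(UserMapper userMapper,
                                  @Qualifier("mappingExecutor") ThreadPoolTaskExecutor mappingExecutor) {
        this.userMapper = userMapper;
        this.mappingExecutor = mappingExecutor;
    }

    public List<UserDto> mapInParallel(List<User> users, int batchSize) {
        if (users == null || users.isEmpty()) {
            return Collections.emptyList();
        }

        // 切分批次,每个批次提交到专用线程池,避免占用公共 ForkJoinPool
        List<CompletableFuture<List<UserDto>>> futures = new ArrayList<>();
        for (int i = 0; i < users.size(); i += batchSize) {
            List<User> batch = users.subList(i, Math.min(i + batchSize, users.size()));
            futures.add(CompletableFuture.supplyAsync(() -> userMapper.toDtoList(batch), mappingExecutor));
        }

        // 按提交顺序合并各批次结果,保持原有顺序
        return futures.stream()
                .map(CompletableFuture::join)
                .flatMap(List::stream)
                .collect(Collectors.toList());
    }
}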

🎬 效果演示:性能测试和对比

性能基准测试

java
/**
 * 性能基准测试
 */
@Component
public class MappingPerformanceBenchmark {
    
    private final UserMapper standardMapper = Mappers.getMapper(UserMapper.class);
    private final HighPerformanceMappingService optimizedService;
    private final CachedMappingService cachedService;
    
    public MappingPerformanceBenchmark(HighPerformanceMappingService optimizedService,
                                     CachedMappingService cachedService) {
        this.optimizedService = optimizedService;
        this.cachedService = cachedService;
    }
    
    /**
     * 🔥 单对象映射性能对比
     */
    public void benchmarkSingleObjectMapping() {
        User testUser = createTestUser();
        int iterations = 100000;
        
        // 标准映射
        long startTime = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            UserDto dto = standardMapper.toDto(testUser);
        }
        long standardTime = System.nanoTime() - startTime;
        
        // 优化映射
        startTime = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            UserDto dto = optimizedService.mapUserWithPool(testUser);
        }
        long optimizedTime = System.nanoTime() - startTime;
        
        // 缓存映射
        startTime = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            UserDto dto = cachedService.mapUserWithCache(testUser);
        }
        long cachedTime = System.nanoTime() - startTime;
        
        // 输出结果
        System.out.println("=== 单对象映射性能对比 (" + iterations + " 次) ===");
        System.out.println("标准映射: " + formatTime(standardTime));
        System.out.println("优化映射: " + formatTime(optimizedTime) + " (提升 " + 
                String.format("%.1f%%", (double)(standardTime - optimizedTime) / standardTime * 100) + ")");
        System.out.println("缓存映射: " + formatTime(cachedTime) + " (提升 " + 
                String.format("%.1f%%", (double)(standardTime - cachedTime) / standardTime * 100) + ")");
    }
    
    /**
     * 🔥 批量映射性能对比
     */
    public void benchmarkBatchMapping() {
        List<User> testUsers = createTestUsers(10000);
        
        // 标准批量映射
        long startTime = System.nanoTime();
        List<UserDto> standardResult = standardMapper.toDtoList(testUsers);
        long standardTime = System.nanoTime() - startTime;
        
        // 优化批量映射
        startTime = System.nanoTime();
        List<UserDto> optimizedResult = optimizedService.mapUsersWithOptimization(testUsers);
        long optimizedTime = System.nanoTime() - startTime;
        
        // 并行映射
        startTime = System.nanoTime();
        List<UserDto> parallelResult = optimizedService.mapUsersInParallel(testUsers);
        long parallelTime = System.nanoTime() - startTime;
        
        // 缓存批量映射
        startTime = System.nanoTime();
        List<UserDto> cachedResult = cachedService.mapUsersWithSmartCache(testUsers);
        long cachedTime = System.nanoTime() - startTime;
        
        // 输出结果
        System.out.println("=== 批量映射性能对比 (" + testUsers.size() + " 个对象) ===");
        System.out.println("标准映射: " + formatTime(standardTime));
        System.out.println("优化映射: " + formatTime(optimizedTime) + " (提升 " + 
                String.format("%.1f%%", (double)(standardTime - optimizedTime) / standardTime * 100) + ")");
        System.out.println("并行映射: " + formatTime(parallelTime) + " (提升 " + 
                String.format("%.1f%%", (double)(standardTime - parallelTime) / standardTime * 100) + ")");
        System.out.println("缓存映射: " + formatTime(cachedTime) + " (提升 " + 
                String.format("%.1f%%", (double)(standardTime - cachedTime) / standardTime * 100) + ")");
        
        // 验证结果一致性
        System.out.println("结果验证: " + 
                (standardResult.size() == optimizedResult.size() && 
                 optimizedResult.size() == parallelResult.size() && 
                 parallelResult.size() == cachedResult.size() ? "✓ 通过" : "✗ 失败"));
    }
    
    /**
     * 🔥 内存使用对比
     */
    public void benchmarkMemoryUsage() {
        List<User> testUsers = createTestUsers(50000);
        Runtime runtime = Runtime.getRuntime();
        
        // 标准映射内存使用
        runtime.gc();
        long memoryBefore = runtime.totalMemory() - runtime.freeMemory();
        List<UserDto> standardResult = standardMapper.toDtoList(testUsers);
        long memoryAfter = runtime.totalMemory() - runtime.freeMemory();
        long standardMemory = memoryAfter - memoryBefore;
        
        // 优化映射内存使用
        runtime.gc();
        memoryBefore = runtime.totalMemory() - runtime.freeMemory();
        List<UserDto> optimizedResult = optimizedService.mapUsersWithOptimization(testUsers);
        memoryAfter = runtime.totalMemory() - runtime.freeMemory();
        long optimizedMemory = memoryAfter - memoryBefore;
        
        // 输出结果
        System.out.println("=== 内存使用对比 (" + testUsers.size() + " 个对象) ===");
        System.out.println("标准映射: " + formatMemory(standardMemory));
        System.out.println("优化映射: " + formatMemory(optimizedMemory) + " (节省 " + 
                String.format("%.1f%%", (double)(standardMemory - optimizedMemory) / standardMemory * 100) + ")");
    }
    
    // 辅助方法
    private User createTestUser() {
        User user = new User();
        user.setId(1L);
        user.setName("测试用户");
        user.setEmail("test@example.com");
        user.setCreatedAt(LocalDateTime.now());
        user.setUpdatedAt(LocalDateTime.now());
        return user;
    }
    
    private List<User> createTestUsers(int count) {
        List<User> users = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            User user = new User();
            user.setId((long) i);
            user.setName("用户" + i);
            user.setEmail("user" + i + "@example.com");
            user.setCreatedAt(LocalDateTime.now());
            user.setUpdatedAt(LocalDateTime.now());
            users.add(user);
        }
        return users;
    }
    
    private String formatTime(long nanoTime) {
        return String.format("%.2f ms", nanoTime / 1_000_000.0);
    }
    
    private String formatMemory(long bytes) {
        return String.format("%.2f MB", bytes / 1024.0 / 1024.0);
    }
}
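
上面基于 System.nanoTime 的循环测试容易受 JIT 预热与死代码消除影响,结果仅作量级参考。若需要更可靠的数据,可以用 JMH 写一个最小基准,以下为示意(假设已引入 jmh-core 依赖,Mapper 获取方式与文中基准测试一致):
java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class UserMappingJmhBenchmark {

    private UserMapper mapper;
    private User user;

    @Setup
    public void setUp() {
        mapper = Mappers.getMapper(UserMapper.class);
        user = new User();
        user.setId(1L);
        user.setName("测试用户");
        user.setEmail("test@example.com");
    }

    @Benchmark
    public UserDto mapSingleUser() {
        // 返回结果由 JMH 消费,避免映射调用被 JIT 当作死代码优化掉
        return mapper.toDto(user);
    }
}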

运行性能测试

bash
# 运行性能基准测试
curl -X POST http://localhost:8080/api/performance/benchmark/single
curl -X POST http://localhost:8080/api/performance/benchmark/batch
curl -X POST http://localhost:8080/api/performance/benchmark/memory

# 查看性能报告
curl http://localhost:8080/api/performance/report

# 查看缓存统计
curl http://localhost:8080/api/performance/cache-stats

# 查看内存使用
curl http://localhost:8080/api/performance/memory-usage
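
上面的接口路径需要有对应的 Controller 暴露出来。下面是一个示意性的 PerformanceController(类名与路径均为假设,需与实际工程对齐;/cache-stats 可按同样方式接入 CachedMappingService 的统计方法):
java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/performance")
public class PerformanceController {

    private final MappingPerformanceBenchmark benchmark;
    private final MappingPerformanceMonitor monitor;
    private final MemoryOptimizedMappingService memoryService;

    public PerformanceController(MappingPerformanceBenchmark benchmark,
                                 MappingPerformanceMonitor monitor,
                                 MemoryOptimizedMappingService memoryService) {
        this.benchmark = benchmark;
        this.monitor = monitor;
        this.memoryService = memoryService;
    }

    @PostMapping("/benchmark/single")
    public void runSingleBenchmark() {
        benchmark.benchmarkSingleObjectMapping();
    }

    @PostMapping("/benchmark/batch")
    public void runBatchBenchmark() {
        benchmark.benchmarkBatchMapping();
    }

    @PostMapping("/benchmark/memory")
    public void runMemoryBenchmark() {
        benchmark.benchmarkMemoryUsage();
    }

    @GetMapping("/report")
    public MappingPerformanceMonitor.PerformanceAnalysisReport report() {
        return monitor.generatePerformanceReport();
    }

    @GetMapping("/memory-usage")
    public MemoryOptimizedMappingService.MemoryUsageReport memoryUsage() {
        return memoryService.generateMemoryReport();
    }
}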

❓ 常见问题

Q1: 如何选择合适的批次大小?

A: 批次大小选择策略:

java
// 动态批次大小计算
public int calculateOptimalBatchSize(int totalSize, long availableMemory) {
    // 基础批次大小
    int baseBatchSize = 1000;
    
    // 根据可用内存调整
    long memoryPerObject = 1024;  // 估算每个对象 1KB
    int memoryBasedBatchSize = (int) (availableMemory / memoryPerObject / 2);
    
    // 根据 CPU 核心数调整
    int cpuBasedBatchSize = Runtime.getRuntime().availableProcessors() * 500;
    
    // 取最小值作为安全批次大小
    int optimalBatchSize = Math.min(baseBatchSize, 
                          Math.min(memoryBasedBatchSize, cpuBasedBatchSize));
    
    // 确保不超过总大小
    return Math.min(optimalBatchSize, totalSize);
}

Q2: 什么时候使用并行映射?

A: 并行映射使用指南:

java
public boolean shouldUseParallelMapping(List<?> data) {
    // 数据量小于阈值,使用串行
    if (data.size() < 1000) {
        return false;
    }
    
    // CPU 核心数少于 2,使用串行
    if (Runtime.getRuntime().availableProcessors() < 2) {
        return false;
    }
    
    // 当前系统负载高,使用串行
    if (getCurrentCpuUsage() > 0.8) {
        return false;
    }
    
    // 映射操作复杂度高,使用并行
    return true;
}
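
上面用到的 getCurrentCpuUsage() 在文中没有给出实现,下面是一个基于 JMX 的实现草稿(使用 com.sun.management.OperatingSystemMXBean,在 HotSpot/OpenJDK 上可用;JDK 14+ 推荐改用 getCpuLoad()):
java
import java.lang.management.ManagementFactory;

public final class CpuUsageUtils {

    private CpuUsageUtils() {
    }

    /**
     * 返回最近一次采样的系统 CPU 使用率,取值范围 0.0 ~ 1.0;
     * 无法获取时返回 -1(调用方应按"未知"处理,而不是按高负载处理)。
     */
    public static double getCurrentCpuUsage() {
        java.lang.management.OperatingSystemMXBean osBean = ManagementFactory.getOperatingSystemMXBean();
        if (osBean instanceof com.sun.management.OperatingSystemMXBean) {
            // JDK 8 为 getSystemCpuLoad(),JDK 14+ 推荐 getCpuLoad(),两者语义一致
            return ((com.sun.management.OperatingSystemMXBean) osBean).getSystemCpuLoad();
        }
        return -1;
    }
}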

Q3: 如何监控映射性能?

A: 性能监控最佳实践:

java
// 1. 使用 AOP 进行方法级监控
@Around("@within(io.github.nemoob.atlas.mapper.Mapper)")
public Object monitorMapping(ProceedingJoinPoint joinPoint) throws Throwable {
    return performanceMonitor.monitorMapping(
        joinPoint.getTarget().getClass().getSimpleName(),
        joinPoint.getSignature().getName(),
        () -> {
            try {
                return joinPoint.proceed();
            } catch (Throwable e) {
                throw new RuntimeException(e);
            }
        }
    );
}

// 2. 使用 Micrometer 指标
@Timed(name = "atlas.mapper.execution", description = "Atlas Mapper execution time")
public UserDto mapUser(User user) {
    return userMapper.toDto(user);
}

// 3. 自定义性能收集器
public class CustomPerformanceCollector {
    public void recordMappingMetrics(String mapperName, long executionTime, boolean success) {
        // 发送到监控系统
    }
}

Q4: 如何优化大对象映射?

A: 大对象优化策略:

java
// 1. 分段映射
public LargeObjectDto mapLargeObject(LargeObject obj) {
    LargeObjectDto dto = new LargeObjectDto();
    
    // 分段映射基础字段
    mapBasicFields(obj, dto);
    
    // 异步映射复杂字段
    CompletableFuture.runAsync(() -> mapComplexFields(obj, dto));
    
    return dto;
}

// 2. 懒加载映射
public class LazyMappedDto {
    private Supplier<ComplexDto> complexData = 
        Suppliers.memoize(() -> mapper.mapComplex(source.getComplexData()));
    
    public ComplexDto getComplexData() {
        return complexData.get();
    }
}

// 3. 流式映射
public Stream<ItemDto> mapLargeCollection(List<Item> items) {
    return items.stream()
        .map(mapper::toDto)
        .filter(Objects::nonNull);
}

🎯 本章小结

通过本章学习,你应该掌握了:

  1. 编译时优化:Mapper 配置和代码生成优化
  2. 运行时优化:对象池化、缓存策略和批量处理
  3. 内存优化:内存管理和垃圾回收优化
  4. 性能监控:指标收集、性能分析和告警机制