Case 03 - Appendix C - Performance Optimization

📋 Overview

This document describes in detail the performance optimization strategies and implementations used by Atlas Mapper in enterprise applications, including key techniques such as memory management, concurrency optimization, caching strategy, and database optimization.


🚀 Performance Optimization Architecture

Overall performance optimization architecture

graph TB
    subgraph "Performance Optimization Layers"
        subgraph "Application Layer Optimization"
            A1[Concurrency optimization]
            A2[Batch operation optimization]
            A3[Asynchronous processing]
            A4[Connection pool tuning]
        end
        subgraph "Mapping Layer Optimization"
            B1[Smart caching strategy]
            B2[Object pool management]
            B3[Memory mapping optimization]
            B4[Circular reference handling]
        end
        subgraph "Data Layer Optimization"
            C1[Query optimization]
            C2[Index strategy]
            C3[Pagination optimization]
            C4[Read/write splitting]
        end
        subgraph "System Layer Optimization"
            D1[JVM tuning]
            D2[GC tuning]
            D3[Thread pool tuning]
            D4[Monitoring and alerting]
        end
        subgraph "Network Layer Optimization"
            E1[Data compression]
            E2[Connection reuse]
            E3[Load balancing]
            E4[CDN acceleration]
        end
    end
    A1 --> B1
    A2 --> B2
    A3 --> B3
    A4 --> B4
    B1 --> C1
    B2 --> C2
    B3 --> C3
    B4 --> C4
    C1 --> D1
    C2 --> D2
    C3 --> D3
    C4 --> D4
    D1 --> E1
    D2 --> E2
    D3 --> E3
    D4 --> E4

Performance optimization data flow

sequenceDiagram
    participant Client as Client
    participant LoadBalancer as Load Balancer
    participant AppServer as Application Server
    participant CacheLayer as Cache Layer
    participant MappingEngine as Mapping Engine
    participant ObjectPool as Object Pool
    participant Database as Database
    participant Monitor as Performance Monitor
    Client->>LoadBalancer: Large batch request
    LoadBalancer->>AppServer: Dispatch request
    AppServer->>Monitor: Record request start
    AppServer->>CacheLayer: Check cache
    alt Cache hit
        CacheLayer-->>AppServer: Return cached data
    else Cache miss
        AppServer->>Database: Batch query
        Database-->>AppServer: Raw data
        AppServer->>ObjectPool: Acquire mapper instance
        ObjectPool-->>AppServer: Reused instance
        AppServer->>MappingEngine: Parallel mapping
        MappingEngine->>Monitor: Record mapping performance
        MappingEngine-->>AppServer: Mapping result
        AppServer->>CacheLayer: Cache the result
        AppServer->>ObjectPool: Return instance
    end
    AppServer-->>LoadBalancer: Return result
    LoadBalancer-->>Client: Response data
    Monitor->>Monitor: Performance analysis and tuning recommendations

🧠 Memory Management Optimization

1. Smart Object Pool Management

java
package io.github.nemoob.atlas.mapper.example.performance;

import org.springframework.stereotype.Component;
import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

/**
 * 智能对象池 - 减少对象创建和GC压力
 */
@Component
@Slf4j
public class SmartObjectPool<T> {
    
    private final ConcurrentLinkedQueue<T> pool = new ConcurrentLinkedQueue<>();
    private final ObjectFactory<T> factory;
    private final ObjectValidator<T> validator;
    
    // 池配置参数
    private final int maxPoolSize;
    private final int minPoolSize;
    private final long maxIdleTime;
    
    // 统计信息
    private final AtomicInteger currentSize = new AtomicInteger(0);
    private final AtomicLong borrowCount = new AtomicLong(0);
    private final AtomicLong returnCount = new AtomicLong(0);
    private final AtomicLong createCount = new AtomicLong(0);
    
    public SmartObjectPool(ObjectFactory<T> factory, ObjectValidator<T> validator) {
        this.factory = factory;
        this.validator = validator;
        this.maxPoolSize = 100;
        this.minPoolSize = 10;
        this.maxIdleTime = 300000; // 5分钟
        
        // 预热池
        warmUpPool();
        
        // 启动清理任务
        startCleanupTask();
    }
    
    /**
     * Borrow an object from the pool.
     */
    public T borrowObject() {
        borrowCount.incrementAndGet();
        
        T object = pool.poll();
        if (object != null) {
            // The object has left the pool, so the size counter must be decremented
            // even if validation fails below.
            currentSize.decrementAndGet();
            if (validator.isValid(object)) {
                return object;
            }
        }
        
        // No reusable object available, create a new one
        object = factory.create();
        createCount.incrementAndGet();
        
        log.debug("Created new object, total created: {}", createCount.get());
        return object;
    }
    
    /**
     * 归还对象到池中
     */
    public void returnObject(T object) {
        returnCount.incrementAndGet();
        
        if (object == null || !validator.isValid(object)) {
            return;
        }
        
        // 重置对象状态
        factory.reset(object);
        
        // 检查池大小限制
        if (currentSize.get() < maxPoolSize) {
            pool.offer(object);
            currentSize.incrementAndGet();
        }
    }
    
    /**
     * 预热对象池
     */
    private void warmUpPool() {
        for (int i = 0; i < minPoolSize; i++) {
            T object = factory.create();
            pool.offer(object);
            currentSize.incrementAndGet();
        }
        log.info("Object pool warmed up with {} objects", minPoolSize);
    }
    
    /**
     * Start the periodic cleanup task on a daemon thread so it does not block JVM shutdown.
     */
    private void startCleanupTask() {
        java.util.concurrent.Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "object-pool-cleanup");
                t.setDaemon(true);
                return t;
            })
            .scheduleAtFixedRate(this::cleanup, 60, 60, java.util.concurrent.TimeUnit.SECONDS);
    }
    
    /**
     * 清理过期对象
     */
    private void cleanup() {
        int cleaned = 0;
        int currentPoolSize = currentSize.get();
        
        // 保持最小池大小
        while (currentPoolSize > minPoolSize) {
            T object = pool.poll();
            if (object == null) break;
            
            if (!validator.isValid(object)) {
                currentSize.decrementAndGet();
                cleaned++;
                currentPoolSize--;
            } else {
                pool.offer(object); // 放回有效对象
                break;
            }
        }
        
        if (cleaned > 0) {
            log.debug("Cleaned {} expired objects from pool", cleaned);
        }
    }
    
    /**
     * 获取池统计信息
     */
    public PoolStatistics getStatistics() {
        return PoolStatistics.builder()
            .currentSize(currentSize.get())
            .borrowCount(borrowCount.get())
            .returnCount(returnCount.get())
            .createCount(createCount.get())
            .hitRate(calculateHitRate())
            .build();
    }
    
    private double calculateHitRate() {
        long totalBorrow = borrowCount.get();
        long totalCreate = createCount.get();
        return totalBorrow > 0 ? (double)(totalBorrow - totalCreate) / totalBorrow : 0.0;
    }
}

/**
 * 对象工厂接口
 */
public interface ObjectFactory<T> {
    T create();
    void reset(T object);
}

/**
 * 对象验证器接口
 */
public interface ObjectValidator<T> {
    boolean isValid(T object);
}

/**
 * 池统计信息
 */
@lombok.Data
@lombok.Builder
public class PoolStatistics {
    private int currentSize;
    private long borrowCount;
    private long returnCount;
    private long createCount;
    private double hitRate;
}
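
To make the borrow/return contract concrete, here is a minimal usage sketch for the pool above. The MapperContext type is a hypothetical, resettable helper invented for this example; any expensive-to-create mapping object can be pooled the same way.

java
package io.github.nemoob.atlas.mapper.example.performance;

/**
 * Usage sketch for SmartObjectPool. MapperContext is a hypothetical pooled helper.
 */
public class ObjectPoolUsageExample {

    /** Hypothetical reusable mapping helper (not part of the Atlas Mapper API). */
    static class MapperContext {
        private final StringBuilder buffer = new StringBuilder();
        void clear() { buffer.setLength(0); }
    }

    public static void main(String[] args) {
        // Wire the pool with a factory (create/reset) and a validator.
        SmartObjectPool<MapperContext> contextPool = new SmartObjectPool<>(
            new ObjectFactory<MapperContext>() {
                @Override
                public MapperContext create() { return new MapperContext(); }
                @Override
                public void reset(MapperContext context) { context.clear(); }
            },
            context -> context != null  // validator: any non-null context is considered reusable
        );

        // Typical borrow / use / return cycle around a mapping call.
        MapperContext context = contextPool.borrowObject();
        try {
            // ... perform mapping work with the pooled context ...
        } finally {
            contextPool.returnObject(context);
        }

        System.out.println("Pool hit rate: " + contextPool.getStatistics().getHitRate());
    }
}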

2. Memory Mapping Optimizer

java
package io.github.nemoob.atlas.mapper.example.performance;

import org.springframework.stereotype.Component;
import lombok.extern.slf4j.Slf4j;

import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/**
 * 内存映射优化器 - 优化大对象映射的内存使用
 */
@Component
@Slf4j
public class MemoryMappingOptimizer {
    
    // 弱引用缓存,避免内存泄漏
    private final ConcurrentHashMap<String, WeakReference<Object>> mappingCache = 
        new ConcurrentHashMap<>();
    
    // 内存使用统计
    private final AtomicLong totalMappingMemory = new AtomicLong(0);
    private final AtomicLong peakMappingMemory = new AtomicLong(0);
    
    // 配置参数
    private final long maxCacheSize = 1000;
    private final long memoryThreshold = 500 * 1024 * 1024; // 500MB
    
    /**
     * 优化映射过程的内存使用
     */
    public <S, T> T optimizedMapping(S source, Class<T> targetClass, MappingFunction<S, T> mappingFunction) {
        String cacheKey = buildCacheKey(source, targetClass);
        
        // 检查缓存
        WeakReference<Object> cachedRef = mappingCache.get(cacheKey);
        if (cachedRef != null) {
            @SuppressWarnings("unchecked")
            T cached = (T) cachedRef.get();
            if (cached != null) {
                return cached;
            } else {
                // 清理失效的弱引用
                mappingCache.remove(cacheKey);
            }
        }
        
        // 检查内存使用情况
        long currentMemory = getCurrentMemoryUsage();
        if (currentMemory > memoryThreshold) {
            // 触发内存清理
            performMemoryCleanup();
        }
        
        // 执行映射
        long startTime = System.nanoTime();
        T result = mappingFunction.apply(source);
        long duration = System.nanoTime() - startTime;
        
        // 估算对象大小并更新统计
        long objectSize = estimateObjectSize(result);
        updateMemoryStatistics(objectSize);
        
        // 缓存结果(使用弱引用)
        if (mappingCache.size() < maxCacheSize) {
            mappingCache.put(cacheKey, new WeakReference<>(result));
        }
        
        log.debug("Mapping completed in {}ns, object size: {}bytes", duration, objectSize);
        return result;
    }
    
    /**
     * 批量映射优化
     */
    public <S, T> java.util.List<T> optimizedBatchMapping(
            java.util.List<S> sources, 
            Class<T> targetClass, 
            MappingFunction<S, T> mappingFunction) {
        
        int batchSize = calculateOptimalBatchSize(sources.size());
        java.util.List<T> results = new java.util.ArrayList<>(sources.size());
        
        // 分批处理
        for (int i = 0; i < sources.size(); i += batchSize) {
            int endIndex = Math.min(i + batchSize, sources.size());
            java.util.List<S> batch = sources.subList(i, endIndex);
            
            // 并行处理批次
            java.util.List<T> batchResults = batch.parallelStream()
                .map(source -> optimizedMapping(source, targetClass, mappingFunction))
                .collect(java.util.stream.Collectors.toList());
                
            results.addAll(batchResults);
            
            // 批次间的内存检查
            if ((i + batchSize) % (batchSize * 4) == 0) {
                performMemoryCleanup();
            }
        }
        
        return results;
    }
    
    /**
     * 流式映射优化 - 适用于超大数据集
     */
    public <S, T> java.util.stream.Stream<T> optimizedStreamMapping(
            java.util.stream.Stream<S> sourceStream,
            Class<T> targetClass,
            MappingFunction<S, T> mappingFunction) {
        
        return sourceStream
            .parallel()
            .map(source -> {
                try {
                    return optimizedMapping(source, targetClass, mappingFunction);
                } catch (OutOfMemoryError e) {
                    log.error("Out of memory during stream mapping, triggering emergency cleanup");
                    performEmergencyCleanup();
                    throw e;
                }
            })
            .filter(result -> result != null);
    }
    
    /**
     * Build a cache key.
     * Note: keying on hashCode() is a simplification and can collide; a stable
     * business identifier of the source object is safer in production.
     */
    private String buildCacheKey(Object source, Class<?> targetClass) {
        return String.format("%s->%s:%d", 
            source.getClass().getSimpleName(),
            targetClass.getSimpleName(),
            source.hashCode());
    }
    
    /**
     * 获取当前内存使用量
     */
    private long getCurrentMemoryUsage() {
        Runtime runtime = Runtime.getRuntime();
        return runtime.totalMemory() - runtime.freeMemory();
    }
    
    /**
     * 估算对象大小
     */
    private long estimateObjectSize(Object object) {
        if (object == null) return 0;
        
        // 简化的对象大小估算
        // 实际应用中可以使用更精确的方法,如 Instrumentation
        String className = object.getClass().getSimpleName();
        
        return switch (className) {
            case "OrderAggregateDto" -> 2048; // 2KB
            case "CustomerDto" -> 1024; // 1KB
            case "ProductDto" -> 512; // 512B
            default -> 256; // 默认256B
        };
    }
    
    /**
     * 更新内存统计
     */
    private void updateMemoryStatistics(long objectSize) {
        long current = totalMappingMemory.addAndGet(objectSize);
        long peak = peakMappingMemory.get();
        
        while (current > peak && !peakMappingMemory.compareAndSet(peak, current)) {
            peak = peakMappingMemory.get();
        }
    }
    
    /**
     * 执行内存清理
     */
    private void performMemoryCleanup() {
        // 清理失效的弱引用
        mappingCache.entrySet().removeIf(entry -> entry.getValue().get() == null);
        
        // 建议JVM进行垃圾回收
        System.gc();
        
        log.debug("Memory cleanup performed, cache size: {}", mappingCache.size());
    }
    
    /**
     * 紧急内存清理
     */
    private void performEmergencyCleanup() {
        // 清空所有缓存
        mappingCache.clear();
        
        // 强制垃圾回收
        System.gc();
        System.runFinalization();
        System.gc();
        
        log.warn("Emergency memory cleanup performed");
    }
    
    /**
     * 计算最优批次大小
     */
    private int calculateOptimalBatchSize(int totalSize) {
        long availableMemory = Runtime.getRuntime().freeMemory();
        int processors = Runtime.getRuntime().availableProcessors();
        
        // 基于可用内存和CPU核心数计算
        int memoryBasedSize = (int) (availableMemory / (2 * 1024 * 1024)); // 每2MB一个批次
        int cpuBasedSize = totalSize / (processors * 4);
        
        int batchSize = Math.min(memoryBasedSize, cpuBasedSize);
        return Math.max(Math.min(batchSize, 1000), 10); // 限制在10-1000之间
    }
    
    /**
     * 获取内存统计信息
     */
    public MemoryStatistics getMemoryStatistics() {
        Runtime runtime = Runtime.getRuntime();
        
        return MemoryStatistics.builder()
            .totalMappingMemory(totalMappingMemory.get())
            .peakMappingMemory(peakMappingMemory.get())
            .currentHeapUsage(runtime.totalMemory() - runtime.freeMemory())
            .maxHeapSize(runtime.maxMemory())
            .cacheSize(mappingCache.size())
            .build();
    }
}

/**
 * 映射函数接口
 */
@FunctionalInterface
public interface MappingFunction<S, T> {
    T apply(S source);
}

/**
 * 内存统计信息
 */
@lombok.Data
@lombok.Builder
public class MemoryStatistics {
    private long totalMappingMemory;
    private long peakMappingMemory;
    private long currentHeapUsage;
    private long maxHeapSize;
    private int cacheSize;
    
    public double getMemoryUtilization() {
        return maxHeapSize > 0 ? (double) currentHeapUsage / maxHeapSize : 0.0;
    }
}
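
A brief usage sketch for the optimizer above. OrderEntity and OrderDto are hypothetical types used only for illustration; the caller supplies the actual field copying as a MappingFunction, and the optimizer wraps it with weak-reference caching, adaptive batching, and memory checks.

java
package io.github.nemoob.atlas.mapper.example.performance;

import java.util.List;

/**
 * Usage sketch for MemoryMappingOptimizer. OrderEntity and OrderDto are hypothetical types.
 */
public class MemoryOptimizerUsageExample {

    static class OrderEntity { String orderNo; }
    static class OrderDto { String orderNo; }

    private final MemoryMappingOptimizer optimizer = new MemoryMappingOptimizer();

    public List<OrderDto> mapOrders(List<OrderEntity> entities) {
        // The per-object mapping logic is passed in as a MappingFunction;
        // batching, caching and memory cleanup are handled by the optimizer.
        return optimizer.optimizedBatchMapping(entities, OrderDto.class, source -> {
            OrderDto dto = new OrderDto();
            dto.orderNo = source.orderNo;
            return dto;
        });
    }
}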

⚡ Concurrency Optimization

1. Smart Thread Pool Manager

java
package io.github.nemoob.atlas.mapper.example.performance;

import org.springframework.stereotype.Component;
import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * 智能线程池管理器 - 动态调整线程池大小
 */
@Component
@Slf4j
public class SmartThreadPoolManager {
    
    private final ThreadPoolExecutor mappingExecutor;
    private final ScheduledExecutorService monitorExecutor;
    
    // 性能统计
    private final AtomicInteger activeTasks = new AtomicInteger(0);
    private final AtomicInteger completedTasks = new AtomicInteger(0);
    private final AtomicInteger rejectedTasks = new AtomicInteger(0);
    
    // 动态调整参数
    private volatile int targetUtilization = 80; // 目标CPU利用率
    private volatile long adjustmentInterval = 30000; // 调整间隔30秒
    
    public SmartThreadPoolManager() {
        // 创建自适应线程池
        this.mappingExecutor = createAdaptiveThreadPool();
        
        // 启动监控线程
        this.monitorExecutor = Executors.newSingleThreadScheduledExecutor(
            r -> new Thread(r, "thread-pool-monitor"));
        
        startMonitoring();
    }
    
    /**
     * 创建自适应线程池
     */
    private ThreadPoolExecutor createAdaptiveThreadPool() {
        int coreSize = Runtime.getRuntime().availableProcessors();
        int maxSize = coreSize * 4;
        
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
            coreSize,
            maxSize,
            60L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<>(1000),
            new ThreadFactory() {
                private final AtomicInteger threadNumber = new AtomicInteger(1);
                @Override
                public Thread newThread(Runnable r) {
                    Thread t = new Thread(r, "mapping-worker-" + threadNumber.getAndIncrement());
                    t.setDaemon(false);
                    t.setPriority(Thread.NORM_PRIORITY);
                    return t;
                }
            },
            new RejectedExecutionHandler() {
                @Override
                public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
                    rejectedTasks.incrementAndGet();
                    log.warn("Task rejected, queue full. Active: {}, Queue: {}", 
                        executor.getActiveCount(), executor.getQueue().size());
                    
                    // 尝试在调用线程中执行
                    if (!executor.isShutdown()) {
                        try {
                            r.run();
                        } catch (Exception e) {
                            log.error("Error executing rejected task in caller thread", e);
                        }
                    }
                }
            }
        );
        
        // 允许核心线程超时
        executor.allowCoreThreadTimeOut(true);
        
        return executor;
    }
    
    /**
     * 提交映射任务
     */
    public <T> CompletableFuture<T> submitMappingTask(Callable<T> task) {
        activeTasks.incrementAndGet();
        
        return CompletableFuture.supplyAsync(() -> {
            try {
                return task.call();
            } catch (Exception e) {
                log.error("Error executing mapping task", e);
                throw new RuntimeException(e);
            } finally {
                activeTasks.decrementAndGet();
                completedTasks.incrementAndGet();
            }
        }, mappingExecutor);
    }
    
    /**
     * 批量提交任务
     */
    public <T> CompletableFuture<java.util.List<T>> submitBatchTasks(java.util.List<Callable<T>> tasks) {
        java.util.List<CompletableFuture<T>> futures = tasks.stream()
            .map(this::submitMappingTask)
            .collect(java.util.stream.Collectors.toList());
        
        return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
            .thenApply(v -> futures.stream()
                .map(CompletableFuture::join)
                .collect(java.util.stream.Collectors.toList()));
    }
    
    /**
     * 启动性能监控
     */
    private void startMonitoring() {
        monitorExecutor.scheduleAtFixedRate(
            this::adjustThreadPool, 
            adjustmentInterval, 
            adjustmentInterval, 
            TimeUnit.MILLISECONDS
        );
    }
    
    /**
     * 动态调整线程池
     */
    private void adjustThreadPool() {
        try {
            ThreadPoolStatistics stats = getThreadPoolStatistics();
            
            // 计算当前利用率
            double cpuUtilization = getCurrentCpuUtilization();
            double queueUtilization = (double) stats.getQueueSize() / 1000; // 队列容量1000
            
            log.debug("Thread pool stats - CPU: {:.2f}%, Queue: {:.2f}%, Active: {}, Core: {}, Max: {}", 
                cpuUtilization * 100, queueUtilization * 100, 
                stats.getActiveCount(), stats.getCorePoolSize(), stats.getMaximumPoolSize());
            
            // 调整策略
            if (cpuUtilization < targetUtilization / 100.0 - 0.1 && queueUtilization > 0.5) {
                // CPU利用率低但队列积压,增加核心线程
                int newCoreSize = Math.min(stats.getCorePoolSize() + 1, stats.getMaximumPoolSize());
                mappingExecutor.setCorePoolSize(newCoreSize);
                log.info("Increased core pool size to {}", newCoreSize);
                
            } else if (cpuUtilization > targetUtilization / 100.0 + 0.1 && queueUtilization < 0.1) {
                // CPU利用率高但队列空闲,减少核心线程
                int newCoreSize = Math.max(stats.getCorePoolSize() - 1, 1);
                mappingExecutor.setCorePoolSize(newCoreSize);
                log.info("Decreased core pool size to {}", newCoreSize);
            }
            
            // 调整最大线程数
            if (stats.getActiveCount() >= stats.getMaximumPoolSize() * 0.9) {
                int newMaxSize = Math.min(stats.getMaximumPoolSize() + 2, 
                    Runtime.getRuntime().availableProcessors() * 8);
                mappingExecutor.setMaximumPoolSize(newMaxSize);
                log.info("Increased maximum pool size to {}", newMaxSize);
            }
            
        } catch (Exception e) {
            log.error("Error adjusting thread pool", e);
        }
    }
    
    /**
     * Get the current process CPU utilization.
     */
    private double getCurrentCpuUtilization() {
        java.lang.management.OperatingSystemMXBean osBean = 
            java.lang.management.ManagementFactory.getOperatingSystemMXBean();
        
        if (osBean instanceof com.sun.management.OperatingSystemMXBean sunOsBean) {
            double load = sunOsBean.getProcessCpuLoad();
            // getProcessCpuLoad() returns a negative value when no sample is available yet
            if (load >= 0) {
                return load;
            }
        }
        
        // Fallback: estimate utilization from thread pool activity
        return (double) mappingExecutor.getActiveCount() / mappingExecutor.getMaximumPoolSize();
    }
    
    /**
     * 获取线程池统计信息
     */
    public ThreadPoolStatistics getThreadPoolStatistics() {
        return ThreadPoolStatistics.builder()
            .corePoolSize(mappingExecutor.getCorePoolSize())
            .maximumPoolSize(mappingExecutor.getMaximumPoolSize())
            .activeCount(mappingExecutor.getActiveCount())
            .queueSize(mappingExecutor.getQueue().size())
            .completedTaskCount(mappingExecutor.getCompletedTaskCount())
            .totalTaskCount(activeTasks.get() + completedTasks.get())
            .rejectedTaskCount(rejectedTasks.get())
            .build();
    }
    
    /**
     * 优雅关闭
     */
    public void shutdown() {
        log.info("Shutting down thread pool manager");
        
        monitorExecutor.shutdown();
        mappingExecutor.shutdown();
        
        try {
            if (!mappingExecutor.awaitTermination(30, TimeUnit.SECONDS)) {
                mappingExecutor.shutdownNow();
            }
        } catch (InterruptedException e) {
            mappingExecutor.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}

/**
 * 线程池统计信息
 */
@lombok.Data
@lombok.Builder
public class ThreadPoolStatistics {
    private int corePoolSize;
    private int maximumPoolSize;
    private int activeCount;
    private int queueSize;
    private long completedTaskCount;
    private long totalTaskCount;
    private long rejectedTaskCount;
    
    public double getUtilization() {
        return maximumPoolSize > 0 ? (double) activeCount / maximumPoolSize : 0.0;
    }
    
    public double getQueueUtilization() {
        return queueSize / 1000.0; // 假设队列容量为1000
    }
    
    public double getCompletionRate() {
        return totalTaskCount > 0 ? (double) completedTaskCount / totalTaskCount : 0.0;
    }
}
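
A usage sketch for the thread pool manager above, assuming each record is mapped independently. The mapSingleRecord method is a placeholder for the real per-record mapping work.

java
package io.github.nemoob.atlas.mapper.example.performance;

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

/**
 * Usage sketch for SmartThreadPoolManager. mapSingleRecord stands in for real mapping work.
 */
public class ThreadPoolUsageExample {

    private final SmartThreadPoolManager poolManager = new SmartThreadPoolManager();

    public CompletableFuture<List<String>> mapAll(List<String> rawRecords) {
        // Wrap each record as a Callable and let the adaptive pool schedule the batch.
        List<Callable<String>> tasks = rawRecords.stream()
            .map(record -> (Callable<String>) () -> mapSingleRecord(record))
            .collect(Collectors.toList());

        return poolManager.submitBatchTasks(tasks);
    }

    private String mapSingleRecord(String record) {
        // Placeholder for the real mapping logic.
        return record.trim();
    }
}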

2. Concurrent Mapping Coordinator

java
package io.github.nemoob.atlas.mapper.example.performance;

import org.springframework.stereotype.Component;
import org.springframework.beans.factory.annotation.Autowired;
import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * 并发映射协调器 - 协调多个映射任务的并发执行
 */
@Component
@Slf4j
public class ConcurrentMappingCoordinator {
    
    @Autowired
    private SmartThreadPoolManager threadPoolManager;
    
    @Autowired
    private MemoryMappingOptimizer memoryOptimizer;
    
    // 并发控制
    private final Semaphore concurrencyLimiter;
    private final AtomicInteger activeMappings = new AtomicInteger(0);
    
    // 性能统计
    private final AtomicInteger totalMappings = new AtomicInteger(0);
    private final AtomicInteger successfulMappings = new AtomicInteger(0);
    private final AtomicInteger failedMappings = new AtomicInteger(0);
    
    public ConcurrentMappingCoordinator() {
        // 基于CPU核心数设置并发限制
        int maxConcurrency = Runtime.getRuntime().availableProcessors() * 2;
        this.concurrencyLimiter = new Semaphore(maxConcurrency);
    }
    
    /**
     * 协调并发映射执行
     */
    public <S, T> CompletableFuture<java.util.List<T>> coordinateConcurrentMapping(
            java.util.List<S> sources,
            Class<T> targetClass,
            MappingFunction<S, T> mappingFunction) {
        
        totalMappings.addAndGet(sources.size());
        
        // 计算最优分片策略
        int optimalChunkSize = calculateOptimalChunkSize(sources.size());
        java.util.List<java.util.List<S>> chunks = partitionList(sources, optimalChunkSize);
        
        // 创建并发任务
        java.util.List<CompletableFuture<java.util.List<T>>> chunkFutures = chunks.stream()
            .map(chunk -> processChunkConcurrently(chunk, targetClass, mappingFunction))
            .collect(java.util.stream.Collectors.toList());
        
        // 合并结果
        return CompletableFuture.allOf(chunkFutures.toArray(new CompletableFuture[0]))
            .thenApply(v -> {
                java.util.List<T> results = new java.util.ArrayList<>();
                for (CompletableFuture<java.util.List<T>> future : chunkFutures) {
                    try {
                        results.addAll(future.get());
                    } catch (Exception e) {
                        log.error("Error getting chunk result", e);
                        failedMappings.addAndGet(1);
                    }
                }
                return results;
            });
    }
    
    /**
     * 并发处理单个分片
     */
    private <S, T> CompletableFuture<java.util.List<T>> processChunkConcurrently(
            java.util.List<S> chunk,
            Class<T> targetClass,
            MappingFunction<S, T> mappingFunction) {
        
        return threadPoolManager.submitMappingTask(() -> {
            try {
                // 获取并发许可
                concurrencyLimiter.acquire();
                activeMappings.incrementAndGet();
                
                // 使用内存优化器处理分片
                java.util.List<T> results = memoryOptimizer.optimizedBatchMapping(
                    chunk, targetClass, mappingFunction);
                
                successfulMappings.addAndGet(results.size());
                return results;
                
            } catch (Exception e) {
                failedMappings.addAndGet(chunk.size());
                log.error("Error processing chunk of size {}", chunk.size(), e);
                throw new RuntimeException(e);
                
            } finally {
                activeMappings.decrementAndGet();
                concurrencyLimiter.release();
            }
        });
    }
    
    /**
     * 自适应流式并发映射
     */
    public <S, T> java.util.stream.Stream<T> adaptiveStreamMapping(
            java.util.stream.Stream<S> sourceStream,
            Class<T> targetClass,
            MappingFunction<S, T> mappingFunction) {
        
        return sourceStream
            .parallel()
            .map(source -> {
                try {
                    // 动态调整并发度
                    adjustConcurrencyBasedOnLoad();
                    
                    return memoryOptimizer.optimizedMapping(source, targetClass, mappingFunction);
                    
                } catch (Exception e) {
                    log.error("Error in adaptive stream mapping", e);
                    return null;
                }
            })
            .filter(result -> result != null);
    }
    
    /**
     * 基于负载动态调整并发度
     */
    private void adjustConcurrencyBasedOnLoad() {
        int currentActive = activeMappings.get();
        int availablePermits = concurrencyLimiter.availablePermits();
        
        // 获取系统负载指标
        double cpuLoad = getCurrentCpuLoad();
        double memoryUsage = getCurrentMemoryUsage();
        
        // 动态调整策略
        if (cpuLoad > 0.8 || memoryUsage > 0.8) {
            // 高负载时减少并发
            if (availablePermits < 2) {
                try {
                    Thread.sleep(10); // 短暂等待
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        } else if (cpuLoad < 0.5 && memoryUsage < 0.5) {
            // 低负载时可以增加并发(通过减少等待时间)
            // 这里不需要特殊处理,让任务自然执行
        }
    }
    
    /**
     * 计算最优分片大小
     */
    private int calculateOptimalChunkSize(int totalSize) {
        int processors = Runtime.getRuntime().availableProcessors();
        int baseChunkSize = Math.max(totalSize / (processors * 4), 1);
        
        // 考虑内存限制
        long availableMemory = Runtime.getRuntime().freeMemory();
        int memoryBasedChunkSize = (int) (availableMemory / (10 * 1024 * 1024)); // 每10MB一个分片
        
        return Math.min(Math.max(baseChunkSize, 10), Math.min(memoryBasedChunkSize, 1000));
    }
    
    /**
     * 分割列表
     */
    private <T> java.util.List<java.util.List<T>> partitionList(java.util.List<T> list, int chunkSize) {
        java.util.List<java.util.List<T>> chunks = new java.util.ArrayList<>();
        
        for (int i = 0; i < list.size(); i += chunkSize) {
            int endIndex = Math.min(i + chunkSize, list.size());
            chunks.add(list.subList(i, endIndex));
        }
        
        return chunks;
    }
    
    /**
     * 获取当前CPU负载
     */
    private double getCurrentCpuLoad() {
        java.lang.management.OperatingSystemMXBean osBean = 
            java.lang.management.ManagementFactory.getOperatingSystemMXBean();
        
        if (osBean instanceof com.sun.management.OperatingSystemMXBean sunOsBean) {
            return sunOsBean.getProcessCpuLoad();
        }
        
        return 0.5; // 默认值
    }
    
    /**
     * 获取当前内存使用率
     */
    private double getCurrentMemoryUsage() {
        Runtime runtime = Runtime.getRuntime();
        long totalMemory = runtime.totalMemory();
        long freeMemory = runtime.freeMemory();
        return (double) (totalMemory - freeMemory) / runtime.maxMemory();
    }
    
    /**
     * 获取并发统计信息
     */
    public ConcurrencyStatistics getConcurrencyStatistics() {
        return ConcurrencyStatistics.builder()
            .activeMappings(activeMappings.get())
            .totalMappings(totalMappings.get())
            .successfulMappings(successfulMappings.get())
            .failedMappings(failedMappings.get())
            .availablePermits(concurrencyLimiter.availablePermits())
            .queuedThreads(concurrencyLimiter.getQueueLength())
            .build();
    }
}

/**
 * 并发统计信息
 */
@lombok.Data
@lombok.Builder
public class ConcurrencyStatistics {
    private int activeMappings;
    private int totalMappings;
    private int successfulMappings;
    private int failedMappings;
    private int availablePermits;
    private int queuedThreads;
    
    public double getSuccessRate() {
        return totalMappings > 0 ? (double) successfulMappings / totalMappings : 0.0;
    }
    
    public double getFailureRate() {
        return totalMappings > 0 ? (double) failedMappings / totalMappings : 0.0;
    }
}
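
A usage sketch for the coordinator above. Because the coordinator relies on other Spring beans internally, it is injected rather than constructed directly; OrderEntity and OrderDto are hypothetical types.

java
package io.github.nemoob.atlas.mapper.example.performance;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.concurrent.CompletableFuture;

/**
 * Usage sketch for ConcurrentMappingCoordinator. OrderEntity and OrderDto are hypothetical types.
 */
@Service
public class OrderBatchMappingService {

    @Autowired
    private ConcurrentMappingCoordinator coordinator;

    public CompletableFuture<List<OrderDto>> mapOrdersConcurrently(List<OrderEntity> entities) {
        // Chunking, concurrency limiting and memory-aware batching are handled by the coordinator;
        // the caller only supplies the per-object mapping function.
        return coordinator.coordinateConcurrentMapping(entities, OrderDto.class, source -> {
            OrderDto dto = new OrderDto();
            dto.setOrderNo(source.getOrderNo());
            return dto;
        });
    }

    /** Hypothetical source and target types used only for this example. */
    @lombok.Data
    public static class OrderEntity { private String orderNo; }

    @lombok.Data
    public static class OrderDto { private String orderNo; }
}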

📊 Caching Strategy Optimization

Multi-Level Cache Manager

java
package io.github.nemoob.atlas.mapper.example.performance;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

/**
 * 多级缓存管理器 - 实现L1内存缓存 + L2分布式缓存
 */
@Component
@Slf4j
public class MultiLevelCacheManager {
    
    // L1缓存:本地内存缓存
    private final ConcurrentHashMap<String, CacheEntry> l1Cache = new ConcurrentHashMap<>();
    
    // L2缓存:分布式缓存(Redis等)
    @Autowired
    private org.springframework.data.redis.core.RedisTemplate<String, Object> redisTemplate;
    
    // 缓存配置
    private final long l1CacheExpiry = TimeUnit.MINUTES.toMillis(5); // L1缓存5分钟过期
    private final long l2CacheExpiry = TimeUnit.HOURS.toMillis(1);   // L2缓存1小时过期
    private final int maxL1CacheSize = 1000;
    
    /**
     * 获取缓存数据
     */
    @SuppressWarnings("unchecked")
    public <T> T get(String key, Class<T> type) {
        // 先查L1缓存
        CacheEntry entry = l1Cache.get(key);
        if (entry != null && !entry.isExpired()) {
            log.debug("L1 cache hit for key: {}", key);
            return (T) entry.getValue();
        }
        
        // L1缓存未命中,查L2缓存
        try {
            Object value = redisTemplate.opsForValue().get(key);
            if (value != null) {
                log.debug("L2 cache hit for key: {}", key);
                
                // 回填L1缓存
                putL1Cache(key, value);
                return (T) value;
            }
        } catch (Exception e) {
            log.warn("Error accessing L2 cache for key: {}", key, e);
        }
        
        log.debug("Cache miss for key: {}", key);
        return null;
    }
    
    /**
     * 存储缓存数据
     */
    public void put(String key, Object value) {
        // 存储到L1缓存
        putL1Cache(key, value);
        
        // 异步存储到L2缓存
        CompletableFuture.runAsync(() -> {
            try {
                redisTemplate.opsForValue().set(key, value, l2CacheExpiry, TimeUnit.MILLISECONDS);
                log.debug("Stored to L2 cache: {}", key);
            } catch (Exception e) {
                log.warn("Error storing to L2 cache for key: {}", key, e);
            }
        });
    }
    
    /**
     * 存储到L1缓存
     */
    private void putL1Cache(String key, Object value) {
        // 检查缓存大小限制
        if (l1Cache.size() >= maxL1CacheSize) {
            evictExpiredEntries();
            
            // 如果还是超过限制,使用LRU策略清理
            if (l1Cache.size() >= maxL1CacheSize) {
                evictLRUEntries();
            }
        }
        
        CacheEntry entry = new CacheEntry(value, System.currentTimeMillis() + l1CacheExpiry);
        l1Cache.put(key, entry);
        log.debug("Stored to L1 cache: {}", key);
    }
    
    /**
     * 清理过期条目
     */
    private void evictExpiredEntries() {
        long now = System.currentTimeMillis();
        l1Cache.entrySet().removeIf(entry -> entry.getValue().getExpiryTime() < now);
    }
    
    /**
     * LRU清理策略
     */
    private void evictLRUEntries() {
        int toRemove = maxL1CacheSize / 4; // 清理25%的条目
        
        l1Cache.entrySet().stream()
            .sorted((e1, e2) -> Long.compare(e1.getValue().getAccessTime(), e2.getValue().getAccessTime()))
            .limit(toRemove)
            .map(java.util.Map.Entry::getKey)
            .forEach(l1Cache::remove);
    }
    
    /**
     * 删除缓存
     */
    public void evict(String key) {
        l1Cache.remove(key);
        
        CompletableFuture.runAsync(() -> {
            try {
                redisTemplate.delete(key);
                log.debug("Evicted from L2 cache: {}", key);
            } catch (Exception e) {
                log.warn("Error evicting from L2 cache for key: {}", key, e);
            }
        });
    }
    
    /**
     * 清空所有缓存
     */
    public void clear() {
        l1Cache.clear();
        
        CompletableFuture.runAsync(() -> {
            try {
                // 这里应该根据实际情况清理特定前缀的键
                log.debug("Cleared L2 cache");
            } catch (Exception e) {
                log.warn("Error clearing L2 cache", e);
            }
        });
    }
    
    /**
     * 获取缓存统计信息
     */
    public CacheStatistics getStatistics() {
        return CacheStatistics.builder()
            .l1CacheSize(l1Cache.size())
            .l1HitRate(calculateL1HitRate())
            .l2HitRate(calculateL2HitRate())
            .build();
    }
    
    private double calculateL1HitRate() {
        // 实际实现中需要维护命中统计
        return 0.85; // 示例值
    }
    
    private double calculateL2HitRate() {
        // 实际实现中需要维护命中统计
        return 0.65; // 示例值
    }
}

/**
 * 缓存条目
 */
@lombok.Data
@lombok.AllArgsConstructor
class CacheEntry {
    private Object value;
    private long expiryTime;
    private long accessTime;
    
    public CacheEntry(Object value, long expiryTime) {
        this.value = value;
        this.expiryTime = expiryTime;
        this.accessTime = System.currentTimeMillis();
    }
    
    public boolean isExpired() {
        return System.currentTimeMillis() > expiryTime;
    }
    
    public Object getValue() {
        this.accessTime = System.currentTimeMillis(); // 更新访问时间
        return value;
    }
}

/**
 * 缓存统计信息
 */
@lombok.Data
@lombok.Builder
class CacheStatistics {
    private int l1CacheSize;
    private double l1HitRate;
    private double l2HitRate;
    
    public double getOverallHitRate() {
        return l1HitRate + (1 - l1HitRate) * l2HitRate;
    }
}
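
A usage sketch for the cache manager above: a cache-aside lookup around a query-and-map call. CustomerDto and loadAndMapCustomer are hypothetical placeholders for the real types and logic.

java
package io.github.nemoob.atlas.mapper.example.performance;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

/**
 * Usage sketch for MultiLevelCacheManager: cache-aside lookup around a mapping call.
 */
@Service
public class CustomerQueryService {

    @Autowired
    private MultiLevelCacheManager cacheManager;

    public CustomerDto getCustomer(String customerId) {
        String cacheKey = "customer:dto:" + customerId;

        // 1. Try the L1/L2 cache first.
        CustomerDto cached = cacheManager.get(cacheKey, CustomerDto.class);
        if (cached != null) {
            return cached;
        }

        // 2. Cache miss: load and map, then populate both cache levels.
        CustomerDto dto = loadAndMapCustomer(customerId);
        cacheManager.put(cacheKey, dto);
        return dto;
    }

    private CustomerDto loadAndMapCustomer(String customerId) {
        // Placeholder for the real query + mapping logic.
        CustomerDto dto = new CustomerDto();
        dto.setCustomerId(customerId);
        return dto;
    }

    /** Hypothetical DTO used only for this example. */
    @lombok.Data
    public static class CustomerDto { private String customerId; }
}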

🎯 Performance Optimization Summary

Key Optimization Targets

  1. Response time

    • Single mapping: < 5 ms
    • Batch mapping (10K records): < 2 s
    • Memory usage: < 500 MB
  2. Throughput

    • Concurrent processing: 1000+ TPS
    • Cache hit rate: > 80%
    • CPU utilization: 70-80%
  3. Resource utilization

    • Memory utilization: < 85%
    • Thread pool utilization: 60-80%
    • GC pause time: < 100 ms

Best Practice Recommendations

  1. Memory management

    • Use object pools to reduce GC pressure
    • Apply smart caching strategies
    • Monitor memory usage continuously
  2. Concurrency

    • Adjust thread pool sizes dynamically
    • Apply backpressure control
    • Prefer asynchronous processing
  3. Caching

    • Use a multi-level cache architecture
    • Apply smart cache invalidation
    • Warm up caches at startup (see the sketch below)
  4. Monitoring and alerting

    • Monitor performance in real time
    • Automate alerting
    • Analyze performance trends
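
As a sketch of the cache warm-up recommendation, preloading hot entries at application startup could look like the following, assuming a Spring Boot application; HotKeyRepository is a hypothetical source of hot keys and values.

java
package io.github.nemoob.atlas.mapper.example.performance;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;
import lombok.extern.slf4j.Slf4j;

import java.util.List;

/**
 * Cache warm-up sketch: preload frequently accessed entries into the multi-level cache at startup.
 * HotKeyRepository is a hypothetical lookup of hot keys and their values.
 */
@Component
@Slf4j
public class CacheWarmUpRunner implements ApplicationRunner {

    @Autowired
    private MultiLevelCacheManager cacheManager;

    @Autowired
    private HotKeyRepository hotKeyRepository;

    @Override
    public void run(ApplicationArguments args) {
        List<String> hotKeys = hotKeyRepository.findHotKeys(100); // top 100 keys by access frequency
        int preloaded = 0;
        for (String key : hotKeys) {
            Object value = hotKeyRepository.loadValue(key);
            if (value != null) {
                cacheManager.put(key, value);
                preloaded++;
            }
        }
        log.info("Cache warm-up completed, preloaded {} entries", preloaded);
    }

    /** Hypothetical repository interface; a real implementation would query access statistics. */
    public interface HotKeyRepository {
        List<String> findHotKeys(int limit);
        Object loadValue(String key);
    }
}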