ByteDance Java Interview Question: Virtual Nodes and Data Migration in Consistent Hashing

1. Consistent Hashing Basics

1.1 The Problem with Traditional Hashing

// Traditional modulo hashing
public class TraditionalHash {
    public int getServerIndex(String key, int serverCount) {
        // Problem: when the server count changes, almost every key maps to a different server.
        // (Clearing the sign bit instead of Math.abs avoids the Integer.MIN_VALUE pitfall,
        // where Math.abs still returns a negative number.)
        return (key.hashCode() & 0x7FFFFFFF) % serverCount;
    }
}
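To make the remapping problem concrete, the small sketch below (hypothetical class `RemapDemo`, not part of the original article) measures what fraction of keys change servers when a fifth server is added to a four-server cluster under modulo hashing:

```java
import java.util.stream.IntStream;

// Measures how many keys are remapped when the server count changes
// under simple modulo hashing. With consistent hashing, the expected
// fraction would be roughly 1/newCount instead of the vast majority.
public class RemapDemo {

    static int index(String key, int serverCount) {
        return (key.hashCode() & 0x7FFFFFFF) % serverCount;
    }

    // Fraction of keys whose server changes when going from oldCount to newCount servers
    public static double remappedFraction(int oldCount, int newCount, int keys) {
        long moved = IntStream.range(0, keys)
            .filter(i -> index("key-" + i, oldCount) != index("key-" + i, newCount))
            .count();
        return (double) moved / keys;
    }

    public static void main(String[] args) {
        // Typically around 80% of keys move for 4 -> 5 servers
        System.out.printf("remapped: %.1f%%%n", remappedFraction(4, 5, 100_000) * 100);
    }
}
```

A key stays put only when its hash gives the same remainder mod 4 and mod 5, which happens for only a small minority of values, so nearly the whole dataset has to be reshuffled on every resize.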

1.2 How Consistent Hashing Works

Structure of the consistent-hash ring:
0 → 2^32-1
│
├── Node A (hash: 100)
├── Key K1 (hash: 150) → assigned to Node B
├── Node B (hash: 200)
├── Key K2 (hash: 250) → assigned to Node C
├── Node C (hash: 300)
└── Key K3 (hash: 50)  → assigned to Node A
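The clockwise lookup in the diagram maps directly onto a sorted map: `TreeMap.ceilingEntry` finds the first node at or after a key's hash, and `firstEntry` handles the wrap-around. A minimal sketch (no virtual nodes yet, node hashes hard-coded to match the diagram):

```java
import java.util.Map;
import java.util.TreeMap;

// Minimal consistent-hash ring: ceilingEntry gives the first node
// clockwise from a key's hash; wrap to firstEntry past the ring's end.
public class MinimalRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    public void addNode(String node, int hash) {
        ring.put(hash, node);
    }

    public String nodeFor(int keyHash) {
        Map.Entry<Integer, String> e = ring.ceilingEntry(keyHash);
        return (e != null ? e : ring.firstEntry()).getValue();
    }

    public static void main(String[] args) {
        MinimalRing r = new MinimalRing();
        r.addNode("A", 100);
        r.addNode("B", 200);
        r.addNode("C", 300);
        System.out.println(r.nodeFor(150)); // B, as in the diagram
        System.out.println(r.nodeFor(250)); // C
        System.out.println(r.nodeFor(50));  // A
        System.out.println(r.nodeFor(350)); // A (wraps around the ring)
    }
}
```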

2. Virtual Nodes in Detail

2.1 Core Concepts

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.*;

public class ConsistentHashWithVirtualNodes {
    
    // Physical node -> hashes of its virtual nodes
    private final Map<String, List<Integer>> physicalToVirtual = new HashMap<>();
    
    // The hash ring: virtual-node hash -> physical node
    private final TreeMap<Integer, String> hashRing = new TreeMap<>();
    
    // Number of virtual nodes per physical node
    private final int virtualNodesPerPhysical;
    
    public ConsistentHashWithVirtualNodes(int virtualNodesPerPhysical) {
        this.virtualNodesPerPhysical = virtualNodesPerPhysical;
    }
    
    // Add a physical node
    public void addPhysicalNode(String physicalNode) {
        List<Integer> virtualNodeHashes = new ArrayList<>();
        
        // Create several virtual nodes for each physical node
        for (int i = 0; i < virtualNodesPerPhysical; i++) {
            // Virtual-node id: physicalNode#VN<index>
            String virtualNodeId = physicalNode + "#VN" + i;
            int hash = hash(virtualNodeId);
            
            // Put it on the ring (note: a hash collision would silently overwrite
            // an existing virtual node; production code should probe for a free slot)
            hashRing.put(hash, physicalNode);
            virtualNodeHashes.add(hash);
            
            System.out.printf("Created virtual node: %s -> hash: %d -> physical node: %s%n", 
                virtualNodeId, hash, physicalNode);
        }
        
        physicalToVirtual.put(physicalNode, virtualNodeHashes);
    }
    
    // Remove a physical node
    public void removePhysicalNode(String physicalNode) {
        List<Integer> virtualNodeHashes = physicalToVirtual.get(physicalNode);
        if (virtualNodeHashes != null) {
            for (int hash : virtualNodeHashes) {
                hashRing.remove(hash);
            }
            physicalToVirtual.remove(physicalNode);
        }
    }
    
    // Pick the node responsible for a key
    public String getNodeForKey(String key) {
        if (hashRing.isEmpty()) {
            return null;
        }
        
        int keyHash = hash(key);
        
        // Walk clockwise to the first virtual node at or after the key's hash
        Map.Entry<Integer, String> entry = hashRing.ceilingEntry(keyHash);
        
        if (entry == null) {
            // Past the end of the ring: wrap around to the first entry
            entry = hashRing.firstEntry();
        }
        
        return entry.getValue();
    }
    
    // Hash function: MD5, folding the first 4 bytes into a non-negative int
    private int hash(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            
            // Use the first 4 bytes as a 32-bit integer
            int hash = 0;
            for (int i = 0; i < 4; i++) {
                hash <<= 8;
                hash |= (digest[i] & 0xFF);
            }
            
            // Clear the sign bit so the value is non-negative
            return hash & 0x7FFFFFFF;
            
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException("MD5 algorithm not found", e);
        }
    }
    
    // Dump the state of the ring
    public void printHashRing() {
        System.out.println("\n=== Consistent-hash ring state ===");
        System.out.println("Physical nodes: " + physicalToVirtual.size());
        System.out.println("Virtual nodes: " + hashRing.size());
        
        // Count virtual nodes per physical node
        Map<String, Integer> virtualNodeCount = new HashMap<>();
        for (String physicalNode : hashRing.values()) {
            virtualNodeCount.put(physicalNode, 
                virtualNodeCount.getOrDefault(physicalNode, 0) + 1);
        }
        
        System.out.println("Virtual-node distribution per physical node:");
        for (Map.Entry<String, Integer> entry : virtualNodeCount.entrySet()) {
            System.out.printf("  %s: %d virtual nodes (%.1f%%)%n", 
                entry.getKey(), entry.getValue(), 
                entry.getValue() * 100.0 / hashRing.size());
        }
    }
}
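A quick way to see why virtual nodes matter is to measure how evenly keys spread across physical nodes as the virtual-node count grows. The self-contained sketch below (a condensed variant of the class above, same MD5-based hash, hypothetical node names) distributes 30,000 keys over three nodes, first with a single virtual node each and then with 200; the second distribution should sit visibly closer to an even third per node:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Shows how virtual nodes even out the key distribution on the ring.
public class VirtualNodeBalanceDemo {

    static int md5Hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                .digest(s.getBytes(StandardCharsets.UTF_8));
            int h = 0;
            for (int i = 0; i < 4; i++) h = (h << 8) | (d[i] & 0xFF);
            return h & 0x7FFFFFFF;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Count keys per physical node for a given virtual-node count
    public static Map<String, Integer> distribute(int virtualNodes, int keys) {
        TreeMap<Integer, String> ring = new TreeMap<>();
        for (String node : new String[] {"node-A", "node-B", "node-C"}) {
            for (int i = 0; i < virtualNodes; i++) {
                ring.put(md5Hash(node + "#VN" + i), node);
            }
        }
        Map<String, Integer> counts = new HashMap<>();
        for (int k = 0; k < keys; k++) {
            Map.Entry<Integer, String> e = ring.ceilingEntry(md5Hash("key-" + k));
            String owner = (e != null ? e : ring.firstEntry()).getValue();
            counts.merge(owner, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println("1 virtual node each:    " + distribute(1, 30_000));
        System.out.println("200 virtual nodes each: " + distribute(200, 30_000));
    }
}
```

With only one point per node, one node can easily own half the ring by bad luck; with 200 points per node the arcs average out.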

2.2 Virtual-Node Optimization Strategies

import java.util.*;

// Assumes the MD5-based hash(String) from section 2.1 and an SLF4J logger field named log
public class OptimizedVirtualNodeConsistentHash {
    
    // Enhanced virtual-node implementation
    private final TreeMap<Integer, VirtualNode> hashRing = new TreeMap<>();
    private final Map<String, PhysicalNode> physicalNodes = new HashMap<>();
    
    // Weighting: nodes of different capability get different numbers of virtual nodes
    private static class PhysicalNode {
        String id;
        int weight;   // weight; determines the virtual-node count
        int capacity; // capacity limit
        List<VirtualNode> virtualNodes = new ArrayList<>();
    }
    
    private static class VirtualNode {
        String virtualId;
        int hash;
        PhysicalNode physicalNode;
        int dataCount; // number of keys assigned to this virtual node
    }
    
    // Add a node, sizing its virtual-node count by weight
    public void addPhysicalNode(String nodeId, int weight, int capacity) {
        PhysicalNode physicalNode = new PhysicalNode();
        physicalNode.id = nodeId;
        physicalNode.weight = weight;
        physicalNode.capacity = capacity;
        
        // Derive the virtual-node count from the weight
        int virtualNodeCount = calculateVirtualNodeCount(weight);
        
        for (int i = 0; i < virtualNodeCount; i++) {
            String virtualId = String.format("%s#VN%04d", nodeId, i);
            int hash = hash(virtualId);
            
            VirtualNode virtualNode = new VirtualNode();
            virtualNode.virtualId = virtualId;
            virtualNode.hash = hash;
            virtualNode.physicalNode = physicalNode;
            
            hashRing.put(hash, virtualNode);
            physicalNode.virtualNodes.add(virtualNode);
        }
        
        physicalNodes.put(nodeId, physicalNode);
        
        log.info("Added physical node: {}, weight: {}, virtual nodes: {}", 
            nodeId, weight, virtualNodeCount);
    }
    
    // How many virtual nodes a given weight maps to
    private int calculateVirtualNodeCount(int weight) {
        // base count + weight factor
        int baseVirtualNodes = 100;
        int weightFactor = 10;
        
        return baseVirtualNodes + weight * weightFactor;
    }
    
    // Key assignment with optional load awareness
    public String getNodeForKey(String key, boolean checkLoad) {
        if (hashRing.isEmpty()) {
            return null;
        }
        
        int keyHash = hash(key);
        
        // First virtual node clockwise from the key
        Map.Entry<Integer, VirtualNode> entry = hashRing.ceilingEntry(keyHash);
        
        if (entry == null) {
            entry = hashRing.firstEntry();
        }
        
        VirtualNode virtualNode = entry.getValue();
        
        if (checkLoad) {
            // Check the physical node's load
            PhysicalNode physicalNode = virtualNode.physicalNode;
            
            // If it is overloaded, walk on to the next available node
            if (isNodeOverloaded(physicalNode)) {
                return findNextAvailableNode(keyHash, physicalNode);
            }
        }
        
        // Record the assignment
        virtualNode.dataCount++;
        
        return virtualNode.physicalNode.id;
    }
    
    // Is this node overloaded?
    private boolean isNodeOverloaded(PhysicalNode node) {
        // Current load ratio
        int totalData = node.virtualNodes.stream()
            .mapToInt(vn -> vn.dataCount)
            .sum();
        
        double loadRate = (double) totalData / node.capacity;
        
        // Above 90% counts as overloaded
        return loadRate > 0.9;
    }
    
    // Find the next node that can take the key
    private String findNextAvailableNode(int startHash, PhysicalNode excludedNode) {
        // Walk clockwise starting just after startHash
        NavigableMap<Integer, VirtualNode> tailMap = hashRing.tailMap(startHash, false);
        
        for (VirtualNode vn : tailMap.values()) {
            if (!vn.physicalNode.equals(excludedNode) && 
                !isNodeOverloaded(vn.physicalNode)) {
                vn.dataCount++;
                return vn.physicalNode.id;
            }
        }
        
        // Nothing found: wrap around and scan from the start of the ring
        for (VirtualNode vn : hashRing.values()) {
            if (!vn.physicalNode.equals(excludedNode) && 
                !isNodeOverloaded(vn.physicalNode)) {
                vn.dataCount++;
                return vn.physicalNode.id;
            }
        }
        
        // Every node is overloaded: fall back to the original node
        return excludedNode.id;
    }
    
    // Print the load distribution
    public void printLoadDistribution() {
        System.out.println("\n=== Load distribution ===");
        
        for (PhysicalNode node : physicalNodes.values()) {
            int totalData = node.virtualNodes.stream()
                .mapToInt(vn -> vn.dataCount)
                .sum();
            
            double loadRate = (double) totalData / node.capacity;
            
            System.out.printf("Node: %s, weight: %d, capacity: %d%n", 
                node.id, node.weight, node.capacity);
            System.out.printf("  virtual nodes: %d, keys: %d, load: %.2f%%%n",
                node.virtualNodes.size(), totalData, loadRate * 100);
            
            // Per-virtual-node detail for nodes with few virtual nodes
            if (node.virtualNodes.size() <= 10) {
                for (VirtualNode vn : node.virtualNodes) {
                    System.out.printf("    virtual node %s: %d keys%n", 
                        vn.virtualId, vn.dataCount);
                }
            }
        }
    }
}
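To check that weight really translates into load share, the standalone sketch below (hypothetical node names, same ring mechanics as above) gives `node-big` three times the virtual nodes of `node-small` and counts the keys each receives; the split should approach 3:1:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Weighted virtual nodes: a node's share of the keys should track its
// share of the virtual nodes (here 300 vs 100, i.e. roughly 3:1).
public class WeightedRingDemo {

    static int md5Hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                .digest(s.getBytes(StandardCharsets.UTF_8));
            int h = 0;
            for (int i = 0; i < 4; i++) h = (h << 8) | (d[i] & 0xFF);
            return h & 0x7FFFFFFF;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Distribute keys across nodes whose virtual-node counts encode their weights
    public static Map<String, Integer> distribute(Map<String, Integer> nodeToVnodes, int keys) {
        TreeMap<Integer, String> ring = new TreeMap<>();
        nodeToVnodes.forEach((node, vnodes) -> {
            for (int i = 0; i < vnodes; i++) ring.put(md5Hash(node + "#VN" + i), node);
        });
        Map<String, Integer> counts = new HashMap<>();
        for (int k = 0; k < keys; k++) {
            Map.Entry<Integer, String> e = ring.ceilingEntry(md5Hash("key-" + k));
            counts.merge((e != null ? e : ring.firstEntry()).getValue(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
            distribute(Map.of("node-big", 300, "node-small", 100), 40_000);
        System.out.println(counts); // node-big should hold roughly 75% of the keys
    }
}
```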


3. Implementing Data Migration

3.1 A Smooth Data Migration Framework

import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// MigrationDataStorage, DataScanner, MigrationLog and migrationLogStorage are
// assumed collaborators; a logger field named log (e.g. Lombok @Slf4j) is assumed.
public class DataMigrationManager {
    
    // Migration task states
    public enum MigrationStatus {
        PENDING,    // waiting to run
        MIGRATING,  // in progress
        VERIFYING,  // being verified
        COMPLETED,  // finished
        FAILED      // failed
    }
    
    // Migration task definition
    public static class MigrationTask {
        String taskId;
        String sourceNode;
        String targetNode;
        Set<String> keysToMigrate;  // keys that must move
        MigrationStatus status;
        long startTime;
        long endTime;
        int totalKeys;
        int migratedKeys;
        int failedKeys;
        
        // Progress callbacks
        interface ProgressCallback {
            void onProgress(double percentage);
            void onComplete(MigrationTask task);
            void onError(MigrationTask task, Throwable error);
        }
    }
    
    // Migration coordinator driven by the consistent-hash ring
    public class ConsistentHashMigrationCoordinator {
        private final ConsistentHashWithVirtualNodes consistentHash;
        private final ExecutorService migrationExecutor;
        private final Map<String, MigrationTask> migrationTasks = new ConcurrentHashMap<>();
        private final MigrationDataStorage dataStorage;
        
        public ConsistentHashMigrationCoordinator(
                ConsistentHashWithVirtualNodes consistentHash,
                MigrationDataStorage dataStorage) {
            this.consistentHash = consistentHash;
            this.dataStorage = dataStorage;
            
            // Dedicated migration thread pool
            this.migrationExecutor = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors() * 2,
                new ThreadFactory() {
                    private final AtomicInteger counter = new AtomicInteger(0);
                    @Override
                    public Thread newThread(Runnable r) {
                        Thread thread = new Thread(r, 
                            "Migration-Worker-" + counter.incrementAndGet());
                        thread.setDaemon(true);
                        return thread;
                    }
                }
            );
        }
        
        // Add a node, then migrate the keys it now owns
        public MigrationTask addNodeAndMigrate(String newNode) {
            // 1. Record the owner of every key BEFORE changing the ring;
            //    otherwise old and new owners are computed against the same
            //    ring and no key ever appears to move
            Map<String, String> ownersBefore = snapshotOwners();
            
            // 2. Add the new node to the ring
            consistentHash.addPhysicalNode(newNode);
            
            // 3. Work out which keys changed owner
            Map<String, Set<String>> migrationPlan = calculateMigrationPlan(ownersBefore);
            
            // 4. Create the migration task
            MigrationTask task = new MigrationTask();
            task.taskId = UUID.randomUUID().toString();
            task.targetNode = newNode;
            task.status = MigrationStatus.PENDING;
            task.startTime = System.currentTimeMillis();
            task.keysToMigrate = new HashSet<>();
            
            // Merge every key that has to move
            for (Set<String> keys : migrationPlan.values()) {
                task.keysToMigrate.addAll(keys);
            }
            task.totalKeys = task.keysToMigrate.size();
            
            migrationTasks.put(task.taskId, task);
            
            // 5. Run the migration
            executeMigration(task, migrationPlan);
            
            return task;
        }
        
        // Snapshot key -> owner before the ring changes
        private Map<String, String> snapshotOwners() {
            // Simulated storage scan; in practice this comes from the data store
            DataScanner scanner = new DataScanner();
            Map<String, String> allData = scanner.scanAllData();
            
            Map<String, String> owners = new HashMap<>();
            for (String key : allData.keySet()) {
                owners.put(key, consistentHash.getNodeForKey(key));
            }
            return owners;
        }
        
        // Compute the migration plan: source node -> keys to move off it
        private Map<String, Set<String>> calculateMigrationPlan(Map<String, String> ownersBefore) {
            Map<String, Set<String>> migrationPlan = new HashMap<>();
            
            for (Map.Entry<String, String> entry : ownersBefore.entrySet()) {
                String key = entry.getKey();
                String oldOwner = entry.getValue();
                String newOwner = consistentHash.getNodeForKey(key);
                
                // The key has to move from oldOwner to newOwner
                if (!newOwner.equals(oldOwner)) {
                    migrationPlan
                        .computeIfAbsent(oldOwner, k -> new HashSet<>())
                        .add(key);
                }
            }
            
            return migrationPlan;
        }
        
        // Run the migration
        private void executeMigration(MigrationTask task, 
                                     Map<String, Set<String>> migrationPlan) {
            task.status = MigrationStatus.MIGRATING;
            
            // One sub-task per source node
            List<CompletableFuture<Void>> futures = new ArrayList<>();
            
            for (Map.Entry<String, Set<String>> entry : migrationPlan.entrySet()) {
                String sourceNode = entry.getKey();
                Set<String> keys = entry.getValue();
                
                CompletableFuture<Void> future = CompletableFuture
                    .runAsync(() -> migrateDataBatch(sourceNode, task.targetNode, keys, task),
                              migrationExecutor)
                    .exceptionally(e -> {
                        // Count this source's whole key set as failed if the sub-task dies
                        synchronized (task) {
                            task.failedKeys += keys.size();
                        }
                        return null;
                    });
                
                futures.add(future);
            }
            
            // Verify once every sub-task has finished
            CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                .thenRun(() -> verifyMigration(task));
        }
        
        // Migrate data in batches
        private void migrateDataBatch(String sourceNode, String targetNode,
                                     Set<String> keys, MigrationTask task) {
            int batchSize = 100; // 100 keys per batch
            List<String> keyList = new ArrayList<>(keys);
            
            for (int i = 0; i < keyList.size(); i += batchSize) {
                int end = Math.min(i + batchSize, keyList.size());
                List<String> batch = keyList.subList(i, end);
                
                try {
                    // 1. Read from the source node
                    Map<String, String> dataBatch = dataStorage.batchGet(sourceNode, batch);
                    
                    // 2. Write to the target node
                    dataStorage.batchPut(targetNode, dataBatch);
                    
                    // 3. Delete from the source (write first, delete second, so no data is lost)
                    dataStorage.batchDelete(sourceNode, batch);
                    
                    // 4. Update progress
                    synchronized (task) {
                        task.migratedKeys += batch.size();
                    }
                    
                    // 5. Record a migration log entry
                    logMigration(sourceNode, targetNode, batch);
                    
                    // 6. Pause briefly every 1000 keys to avoid overloading the system
                    if (i > 0 && i % 1000 == 0) {
                        Thread.sleep(10);
                    }
                    
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                } catch (Exception e) {
                    log.error("Batch migration failed: source={}, target={}, batch={}", 
                        sourceNode, targetNode, batch, e);
                    
                    synchronized (task) {
                        task.failedKeys += batch.size();
                    }
                }
            }
        }
        
        // Verify the migration result
        private void verifyMigration(MigrationTask task) {
            task.status = MigrationStatus.VERIFYING;
            
            try {
                // 1. Consistency check
                boolean allVerified = verifyDataConsistency(task);
                
                if (allVerified) {
                    task.status = MigrationStatus.COMPLETED;
                    task.endTime = System.currentTimeMillis();
                    
                    log.info("Migration finished: taskId={}, migratedKeys={}, elapsed={}ms",
                        task.taskId, task.migratedKeys, 
                        task.endTime - task.startTime);
                    
                    // Clean up temporary migration data
                    cleanupMigration(task);
                    
                } else {
                    task.status = MigrationStatus.FAILED;
                    log.error("Migration verification failed: taskId={}", task.taskId);
                    
                    // Trigger a rollback
                    rollbackMigration(task);
                }
                
            } catch (Exception e) {
                task.status = MigrationStatus.FAILED;
                log.error("Migration verification threw", e);
            }
        }
        
        // Data consistency check
        private boolean verifyDataConsistency(MigrationTask task) {
            int verifiedCount = 0;
            
            for (String key : task.keysToMigrate) {
                try {
                    // The target must hold the value and the source must no longer have it
                    String targetValue = dataStorage.get(task.targetNode, key);
                    String sourceValue = dataStorage.get(getOriginalOwner(key), key);
                    
                    if (targetValue != null && sourceValue == null) {
                        verifiedCount++;
                    } else {
                        log.warn("Verification failed: key={}, targetValue={}, sourceValue={}",
                            key, targetValue, sourceValue);
                    }
                    
                } catch (Exception e) {
                    log.error("Verification error: key={}", key, e);
                }
            }
            
            // Divide by the total key count, not migratedKeys, so failed keys count
            // against the rate (and guard against dividing by zero)
            double verificationRate = task.totalKeys == 0 
                ? 1.0 : (double) verifiedCount / task.totalKeys;
            // SLF4J does not support "{:.2f}" placeholders; format the number first
            log.info("Verification done: rate={}%", String.format("%.2f", verificationRate * 100));
            
            return verificationRate >= 0.999; // require 99.9%
        }
        
        // Find a key's pre-migration owner
        private String getOriginalOwner(String key) {
            // A real implementation must record the pre-migration mapping;
            // simplified here to a fresh consistent-hash lookup
            return consistentHash.getNodeForKey(key);
        }
        
        // Clean up temporary migration data
        private void cleanupMigration(MigrationTask task) {
            // delete migration logs
            // clear temporary caches
            // update metadata
        }
        
        // Roll back a migration
        private void rollbackMigration(MigrationTask task) {
            log.info("Rolling back migration task: taskId={}", task.taskId);
            
            // Move data from the target node back to its sources
            // (rollback logic goes here)
            
            log.info("Rollback finished: taskId={}", task.taskId);
        }
        
        // Record a migration log entry
        private void logMigration(String source, String target, List<String> keys) {
            // Named "entry" so it does not shadow the logger field "log"
            MigrationLog entry = new MigrationLog();
            entry.sourceNode = source;
            entry.targetNode = target;
            entry.keys = keys;
            entry.timestamp = System.currentTimeMillis();
            
            // Persist to the log store (migrationLogStorage is an assumed collaborator)
            migrationLogStorage.save(entry);
        }
    }
}
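A useful sanity check for any migration plan is the theoretical expectation: adding one node to an N-node consistent-hash ring should move only about 1/(N+1) of the keys, since the existing virtual nodes keep their positions and only the arcs claimed by the newcomer change hands. The standalone sketch below (hypothetical names, same MD5 ring as section 2.1) measures the moved fraction when a fourth node joins a three-node ring:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

// Measures how many keys change owner when one node joins the ring.
// Expected: roughly 1/(N+1) of the keys, i.e. ~25% for 3 -> 4 nodes.
public class MigrationRatioDemo {

    static int md5Hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                .digest(s.getBytes(StandardCharsets.UTF_8));
            int h = 0;
            for (int i = 0; i < 4; i++) h = (h << 8) | (d[i] & 0xFF);
            return h & 0x7FFFFFFF;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static TreeMap<Integer, String> buildRing(String[] nodes, int virtualNodes) {
        TreeMap<Integer, String> ring = new TreeMap<>();
        for (String n : nodes)
            for (int i = 0; i < virtualNodes; i++)
                ring.put(md5Hash(n + "#VN" + i), n);
        return ring;
    }

    static String ownerOf(TreeMap<Integer, String> ring, String key) {
        Map.Entry<Integer, String> e = ring.ceilingEntry(md5Hash(key));
        return (e != null ? e : ring.firstEntry()).getValue();
    }

    // Fraction of keys whose owner changes when node D joins A, B, C
    public static double movedFraction(int keys, int virtualNodes) {
        TreeMap<Integer, String> before = buildRing(new String[] {"A", "B", "C"}, virtualNodes);
        TreeMap<Integer, String> after  = buildRing(new String[] {"A", "B", "C", "D"}, virtualNodes);
        int moved = 0;
        for (int k = 0; k < keys; k++)
            if (!ownerOf(before, "key-" + k).equals(ownerOf(after, "key-" + k))) moved++;
        return (double) moved / keys;
    }

    public static void main(String[] args) {
        System.out.printf("moved: %.1f%% (theory: ~25%%)%n", movedFraction(40_000, 150) * 100);
    }
}
```

If the measured fraction is far above 1/(N+1), the migration plan (or the hash function) is doing unnecessary work.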

3.2 Double-Write Migration

import java.util.*;

// DataStorage, CleanupTask, SwitchCheckResult and a logger field log are assumed collaborators
public class DoubleWriteMigrationStrategy {
    
    // Phases of the double-write migration
    public enum DoubleWritePhase {
        PHASE_1_WRITE_BOTH,      // write to both old and new storage
        PHASE_2_READ_NEW,        // read from the new storage
        PHASE_3_CLEANUP_OLD      // clean up the old data
    }
    
    // Double-write migration manager
    public class DoubleWriteMigrationManager {
        private final DataStorage oldStorage;
        private final DataStorage newStorage;
        private final ConsistentHashWithVirtualNodes oldHashRing;
        private final ConsistentHashWithVirtualNodes newHashRing;
        private volatile DoubleWritePhase currentPhase = DoubleWritePhase.PHASE_1_WRITE_BOTH;
        private final MigrationSwitchController switchController;
        
        public DoubleWriteMigrationManager(
                DataStorage oldStorage, 
                DataStorage newStorage,
                ConsistentHashWithVirtualNodes oldHashRing,
                ConsistentHashWithVirtualNodes newHashRing) {
            this.oldStorage = oldStorage;
            this.newStorage = newStorage;
            this.oldHashRing = oldHashRing;
            this.newHashRing = newHashRing;
            this.switchController = new MigrationSwitchController();
        }
        
        // Write (behavior depends on the current phase)
        public void write(String key, String value) {
            switch (currentPhase) {
                case PHASE_1_WRITE_BOTH:
                    // Double-write phase: write both stores
                    writeToBoth(key, value);
                    break;
                    
                case PHASE_2_READ_NEW:
                case PHASE_3_CLEANUP_OLD:
                    // Single-write phase: new storage only
                    // (writeToNew is a single-store write; definition omitted in the original)
                    writeToNew(key, value);
                    break;
            }
        }
        
        // Read (behavior depends on the current phase)
        public String read(String key) {
            switch (currentPhase) {
                case PHASE_1_WRITE_BOTH:
                    // Double-write phase: prefer the new storage, fall back to the old one
                    // both on error and on a miss (the key may not have been copied yet)
                    try {
                        String value = newStorage.get(key);
                        return value != null ? value : oldStorage.get(key);
                    } catch (Exception e) {
                        return oldStorage.get(key);
                    }
                    
                case PHASE_2_READ_NEW:
                case PHASE_3_CLEANUP_OLD:
                    // Single-read phase: new storage only
                    return newStorage.get(key);
                    
                default:
                    throw new IllegalStateException("Unknown phase: " + currentPhase);
            }
        }
        
        // Double write
        private void writeToBoth(String key, String value) {
            // 1. Write the new storage
            try {
                newStorage.put(key, value);
            } catch (Exception e) {
                log.error("Write to new storage failed: key={}", key, e);
            }
            
            // 2. Write the old storage
            try {
                oldStorage.put(key, value);
            } catch (Exception e) {
                log.error("Write to old storage failed: key={}", key, e);
            }
            
            // 3. Check that both writes agree
            verifyWriteConsistency(key, value);
        }
        
        // Verify that both stores hold the expected value
        private void verifyWriteConsistency(String key, String expectedValue) {
            String oldValue = oldStorage.get(key);
            String newValue = newStorage.get(key);
            
            if (!Objects.equals(oldValue, newValue) || 
                !Objects.equals(newValue, expectedValue)) {
                
                log.error("Inconsistent data: key={}, oldValue={}, newValue={}, expected={}",
                    key, oldValue, newValue, expectedValue);
                
                // Trigger automatic repair
                autoFixInconsistency(key, expectedValue);
            }
        }
        
        // Repair an inconsistency automatically
        private void autoFixInconsistency(String key, String expectedValue) {
            // Pick the repair strategy based on what a read currently returns
            String currentReadValue = read(key);
            
            if (Objects.equals(currentReadValue, expectedValue)) {
                // Reads already return the right value; patch whichever store disagrees
                if (!Objects.equals(oldStorage.get(key), expectedValue)) {
                    oldStorage.put(key, expectedValue);
                }
                if (!Objects.equals(newStorage.get(key), expectedValue)) {
                    newStorage.put(key, expectedValue);
                }
            } else {
                // Otherwise rewrite both stores
                writeToBoth(key, expectedValue);
            }
        }
        
        // Switch to a new phase
        public void switchPhase(DoubleWritePhase newPhase) {
            if (switchController.canSwitch(currentPhase, newPhase)) {
                log.info("Switching migration phase: {} -> {}", currentPhase, newPhase);
                currentPhase = newPhase;
                
                // Post-switch housekeeping
                onPhaseSwitched(newPhase);
            } else {
                throw new IllegalStateException(
                    String.format("Cannot switch from phase %s to %s", currentPhase, newPhase));
            }
        }
        
        // Work to do right after a phase switch
        private void onPhaseSwitched(DoubleWritePhase newPhase) {
            switch (newPhase) {
                case PHASE_2_READ_NEW:
                    // Entering the read-from-new phase:
                    // 1. warm the new storage's caches
                    warmUpNewStorage();
                    
                    // 2. monitor read latency and errors
                    startReadMonitoring();
                    
                    // 3. final consistency check
                    verifyFinalConsistency();
                    break;
                    
                case PHASE_3_CLEANUP_OLD:
                    // Entering the cleanup phase:
                    // 1. delete the old data in batches
                    cleanupOldData();
                    
                    // 2. release the old storage's resources
                    releaseOldResources();
                    break;
            }
        }
        
        // Warm up the new storage
        private void warmUpNewStorage() {
            log.info("Warming up new storage...");
            
            // 1. find the hot keys
            List<String> hotKeys = findHotKeys();
            
            // 2. preload them in batches
            int batchSize = 1000;
            for (int i = 0; i < hotKeys.size(); i += batchSize) {
                int end = Math.min(i + batchSize, hotKeys.size());
                List<String> batch = hotKeys.subList(i, end);
                
                // Copy from old to new storage
                Map<String, String> data = oldStorage.batchGet(batch);
                newStorage.batchPut(data);
                
                log.info("Warm-up progress: {}/{}", end, hotKeys.size());
            }
            
            log.info("New-storage warm-up done");
        }
        
        // Delete the old data
        private void cleanupOldData() {
            log.info("Cleaning up old data...");
            
            // 1. create the cleanup task
            CleanupTask cleanupTask = new CleanupTask(oldStorage, newStorage);
            
            // 2. delete in batches
            cleanupTask.executeInBatches(10000); // 10,000 keys per batch
            
            // 3. verify the result
            boolean cleanupVerified = verifyCleanup();
            
            if (cleanupVerified) {
                log.info("Old-data cleanup done");
            } else {
                log.error("Old-data cleanup verification failed");
            }
        }
    }
    
    // Phase-switch controller
    public class MigrationSwitchController {
        // Allowed phase transitions (an instance field: inner classes could not
        // declare static members before Java 16)
        private final Map<DoubleWritePhase, Set<DoubleWritePhase>> allowedTransitions = 
            Map.of(
                DoubleWritePhase.PHASE_1_WRITE_BOTH, 
                Set.of(DoubleWritePhase.PHASE_2_READ_NEW),
                
                DoubleWritePhase.PHASE_2_READ_NEW,
                Set.of(DoubleWritePhase.PHASE_3_CLEANUP_OLD),
                
                DoubleWritePhase.PHASE_3_CLEANUP_OLD,
                Set.of() // terminal phase: no further transitions
            );
        
        public boolean canSwitch(DoubleWritePhase from, DoubleWritePhase to) {
            Set<DoubleWritePhase> allowed = allowedTransitions.get(from);
            return allowed != null && allowed.contains(to);
        }
        
        // Check the preconditions for a switch
        public SwitchCheckResult checkSwitchConditions(DoubleWritePhase from, 
                                                      DoubleWritePhase to) {
            SwitchCheckResult result = new SwitchCheckResult();
            
            switch (from) {
                case PHASE_1_WRITE_BOTH:
                    if (to == DoubleWritePhase.PHASE_2_READ_NEW) {
                        // Double-write health checks
                        result.addCondition("data consistency", checkDataConsistency());
                        result.addCondition("new-storage performance", checkNewStoragePerformance());
                        result.addCondition("error rate", checkErrorRate());
                    }
                    break;
                    
                case PHASE_2_READ_NEW:
                    if (to == DoubleWritePhase.PHASE_3_CLEANUP_OLD) {
                        result.addCondition("read-switch stability", checkReadStability());
                        result.addCondition("monitoring data", checkMonitoringData());
                    }
                    break;
            }
            
            return result;
        }
        
        private boolean checkDataConsistency() {
            // Spot-check consistency on a random sample
            // (oldStorage/newStorage/generateRandomKey are assumed reachable here,
            // e.g. injected from the enclosing manager)
            int sampleSize = 1000;
            int inconsistentCount = 0;
            
            for (int i = 0; i < sampleSize; i++) {
                String key = generateRandomKey();
                String oldValue = oldStorage.get(key);
                String newValue = newStorage.get(key);
                
                if (!Objects.equals(oldValue, newValue)) {
                    inconsistentCount++;
                }
            }
            
            double inconsistencyRate = (double) inconsistentCount / sampleSize;
            return inconsistencyRate < 0.001; // under 0.1% inconsistent
        }
    }
}
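The three-phase state machine above can be exercised in isolation. The standalone sketch below (hypothetical class, mirroring the transition table) confirms that phases only ever move forward and never skip a step:

```java
import java.util.Map;
import java.util.Set;

// Standalone version of the phase state machine: transitions may only
// move forward (write-both -> read-new -> cleanup-old), never backward.
public class PhaseTransitionDemo {

    enum Phase { WRITE_BOTH, READ_NEW, CLEANUP_OLD }

    static final Map<Phase, Set<Phase>> ALLOWED = Map.of(
        Phase.WRITE_BOTH, Set.of(Phase.READ_NEW),
        Phase.READ_NEW, Set.of(Phase.CLEANUP_OLD),
        Phase.CLEANUP_OLD, Set.of());

    public static boolean canSwitch(Phase from, Phase to) {
        return ALLOWED.getOrDefault(from, Set.of()).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canSwitch(Phase.WRITE_BOTH, Phase.READ_NEW));    // true
        System.out.println(canSwitch(Phase.READ_NEW, Phase.WRITE_BOTH));    // false: no going back
        System.out.println(canSwitch(Phase.WRITE_BOTH, Phase.CLEANUP_OLD)); // false: no skipping
    }
}
```

Forbidding backward transitions matters because once reads have moved to the new storage, the old storage may already be stale; a one-way table makes that invariant explicit.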

4. Migration Performance Tuning

4.1 Parallel Migration

import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

// MigrationRateLimiter, MigrationMonitor, MigrationThreadFactory, MigrationRejectionHandler,
// MigrationPlan, MigrationStrategy, MigrationResult and TaskResult are assumed collaborators.
public class ParallelMigrationOptimizer {
    
    // Parallel migration controller
    public class ParallelMigrationController {
        private final ExecutorService migrationExecutor;
        private final MigrationRateLimiter rateLimiter;
        private final MigrationMonitor monitor;
        private final AtomicInteger activeMigrations = new AtomicInteger(0);
        private final int maxConcurrentMigrations;
        
        public ParallelMigrationController(int maxConcurrentMigrations) {
            this.maxConcurrentMigrations = maxConcurrentMigrations;
            
            // Dedicated migration thread pool
            this.migrationExecutor = new ThreadPoolExecutor(
                maxConcurrentMigrations,
                maxConcurrentMigrations * 2,
                60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(1000),
                new MigrationThreadFactory(),
                new MigrationRejectionHandler()
            );
            
            this.rateLimiter = new MigrationRateLimiter();
            this.monitor = new MigrationMonitor();
            
            // Start monitoring
            startMonitoring();
        }
        
        // Run migration tasks in parallel
        public MigrationResult executeParallelMigration(
                MigrationPlan plan, 
                MigrationStrategy strategy) {
            
            MigrationResult result = new MigrationResult();
            result.startTime = System.currentTimeMillis();
            
            // 1. Split the plan into tasks
            List<MigrationTask> tasks = splitMigrationTasks(plan, strategy);
            result.totalTasks = tasks.size();
            
            // 2. Submit the parallel tasks
            List<CompletableFuture<TaskResult>> futures = new ArrayList<>();
            
            for (MigrationTask task : tasks) {
                // Honor the concurrency limit
                while (activeMigrations.get() >= maxConcurrentMigrations) {
                    try {
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        break;
                    }
                }
                
                // Rate limiting
                rateLimiter.acquire();
                
                // Submit the task
                CompletableFuture<TaskResult> future = CompletableFuture.supplyAsync(() -> {
                    activeMigrations.incrementAndGet();
                    try {
                        return executeMigrationTask(task);
                    } finally {
                        activeMigrations.decrementAndGet();
                    }
                }, migrationExecutor);
                
                futures.add(future);
            }
            
            // 3. Wait for every task to finish BEFORE filling in the result;
            //    completing asynchronously and returning immediately would hand
            //    the caller an empty result object
            try {
                CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
                
                List<TaskResult> taskResults = futures.stream()
                    .map(CompletableFuture::join)
                    .collect(Collectors.toList());
                
                // Tally the results
                result.successTasks = (int) taskResults.stream()
                    .filter(tr -> tr.success)
                    .count();
                result.failedTasks = result.totalTasks - result.successTasks;
                result.endTime = System.currentTimeMillis();
                
                // Performance analysis
                analyzePerformance(taskResults, result);
                
            } catch (CompletionException e) {
                log.error("Parallel migration failed", e);
                result.failedTasks = result.totalTasks;
            }
            
            return result;
        }
        
        // Split the plan into tasks according to the chosen strategy
        private List<MigrationTask> splitMigrationTasks(
                MigrationPlan plan, 
                MigrationStrategy strategy) {
            
            List<MigrationTask> tasks = new ArrayList<>();
            
            switch (strategy.getSplitStrategy()) {
                case BY_KEY_RANGE:
                    // Split by sorted key ranges
                    tasks = splitByKeyRange(plan, strategy.getBatchSize());
                    break;
                    
                case BY_HASH_SLOT:
                    // Split by hash slot
                    tasks = splitByHashSlot(plan, strategy.getConcurrency());
                    break;
                    
                case BY_DATA_SIZE:
                    // Split by approximate data volume
                    tasks = splitByDataSize(plan, strategy.getTargetSizePerTask());
                    break;
            }
            
            return tasks;
        }
        
        // Split keys into fixed-size, lexicographically ordered batches
        private List<MigrationTask> splitByKeyRange(
                MigrationPlan plan, int batchSize) {
            
            List<MigrationTask> tasks = new ArrayList<>();
            List<String> allKeys = new ArrayList<>(plan.getKeysToMigrate());
            
            // Sort so each batch covers a contiguous key range
            Collections.sort(allKeys);
            
            for (int i = 0; i < allKeys.size(); i += batchSize) {
                int end = Math.min(i + batchSize, allKeys.size());
                List<String> batchKeys = allKeys.subList(i, end);
                
                MigrationTask task = new MigrationTask();
                task.keys = new HashSet<>(batchKeys);
                task.sourceNode = plan.getSourceNode();
                task.targetNode = plan.getTargetNode();
                
                tasks.add(task);
            }
            
            return tasks;
        }
        
        // Execute a single migration task: read, write, verify, then delete
        private TaskResult executeMigrationTask(MigrationTask task) {
            TaskResult result = new TaskResult();
            result.startTime = System.currentTimeMillis();
            
            try {
                // 1. Batch read from the source node
                Map<String, String> data = dataStorage.batchGet(
                    task.sourceNode, new ArrayList<>(task.keys));
                
                // 2. Batch write to the target node
                dataStorage.batchPut(task.targetNode, data);
                
                // 3. Verify the write
                boolean verified = verifyBatchWrite(task.targetNode, data);
                
                if (verified) {
                    // 4. Delete the source copy only after verification succeeds
                    dataStorage.batchDelete(task.sourceNode, new ArrayList<>(task.keys));
                    
                    result.success = true;
                    result.keysMigrated = task.keys.size();
                } else {
                    result.success = false;
                    result.error = "verification failed";
                }
                
            } catch (Exception e) {
                result.success = false;
                result.error = e.getMessage();
                log.error("Migration task failed", e);
            }
            
            result.endTime = System.currentTimeMillis();
            result.duration = result.endTime - result.startTime;
            
            // Record per-task performance metrics
            monitor.recordTaskMetrics(result);
            
            return result;
        }
        
        // Rate limiter combining a QPS cap with a concurrency cap
        public class MigrationRateLimiter {
            private final RateLimiter rateLimiter;
            private final Semaphore concurrencyLimiter;
            
            public MigrationRateLimiter() {
                // QPS cap: at most 1000 operations per second
                this.rateLimiter = RateLimiter.create(1000);
                
                // Concurrency cap
                this.concurrencyLimiter = new Semaphore(maxConcurrentMigrations * 10);
            }
            
            public void acquire() {
                // Acquire a rate permit (blocks until one is available)
                rateLimiter.acquire();
                
                // Acquire a concurrency permit
                try {
                    concurrencyLimiter.acquire();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException("Interrupted while acquiring concurrency permit", e);
                }
            }
            
            public void release() {
                concurrencyLimiter.release();
            }
            
            // Adjust the rate at runtime, e.g. to back off under load.
            // Guava's RateLimiter supports this directly via setRate(), so there
            // is no need to reassign the (final) field with a new limiter.
            public void adjustRate(double newRate) {
                rateLimiter.setRate(newRate);
            }
        }
        
        // Migration monitor: collects metrics and raises alerts
        public class MigrationMonitor {
            // Collaborators are injected elsewhere; left non-final so the class compiles
            private MetricsCollector metricsCollector;
            private AlertManager alertManager;
            private final Map<String, MigrationMetrics> taskMetrics = new ConcurrentHashMap<>();
            
            public void recordTaskMetrics(TaskResult result) {
                MigrationMetrics metrics = new MigrationMetrics();
                metrics.duration = result.duration;
                metrics.keysMigrated = result.keysMigrated;
                metrics.success = result.success;
                metrics.timestamp = System.currentTimeMillis();
                
                taskMetrics.put(UUID.randomUUID().toString(), metrics);
                
                // Recompute aggregate performance metrics
                calculatePerformanceMetrics();
                
                // Check for anomalies
                checkForAnomalies(result);
            }
            
            private void calculatePerformanceMetrics() {
                // Average migration throughput
                long totalKeys = taskMetrics.values().stream()
                    .filter(m -> m.success)
                    .mapToLong(m -> m.keysMigrated)
                    .sum();
                
                long totalTime = taskMetrics.values().stream()
                    .filter(m -> m.success)
                    .mapToLong(m -> m.duration)
                    .sum();
                
                if (totalTime > 0) {
                    double keysPerSecond = totalKeys * 1000.0 / totalTime;
                    metricsCollector.recordGauge("migration.throughput", keysPerSecond);
                }
                
                // Success rate
                long totalTasks = taskMetrics.size();
                long successTasks = taskMetrics.values().stream()
                    .filter(m -> m.success)
                    .count();
                
                double successRate = totalTasks > 0 ? 
                    successTasks * 100.0 / totalTasks : 100;
                
                metricsCollector.recordGauge("migration.success.rate", successRate);
            }
            
            private void checkForAnomalies(TaskResult result) {
                // Slow-task alert
                if (result.duration > 30000) { // longer than 30 seconds
                    alertManager.sendAlert("MIGRATION_SLOW", 
                        String.format("Migration task too slow: %dms", result.duration));
                }
                
                // Failure-rate alert
                if (!result.success) {
                    // Count failures among the 10 most recent tasks
                    List<MigrationMetrics> recentMetrics = getRecentMetrics(10);
                    long recentFailures = recentMetrics.stream()
                        .filter(m -> !m.success)
                        .count();
                    
                    if (recentFailures >= 3) { // 3 or more failures in the last 10 tasks
                        alertManager.sendAlert("MIGRATION_FAILURE_RATE_HIGH",
                            String.format("Migration failure rate too high: %d/%d", recentFailures, recentMetrics.size()));
                    }
                }
            }
        }
    }
}


5. Fault Tolerance and Consistency Guarantees

5.1 Migration Transaction Guarantees

java
public class MigrationTransactionManager {
    
    // Two-phase commit (2PC) migration
    public class TwoPhaseCommitMigration {
        private final TransactionCoordinator coordinator;
        private final List<MigrationParticipant> participants;
        private final TransactionLog transactionLog;
        
        public MigrationResult migrateWith2PC(MigrationPlan plan) {
            String transactionId = generateTransactionId();
            MigrationResult result = new MigrationResult();
            
            try {
                // Phase 1: prepare
                boolean allPrepared = preparePhase(transactionId, plan);
                
                if (!allPrepared) {
                    // Roll back
                    rollbackPhase(transactionId);
                    result.success = false;
                    result.error = "prepare phase failed";
                    return result;
                }
                
                // Log that prepare completed
                transactionLog.logPrepared(transactionId, plan);
                
                // Phase 2: commit
                boolean allCommitted = commitPhase(transactionId);
                
                if (allCommitted) {
                    result.success = true;
                    transactionLog.logCommitted(transactionId);
                } else {
                    // In-doubt transaction: requires manual intervention
                    result.success = false;
                    result.requiresManualIntervention = true;
                    transactionLog.logInDoubt(transactionId);
                }
                
            } catch (Exception e) {
                log.error("2PC migration failed", e);
                result.success = false;
                result.error = e.getMessage();
                
                // Attempt rollback
                try {
                    rollbackPhase(transactionId);
                } catch (Exception rollbackEx) {
                    log.error("Rollback failed", rollbackEx);
                }
            }
            
            return result;
        }
        
        private boolean preparePhase(String transactionId, MigrationPlan plan) {
            List<CompletableFuture<Boolean>> futures = new ArrayList<>();
            
            for (MigrationParticipant participant : participants) {
                CompletableFuture<Boolean> future = CompletableFuture.supplyAsync(() -> {
                    try {
                        return participant.prepare(transactionId, plan);
                    } catch (Exception e) {
                        log.error("Participant prepare failed", e);
                        return false;
                    }
                });
                
                futures.add(future);
            }
            
            // Wait for every participant's vote
            List<Boolean> results = futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());
            
            // All participants must vote to prepare
            return results.stream().allMatch(Boolean::booleanValue);
        }
        
        private boolean commitPhase(String transactionId) {
            List<CompletableFuture<Boolean>> futures = new ArrayList<>();
            
            for (MigrationParticipant participant : participants) {
                CompletableFuture<Boolean> future = CompletableFuture.supplyAsync(() -> {
                    try {
                        participant.commit(transactionId);
                        return true;
                    } catch (Exception e) {
                        log.error("Participant commit failed", e);
                        return false;
                    }
                });
                
                futures.add(future);
            }
            
            List<Boolean> results = futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());
            
            // All participants must commit successfully
            return results.stream().allMatch(Boolean::booleanValue);
        }
        
        private void rollbackPhase(String transactionId) {
            for (MigrationParticipant participant : participants) {
                try {
                    participant.rollback(transactionId);
                } catch (Exception e) {
                    log.error("参与者回滚失败", e);
                }
            }
            
            transactionLog.logRolledBack(transactionId);
        }
    }
    
    // Migration participant contract
    public interface MigrationParticipant {
        boolean prepare(String transactionId, MigrationPlan plan);
        void commit(String transactionId);
        void rollback(String transactionId);
        MigrationStatus getStatus(String transactionId);
    }
    
    // State-machine-based migration participant implementation
    public class StatefulMigrationParticipant implements MigrationParticipant {
        private final DataStorage storage;
        private final Map<String, MigrationState> transactionStates = new ConcurrentHashMap<>();
        
        @Override
        public boolean prepare(String transactionId, MigrationPlan plan) {
            try {
                // 1. Check whether the migration can run at all
                if (!canMigrate(plan)) {
                    return false;
                }
                
                // 2. Lock the keys to be migrated
                Set<String> lockedKeys = lockKeys(plan.getKeysToMigrate());
                
                if (lockedKeys.size() != plan.getKeysToMigrate().size()) {
                    // Could not lock every key: release and vote no
                    unlockKeys(lockedKeys);
                    return false;
                }
                
                // 3. Take a migration snapshot (needed for rollback)
                MigrationSnapshot snapshot = createSnapshot(plan);
                
                // 4. Record the transaction state
                MigrationState state = new MigrationState();
                state.transactionId = transactionId;
                state.plan = plan;
                state.snapshot = snapshot;
                state.lockedKeys = lockedKeys;
                state.status = MigrationStatus.PREPARED;
                state.prepareTime = System.currentTimeMillis();
                
                transactionStates.put(transactionId, state);
                
                // 5. Persist the state so it survives a crash
                persistState(state);
                
                return true;
                
            } catch (Exception e) {
                log.error("Prepare phase failed", e);
                return false;
            }
        }
        
        @Override
        public void commit(String transactionId) {
            MigrationState state = transactionStates.get(transactionId);
            if (state == null || state.status != MigrationStatus.PREPARED) {
                throw new IllegalStateException("invalid transaction state");
            }
            
            try {
                // 1. Perform the data migration
                executeMigration(state.plan);
                
                // 2. Update the transaction state
                state.status = MigrationStatus.COMMITTED;
                state.commitTime = System.currentTimeMillis();
                
                // 3. Release the key locks
                unlockKeys(state.lockedKeys);
                
                // 4. Clean up the snapshot
                cleanupSnapshot(state.snapshot);
                
                // 5. Update the persisted state
                updatePersistedState(state);
                
            } catch (Exception e) {
                log.error("Commit phase failed", e);
                // Wrap in an unchecked exception: commit() declares no checked
                // exceptions, so rethrowing the caught Exception would not compile
                throw new RuntimeException("commit failed", e);
            }
        }
        
        @Override
        public void rollback(String transactionId) {
            MigrationState state = transactionStates.get(transactionId);
            if (state == null) {
                return;
            }
            
            try {
                // 1. Restore data from the snapshot
                restoreFromSnapshot(state.snapshot);
                
                // 2. Update the transaction state
                state.status = MigrationStatus.ROLLED_BACK;
                state.rollbackTime = System.currentTimeMillis();
                
                // 3. Release the key locks
                unlockKeys(state.lockedKeys);
                
                // 4. Clean up resources
                cleanupResources(state);
                
                // 5. Update the persisted state
                updatePersistedState(state);
                
            } catch (Exception e) {
                log.error("Rollback phase failed", e);
            }
        }
        
        // Crash-recovery routine
        public void recover() {
            // 1. Load incomplete transactions from persistent storage
            List<MigrationState> incompleteStates = loadIncompleteStates();
            
            for (MigrationState state : incompleteStates) {
                switch (state.status) {
                    case PREPARED:
                        // In doubt: the coordinator must decide commit or rollback
                        log.warn("Found in-doubt transaction: {}", state.transactionId);
                        break;
                        
                    case COMMITTING:
                        // Commit was in progress; try to finish it
                        try {
                            completeCommit(state);
                        } catch (Exception e) {
                            log.error("Failed to complete commit", e);
                        }
                        break;
                        
                    case ROLLING_BACK:
                        // Rollback was in progress; try to finish it
                        try {
                            completeRollback(state);
                        } catch (Exception e) {
                            log.error("Failed to complete rollback", e);
                        }
                        break;
                }
            }
        }
    }
}

6. Best-Practice Summary

6.1 Virtual Node Configuration Guidelines

text

Virtual node count formula:
virtual nodes = base count × weight factor × fault-tolerance factor

Recommended configurations:
1. Small cluster (<10 nodes):
   • Base virtual nodes: 100-200 per physical node
   • Total virtual nodes: 1,000-2,000

2. Medium cluster (10-100 nodes):
   • Base virtual nodes: 200-500 per physical node
   • Total virtual nodes: 5,000-20,000

3. Large cluster (>100 nodes):
   • Base virtual nodes: 500-1,000 per physical node
   • Total virtual nodes: 50,000-200,000

Weight factor guidelines:
• CPU performance: 1.0-2.0
• Memory capacity: 1.0-1.5
• Disk performance: 1.0-1.3
• Network bandwidth: 1.0-1.2
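To make the sizing formula above concrete, here is a minimal sketch — the class and method names are illustrative, not from any library — that turns a base count and the weight/fault-tolerance factors into a per-node virtual node count:

```java
// Illustrative sketch of the sizing formula:
// virtual nodes = base count x weight factor x fault-tolerance factor
public class VirtualNodeSizer {

    // Rounds to the nearest whole node; pass 1.0 for factors that don't apply
    public static int virtualNodeCount(int base, double weightFactor, double faultToleranceFactor) {
        return (int) Math.round(base * weightFactor * faultToleranceFactor);
    }

    public static void main(String[] args) {
        // A medium-cluster node with strong CPU (1.5) and an extra fault-tolerance margin (1.2)
        System.out.println(virtualNodeCount(200, 1.5, 1.2)); // 360
        // A baseline node: just the base count
        System.out.println(virtualNodeCount(200, 1.0, 1.0)); // 200
    }
}
```

A heavier node thus claims proportionally more of the hash ring, which is exactly how weighted load balancing falls out of virtual nodes.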

6.2 Migration Strategy Selection Matrix

text

Migration scenario            | Recommended strategy     | Key metric
------------------------------|--------------------------|---------------------------
Small expansion (<10% nodes)  | Incremental migration    | Migration time < 1 hour
Large expansion (>30% nodes)  | Dual-write migration     | Data consistency > 99.99%
Node replacement              | Parallel migration + 2PC | Zero data loss
Emergency scale-in            | Fast migration + backup  | Business impact < 1 minute
Data rebalancing              | Smooth migration         | Load variance < 10%
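The matrix can be encoded as a small selection routine. The enum names below are illustrative, and since the table leaves the 10-30% band unspecified, this sketch treats any expansion above 10% as large:

```java
// Illustrative encoding of the strategy selection matrix
public class StrategySelector {

    public enum Strategy { INCREMENTAL, DUAL_WRITE, PARALLEL_2PC, FAST_WITH_BACKUP, SMOOTH }

    public enum Scenario { EXPANSION, NODE_REPLACEMENT, EMERGENCY_SCALE_IN, REBALANCE }

    // expansionRatio: fraction of nodes being added (only meaningful for EXPANSION)
    public static Strategy select(Scenario scenario, double expansionRatio) {
        switch (scenario) {
            case EXPANSION:
                // Small expansions migrate incrementally; larger ones dual-write
                return expansionRatio < 0.10 ? Strategy.INCREMENTAL : Strategy.DUAL_WRITE;
            case NODE_REPLACEMENT:
                return Strategy.PARALLEL_2PC;       // zero data loss required
            case EMERGENCY_SCALE_IN:
                return Strategy.FAST_WITH_BACKUP;   // minimize business impact
            case REBALANCE:
            default:
                return Strategy.SMOOTH;             // keep load variance low
        }
    }

    public static void main(String[] args) {
        System.out.println(select(Scenario.EXPANSION, 0.05)); // INCREMENTAL
        System.out.println(select(Scenario.EXPANSION, 0.35)); // DUAL_WRITE
    }
}
```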

6.3 Performance Optimization Essentials

text

1. Migration parallelism:
   • 2-4 migration threads per CPU core
   • Batch size: 100-1000 records
   • Interval between batches: 10-100ms

2. Network:
   • Enable compression for payloads over 1KB
   • Use batch APIs to reduce round trips (RTT)
   • Prefer nearby targets (same data center first)

3. Storage:
   • Prefer SSD over HDD
   • Pre-warm the target node's cache
   • Monitor IOPS and throughput

4. Monitoring and alerting:
   • Migration rate
   • Error rate
   • Resource utilization
   • Data consistency
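The parallelism numbers in point 1 can be computed at startup rather than hard-coded. A hedged sketch (class and method names are illustrative):

```java
// Derive migration tuning parameters from the recommendations above
public class MigrationTuning {

    // Clamp a value into [min, max]
    static int clamp(int v, int min, int max) {
        return Math.max(min, Math.min(max, v));
    }

    // 2-4 migration threads per core; out-of-range multipliers are clamped
    public static int migrationThreads(int cpuCores, int multiplier) {
        return cpuCores * clamp(multiplier, 2, 4);
    }

    // Batch size bounded to the recommended 100-1000 records
    public static int batchSize(int requested) {
        return clamp(requested, 100, 1000);
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("threads = " + migrationThreads(cores, 2));
        System.out.println("batch   = " + batchSize(5000)); // clamped to 1000
    }
}
```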

6.4 Failure Handling Playbook

text

Failure type            | Handling                                     | Recovery time objective
------------------------|----------------------------------------------|------------------------
Network partition       | Pause migration, wait for recovery           | < 5 minutes
Node crash              | Automatic failover, reassign migration tasks | < 2 minutes
Storage failure         | Restore from backup, re-migrate              | < 30 minutes
Data inconsistency      | Automatic verification and repair            | < 10 minutes
Migration process crash | Automatic restart, resume from checkpoint    | < 1 minute
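The "resume from checkpoint" entry assumes progress is recorded as each key is confirmed migrated. A minimal in-memory sketch of the idea — the class name is illustrative, and a real implementation would write the checkpoint to durable storage before deleting any source data:

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative checkpoint for resumable migration over a sorted key list
public class MigrationCheckpoint {

    // Last key confirmed migrated; null means nothing migrated yet
    private volatile String lastMigratedKey;

    public void record(String key) {
        // In production this write must be durable before the source copy is deleted
        lastMigratedKey = key;
    }

    // Keys still to migrate after a restart, assuming the key list is sorted
    public List<String> remaining(List<String> sortedKeys) {
        if (lastMigratedKey == null) {
            return sortedKeys;
        }
        return sortedKeys.stream()
            .filter(k -> k.compareTo(lastMigratedKey) > 0)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        MigrationCheckpoint cp = new MigrationCheckpoint();
        cp.record("b");
        System.out.println(cp.remaining(java.util.Arrays.asList("a", "b", "c", "d"))); // [c, d]
    }
}
```

Because batches in the article are built from lexicographically sorted keys, a single "last migrated key" is enough to locate the restart position.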

With the complete virtual-node and data-migration design above, you can build a highly available, high-performance distributed storage system that supports smooth scale-out, scale-in, and data rebalancing.
