I. Design Requirements and Core Challenges
1. Core Requirements Matrix
java
// Properties that must be satisfied
1. Global uniqueness: IDs are never duplicated across a distributed environment
2. Trend-increasing: benefits database index performance (B+ tree)
3. High availability: 24/7 service with 99.99% availability
4. High performance: QPS > 100,000, latency < 1 ms
5. Scalability: supports linear scale-out as the business grows
6. Time-ordered: IDs reflect the order in which they were generated
// Issues to consider
- Clock rollback: inconsistent server clocks
- Machine ID assignment: how to guarantee no duplicates
- Data security: can IDs be guessed?
- Storage cost: ID length vs. storage space
II. Comparison of Mainstream Solutions
1. Solution Selection Matrix
text
┌────────────────────┬───────────────────────────────────────────┬──────────────────────────────────────────────┐
│ Solution           │ Pros                                      │ Cons                                         │
├────────────────────┼───────────────────────────────────────────┼──────────────────────────────────────────────┤
│ UUID               │ Simple, local generation, decentralized  │ Unordered, poor index performance, 128-bit  │
│ DB auto-increment  │ Strictly ordered, simple and reliable    │ Single point of failure, hard to scale      │
│ Redis INCR         │ High performance, atomic                 │ Persistence issues, memory dependency       │
│ Snowflake          │ Fast, trend-increasing, compact          │ Clock rollback, worker ID management        │
│ Leaf (Meituan)     │ Double buffer, solid monitoring          │ More complex, depends on DB/Redis           │
│ TinyID (Didi)      │ HTTP-friendly, segment mode              │ Network overhead, centralized               │
└────────────────────┴───────────────────────────────────────────┴──────────────────────────────────────────────┘
2. In-Depth Design of the Snowflake Algorithm
java
// 64-bit ID layout (classic version)
// 0 | 41-bit timestamp (~69 years) | 10-bit machine ID (5-bit datacenter + 5-bit worker) | 12-bit sequence
public class SnowflakeIdGenerator {
    // Bit widths of each part
    private static final long SEQUENCE_BITS = 12L;
    private static final long WORKER_ID_BITS = 5L;
    private static final long DATACENTER_ID_BITS = 5L;
    private static final long MAX_WORKER_ID = ~(-1L << WORKER_ID_BITS); // 31
    private static final long MAX_DATACENTER_ID = ~(-1L << DATACENTER_ID_BITS); // 31
    // Shift offsets
    private static final long WORKER_ID_SHIFT = SEQUENCE_BITS; // 12
    private static final long DATACENTER_ID_SHIFT = SEQUENCE_BITS + WORKER_ID_BITS; // 17
    private static final long TIMESTAMP_SHIFT = SEQUENCE_BITS + WORKER_ID_BITS + DATACENTER_ID_BITS; // 22
    // Custom epoch (e.g. 2024-01-01)
    private static final long EPOCH = 1704067200000L;
    // Sequence mask (4095)
    private static final long SEQUENCE_MASK = ~(-1L << SEQUENCE_BITS);
    // Instance state
    private long workerId;
    private long datacenterId;
    private long sequence = 0L;
    private long lastTimestamp = -1L;
    // Three strategies for handling clock rollback
    private enum ClockBackStrategy {
        THROW_EXCEPTION,   // throw an exception
        WAIT_AND_RETRY,    // wait for the clock to catch up
        USE_BACKUP_WORKER  // switch to a backup worker ID
    }
}
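The class above only defines the layout constants. A minimal sketch of how a nextId() method would assemble an ID from them (clock-rollback handling is deliberately left as a plain exception here, since the strategies for it are the subject of the next section):
java
// Sketch only: combines the constants defined above into a 64-bit ID
public synchronized long nextId() {
    long timestamp = System.currentTimeMillis();
    if (timestamp < lastTimestamp) {
        // Clock moved backwards; see section III for real handling strategies
        throw new IllegalStateException("Clock moved backwards by "
                + (lastTimestamp - timestamp) + " ms");
    }
    if (timestamp == lastTimestamp) {
        // Same millisecond: bump the sequence; spin to the next millisecond when it overflows
        sequence = (sequence + 1) & SEQUENCE_MASK;
        if (sequence == 0) {
            while (timestamp <= lastTimestamp) {
                timestamp = System.currentTimeMillis();
            }
        }
    } else {
        sequence = 0L;
    }
    lastTimestamp = timestamp;
    // Assemble: | timestamp | datacenter | worker | sequence |
    return ((timestamp - EPOCH) << TIMESTAMP_SHIFT)
            | (datacenterId << DATACENTER_ID_SHIFT)
            | (workerId << WORKER_ID_SHIFT)
            | sequence;
}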
III. Production-Grade Snowflake Optimizations
1. Comprehensive Clock-Rollback Handling
java
public class SnowflakeWithClockProtection {
    private long lastTimestamp = -1L;
    private long sequence = 0L;
    private int clockBackwardCount = 0;
    private final int maxClockBackwardTolerance = 10; // max tolerated rollback occurrences

    // Strategy 1: wait for the clock to catch up (recommended)
    private long waitForClock(long lastTimestamp) {
        long timestamp = System.currentTimeMillis();
        while (timestamp <= lastTimestamp) {
            long offset = lastTimestamp - timestamp;
            if (offset > 1000) { // rollback larger than 1 second
                log.warn("Clock moved backwards by more than 1s: {}ms", offset);
                sendAlert(); // raise an alert
            }
            try {
                // Sleep in capped steps until the clock catches up
                long sleepTime = Math.min(offset, 100);
                Thread.sleep(sleepTime);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RuntimeException("Interrupted while waiting for the clock", e);
            }
            timestamp = System.currentTimeMillis();
        }
        return timestamp;
    }

    // Strategy 2: switch to a backup worker ID
    private long backupWorkerId = -1L;
    private boolean usingBackup = false;

    private synchronized long nextIdWithBackup() {
        long timestamp = timeGen();
        if (timestamp < lastTimestamp) {
            clockBackwardCount++;
            if (clockBackwardCount > maxClockBackwardTolerance) {
                if (!usingBackup && backupWorkerId != -1) {
                    // Switch to the backup worker ID
                    this.workerId = backupWorkerId;
                    usingBackup = true;
                    log.warn("Switched to backup worker ID: {}", backupWorkerId);
                } else {
                    throw new ClockBackwardException("Clock moved backwards and no backup worker ID is available");
                }
            }
            // Try waiting for the clock to catch up
            timestamp = waitForClock(lastTimestamp);
        }
        if (lastTimestamp == timestamp) {
            sequence = (sequence + 1) & SEQUENCE_MASK;
            if (sequence == 0) {
                timestamp = tilNextMillis(lastTimestamp);
            }
        } else {
            sequence = 0L;
        }
        lastTimestamp = timestamp;
        return ((timestamp - EPOCH) << TIMESTAMP_SHIFT) |
                (datacenterId << DATACENTER_ID_SHIFT) |
                (workerId << WORKER_ID_SHIFT) |
                sequence;
    }

    // Strategy 3: record historical timestamps and validate against them
    private ConcurrentSkipListMap<Long, Long> timestampHistory =
            new ConcurrentSkipListMap<>();

    private boolean validateTimestamp(long currentTimestamp) {
        // Check whether the current time is smaller than the largest time seen so far
        Map.Entry<Long, Long> latest = timestampHistory.lastEntry(); // lastEntry() returns null on an empty map; lastKey() would throw
        if (latest != null && currentTimestamp < latest.getKey()) {
            log.error("Clock rollback detected, historical max: {}, current: {}",
                    latest.getKey(), currentTimestamp);
            return false;
        }
        // Record the current timestamp (keep the most recent 1000)
        timestampHistory.put(currentTimestamp, currentTimestamp);
        if (timestampHistory.size() > 1000) {
            timestampHistory.pollFirstEntry();
        }
        return true;
    }
}
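Both timeGen() and tilNextMillis() are called above but never shown; a minimal sketch of their usual implementations (the time source is kept behind a method so it can be overridden in tests):
java
// Current time source; overridable for testing
private long timeGen() {
    return System.currentTimeMillis();
}

// Spin until the clock moves past the given timestamp (used when a millisecond's sequence is exhausted)
private long tilNextMillis(long lastTimestamp) {
    long timestamp = timeGen();
    while (timestamp <= lastTimestamp) {
        timestamp = timeGen();
    }
    return timestamp;
}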
2. Dynamic Worker ID Allocation
java
// Dynamic worker ID allocation based on ZooKeeper
public class DynamicWorkerIdAllocator {
    private ZooKeeper zkClient;
    private String basePath = "/snowflake/workers";
    private long workerId = -1;
    private String workerNodePath;

    public long allocateWorkerId() throws Exception {
        // 1. Create an ephemeral sequential node
        workerNodePath = zkClient.create(
                basePath + "/worker-",
                null,
                ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL_SEQUENTIAL
        );
        // 2. Parse the node's sequence number
        String sequenceStr = workerNodePath.substring(
                workerNodePath.lastIndexOf('-') + 1);
        long sequence = Long.parseLong(sequenceStr);
        // 3. Map it into the worker ID range
        workerId = sequence % (MAX_WORKER_ID + 1);
        // 4. Watch for node changes
        setupWatch();
        // 5. Renew the lease periodically
        startHeartbeat();
        return workerId;
    }

    private void setupWatch() throws Exception {
        // Watch the parent node so changes to other workers are noticed
        zkClient.getChildren(basePath, event -> {
            if (event.getType() == EventType.NodeChildrenChanged) {
                // Re-check for worker ID conflicts
                checkAndReallocateIfNeeded();
            }
        });
    }

    private void startHeartbeat() {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                // Touch the node's data to keep the session alive
                zkClient.setData(workerNodePath,
                        new byte[]{}, -1);
            } catch (Exception e) {
                log.error("Worker ID heartbeat failed", e);
                // Try to re-allocate
                reallocateWorkerId();
            }
        }, 30, 30, TimeUnit.SECONDS);
    }
}
// Worker ID allocation based on Redis (no ZooKeeper dependency)
public class RedisWorkerIdAllocator {
    private RedisTemplate<String, String> redisTemplate;
    private static final String WORKER_KEY = "snowflake:workers";
    private static final long LOCK_EXPIRE = 30000; // 30 seconds

    public long allocateWorkerId() {
        // Allocate the ID under a Redis distributed lock
        String lockKey = "snowflake:worker_lock";
        String lockValue = UUID.randomUUID().toString();
        try {
            // Try to acquire the lock
            Boolean locked = redisTemplate.opsForValue()
                    .setIfAbsent(lockKey, lockValue, LOCK_EXPIRE, TimeUnit.MILLISECONDS);
            if (Boolean.TRUE.equals(locked)) {
                // Fetch all worker IDs that are already assigned
                Set<String> assignedIds = redisTemplate.opsForSet()
                        .members(WORKER_KEY);
                if (assignedIds == null) {
                    assignedIds = Collections.emptySet();
                }
                // Find an available worker ID
                for (long id = 0; id <= MAX_WORKER_ID; id++) {
                    if (!assignedIds.contains(String.valueOf(id))) {
                        // Claim and record it
                        redisTemplate.opsForSet().add(WORKER_KEY, String.valueOf(id));
                        redisTemplate.expire(WORKER_KEY, 3600, TimeUnit.SECONDS); // 1 hour
                        return id;
                    }
                }
                throw new RuntimeException("No worker ID available");
            }
        } finally {
            // Release the lock (a Lua script keeps check-and-delete atomic)
            String luaScript =
                    "if redis.call('get', KEYS[1]) == ARGV[1] then " +
                    "    return redis.call('del', KEYS[1]) " +
                    "else " +
                    "    return 0 " +
                    "end";
            redisTemplate.execute(
                    new DefaultRedisScript<>(luaScript, Long.class),
                    Collections.singletonList(lockKey),
                    lockValue
            );
        }
        throw new RuntimeException("Failed to acquire a worker ID");
    }
}
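One gap worth noting in the Redis-based allocator: the WORKER_KEY set expires after one hour, so a long-running instance has to renew its claim or another node may eventually grab the same worker ID. A hedged sketch of such a renewal heartbeat, reusing the redisTemplate and key from the class above (the interval is an assumption, chosen well below the TTL):
java
// Periodically re-assert ownership of this worker ID so it survives the key's TTL
private void startWorkerIdRenewal(long workerId) {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(() -> {
        try {
            redisTemplate.opsForSet().add(WORKER_KEY, String.valueOf(workerId));
            redisTemplate.expire(WORKER_KEY, 3600, TimeUnit.SECONDS);
        } catch (Exception e) {
            log.error("Failed to renew worker ID {}", workerId, e);
        }
    }, 10, 10, TimeUnit.MINUTES);
}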
IV. In-Depth Design of the Leaf-Segment Mode
1. Database Table Design
sql
CREATE TABLE leaf_alloc (
    biz_tag VARCHAR(128) NOT NULL COMMENT 'business tag',
    max_id BIGINT NOT NULL COMMENT 'current maximum allocated ID',
    step INT NOT NULL COMMENT 'segment step size',
    version BIGINT NOT NULL COMMENT 'optimistic lock version',
    description VARCHAR(256) COMMENT 'description',
    update_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    PRIMARY KEY (biz_tag),
    INDEX idx_update_time (update_time)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='Leaf segment allocation table';
-- Seed data
INSERT INTO leaf_alloc (biz_tag, max_id, step, version, description) VALUES
('order', 1000, 1000, 1, 'order IDs'),
('user', 10000, 2000, 1, 'user IDs'),
('payment', 5000, 1500, 1, 'payment IDs');
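The Java code in the next subsection calls leafAllocMapper.selectForUpdate and leafAllocMapper.updateMaxId; the mapper SQL itself is not shown in the original. Against the table above, the optimistic-lock statements would plausibly look like this (a sketch using MyBatis-style placeholders, not the actual Leaf source):
sql
-- Read the current segment state for a business tag
SELECT biz_tag, max_id, step, version FROM leaf_alloc WHERE biz_tag = #{bizTag};

-- Advance max_id, guarded by the optimistic-lock version;
-- 0 rows affected means another node won the race and the caller should retry
UPDATE leaf_alloc
SET max_id = #{newMaxId}, version = version + 1
WHERE biz_tag = #{bizTag} AND version = #{version};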
2. Double-Buffer Implementation
java
public class DoubleBufferSegment {
    // The buffer currently in use
    private SegmentBuffer currentBuffer;
    // The pre-loaded buffer
    private SegmentBuffer nextBuffer;
    // Whether the next buffer is currently being loaded
    private volatile boolean loadingNext = false;
    // Threshold: start pre-loading once this fraction of the segment is used
    private static final double LOAD_THRESHOLD = 0.8;
    // Async loader and DB mapper (wiring omitted)
    private final ExecutorService executorService = Executors.newSingleThreadExecutor();
    private LeafAllocMapper leafAllocMapper;

    class SegmentBuffer {
        private volatile Segment segment;
        private AtomicLong value = new AtomicLong();
        private volatile boolean ready = false;

        public boolean isReady() {
            return ready;
        }

        public boolean isExhausted() {
            return value.get() >= segment.getMax();
        }

        public boolean isNearlyExhausted() {
            long current = value.get();
            // Nearly exhausted once LOAD_THRESHOLD of the segment range has been consumed
            long threshold = segment.getMin() +
                    (long) ((segment.getMax() - segment.getMin()) * LOAD_THRESHOLD);
            return current >= threshold;
        }
    }

    class Segment {
        private long min;
        private long max;
        private int step;
        // constructor and getters omitted
    }

    public synchronized long nextId(String bizTag) {
        // If the current buffer is exhausted and the next one is ready, swap them
        if (currentBuffer.isExhausted() && nextBuffer.isReady()) {
            currentBuffer = nextBuffer;
            nextBuffer = new SegmentBuffer();
            loadingNext = false;
        }
        // If the current buffer is nearly exhausted, load the next one asynchronously
        if (currentBuffer.isNearlyExhausted() && !loadingNext) {
            loadingNext = true;
            loadNextBufferAsync(bizTag);
        }
        long id = currentBuffer.value.getAndIncrement();
        if (id >= currentBuffer.segment.getMax()) {
            // Wait for the next buffer to finish loading
            return waitAndRetry(bizTag);
        }
        return id;
    }

    private void loadNextBufferAsync(String bizTag) {
        executorService.submit(() -> {
            try {
                Segment newSegment = loadSegmentFromDB(bizTag);
                nextBuffer.segment = newSegment;
                nextBuffer.ready = true;
            } catch (Exception e) {
                log.error("Failed to load segment", e);
                loadingNext = false;
            }
        });
    }

    private Segment loadSegmentFromDB(String bizTag) {
        // Advance the segment using an optimistic lock
        int retryCount = 0;
        while (retryCount < 3) {
            LeafAlloc alloc = leafAllocMapper.selectForUpdate(bizTag);
            // Compute the new max_id
            long newMaxId = alloc.getMaxId() + alloc.getStep();
            // Try to update
            int updated = leafAllocMapper.updateMaxId(
                    bizTag, newMaxId, alloc.getVersion());
            if (updated > 0) {
                return new Segment(alloc.getMaxId(), newMaxId, alloc.getStep());
            }
            retryCount++;
            try {
                Thread.sleep(100 * retryCount); // linear backoff between retries
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        throw new RuntimeException("Failed to obtain a segment");
    }
}
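waitAndRetry is referenced in nextId above but never defined. A minimal sketch, assuming the swap logic in nextId is simply retried once the asynchronously loaded buffer becomes ready (the timings are illustrative; note it runs while the caller still holds the synchronized lock, just as the original design implies):
java
// Wait briefly for the next buffer, then retry nextId (which performs the swap)
private long waitAndRetry(String bizTag) {
    for (int i = 0; i < 50; i++) {        // wait at most ~500 ms
        if (nextBuffer.isReady()) {
            return nextId(bizTag);
        }
        try {
            Thread.sleep(10);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
    }
    throw new RuntimeException("Segment buffer not ready in time for bizTag " + bizTag);
}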
3. Multi-Level Cache Optimization
java
public class MultiLevelCacheIdGenerator {
    // Level 1 cache: local memory (millisecond access)
    private ConcurrentHashMap<String, BlockingQueue<Long>> localCache =
            new ConcurrentHashMap<>();
    // Level 2 cache: Redis (second-level access)
    private RedisTemplate<String, String> redisTemplate;
    // Level 3: the database (persistent source of truth)
    private static final int LOCAL_CACHE_SIZE = 1000;
    private static final int REDIS_CACHE_SIZE = 10000;

    public void preloadIds(String bizTag, int count) {
        // 1. Check the local cache
        BlockingQueue<Long> localQueue = localCache.get(bizTag);
        if (localQueue == null || localQueue.size() < LOCAL_CACHE_SIZE / 2) {
            // 2. Check the Redis cache
            Long redisSize = redisTemplate.opsForList().size("id:" + bizTag);
            if (redisSize == null || redisSize < REDIS_CACHE_SIZE / 2) {
                // 3. Load a batch from the database
                List<Long> ids = loadBatchFromDB(bizTag,
                        Math.max(count, REDIS_CACHE_SIZE));
                // Fill Redis
                for (Long id : ids) {
                    redisTemplate.opsForList().rightPush("id:" + bizTag,
                            String.valueOf(id));
                }
                redisTemplate.expire("id:" + bizTag, 3600, TimeUnit.SECONDS);
            }
            // Refill the local cache from Redis, popping so the same ID is never handed out twice
            BlockingQueue<Long> queue = localCache.computeIfAbsent(bizTag,
                    k -> new LinkedBlockingQueue<>(LOCAL_CACHE_SIZE));
            for (int i = 0; i < LOCAL_CACHE_SIZE; i++) {
                String idStr = redisTemplate.opsForList().leftPop("id:" + bizTag);
                if (idStr == null) {
                    break;
                }
                queue.offer(Long.parseLong(idStr));
            }
        }
    }
}
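The class above only shows the preload path. A sketch of the consuming side, which serves IDs from the local queue and falls back to preloading when it runs dry (method and constant names follow the class above; the timeout is illustrative):
java
public long nextId(String bizTag) {
    BlockingQueue<Long> queue = localCache.computeIfAbsent(bizTag,
            k -> new LinkedBlockingQueue<>(LOCAL_CACHE_SIZE));
    Long id = queue.poll();
    if (id != null) {
        return id;
    }
    // Local cache exhausted: refill from Redis/DB, then wait briefly for an ID
    preloadIds(bizTag, LOCAL_CACHE_SIZE);
    try {
        id = queue.poll(50, TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    if (id == null) {
        throw new RuntimeException("No ID available for bizTag " + bizTag);
    }
    return id;
}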
V. Hybrid-Mode Architecture Design
1. Leaf-Snowflake Hybrid Mode
java
public class HybridIdGenerator {
    // Choose the generation mode based on the characteristics of the business
    private enum GenerateMode {
        SNOWFLAKE, // high concurrency, no ordering requirement
        SEGMENT,   // batch operations, consecutive IDs needed
        AUTO       // choose automatically
    }

    private SnowflakeIdGenerator snowflakeGenerator;
    private SegmentIdGenerator segmentGenerator;

    public long nextId(String bizTag, GenerateMode mode) {
        switch (mode) {
            case SNOWFLAKE:
                return snowflakeGenerator.nextId();
            case SEGMENT:
                return segmentGenerator.nextId(bizTag);
            case AUTO:
                // Pick automatically based on the business tag
                return selectByBizTag(bizTag);
            default:
                throw new IllegalArgumentException("Unsupported generation mode");
        }
    }

    private long selectByBizTag(String bizTag) {
        // Business rule mapping
        Map<String, GenerateMode> bizRules = new HashMap<>();
        bizRules.put("order", GenerateMode.SEGMENT);     // orders need consecutive IDs
        bizRules.put("log", GenerateMode.SNOWFLAKE);     // logs suit time-ordered IDs
        bizRules.put("message", GenerateMode.SNOWFLAKE); // messages are high-concurrency
        GenerateMode mode = bizRules.getOrDefault(bizTag, GenerateMode.AUTO);
        if (mode == GenerateMode.AUTO) {
            // Fall back to choosing by the business tag's QPS
            long qps = getBizQPS(bizTag);
            return qps > 10000 ?
                    snowflakeGenerator.nextId() :
                    segmentGenerator.nextId(bizTag);
        }
        return nextId(bizTag, mode);
    }
}
2. Multi-Data-Center Deployment
java
public class MultiDataCenterIdGenerator {
    // 64-bit ID layout: 0 | 41-bit timestamp | 3-bit datacenter | 5-bit worker | 12-bit sequence (2 bits spare)
    private static final long DATACENTER_BITS = 3L; // up to 8 data centers
    private static final long WORKER_BITS = 5L;     // 32 machines per data center
    private static final long SEQUENCE_BITS = 12L;
    // Data-center ID (obtained from the configuration center)
    private long datacenterId;
    // Clock synchronization across data centers
    private NTPTimeSynchronizer timeSynchronizer;

    public class NTPTimeSynchronizer {
        private List<String> ntpServers = Arrays.asList(
                "ntp1.aliyun.com",
                "ntp2.aliyun.com",
                "time.windows.com"
        );
        private long timeOffset = 0;
        private ScheduledExecutorService scheduler;

        public void startSync() {
            scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    syncTimeWithNTPServer();
                } catch (Exception e) {
                    log.error("NTP time synchronization failed", e);
                }
            }, 0, 60, TimeUnit.SECONDS); // sync once a minute
        }

        private void syncTimeWithNTPServer() {
            // Keep the offset with the smallest magnitude, preserving its sign
            long bestOffset = 0;
            long bestMagnitude = Long.MAX_VALUE;
            for (String server : ntpServers) {
                try {
                    long offset = calculateTimeOffset(server);
                    if (Math.abs(offset) < bestMagnitude) {
                        bestMagnitude = Math.abs(offset);
                        bestOffset = offset;
                    }
                } catch (Exception e) {
                    log.warn("NTP server {} sync failed", server, e);
                }
            }
            if (bestMagnitude != Long.MAX_VALUE) {
                this.timeOffset = bestOffset;
                log.info("NTP time sync completed, offset: {}ms", timeOffset);
            }
        }

        public long getCurrentTime() {
            return System.currentTimeMillis() + timeOffset;
        }
    }
}
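calculateTimeOffset is not shown above. One common way to implement it is with the NTP client in Apache Commons Net; this is an assumption on my part, since the original does not say which library it uses:
java
import java.net.InetAddress;
import org.apache.commons.net.ntp.NTPUDPClient;
import org.apache.commons.net.ntp.TimeInfo;

// Returns the estimated offset in ms between the NTP server's clock and the local clock
private long calculateTimeOffset(String server) throws Exception {
    NTPUDPClient client = new NTPUDPClient();
    client.setDefaultTimeout(3000); // don't hang on an unreachable server
    try {
        TimeInfo info = client.getTime(InetAddress.getByName(server));
        info.computeDetails(); // fills in offset/delay from the NTP message
        return info.getOffset() != null ? info.getOffset() : 0L;
    } finally {
        client.close();
    }
}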
VI. High Availability and Monitoring
1. Circuit Breaking and Degradation
java
public class CircuitBreakerIdGenerator {
    private IdGenerator primaryGenerator;  // primary generator
    private IdGenerator fallbackGenerator; // fallback generator
    private CircuitBreaker circuitBreaker;

    private static class CircuitBreaker {
        private int failureThreshold = 10;  // failure threshold
        private long resetTimeout = 60000;  // reset timeout (1 minute)
        private AtomicInteger failureCount = new AtomicInteger(0);
        private volatile long lastFailureTime = 0;
        private volatile State state = State.CLOSED;

        enum State { CLOSED, OPEN, HALF_OPEN }

        public boolean allowRequest() {
            if (state == State.CLOSED) {
                return true;
            }
            if (state == State.OPEN) {
                // Check whether we should move into the half-open state
                if (System.currentTimeMillis() - lastFailureTime > resetTimeout) {
                    state = State.HALF_OPEN;
                    return true;
                }
                return false;
            }
            // HALF_OPEN lets a small number of trial requests through
            return true;
        }

        public void recordFailure() {
            int count = failureCount.incrementAndGet();
            lastFailureTime = System.currentTimeMillis();
            if (count >= failureThreshold) {
                state = State.OPEN;
                log.warn("Circuit breaker opened, switching to fallback mode");
            }
        }

        public void recordSuccess() {
            failureCount.set(0);
            state = State.CLOSED;
        }
    }

    public long nextId() {
        if (circuitBreaker.allowRequest()) {
            try {
                long id = primaryGenerator.nextId();
                circuitBreaker.recordSuccess();
                return id;
            } catch (Exception e) {
                circuitBreaker.recordFailure();
                log.error("Primary ID generator failed, falling back", e);
                // Switch to the fallback generator
                return fallbackGenerator.nextId();
            }
        } else {
            // Breaker is open: go straight to the fallback
            return fallbackGenerator.nextId();
        }
    }
}
2. Comprehensive Monitoring Metrics
java
public class IdGeneratorMetrics {
    // Rate meters and histogram (Dropwizard-style types shown for illustration; Micrometer registration below)
    private Meter generatedIds;          // ID generation rate
    private Meter failedGenerations;     // number of failed generations
    private Histogram latencyHistogram;  // generation latency distribution
    // Gauges
    private Gauge cacheHitRate;          // cache hit ratio
    private Gauge queueSize;             // pending queue size
    private Gauge workerIdUsage;         // worker ID usage ratio
    // Special events
    private Counter clockBackwardEvents; // clock rollback occurrences
    private Counter segmentExhaustions;  // segment exhaustion occurrences

    // Record one generation
    public void recordGeneration(long startTime) {
        long duration = System.currentTimeMillis() - startTime;
        generatedIds.mark();
        latencyHistogram.update(duration);
        // Warn when latency is unusually high
        if (duration > 100) { // more than 100 ms
            log.warn("ID generation latency is high: {}ms", duration);
        }
    }

    public void recordClockBackward(long offset) {
        clockBackwardEvents.increment();
        // Alert level depends on how far the clock moved back
        if (offset > 1000) {
            sendAlert(AlertLevel.CRITICAL,
                    "Clock moved backwards by more than 1s: " + offset + "ms");
        } else if (offset > 100) {
            sendAlert(AlertLevel.WARNING,
                    "Clock moved backwards by more than 100ms: " + offset + "ms");
        }
    }

    // Expose metrics to Prometheus
    @Bean
    public MeterRegistryCustomizer<MeterRegistry> metricsCustomizer() {
        return registry -> {
            registry.config().commonTags("application", "id-generator");
            // Register custom metrics
            Gauge.builder("id_generator.cache.hit_rate",
                    () -> cacheHitRate.getValue())
                    .description("ID generator cache hit ratio")
                    .register(registry);
            Counter.builder("id_generator.clock.backward.total")
                    .description("Total number of clock rollback events")
                    .register(registry);
        };
    }
}
VII. Security and Anti-Abuse Design
1. Making IDs Hard to Guess
java
public class SecureIdGenerator {
    // Scheme 1: ID obfuscation (reversible)
    public long generateObfuscatedId(long originalId) {
        // Obfuscate with a linear congruential mapping
        // (only round-trips IDs below 2^31, since the mapping works modulo 2^31)
        long a = 1664525L;
        long c = 1013904223L;
        long m = (1L << 31);
        return (a * originalId + c) % m;
    }

    // Scheme 2: ID encryption (not reversible without the key)
    private SecretKey secretKey;             // initialized elsewhere
    private IvParameterSpec ivParameterSpec; // initialized elsewhere

    public String generateEncryptedId(long originalId) throws Exception {
        // Encrypt with AES
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, secretKey, ivParameterSpec);
        byte[] encrypted = cipher.doFinal(
                String.valueOf(originalId).getBytes());
        return Base64.getUrlEncoder().encodeToString(encrypted);
    }

    // Scheme 3: append a check digit
    public long generateIdWithChecksum(long originalId) {
        // Compute the checksum (simple example)
        long checksum = calculateChecksum(originalId);
        // Append the checksum as the lowest 4 bits of the ID
        return (originalId << 4) | (checksum & 0xF);
    }

    private long calculateChecksum(long id) {
        // Simple digit-sum checksum
        long sum = 0;
        while (id > 0) {
            sum += id % 10;
            id /= 10;
        }
        return sum % 16; // 4-bit checksum
    }

    // Scheme 4: rate limiting against scraping
    @Slf4j
    public class IdGeneratorRateLimiter {
        private RateLimiter rateLimiter;
        private Map<String, Integer> ipRequestCount = new ConcurrentHashMap<>();

        public boolean allowRequest(String clientIp, String bizTag) {
            // 1. Token-bucket rate limiting
            if (!rateLimiter.tryAcquire()) {
                log.warn("ID generation service throttled, client IP: {}", clientIp);
                return false;
            }
            // 2. Per-IP frequency limit (a real implementation would reset this counter every second)
            int count = ipRequestCount.getOrDefault(clientIp, 0);
            if (count > 1000) { // at most 1000 requests per second
                log.warn("IP exceeded request limit: {}", clientIp);
                return false;
            }
            ipRequestCount.put(clientIp, count + 1);
            // 3. Per-business-tag quota
            if (!checkBizQuota(bizTag)) {
                return false;
            }
            return true;
        }
    }
}
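Scheme 1 is reversible because the multiplier 1664525 is odd and therefore has a modular inverse modulo 2^31 (again, only for original IDs below 2^31). A sketch of the inverse mapping, computing the inverse at runtime rather than hardcoding it:
java
import java.math.BigInteger;

// Inverse of generateObfuscatedId(): recovers the original ID for values below 2^31
public long deobfuscateId(long obfuscatedId) {
    long a = 1664525L;
    long c = 1013904223L;
    long m = 1L << 31;
    // Modular inverse of a modulo 2^31 (exists because a is odd)
    long aInverse = BigInteger.valueOf(a).modInverse(BigInteger.valueOf(m)).longValue();
    return Math.floorMod(aInverse * (obfuscatedId - c), m);
}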
VIII. Practical Deployment
1. Kubernetes Deployment Configuration
yaml
# id-generator-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: id-generator
spec:
  replicas: 3
  selector:
    matchLabels:
      app: id-generator
  template:
    metadata:
      labels:
        app: id-generator
    spec:
      containers:
        - name: id-generator
          image: id-generator:1.0.0
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          env:
            - name: WORKER_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name  # derive the worker ID from the pod name
            - name: DATACENTER_ID
              value: "1"
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 15
---
# id-generator-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: id-generator
spec:
  selector:
    app: id-generator
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
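Note that the WORKER_ID env var above receives the raw pod name, not a number, so the application still has to map it into the 0–31 worker range (MAX_WORKER_ID from section II). A hedged sketch of one way to do that; the hash-and-modulo fallback can collide under a plain Deployment (random pod suffixes), so StatefulSet ordinals or the ZK/Redis allocators from section III are safer:
java
// Derive a worker ID in [0, MAX_WORKER_ID] from the pod name injected by Kubernetes (illustrative only)
public static long workerIdFromPodName() {
    String podName = System.getenv("WORKER_ID"); // set from metadata.name in the manifest
    if (podName == null || podName.isEmpty()) {
        throw new IllegalStateException("WORKER_ID environment variable is not set");
    }
    // StatefulSet pods end in "-<ordinal>"; use the ordinal directly when present
    int dash = podName.lastIndexOf('-');
    if (dash >= 0) {
        try {
            return Long.parseLong(podName.substring(dash + 1)) % (MAX_WORKER_ID + 1);
        } catch (NumberFormatException ignored) {
            // non-numeric suffix: fall through to hashing
        }
    }
    return Math.floorMod(podName.hashCode(), (int) (MAX_WORKER_ID + 1));
}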
2. Active-Active Multi-Data-Center Deployment
java
public class MultiActiveIdGenerator {
    // Global configuration center (e.g. Nacos, Apollo)
    private ConfigService configService;
    // Data-center configuration
    private DataCenterConfig dcConfig;

    class DataCenterConfig {
        private String dcId;             // data-center ID
        private long dcTimestampOffset;  // timestamp offset
        private List<String> peers;      // addresses of the other data centers
        private boolean primary;         // whether this is the primary data center
        // getters omitted
    }

    // Cross-data-center ID synchronization
    private DCSynchronizer synchronizer;

    class DCSynchronizer {
        private ScheduledExecutorService scheduler;
        private Map<String, Long> lastSyncedIds = new ConcurrentHashMap<>();

        public void startSync() {
            scheduler = Executors.newScheduledThreadPool(3);
            // Periodically synchronize the maximum IDs
            scheduler.scheduleAtFixedRate(() -> {
                syncMaxIdsWithPeers();
            }, 0, 5, TimeUnit.SECONDS);
            // Periodically synchronize the clocks
            scheduler.scheduleAtFixedRate(() -> {
                syncTimeWithPeers();
            }, 0, 60, TimeUnit.SECONDS);
        }

        private void syncMaxIdsWithPeers() {
            long localMaxId = getLocalMaxId();
            for (String peer : dcConfig.getPeers()) {
                try {
                    long peerMaxId = queryPeerMaxId(peer);
                    // If a peer's ID is larger, adjust the local offset
                    if (peerMaxId > localMaxId) {
                        adjustLocalOffset(peerMaxId - localMaxId);
                    }
                } catch (Exception e) {
                    log.warn("Failed to sync with peer {}", peer, e);
                }
            }
        }
    }
}
IX. Load Testing and Performance Tuning
1. JMeter Load-Test Configuration
xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- id-generator-test.jmx -->
<jmeterTestPlan version="1.2" properties="5.0">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="ID generator load test">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
    </TestPlan>
    <hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Concurrency test">
        <intProp name="ThreadGroup.num_threads">1000</intProp>
        <intProp name="ThreadGroup.ramp_time">60</intProp>
        <longProp name="ThreadGroup.start_time">0</longProp>
        <longProp name="ThreadGroup.end_time">0</longProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <intProp name="ThreadGroup.duration">300</intProp>
        <intProp name="ThreadGroup.delay">0</intProp>
        <boolProp name="ThreadGroup.same_user_on_next_iteration">true</boolProp>
      </ThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Generate ID">
          <boolProp name="HTTPSampler.postBodyRaw">false</boolProp>
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
            <collectionProp name="Arguments.arguments">
              <elementProp name="" elementType="HTTPArgument">
                <boolProp name="HTTPArgument.always_encode">false</boolProp>
                <stringProp name="Argument.value">{"biz_tag":"test"}</stringProp>
                <stringProp name="Argument.metadata">=</stringProp>
              </elementProp>
            </collectionProp>
          </elementProp>
          <stringProp name="HTTPSampler.domain">localhost</stringProp>
          <stringProp name="HTTPSampler.port">8080</stringProp>
          <stringProp name="HTTPSampler.protocol">http</stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/api/id/next</stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
        </HTTPSamplerProxy>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>
2. Performance Optimization Strategies
java
public class IdGeneratorOptimizer {
    // Strategy 1: batch generation to cut down on RPC calls
    public List<Long> batchGenerate(int batchSize) {
        List<Long> ids = new ArrayList<>(batchSize);
        for (int i = 0; i < batchSize; i++) {
            ids.add(nextId());
        }
        return ids;
    }

    // Strategy 2: pre-generate IDs locally
    private BlockingQueue<Long> idQueue = new LinkedBlockingQueue<>(10000);

    public void startPreGenerator() {
        Thread preGenThread = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                if (idQueue.size() < 5000) {
                    List<Long> batch = batchGenerate(1000);
                    idQueue.addAll(batch);
                }
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        preGenThread.setDaemon(true);
        preGenThread.start();
    }

    // Strategy 3: tune the database connection pool
    public void optimizeConnectionPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/id_generator");
        config.setUsername("root");
        config.setPassword("password");
        config.setMaximumPoolSize(20);
        config.setMinimumIdle(10);
        config.setConnectionTimeout(30000);
        config.setIdleTimeout(600000);
        config.setMaxLifetime(1800000);
        // MySQL prepared-statement caching
        config.addDataSourceProperty("cachePrepStmts", "true");
        config.addDataSourceProperty("prepStmtCacheSize", "250");
        config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
    }
}
X. Summary and Selection Guidance
🎯 Selection Decision Tree
text
Need a globally unique ID?
├── Yes → Need increasing order?
│        ├── Yes → Need strictly consecutive IDs?
│        │        ├── Yes → Database auto-increment (single database)
│        │        └── No  → Segment mode (Leaf-Segment)
│        └── No → Need high performance?
│                 ├── Yes → Snowflake / an improved variant
│                 └── No  → UUID (simple scenarios)
└── No → Locally scoped IDs are sufficient
📊 Solution Comparison Summary
| Dimension | Snowflake | Leaf-Segment | DB auto-increment | UUID |
|---|---|---|---|---|
| Performance | Very high (100k+/s) | High (10k+/s) | Low (~1k/s) | Very high |
| Ordering | Trend-increasing | Consecutive within a segment (gaps possible on restart) | Strictly consecutive | Unordered |
| Availability | High | Very high | Low | Very high |
| Scalability | Good | Moderate | Poor | Excellent |
| ID length | 64-bit | 64-bit | 64-bit | 128-bit |
| Complexity | Medium | Medium | Low | Low |
✅ Best-Practice Recommendations
- Small and medium systems: an improved Snowflake (with clock-rollback handling)
- E-commerce / transactional systems: Leaf-Segment mode (consecutive order numbers)
- Logging / messaging systems: Snowflake (time-ordered, high throughput)
- Internationalized systems: a multi-active design that encodes the data-center ID
- Security-sensitive systems: add an ID obfuscation/encryption layer
🔧 Key Configuration Parameters
yaml
# application-id-generator.yml
id-generator:
  type: snowflake            # snowflake | segment | hybrid
  snowflake:
    epoch: 2024-01-01T00:00:00Z
    worker-id-bits: 5
    sequence-bits: 12
    max-clock-backward-ms: 1000
  segment:
    step: 1000
    retry-times: 3
    cache-threshold: 0.8
  hybrid:
    default-mode: auto
    qps-threshold: 10000
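If the service is Spring Boot based (the @Bean metrics customizer earlier suggests it is), these properties would typically be bound to a configuration class. A hedged sketch under that assumption; the class and field names simply mirror the keys above:
java
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

// Binds the application-id-generator.yml properties shown above
@Component
@ConfigurationProperties(prefix = "id-generator")
public class IdGeneratorProperties {
    private String type = "snowflake"; // snowflake | segment | hybrid
    private Snowflake snowflake = new Snowflake();
    private Segment segment = new Segment();

    public static class Snowflake {
        private String epoch = "2024-01-01T00:00:00Z";
        private int workerIdBits = 5;
        private int sequenceBits = 12;
        private long maxClockBackwardMs = 1000;
        // getters and setters omitted for brevity (required for binding)
    }

    public static class Segment {
        private int step = 1000;
        private int retryTimes = 3;
        private double cacheThreshold = 0.8;
        // getters and setters omitted for brevity (required for binding)
    }
    // getters and setters omitted for brevity (required for binding)
}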
Remember: there is no perfect ID generation scheme, only the one that best fits your business scenario. Design for the next 3–5 years of business growth and leave room to scale. Most importantly, whichever scheme you choose, it must be backed by solid monitoring, alerting, and circuit-breaking/degradation mechanisms.