Java后端开发面试题清单(50道)
- Java后端开发面试题清单(50道)
- 一、Java基础与进阶(20题)
- JVM与内存管理(7题)
- [1. JVM内存区域划分](#1. JVM内存区域划分)
- [2. 垃圾回收机制与算法](#2. 垃圾回收机制与算法)
- [3. 类加载过程与双亲委派模型](#3. 类加载过程与双亲委派模型)
- [4. OutOfMemoryError分析](#4. OutOfMemoryError分析)
- [5. 内存泄漏分析与修复](#5. 内存泄漏分析与修复)
- [6. 逃逸分析与优化](#6. 逃逸分析与优化)
- [7. JVM参数优化](#7. JVM参数优化)
- 并发编程(7题)
- [8. synchronized原理](#8. synchronized原理)
- [9. volatile关键字与DCL](#9. volatile关键字与DCL)
- [10. ConcurrentHashMap原理](#10. ConcurrentHashMap原理)
- [11. 线程安全单例模式](#11. 线程安全单例模式)
- [12. 线程池与拒绝策略](#12. 线程池与拒绝策略)
- [13. 线程顺序执行](#13. 线程顺序执行)
- [14. 库存扣减线程安全方案](#14. 库存扣减线程安全方案)
- 集合框架(6题)
- [15. HashMap原理详解](#15. HashMap原理详解)
- [16. ConcurrentHashMap差异](#16. ConcurrentHashMap差异)
- [17. 找重复元素方案对比](#17. 找重复元素方案对比)
- [18. HashMap键对象hashCode变化问题](#18. HashMap键对象hashCode变化问题)
- [19. 线程安全LRU缓存实现](#19. 线程安全LRU缓存实现)
- [20. ArrayList vs LinkedList性能对比](#20. ArrayList vs LinkedList性能对比)
- 二、数据库与持久层(15题)
- MySQL与关系数据库(8题)
- [21. B+树索引原理](#21. B+树索引原理)
- [22. 事务隔离级别](#22. 事务隔离级别)
- [23. SQL性能优化实践](#23. SQL性能优化实践)
- [24. 分库分表设计方案](#24. 分库分表设计方案)
- [25. 树形结构存储方案](#25. 树形结构存储方案)
- [26. 库存扣减实现](#26. 库存扣减实现)
- [27. 覆盖索引与索引下推](#27. 覆盖索引与索引下推)
- [28. 主从延迟解决方案](#28. 主从延迟解决方案)
- Redis与缓存(5题)
- [29. Redis数据类型与应用场景](#29. Redis数据类型与应用场景)
- [30. Redis持久化对比](#30. Redis持久化对比)
- [31. 缓存问题解决方案](#31. 缓存问题解决方案)
- [32. 分布式Session方案](#32. 分布式Session方案)
- [33. 秒杀库存与限流实现](#33. 秒杀库存与限流实现)
- ORM框架(2题)
- [34. MyBatis #{}和${}区别](#34. MyBatis #{}和${}区别)
- [35. Hibernate缓存机制](#35. Hibernate缓存机制)
- 三、框架、系统与分布式(15题)
- Spring框架(5题)
- [36. Spring Bean生命周期](#36. Spring Bean生命周期)
- [37. Spring AOP实现原理](#37. Spring AOP实现原理)
- [38. Spring事务传播行为](#38. Spring事务传播行为)
- [39. 循环依赖解决方案](#39. 循环依赖解决方案)
- [40. 自定义注解与AOP集成](#40. 自定义注解与AOP集成)
- 系统设计(5题)
- [41. 接口幂等性](#41. 接口幂等性)
- [42. 秒杀系统详细设计](#42. 秒杀系统详细设计)
- [43. 短URL系统设计](#43. 短URL系统设计)
- [44. 实时排行榜系统](#44. 实时排行榜系统)
- [45. 分布式配置中心设计](#45. 分布式配置中心设计)
Java后端开发面试题清单(50道)
一、Java基础与进阶(20题)
JVM与内存管理(7题)
1. JVM内存区域划分
JVM内存主要分为以下几个区域:
堆(Heap):
- 所有线程共享,存放对象实例和数组
- 是垃圾回收的主要区域
- 分为新生代和老年代
- 新生代:Eden区、Survivor0、Survivor1
- 老年代:长期存活的对象
- 通过-Xms和-Xmx设置初始和最大堆大小
方法区(Metaspace):
- 存储已被加载的类信息、常量、静态变量
- JDK8以前称为永久代(PermGen),JDK8改为元空间(Metaspace)
- 元空间使用本地内存,减少OOM风险
虚拟机栈(Stack):
- 线程私有,每个方法执行时创建一个栈帧
- 栈帧存储局部变量表、操作数栈、动态链接、方法出口
- 局部变量表存放基本类型和对象引用
本地方法栈:
- 为Native方法服务
程序计数器:
- 当前线程执行的字节码行号指示器
- 唯一不会发生OOM的区域
java
// 示例:观察堆内存分配
public class MemoryExample {
private static final int _1MB = 1024 * 1024;
public static void main(String[] args) {
byte[] allocation1 = new byte[2 * _1MB];
byte[] allocation2 = new byte[2 * _1MB];
byte[] allocation3 = new byte[2 * _1MB];
// 触发Minor GC
byte[] allocation4 = new byte[4 * _1MB];
}
}
2. 垃圾回收机制与算法
垃圾回收算法:
- 标记-清除:
  - 标记所有可达对象,清除未标记对象
  - 产生内存碎片
- 复制算法:
  - 将内存分为两块,每次使用一块
  - 存活对象复制到另一块,清除当前块
  - 无碎片,但内存利用率低
- 标记-整理:
  - 标记后,将存活对象向一端移动
  - 清理边界外内存
  - 无碎片,适合老年代
- 分代收集:
  - 新生代:复制算法
  - 老年代:标记-清除或标记-整理
java
// 垃圾回收示例
public class GCDemo {
public static void main(String[] args) {
// 触发Full GC
List<byte[]> list = new ArrayList<>();
for (int i = 0; i < 1000; i++) {
list.add(new byte[1024 * 1024]); // 1MB
if (i % 100 == 0) {
System.gc(); // 建议GC
}
}
}
}
3. 类加载过程与双亲委派模型
类加载过程:
- 加载:
  - 通过类全限定名获取二进制字节流
  - 将字节流转化为方法区的运行时数据结构
  - 生成对应的Class对象
- 验证:
  - 文件格式、元数据、字节码、符号引用验证
- 准备:
  - 为类变量分配内存并设置初始零值
  - 不执行赋值操作
- 解析:
  - 将符号引用转换为直接引用
- 初始化:
  - 执行类构造器<clinit>()方法,为类变量赋予正确的初始值
双亲委派模型:
java
// 类加载器层次
Bootstrap ClassLoader (C++实现)
↓
Extension ClassLoader (sun.misc.Launcher$ExtClassLoader)
↓
Application ClassLoader (sun.misc.Launcher$AppClassLoader)
↓
Custom ClassLoader (用户自定义)
// 双亲委派流程
protected Class<?> loadClass(String name, boolean resolve) {
synchronized (getClassLoadingLock(name)) {
// 1. 检查是否已加载
Class<?> c = findLoadedClass(name);
if (c == null) {
try {
if (parent != null) {
// 2. 委派给父加载器
c = parent.loadClass(name, false);
} else {
c = findBootstrapClassOrNull(name);
}
} catch (ClassNotFoundException e) {
// 父加载器找不到
}
if (c == null) {
// 3. 自己加载
c = findClass(name);
}
}
if (resolve) {
resolveClass(c);
}
return c;
}
}
4. OutOfMemoryError分析
java
// 内存泄漏示例
public class MemoryLeak {
private static List<byte[]> list = new ArrayList<>();
public static void main(String[] args) {
while (true) {
list.add(new byte[1024 * 1024]); // 1MB
try {
Thread.sleep(100);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
定位步骤:
- 使用jps查看Java进程
- 使用jstat -gc pid 1000查看GC情况
- 使用jmap -dump:live,format=b,file=heap.bin pid导出堆转储
- 使用MAT、JProfiler分析堆转储
5. 内存泄漏分析与修复
java
// 常见内存泄漏场景
public class CommonLeaks {
// 1. 静态集合类持有引用
private static Map<String, Object> cache = new HashMap<>();
public void addToCache(String key, Object value) {
cache.put(key, value); // 对象永远不会被回收
}
// 2. 监听器未移除
public void addListener() {
component.addListener(new Listener() {
// 匿名内部类持有外部类引用
});
// 忘记removeListener
}
// 3. 内部类持有外部类
private class InnerClass {
// 隐式持有OuterClass.this引用
}
}
修复方案:
java
// 1. 使用弱引用
private static Map<String, WeakReference<Object>> cache = new HashMap<>();
// 2. 及时清理
public void removeListener(Listener listener) {
component.removeListener(listener);
}
// 3. 使用静态内部类
private static class StaticInnerClass {
// 不持有外部类引用
}
6. 逃逸分析与优化
java
// 逃逸分析示例
public class EscapeAnalysis {
public static String createString() {
String s = new String("hello");
return s; // 通过返回值逃逸到方法外部,无法栈上分配
}
public void localVariable() {
byte[] buffer = new byte[1024]; // 未逃逸,可栈上分配
// 使用buffer
} // buffer在方法结束时失效
}
JIT优化:
- 栈上分配:对象在栈上分配,方法结束自动销毁
- 同步消除:锁对象未逃逸,可消除同步
- 标量替换:将对象拆分为基本类型,在栈上分配
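下面是一个标量替换的简单示意:Point对象未逃逸出sum方法,JIT在开启逃逸分析时可以把它拆成两个局部变量,避免堆分配。相关参数-XX:+DoEscapeAnalysis、-XX:+EliminateAllocations默认开启,优化是否真正发生取决于JIT,本例仅为演示:
java
// 标量替换示意:p未逃逸,JIT可将其拆为x、y两个局部变量,避免在堆上分配
public class ScalarReplacementDemo {
    static class Point {
        final int x;
        final int y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static int sum() {
        Point p = new Point(1, 2); // 仅在方法内部使用,未逃逸
        return p.x + p.y;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 100_000_000; i++) {
            total += sum();
        }
        // 可配合 -XX:-EliminateAllocations 对比GC日志和分配速率,观察差异
        System.out.println(total);
    }
}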
7. JVM参数优化
bash
# 基础参数
-Xms2g -Xmx2g # 堆大小
-Xmn1g # 新生代大小
-XX:SurvivorRatio=8 # Eden:Survivor=8:1
# GC参数
-XX:+UseG1GC # 使用G1收集器
-XX:MaxGCPauseMillis=200 # 最大GC停顿时间
-XX:InitiatingHeapOccupancyPercent=45 # 触发GC的堆占用比例
# 内存溢出处理
-XX:+HeapDumpOnOutOfMemoryError # OOM时生成堆转储
-XX:HeapDumpPath=/path/to/dump.hprof # 转储文件路径
# 监控参数
-XX:+PrintGCDetails # 打印GC详情
-XX:+PrintGCDateStamps
-Xloggc:/path/to/gc.log
并发编程(7题)
8. synchronized原理
java
public class SynchronizedDemo {
// 对象锁
public synchronized void instanceMethod() {
// 锁住当前实例
}
// 类锁
public static synchronized void staticMethod() {
// 锁住当前类
}
// 同步代码块
public void blockMethod() {
synchronized (this) {
// 锁住指定对象
}
}
}
锁升级过程:
- 无锁:初始状态
- 偏向锁:只有一个线程访问,在对象头记录线程ID
- 轻量级锁:多个线程交替执行,通过CAS自旋获取锁
- 重量级锁:竞争激烈,线程进入阻塞队列
对象头结构(64位JVM):
| Mark Word(64位) | 锁标志位 | 状态 |
|---|---|---|
| unused:25 \| identity_hashcode:31 \| unused:1 \| age:4 \| biased_lock:1 | 01 | Normal(无锁) |
| thread:54 \| epoch:2 \| unused:1 \| age:4 \| biased_lock:1 | 01 | Biased(偏向锁) |
| ptr_to_lock_record:62 | 00 | Lightweight Locked(轻量级锁) |
| ptr_to_heavyweight_monitor:62 | 10 | Heavyweight Locked(重量级锁) |
| | 11 | Marked for GC(GC标记) |
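如果想在代码里直接观察Mark Word的变化,可以借助OpenJDK的JOL工具(需要引入org.openjdk.jol:jol-core依赖,以下仅为示意,输出内容随JDK版本和启动参数略有差异):
java
import org.openjdk.jol.info.ClassLayout;

public class MarkWordDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        // 无锁状态:打印对象头,可以看到Mark Word与锁标志位
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        synchronized (lock) {
            // 持有锁时再次打印,锁标志位发生变化(轻量级锁或偏向锁,取决于JVM参数与时机)
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}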
9. volatile关键字与DCL
java
public class DoubleCheckedLocking {
private volatile static Singleton instance;
public static Singleton getInstance() {
if (instance == null) { // 第一次检查
synchronized (DoubleCheckedLocking.class) {
if (instance == null) { // 第二次检查
instance = new Singleton(); // 可能发生指令重排序
}
}
}
return instance;
}
static class Singleton {
private Singleton() {}
}
}
volatile作用:
- 可见性:写操作立即刷新到主内存,读操作从主内存读取
- 禁止指令重排序:通过内存屏障实现
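一个最小的可见性示意:去掉volatile后,工作线程可能长时间读不到主线程对running的修改而无法退出循环;加上volatile则写入立即对其他线程可见。
java
public class VolatileVisibilityDemo {
    // 去掉volatile后,下面的while循环可能永远无法退出
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // 忙等待,等待主线程修改running
            }
            System.out.println("worker stopped");
        });
        worker.start();
        Thread.sleep(1000);
        running = false; // volatile写:对worker线程立即可见
        worker.join();
    }
}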
DCL问题与解决:
java
instance = new Singleton();
// 可能重排序为:
// 1. 分配内存空间
// 2. 设置instance指向内存空间(此时instance!=null,但对象未初始化)
// 3. 初始化对象
// volatile禁止2和3重排序
10. ConcurrentHashMap原理
JDK 1.7 vs JDK 1.8对比:
| 特性 | JDK 1.7 | JDK 1.8 |
|---|---|---|
| 数据结构 | Segment数组+HashEntry数组+链表 | Node数组+链表/红黑树 |
| 锁粒度 | Segment锁(默认16个) | 链表头节点锁(更细粒度) |
| 并发度 | 固定(Segment数量) | 更高 |
| 扩容 | 每个Segment独立扩容 | 协助扩容 |
java
// JDK 1.8 put操作简化流程
final V putVal(K key, V value, boolean onlyIfAbsent) {
if (key == null || value == null) throw new NullPointerException();
int hash = spread(key.hashCode());
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
if (tab == null || (n = tab.length) == 0)
tab = initTable(); // 初始化
else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
// CAS插入新节点
if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value)))
break;
}
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f); // 协助扩容
else {
synchronized (f) { // 锁住链表头节点
// 链表或红黑树插入
}
}
}
addCount(1L, binCount);
return null;
}
11. 线程安全单例模式
java
// 枚举单例(推荐)
public enum EnumSingleton {
INSTANCE;
public void doSomething() {
// 业务方法
}
}
// 静态内部类单例
public class StaticInnerSingleton {
private StaticInnerSingleton() {}
private static class SingletonHolder {
private static final StaticInnerSingleton INSTANCE =
new StaticInnerSingleton();
}
public static StaticInnerSingleton getInstance() {
return SingletonHolder.INSTANCE; // 类加载时初始化,线程安全
}
}
// 双重检查锁定单例
public class DCLSingleton {
private volatile static DCLSingleton instance;
private DCLSingleton() {}
public static DCLSingleton getInstance() {
if (instance == null) {
synchronized (DCLSingleton.class) {
if (instance == null) {
instance = new DCLSingleton();
}
}
}
return instance;
}
}
12. 线程池与拒绝策略
java
public class ThreadPoolExample {
public static void main(String[] args) {
// 创建线程池
ThreadPoolExecutor executor = new ThreadPoolExecutor(
5, // 核心线程数
10, // 最大线程数
60L, // 空闲线程存活时间
TimeUnit.SECONDS,
new LinkedBlockingQueue<>(100), // 任务队列
Executors.defaultThreadFactory(), // 线程工厂
new ThreadPoolExecutor.CallerRunsPolicy() // 拒绝策略
);
// 拒绝策略类型
// 1. AbortPolicy:抛出RejectedExecutionException(默认)
// 2. CallerRunsPolicy:由调用者线程执行任务
// 3. DiscardPolicy:直接丢弃任务
// 4. DiscardOldestPolicy:丢弃队列最前面的任务,然后重试
// 提交任务
for (int i = 0; i < 20; i++) {
int taskId = i;
executor.execute(() -> {
System.out.println("Task " + taskId + " executed by "
+ Thread.currentThread().getName());
});
}
// 关闭线程池
executor.shutdown();
try {
if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
executor.shutdownNow();
}
} catch (InterruptedException e) {
executor.shutdownNow();
}
}
}
13. 线程顺序执行
java
public class ThreadSequence {
// 方法1:使用join
public static void useJoin() throws InterruptedException {
Thread t1 = new Thread(() -> System.out.println("Thread 1"));
Thread t2 = new Thread(() -> {
try {
t1.join(); // 等待t1完成
System.out.println("Thread 2");
} catch (InterruptedException e) {
e.printStackTrace();
}
});
Thread t3 = new Thread(() -> {
try {
t2.join(); // 等待t2完成
System.out.println("Thread 3");
} catch (InterruptedException e) {
e.printStackTrace();
}
});
t1.start();
t2.start();
t3.start();
}
// 方法2:使用CountDownLatch
public static void useCountDownLatch() throws InterruptedException {
CountDownLatch latch1 = new CountDownLatch(1);
CountDownLatch latch2 = new CountDownLatch(1);
new Thread(() -> {
System.out.println("Thread 1");
latch1.countDown();
}).start();
new Thread(() -> {
try {
latch1.await();
System.out.println("Thread 2");
latch2.countDown();
} catch (InterruptedException e) {
e.printStackTrace();
}
}).start();
new Thread(() -> {
try {
latch2.await();
System.out.println("Thread 3");
} catch (InterruptedException e) {
e.printStackTrace();
}
}).start();
}
// 方法3:使用单线程池
public static void useSingleThreadExecutor() {
ExecutorService executor = Executors.newSingleThreadExecutor();
executor.execute(() -> System.out.println("Thread 1"));
executor.execute(() -> System.out.println("Thread 2"));
executor.execute(() -> System.out.println("Thread 3"));
executor.shutdown();
}
}
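补充一种写法:用CompletableFuture把三个任务串成链,同样可以保证顺序执行(简单示意):
java
import java.util.concurrent.CompletableFuture;

public class CompletableFutureSequence {
    public static void main(String[] args) {
        CompletableFuture
                .runAsync(() -> System.out.println("Thread 1"))
                .thenRun(() -> System.out.println("Thread 2")) // 上一步完成后才执行
                .thenRun(() -> System.out.println("Thread 3"))
                .join(); // 等待整条链执行完成
    }
}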
14. 库存扣减线程安全方案
java
// 方案1:数据库乐观锁
@Service
public class InventoryService {
@Transactional
public boolean deductStock(Long productId, Integer quantity) {
Product product = productMapper.selectById(productId); // 普通读取,不加行锁,依赖版本号做乐观控制
if (product.getStock() >= quantity) {
int rows = productMapper.updateStock(productId,
product.getStock() - quantity,
product.getVersion());
return rows > 0; // 更新成功
}
return false;
}
}
// 方案2:Redis分布式锁
@Service
public class RedisInventoryService {
private static final String LOCK_PREFIX = "inventory:lock:";
private static final long LOCK_EXPIRE = 3000; // 3秒
@Autowired
private RedisTemplate<String, String> redisTemplate;
public boolean deductStock(Long productId, Integer quantity) {
String lockKey = LOCK_PREFIX + productId;
String requestId = UUID.randomUUID().toString();
try {
// 尝试获取分布式锁
boolean locked = tryLock(lockKey, requestId, LOCK_EXPIRE);
if (!locked) {
return false; // 获取锁失败
}
// 扣减库存
Integer stock = getStockFromRedis(productId);
if (stock >= quantity) {
decrementStockInRedis(productId, quantity);
// 异步同步到数据库
asyncSyncToDatabase(productId, quantity);
return true;
}
return false;
} finally {
// 释放锁
unlock(lockKey, requestId);
}
}
private boolean tryLock(String key, String value, long expire) {
// setIfAbsent返回Boolean,可能为null,用Boolean.TRUE.equals避免自动拆箱空指针
return Boolean.TRUE.equals(redisTemplate.opsForValue()
.setIfAbsent(key, value, expire, TimeUnit.MILLISECONDS));
}
private void unlock(String key, String value) {
String currentValue = redisTemplate.opsForValue().get(key);
if (value.equals(currentValue)) {
redisTemplate.delete(key);
}
}
}
// 方案3:使用消息队列串行化
@Component
public class InventoryConsumer {
@RabbitListener(queues = "inventory.deduct")
public void handleDeductMessage(DeductMessage message) {
// 串行处理库存扣减
inventoryService.deductStock(message.getProductId(),
message.getQuantity());
}
}
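补充说明:上面unlock方法里先GET比较、再DELETE不是原子操作,锁恰好过期又被其他请求抢到时可能误删别人的锁。更稳妥的做法是把比较和删除放进一段Lua脚本在Redis端原子执行,下面是一个最小示意(类名与字段为假设,仅演示思路):
java
import java.util.Collections;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;

public class SafeDistributedLock {
    // 只有锁的value与加锁时写入的requestId一致才删除,比较与删除在Redis端原子完成
    private static final String UNLOCK_LUA =
            "if redis.call('GET', KEYS[1]) == ARGV[1] then " +
            "  return redis.call('DEL', KEYS[1]) " +
            "else " +
            "  return 0 " +
            "end";

    private final StringRedisTemplate redisTemplate;

    public SafeDistributedLock(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public boolean unlock(String lockKey, String requestId) {
        Long result = redisTemplate.execute(
                new DefaultRedisScript<>(UNLOCK_LUA, Long.class),
                Collections.singletonList(lockKey),
                requestId);
        return result != null && result == 1L;
    }
}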
集合框架(6题)
15. HashMap原理详解
java
// HashMap内部结构
public class HashMap<K,V> extends AbstractMap<K,V>
implements Map<K,V>, Cloneable, Serializable {
// 默认容量
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // 16
// 最大容量
static final int MAXIMUM_CAPACITY = 1 << 30;
// 默认负载因子
static final float DEFAULT_LOAD_FACTOR = 0.75f;
// 树化阈值
static final int TREEIFY_THRESHOLD = 8;
// 链化阈值
static final int UNTREEIFY_THRESHOLD = 6;
// 最小树化容量
static final int MIN_TREEIFY_CAPACITY = 64;
// Node节点
static class Node<K,V> implements Map.Entry<K,V> {
final int hash;
final K key;
V value;
Node<K,V> next;
// ...
}
// 红黑树节点
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
TreeNode<K,V> parent;
TreeNode<K,V> left;
TreeNode<K,V> right;
TreeNode<K,V> prev;
boolean red;
// ...
}
}
put操作流程:
java
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
boolean evict) {
Node<K,V>[] tab; Node<K,V> p; int n, i;
// 1. 如果table为空,初始化
if ((tab = table) == null || (n = tab.length) == 0)
n = (tab = resize()).length;
// 2. 计算数组下标,如果该位置为空
if ((p = tab[i = (n - 1) & hash]) == null)
tab[i] = newNode(hash, key, value, null);
else {
Node<K,V> e; K k;
// 3. 如果key已存在
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
e = p;
// 4. 如果是树节点
else if (p instanceof TreeNode)
e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
else {
// 5. 遍历链表
for (int binCount = 0; ; ++binCount) {
if ((e = p.next) == null) {
p.next = newNode(hash, key, value, null);
if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
treeifyBin(tab, hash); // 链表转红黑树
break;
}
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
break;
p = e;
}
}
// 6. 更新值
if (e != null) { // existing mapping for key
V oldValue = e.value;
if (!onlyIfAbsent || oldValue == null)
e.value = value;
afterNodeAccess(e);
return oldValue;
}
}
++modCount;
// 7. 扩容检查
if (++size > threshold)
resize();
afterNodeInsertion(evict);
return null;
}
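putVal的hash参数来自HashMap的扰动函数,JDK 8中的实现如下:让hashCode的高16位与低16位异或,使高位也参与 (n - 1) & hash 的下标计算,减少碰撞。
java
static final int hash(Object key) {
    int h;
    // key为null固定落在0号桶;否则高16位与低16位异或后再参与下标计算
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}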
扩容机制:
java
final Node<K,V>[] resize() {
Node<K,V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length;
int oldThr = threshold;
int newCap, newThr = 0;
if (oldCap > 0) {
// 超过最大容量
if (oldCap >= MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE;
return oldTab;
}
// 扩容为原来的2倍
else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
oldCap >= DEFAULT_INITIAL_CAPACITY)
newThr = oldThr << 1; // 阈值也扩大2倍
}
else {
// 首次初始化(简化:省略了指定初始容量的分支),使用默认容量与阈值
newCap = DEFAULT_INITIAL_CAPACITY;
newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
threshold = newThr;
// 创建新数组
Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
table = newTab;
// 重新哈希所有元素
if (oldTab != null) {
for (int j = 0; j < oldCap; ++j) {
Node<K,V> e;
if ((e = oldTab[j]) != null) {
oldTab[j] = null;
if (e.next == null)
newTab[e.hash & (newCap - 1)] = e;
else if (e instanceof TreeNode)
((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
else { // preserve order
// 链表重新分布
Node<K,V> loHead = null, loTail = null;
Node<K,V> hiHead = null, hiTail = null;
Node<K,V> next;
do {
next = e.next;
// 判断元素在新数组中的位置
if ((e.hash & oldCap) == 0) {
// 位置不变
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
}
else {
// 位置为原索引+oldCap
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
}
} while ((e = next) != null);
if (loTail != null) {
loTail.next = null;
newTab[j] = loHead;
}
if (hiTail != null) {
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}
16. ConcurrentHashMap差异
JDK 1.7实现:
java
// 分段锁设计
public class ConcurrentHashMap<K, V> {
// Segment数组
final Segment<K,V>[] segments;
static final class Segment<K,V> extends ReentrantLock {
// HashEntry数组
transient volatile HashEntry<K,V>[] table;
// 每个Segment独立计数
transient int count;
}
// put操作
public V put(K key, V value) {
Segment<K,V> s;
int hash = hash(key);
int segmentIndex = (hash >>> segmentShift) & segmentMask;
// 获取Segment
s = ensureSegment(segmentIndex);
// 调用Segment的put方法
return s.put(key, hash, value, false);
}
}
JDK 1.8改进:
- 锁粒度更细:从Segment锁变为链表头节点锁
- 数据结构优化:链表长度>8且数组长度>=64时转为红黑树
- 扩容优化:支持多线程协助扩容
- 统计优化:使用LongAdder代替AtomicLong
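关于统计优化这一点,可以用下面的小例子直观感受LongAdder与AtomicLong的差异(仅为示意,不是严格基准):高并发累加时,AtomicLong所有线程在同一个变量上CAS自旋,而LongAdder把热点分散到多个Cell,读取时再用sum()汇总。
java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

public class CounterComparison {
    public static void main(String[] args) {
        AtomicLong atomicLong = new AtomicLong();
        LongAdder longAdder = new LongAdder();
        // 8个并行任务各累加100万次
        IntStream.range(0, 8).parallel().forEach(t -> {
            for (int i = 0; i < 1_000_000; i++) {
                atomicLong.incrementAndGet(); // 单变量CAS,高并发下竞争激烈
                longAdder.increment();        // 分散到多个Cell,竞争更小
            }
        });
        System.out.println(atomicLong.get()); // 8000000
        System.out.println(longAdder.sum());  // 8000000
    }
}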
17. 找重复元素方案对比
java
public class FindDuplicates {
// 方案1:使用HashSet
public static Integer findDuplicateByHashSet(List<Integer> list) {
Set<Integer> seen = new HashSet<>();
for (Integer num : list) {
if (!seen.add(num)) { // add失败说明已存在
return num;
}
}
return null;
}
// 方案2:排序后查找
public static Integer findDuplicateBySorting(List<Integer> list) {
Collections.sort(list);
for (int i = 1; i < list.size(); i++) {
if (list.get(i).equals(list.get(i - 1))) {
return list.get(i);
}
}
return null;
}
// 方案3:位图法(适合数据范围已知)
public static Integer findDuplicateByBitmap(List<Integer> list, int maxValue) {
BitSet bitSet = new BitSet(maxValue + 1);
for (Integer num : list) {
if (bitSet.get(num)) {
return num;
}
bitSet.set(num);
}
return null;
}
// 方案4:快慢指针(找环,适用于数组元素在1..n范围内)
public static int findDuplicateByFloyd(int[] nums) {
// 把数组看作链表:i -> nums[i]
int slow = nums[0];
int fast = nums[0];
// 第一阶段:找到相遇点
do {
slow = nums[slow];
fast = nums[nums[fast]];
} while (slow != fast);
// 第二阶段:找到环的入口
int ptr1 = nums[0];
int ptr2 = slow;
while (ptr1 != ptr2) {
ptr1 = nums[ptr1];
ptr2 = nums[ptr2];
}
return ptr1;
}
}
18. HashMap键对象hashCode变化问题
java
public class MutableKeyExample {
static class MutableKey {
private String value;
public MutableKey(String value) {
this.value = value;
}
public void setValue(String value) {
this.value = value;
}
@Override
public int hashCode() {
return value != null ? value.hashCode() : 0;
}
@Override
public boolean equals(Object obj) {
if (this == obj) return true;
if (!(obj instanceof MutableKey)) return false;
MutableKey other = (MutableKey) obj;
return value != null ? value.equals(other.value) : other.value == null;
}
}
public static void main(String[] args) {
Map<MutableKey, String> map = new HashMap<>();
MutableKey key = new MutableKey("initial");
map.put(key, "value");
System.out.println(map.get(key)); // 输出: value
key.setValue("changed"); // 修改key的hashCode
System.out.println(map.get(key)); // 可能输出: null
// 原因:键的hashCode变化,但HashMap仍使用旧hashCode计算的位置
}
}
19. 线程安全LRU缓存实现
java
public class ThreadSafeLRUCache<K, V> {
private final int capacity;
private final Map<K, Node<K, V>> cache;
private final Node<K, V> head, tail;
private final ReadWriteLock lock = new ReentrantReadWriteLock();
// 双向链表节点
private static class Node<K, V> {
K key;
V value;
Node<K, V> prev, next;
Node(K key, V value) {
this.key = key;
this.value = value;
}
}
public ThreadSafeLRUCache(int capacity) {
this.capacity = capacity;
this.cache = new HashMap<>();
this.head = new Node<>(null, null);
this.tail = new Node<>(null, null);
head.next = tail;
tail.prev = head;
}
public V get(K key) {
// get会通过moveToHead修改链表结构,必须加写锁;读锁会允许多个线程并发修改链表
lock.writeLock().lock();
try {
Node<K, V> node = cache.get(key);
if (node == null) {
return null;
}
// 移动到头部
moveToHead(node);
return node.value;
} finally {
lock.writeLock().unlock();
}
}
public void put(K key, V value) {
lock.writeLock().lock();
try {
Node<K, V> node = cache.get(key);
if (node == null) {
// 创建新节点
Node<K, V> newNode = new Node<>(key, value);
cache.put(key, newNode);
addToHead(newNode);
// 检查容量
if (cache.size() > capacity) {
Node<K, V> tailNode = removeTail();
cache.remove(tailNode.key);
}
} else {
// 更新值
node.value = value;
moveToHead(node);
}
} finally {
lock.writeLock().unlock();
}
}
private void addToHead(Node<K, V> node) {
node.prev = head;
node.next = head.next;
head.next.prev = node;
head.next = node;
}
private void removeNode(Node<K, V> node) {
node.prev.next = node.next;
node.next.prev = node.prev;
}
private void moveToHead(Node<K, V> node) {
removeNode(node);
addToHead(node);
}
private Node<K, V> removeTail() {
Node<K, V> node = tail.prev;
removeNode(node);
return node;
}
}
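作为对照,如果不要求手写双向链表,也可以基于LinkedHashMap的访问顺序模式快速得到一个LRU缓存,再用Collections.synchronizedMap做粗粒度同步(简化示意,锁粒度比上面的实现更粗):
java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLRUCache<K, V> {
    private final Map<K, V> cache;

    public SimpleLRUCache(int capacity) {
        // accessOrder=true:按访问顺序维护链表;超出容量时自动淘汰最久未使用的条目
        this.cache = Collections.synchronizedMap(
                new LinkedHashMap<K, V>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                        return size() > capacity;
                    }
                });
    }

    public V get(K key) { return cache.get(key); }

    public void put(K key, V value) { cache.put(key, value); }
}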
20. ArrayList vs LinkedList性能对比
java
public class ListPerformanceComparison {
public static void main(String[] args) {
int size = 100000;
// ArrayList性能测试
List<Integer> arrayList = new ArrayList<>();
for (int i = 0; i < size; i++) {
arrayList.add(i);
}
// 只统计随机访问耗时,填充数据不计入
long start = System.currentTimeMillis();
long arrayAccess = 0;
for (int i = 0; i < 10000; i++) {
arrayAccess += arrayList.get(i);
}
long arrayTime = System.currentTimeMillis() - start;
// LinkedList性能测试
List<Integer> linkedList = new LinkedList<>();
for (int i = 0; i < size; i++) {
linkedList.add(i);
}
start = System.currentTimeMillis();
long linkedAccess = 0;
for (int i = 0; i < 10000; i++) {
linkedAccess += linkedList.get(i); // LinkedList随机访问需要从头或尾遍历,较慢
}
long linkedTime = System.currentTimeMillis() - start;
System.out.println("ArrayList 随机访问时间: " + arrayTime + "ms");
System.out.println("LinkedList 随机访问时间: " + linkedTime + "ms");
// 插入性能测试
start = System.currentTimeMillis();
for (int i = 0; i < 1000; i++) {
arrayList.add(0, i); // ArrayList头部插入慢
}
arrayTime = System.currentTimeMillis() - start;
start = System.currentTimeMillis();
for (int i = 0; i < 1000; i++) {
linkedList.add(0, i); // LinkedList头部插入快
}
linkedTime = System.currentTimeMillis() - start;
System.out.println("ArrayList 头部插入时间: " + arrayTime + "ms");
System.out.println("LinkedList 头部插入时间: " + linkedTime + "ms");
}
}
性能总结:
| 操作 | ArrayList | LinkedList |
|---|---|---|
| 随机访问 | O(1) | O(n) |
| 头部插入 | O(n) | O(1) |
| 尾部插入 | O(1)(均摊) | O(1) |
| 中间插入 | O(n) | O(n)(需要遍历) |
| 内存占用 | 连续内存,缓存友好 | 每个元素额外内存 |
二、数据库与持久层(15题)
MySQL与关系数据库(8题)
21. B+树索引原理
B+树结构特点:
- 非叶子节点:只存储键值,不存储数据,可以存储更多键
- 叶子节点:存储键值和数据,叶子节点之间形成双向链表
- 所有数据都存储在叶子节点,查询更稳定
B+树 vs B树 vs 哈希索引:
| 索引类型 | 结构 | 范围查询 | 排序 | 磁盘IO | 适用场景 |
|---|---|---|---|---|---|
| B+树 | 多路平衡树 | 支持 | 支持 | 少 | 通用 |
| B树 | 多路平衡树 | 支持 | 支持 | 多 | 内存数据库 |
| 哈希 | 哈希表 | 不支持 | 不支持 | 随机 | 等值查询 |
sql
-- 创建索引示例
CREATE TABLE users (
id INT PRIMARY KEY,
name VARCHAR(100),
age INT,
email VARCHAR(100),
created_at TIMESTAMP
);
-- 单列索引
CREATE INDEX idx_name ON users(name);
-- 复合索引
CREATE INDEX idx_name_age ON users(name, age);
-- 唯一索引
CREATE UNIQUE INDEX idx_email ON users(email);
-- 覆盖索引示例
EXPLAIN SELECT id, name FROM users WHERE name = 'John';
-- 如果索引包含(id, name),则不需要回表
22. 事务隔离级别
四种隔离级别:
sql
-- 设置隔离级别
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- 查看当前隔离级别
SELECT @@transaction_isolation;
问题与解决方案:
| 隔离级别 | 脏读 | 不可重复读 | 幻读 | 实现方式 |
|---|---|---|---|---|
| 读未提交 | 可能 | 可能 | 可能 | 无锁 |
| 读已提交 | 解决 | 可能 | 可能 | MVCC(每条语句生成新ReadView) |
| 可重复读 | 解决 | 解决 | 可能 | MVCC(事务级ReadView)+间隙锁 |
| 串行化 | 解决 | 解决 | 解决 | 读加共享锁,读写互斥 |
MVCC(多版本并发控制)原理:
- 每个事务有唯一事务ID
- 每行数据有隐藏列:创建版本号、删除版本号
- SELECT:只读取创建版本号<=当前事务ID,且删除版本号未定义或>当前事务ID的行
- INSERT:设置创建版本号为当前事务ID
- DELETE:设置删除版本号为当前事务ID
- UPDATE:插入新行并设置创建版本号,旧行设置删除版本号
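可以用两个JDBC连接直观验证可重复读下的快照读效果(URL、账号、users表均为示意,需要本地有对应的MySQL环境):
java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RepeatableReadDemo {
    private static final String URL = "jdbc:mysql://localhost:3306/test"; // 示意用连接信息

    public static void main(String[] args) throws Exception {
        try (Connection c1 = DriverManager.getConnection(URL, "root", "root");
             Connection c2 = DriverManager.getConnection(URL, "root", "root")) {
            c1.setAutoCommit(false);
            c1.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);

            System.out.println(readName(c1, 1)); // 第一次快照读,生成ReadView
            try (Statement s = c2.createStatement()) {
                // 另一个连接修改并提交(c2默认autocommit)
                s.executeUpdate("UPDATE users SET name = 'changed' WHERE id = 1");
            }
            System.out.println(readName(c1, 1)); // 第二次读仍是旧值,体现MVCC快照读
            c1.commit();
        }
    }

    private static String readName(Connection c, long id) throws Exception {
        try (Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT name FROM users WHERE id = " + id)) {
            return rs.next() ? rs.getString(1) : null;
        }
    }
}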
23. SQL性能优化实践
sql
-- 慢查询示例及优化
-- 原始SQL(问题:使用函数,无法使用索引)
SELECT * FROM orders WHERE YEAR(created_at) = 2023;
-- 优化后
SELECT * FROM orders
WHERE created_at >= '2023-01-01'
AND created_at < '2024-01-01';
-- 创建索引
CREATE INDEX idx_created_at ON orders(created_at);
-- 使用覆盖索引
-- 原始SQL
SELECT * FROM users WHERE age > 20 ORDER BY created_at DESC;
-- 优化:创建复合索引
CREATE INDEX idx_age_created ON users(age, created_at);
-- 改写SQL
SELECT id, name, age, created_at
FROM users
WHERE age > 20
ORDER BY created_at DESC;
-- 使用覆盖索引,避免回表
-- 分页优化
-- 原始SQL(深分页问题)
SELECT * FROM orders ORDER BY id LIMIT 1000000, 20;
-- 优化1:使用索引覆盖
SELECT id FROM orders ORDER BY id LIMIT 1000000, 20;
-- 然后再根据id查询详细数据
-- 优化2:使用游标分页
SELECT * FROM orders
WHERE id > ? -- 上一页最后一条记录的id
ORDER BY id LIMIT 20;
24. 分库分表设计方案
java
// 分库分表策略
public class ShardingStrategy {
// 哈希取模分片
public static String getTableName(Long userId, int tableCount) {
int tableIndex = Math.floorMod(userId.hashCode(), tableCount); // floorMod避免hashCode为负时取模结果为负
return "order_" + tableIndex;
}
// 范围分片
public static String getTableByRange(Long userId) {
if (userId >= 1 && userId <= 1000000) {
return "order_1";
} else if (userId > 1000000 && userId <= 2000000) {
return "order_2";
}
// ...
return "order_n";
}
// 时间分片
public static String getTableByTime(Date createTime) {
SimpleDateFormat sdf = new SimpleDateFormat("yyyy_MM");
return "order_" + sdf.format(createTime);
}
}
// 使用Sharding-JDBC配置
@Configuration
public class ShardingConfig {
@Bean
public DataSource dataSource() throws SQLException {
// 配置分片规则
ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
// 分表规则
TableRuleConfiguration orderTableRuleConfig = new TableRuleConfiguration(
"t_order", "ds${0..1}.order_${0..15}");
// 分片算法
orderTableRuleConfig.setTableShardingStrategyConfig(
new StandardShardingStrategyConfiguration(
"user_id",
new OrderTableShardingAlgorithm()));
orderTableRuleConfig.setDatabaseShardingStrategyConfig(
new StandardShardingStrategyConfiguration(
"user_id",
new OrderDatabaseShardingAlgorithm()));
shardingRuleConfig.getTableRuleConfigs().add(orderTableRuleConfig);
return ShardingDataSourceFactory.createDataSource(
createDataSourceMap(), shardingRuleConfig, new Properties());
}
// 自定义分表算法
public class OrderTableShardingAlgorithm
implements PreciseShardingAlgorithm<Long> {
@Override
public String doSharding(Collection<String> tableNames,
PreciseShardingValue<Long> shardingValue) {
Long userId = shardingValue.getValue();
int tableIndex = (int) (userId % 16);
for (String tableName : tableNames) {
if (tableName.endsWith("_" + tableIndex)) {
return tableName;
}
}
throw new IllegalArgumentException();
}
}
}
25. 树形结构存储方案
sql
-- 方案1:邻接表(最简单)
CREATE TABLE departments (
id INT PRIMARY KEY,
name VARCHAR(100),
parent_id INT,
FOREIGN KEY (parent_id) REFERENCES departments(id)
);
-- 查询子树(递归)
WITH RECURSIVE sub_departments AS (
SELECT * FROM departments WHERE id = 1 -- 根节点
UNION ALL
SELECT d.* FROM departments d
INNER JOIN sub_departments sd ON d.parent_id = sd.id
)
SELECT * FROM sub_departments;
-- 方案2:路径枚举
CREATE TABLE departments_path (
id INT PRIMARY KEY,
name VARCHAR(100),
path VARCHAR(1000) -- 存储路径如 /1/2/3/
);
-- 查询子树
SELECT * FROM departments_path
WHERE path LIKE '/1/%';
-- 方案3:闭包表
CREATE TABLE departments (
id INT PRIMARY KEY,
name VARCHAR(100)
);
CREATE TABLE department_closure (
ancestor INT,
descendant INT,
depth INT,
PRIMARY KEY (ancestor, descendant),
FOREIGN KEY (ancestor) REFERENCES departments(id),
FOREIGN KEY (descendant) REFERENCES departments(id)
);
-- 查询子树
SELECT d.* FROM departments d
JOIN department_closure dc ON d.id = dc.descendant
WHERE dc.ancestor = 1;
-- 插入新节点
INSERT INTO departments (id, name) VALUES (4, '新部门');
-- 为新节点建立闭包关系
INSERT INTO department_closure (ancestor, descendant, depth)
SELECT ancestor, 4, depth + 1 FROM department_closure
WHERE descendant = 3 -- 父节点
UNION ALL SELECT 4, 4, 0;
26. 库存扣减实现
java
@Service
public class InventoryService {
// 方案1:数据库乐观锁
@Transactional
public boolean deductStockByOptimisticLock(Long productId, Integer quantity) {
Product product = productMapper.selectById(productId);
if (product.getStock() < quantity) {
throw new RuntimeException("库存不足");
}
// 使用版本号控制
int rows = productMapper.updateStockWithVersion(
productId,
product.getStock() - quantity,
product.getVersion());
return rows > 0;
}
// 方案2:数据库悲观锁
@Transactional
public boolean deductStockByPessimisticLock(Long productId, Integer quantity) {
// 使用SELECT FOR UPDATE锁定行
Product product = productMapper.selectForUpdate(productId);
if (product.getStock() < quantity) {
return false;
}
product.setStock(product.getStock() - quantity);
productMapper.updateById(product);
return true;
}
// 方案3:Redis分布式锁 + Lua原子操作
public boolean deductStockByRedis(Long productId, Integer quantity) {
String lockKey = "inventory:lock:" + productId;
String stockKey = "inventory:stock:" + productId;
String requestId = UUID.randomUUID().toString();
try {
// 获取分布式锁
boolean locked = Boolean.TRUE.equals(redisTemplate.opsForValue()
.setIfAbsent(lockKey, requestId, 10, TimeUnit.SECONDS));
if (!locked) {
return false;
}
// 使用Lua脚本保证原子性
String luaScript =
"local stock = tonumber(redis.call('GET', KEYS[1])) " +
"if stock == nil then " +
" return -1 " + // 商品不存在
"end " +
"if stock < tonumber(ARGV[1]) then " +
" return 0 " + // 库存不足
"end " +
"redis.call('DECRBY', KEYS[1], ARGV[1]) " +
"return 1"; // 扣减成功
Long result = redisTemplate.execute(
new DefaultRedisScript<>(luaScript, Long.class),
Arrays.asList(stockKey),
quantity.toString()
);
if (result == 1) {
// 扣减成功,异步同步到数据库
asyncSyncToDatabase(productId, quantity);
return true;
}
return false;
} finally {
// 释放锁
String currentValue = redisTemplate.opsForValue().get(lockKey);
if (requestId.equals(currentValue)) {
redisTemplate.delete(lockKey);
}
}
}
}
27. 覆盖索引与索引下推
sql
-- 创建测试表
CREATE TABLE employees (
id INT PRIMARY KEY,
name VARCHAR(100),
age INT,
department VARCHAR(100),
salary DECIMAL(10, 2),
hire_date DATE,
INDEX idx_name_age_department (name, age, department)
);
-- 覆盖索引示例
-- 原始SQL:需要回表
EXPLAIN SELECT * FROM employees
WHERE name = 'John' AND age > 30;
-- 优化:只查询索引包含的列
EXPLAIN SELECT id, name, age, department
FROM employees
WHERE name = 'John' AND age > 30;
-- 使用覆盖索引,避免回表
-- 索引下推示例(MySQL 5.6+)
-- 原始SQL:没有索引下推时
SELECT * FROM employees
WHERE name LIKE 'J%' AND age > 25;
-- 执行流程(无索引下推):
-- 1. 使用索引找到所有name以'J'开头的记录
-- 2. 回表查询完整数据
-- 3. 过滤age > 25的记录
-- 执行流程(有索引下推):
-- 1. 使用索引找到所有name以'J'开头的记录
-- 2. 在索引层过滤age > 25的记录
-- 3. 只对符合条件的记录回表查询
-- 查看是否使用索引下推
EXPLAIN FORMAT=JSON
SELECT * FROM employees
WHERE name LIKE 'J%' AND age > 25;
-- 在Extra字段查看"Using index condition"
-- 创建适合索引下推的索引
CREATE INDEX idx_name_age ON employees(name, age);
28. 主从延迟解决方案
java
@Service
public class ReadWriteSeparationService {
// 方案1:写后强制读主库
@Transactional
public void updateAndReadConsistent(User user) {
// 写操作
userMapper.updateById(user);
// 强制读主库
User freshUser = readFromMaster(user.getId());
}
@Master // 自定义注解,强制读主库
public User readFromMaster(Long userId) {
return userMapper.selectById(userId);
}
// 方案2:延迟读取
public User readWithDelay(Long userId) {
User user = userMapper.selectById(userId); // 从从库读取
// 检查数据是否最新
if (!isDataFresh(user)) {
// 从主库重新读取
user = readFromMaster(userId);
}
return user;
}
// 方案3:使用中间件判断延迟
public User readWithMiddleware(Long userId) {
// 使用ShardingSphere等中间件
// 配置主从延迟阈值
return userMapper.selectById(userId);
}
// 方案4:基于时间戳判断
public User readWithTimestamp(Long userId, Long lastUpdateTime) {
User user = userMapper.selectById(userId);
// 如果从库数据更新时间早于要求的时间
if (user.getUpdateTime().getTime() < lastUpdateTime) {
// 从主库读取
user = readFromMaster(userId);
}
return user;
}
}
// 使用AOP实现主从路由
@Aspect
@Component
public class MasterSlaveAspect {
@Around("@annotation(master)")
public Object routeToMaster(ProceedingJoinPoint joinPoint, Master master)
throws Throwable {
// 设置数据源为master
DynamicDataSource.setDataSource("master");
try {
return joinPoint.proceed();
} finally {
DynamicDataSource.clearDataSource();
}
}
}
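上面切面里用到的DynamicDataSource没有给出实现,下面是基于Spring的AbstractRoutingDataSource的一个最小示意(类名与方法名按前文假设):
java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class DynamicDataSource extends AbstractRoutingDataSource {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void setDataSource(String key) {
        CONTEXT.set(key); // 例如"master"或"slave",与配置的targetDataSources的key对应
    }

    public static void clearDataSource() {
        CONTEXT.remove();
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // Spring每次获取连接时回调,返回当前线程绑定的数据源key
        return CONTEXT.get();
    }
}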
Redis与缓存(5题)
29. Redis数据类型与应用场景
java
@Service
public class RedisDataTypeExamples {
@Autowired
private RedisTemplate<String, Object> redisTemplate;
// String类型:缓存、计数器
public void stringExamples() {
// 缓存对象
User user = new User(1, "John", "john@example.com");
redisTemplate.opsForValue().set("user:1", user, 30, TimeUnit.MINUTES);
// 计数器
redisTemplate.opsForValue().increment("article:100:views");
redisTemplate.opsForValue().increment("user:login:count", 1);
// 分布式锁
Boolean locked = redisTemplate.opsForValue()
.setIfAbsent("lock:order:1", "request123", 10, TimeUnit.SECONDS);
}
// Hash类型:存储对象
public void hashExamples() {
// 存储用户信息
redisTemplate.opsForHash().putAll("user:2",
Map.of("name", "Alice", "age", "25", "email", "alice@example.com"));
// 获取部分字段
String name = (String) redisTemplate.opsForHash()
.get("user:2", "name");
// 更新单个字段
redisTemplate.opsForHash().put("user:2", "age", "26");
}
// List类型:消息队列、最新列表
public void listExamples() {
// 消息队列(左进右出)
redisTemplate.opsForList().leftPush("queue:tasks", "task1");
String task = (String) redisTemplate.opsForList().rightPop("queue:tasks");
// 最新文章列表
redisTemplate.opsForList().leftPush("articles:latest", "article:100");
redisTemplate.opsForList().leftPush("articles:latest", "article:101");
// 只保留最近10条
redisTemplate.opsForList().trim("articles:latest", 0, 9);
}
// Set类型:标签、共同好友
public void setExamples() {
// 用户标签
redisTemplate.opsForSet().add("user:1:tags", "java", "redis", "spring");
redisTemplate.opsForSet().add("user:2:tags", "java", "python", "docker");
// 共同标签
Set<Object> commonTags = redisTemplate.opsForSet()
.intersect("user:1:tags", "user:2:tags");
// 随机推荐
Object randomTag = redisTemplate.opsForSet()
.randomMember("user:1:tags");
}
// Sorted Set类型:排行榜
public void sortedSetExamples() {
// 游戏排行榜
redisTemplate.opsForZSet().add("game:leaderboard",
"player1", 1000);
redisTemplate.opsForZSet().add("game:leaderboard",
"player2", 1500);
// 获取排名
Long rank = redisTemplate.opsForZSet()
.reverseRank("game:leaderboard", "player1");
// 获取前10名
Set<ZSetOperations.TypedTuple<Object>> top10 = redisTemplate.opsForZSet()
.reverseRangeWithScores("game:leaderboard", 0, 9);
}
}
30. Redis持久化对比
RDB持久化:
bash
# 配置文件示例
save 900 1 # 900秒内至少有1个key被修改
save 300 10 # 300秒内至少有10个key被修改
save 60 10000 # 60秒内至少有10000个key被修改
stop-writes-on-bgsave-error yes # 备份出错停止写操作
rdbcompression yes # 压缩
rdbchecksum yes # 校验和
dbfilename dump.rdb # 文件名
dir ./ # 保存路径
AOF持久化:
bash
# 配置文件示例
appendonly yes # 开启AOF
appendfilename "appendonly.aof" # 文件名
# 同步策略
appendfsync everysec # 每秒同步,性能和数据安全的平衡
# appendfsync always # 每次写都同步,最安全但性能差
# appendfsync no # 由操作系统决定,性能最好但可能丢失数据
auto-aof-rewrite-percentage 100 # 当前AOF文件大小超过上次重写后大小的100%时重写
auto-aof-rewrite-min-size 64mb # AOF文件最小重写大小
# 混合持久化(Redis 4.0+)
aof-use-rdb-preamble yes # 开启混合持久化
对比总结:
| 特性 | RDB | AOF |
|---|---|---|
| 持久化方式 | 快照 | 记录写命令 |
| 文件大小 | 小(二进制) | 大(文本) |
| 恢复速度 | 快 | 慢 |
| 数据安全 | 可能丢失数据 | 相对安全 |
| 性能影响 | 保存时影响性能 | 持续写入影响小 |
| 使用场景 | 备份、灾难恢复 | 数据安全性要求高 |
31. 缓存问题解决方案
java
@Service
public class CacheProblemSolutions {
// 1. 缓存穿透解决方案
public Object getWithBloomFilter(String key) {
// 布隆过滤器判定不存在则一定不存在,直接返回,拦截恶意穿透请求
if (!bloomFilter.mightContain(key)) {
return null;
}
Object value = redisTemplate.opsForValue().get(key);
if (value == null) {
// 缓存未命中,查询数据库
value = loadFromDB(key);
if (value == null) {
// 数据库也没有,缓存空值并设置较短过期时间,避免反复穿透
redisTemplate.opsForValue().set(key, "NULL", 5, TimeUnit.MINUTES);
return null;
}
redisTemplate.opsForValue().set(key, value, 30, TimeUnit.MINUTES);
}
return "NULL".equals(value) ? null : value;
}
// 2. 缓存击穿解决方案
public Object getWithMutexLock(String key) {
Object value = redisTemplate.opsForValue().get(key);
if (value == null) {
// 获取分布式锁
String lockKey = "lock:" + key;
String requestId = UUID.randomUUID().toString();
try {
boolean locked = Boolean.TRUE.equals(redisTemplate.opsForValue()
.setIfAbsent(lockKey, requestId, 10, TimeUnit.SECONDS));
if (locked) {
// 查询数据库
value = loadFromDB(key);
// 写入缓存,设置合理过期时间
redisTemplate.opsForValue()
.set(key, value, 30, TimeUnit.MINUTES);
// 异步更新缓存,延长过期时间
asyncRefreshCache(key);
} else {
// 等待其他线程加载
Thread.sleep(100);
return getWithMutexLock(key); // 重试
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} finally {
// 释放锁
String currentValue = redisTemplate.opsForValue().get(lockKey);
if (requestId.equals(currentValue)) {
redisTemplate.delete(lockKey);
}
}
}
return value;
}
// 3. 缓存雪崩解决方案
public Object getWithRandomExpire(String key) {
// 设置随机过期时间,避免同时过期
Object value = redisTemplate.opsForValue().get(key);
if (value == null) {
value = loadFromDB(key);
// 基础过期时间 + 随机时间
int baseExpire = 1800; // 30分钟
int randomExpire = ThreadLocalRandom.current().nextInt(300); // 0-5分钟
redisTemplate.opsForValue()
.set(key, value, baseExpire + randomExpire, TimeUnit.SECONDS);
}
return value;
}
// 4. 缓存预热
@PostConstruct
public void warmUpCache() {
// 启动时加载热点数据到缓存
List<HotItem> hotItems = loadHotItemsFromDB();
for (HotItem item : hotItems) {
String key = "item:" + item.getId();
redisTemplate.opsForValue()
.set(key, item, 1, TimeUnit.HOURS);
}
}
// 5. 缓存降级
public Object getWithDegradation(String key) {
try {
Object value = redisTemplate.opsForValue().get(key);
if (value == null) {
// 缓存失效时,不直接访问数据库
// 返回降级数据或空值
return getDegradedData();
}
return value;
} catch (Exception e) {
// Redis异常时,直接返回降级数据
return getDegradedData();
}
}
}
32. 分布式Session方案
java
@Component
public class RedisSessionManager {
private static final String SESSION_PREFIX = "session:";
private static final String USER_SESSIONS_PREFIX = "user_sessions:";
private static final int SESSION_TIMEOUT = 30 * 60; // 30分钟
@Autowired
private RedisTemplate<String, Object> redisTemplate;
// 创建Session
public Session createSession(Long userId, UserInfo userInfo) {
String sessionId = generateSessionId();
Session session = new Session();
session.setId(sessionId);
session.setUserId(userId);
session.setUserInfo(userInfo);
session.setCreateTime(System.currentTimeMillis());
session.setLastAccessTime(System.currentTimeMillis());
// 存储Session数据
String sessionKey = SESSION_PREFIX + sessionId;
redisTemplate.opsForHash().putAll(sessionKey,
convertSessionToMap(session));
redisTemplate.expire(sessionKey, SESSION_TIMEOUT, TimeUnit.SECONDS);
// 记录用户的所有Session
String userSessionsKey = USER_SESSIONS_PREFIX + userId;
redisTemplate.opsForSet().add(userSessionsKey, sessionId);
redisTemplate.expire(userSessionsKey, SESSION_TIMEOUT, TimeUnit.SECONDS);
return session;
}
// 获取Session
public Session getSession(String sessionId) {
String sessionKey = SESSION_PREFIX + sessionId;
Map<Object, Object> sessionData = redisTemplate.opsForHash()
.entries(sessionKey);
if (sessionData.isEmpty()) {
return null;
}
// 更新最后访问时间
redisTemplate.opsForHash().put(sessionKey,
"lastAccessTime", System.currentTimeMillis());
// 续期
redisTemplate.expire(sessionKey, SESSION_TIMEOUT, TimeUnit.SECONDS);
return convertMapToSession(sessionData);
}
// 销毁Session
public void invalidateSession(String sessionId) {
String sessionKey = SESSION_PREFIX + sessionId;
Map<Object, Object> sessionData = redisTemplate.opsForHash()
.entries(sessionKey);
if (!sessionData.isEmpty()) {
// 从用户Session集合中移除
Long userId = (Long) sessionData.get("userId");
if (userId != null) {
String userSessionsKey = USER_SESSIONS_PREFIX + userId;
redisTemplate.opsForSet().remove(userSessionsKey, sessionId);
}
}
// 删除Session
redisTemplate.delete(sessionKey);
}
// 强制用户下线(踢出所有Session)
public void forceLogout(Long userId) {
String userSessionsKey = USER_SESSIONS_PREFIX + userId;
Set<Object> sessionIds = redisTemplate.opsForSet()
.members(userSessionsKey);
if (sessionIds != null) {
for (Object sessionId : sessionIds) {
invalidateSession((String) sessionId);
}
}
redisTemplate.delete(userSessionsKey);
}
// Session监听器,处理过期事件
@Component
public class SessionExpirationListener {
@EventListener
public void handleKeyExpiredEvent(KeyExpiredEvent<String> event) {
String expiredKey = event.getKey();
if (expiredKey.startsWith(SESSION_PREFIX)) {
// 从用户Session集合中移除过期的Session
String sessionId = expiredKey.substring(SESSION_PREFIX.length());
// 可以记录日志或发送通知
System.out.println("Session expired: " + sessionId);
}
}
}
}
33. 秒杀库存与限流实现
java
@Component
public class SpikeSystem {
private static final String STOCK_KEY_PREFIX = "spike:stock:";
private static final String ORDER_KEY_PREFIX = "spike:order:";
private static final String RATE_LIMIT_KEY_PREFIX = "spike:rate:";
@Autowired
private RedisTemplate<String, Object> redisTemplate;
@Autowired
private RabbitTemplate rabbitTemplate;
// 预热库存到Redis
public void preheatStock(Long productId, Integer stock) {
String stockKey = STOCK_KEY_PREFIX + productId;
redisTemplate.opsForValue().set(stockKey, stock);
// 设置过期时间,避免长期占用内存
redisTemplate.expire(stockKey, 2, TimeUnit.HOURS);
}
// 秒杀核心逻辑
public SpikeResult spike(Long userId, Long productId) {
// 1. 用户限流
if (!checkUserRateLimit(userId, productId)) {
return SpikeResult.fail("请求过于频繁");
}
// 2. 商品限流
if (!checkProductRateLimit(productId)) {
return SpikeResult.fail("商品太火爆了,请稍后重试");
}
// 3. 验证用户资格
if (hasPurchased(userId, productId)) {
return SpikeResult.fail("您已经购买过该商品");
}
// 4. 原子扣减库存
String stockKey = STOCK_KEY_PREFIX + productId;
Long remaining = redisTemplate.opsForValue().decrement(stockKey);
if (remaining == null || remaining < 0) {
// 库存不足,回滚
if (remaining != null && remaining < 0) {
redisTemplate.opsForValue().increment(stockKey);
}
return SpikeResult.fail("商品已售罄");
}
// 5. 生成订单号
String orderId = generateOrderId();
// 6. 记录购买关系
recordPurchase(userId, productId, orderId);
// 7. 发送订单消息到队列
sendOrderMessage(userId, productId, orderId);
return SpikeResult.success(orderId);
}
// 使用Lua脚本保证原子性
private SpikeResult spikeWithLua(Long userId, Long productId) {
String luaScript =
"local stockKey = KEYS[1] " +
"local orderKey = KEYS[2] " +
"local rateKey = KEYS[3] " +
"local productId = ARGV[1] " +
"local userId = ARGV[2] " +
"local timestamp = ARGV[3] " +
// 1. 检查库存
"local stock = tonumber(redis.call('GET', stockKey)) " +
"if not stock or stock <= 0 then " +
" return '0' " + // 库存不足
"end " +
// 2. 检查用户是否已购买
"local hasPurchased = redis.call('SISMEMBER', orderKey, userId) " +
"if hasPurchased == 1 then " +
" return '1' " + // 已购买
"end " +
// 3. 限流检查(滑动窗口)
"local now = tonumber(timestamp) " +
"local window = 10 " + // 10秒窗口
"local limit = 3 " + // 最多3次
"local clearBefore = now - window * 1000 " +
"redis.call('ZREMRANGEBYSCORE', rateKey, 0, clearBefore) " +
"local requestCount = redis.call('ZCARD', rateKey) " +
"if requestCount >= limit then " +
" return '2' " + // 限流
"end " +
// 4. 扣减库存
"redis.call('DECR', stockKey) " +
// 5. 记录购买
"redis.call('SADD', orderKey, userId) " +
// 6. 记录请求
"redis.call('ZADD', rateKey, now, userId .. ':' .. now) " +
"redis.call('EXPIRE', rateKey, window) " +
"return '3'"; // 成功
String stockKey = STOCK_KEY_PREFIX + productId;
String orderKey = ORDER_KEY_PREFIX + productId;
String rateKey = RATE_LIMIT_KEY_PREFIX + productId;
String result = (String) redisTemplate.execute(
new DefaultRedisScript<>(luaScript, String.class),
Arrays.asList(stockKey, orderKey, rateKey),
productId.toString(),
userId.toString(),
String.valueOf(System.currentTimeMillis())
);
switch (result) {
case "0": return SpikeResult.fail("库存不足");
case "1": return SpikeResult.fail("已购买");
case "2": return SpikeResult.fail("请求频繁");
case "3":
String orderId = generateOrderId();
sendOrderMessage(userId, productId, orderId);
return SpikeResult.success(orderId);
default: return SpikeResult.fail("系统繁忙");
}
}
// 滑动窗口限流
private boolean checkUserRateLimit(Long userId, Long productId) {
String rateKey = RATE_LIMIT_KEY_PREFIX + "user:" + userId + ":" + productId;
long now = System.currentTimeMillis();
long window = 10 * 1000; // 10秒窗口
long limit = 3; // 最多3次
// 移除窗口外的记录
redisTemplate.opsForZSet().removeRangeByScore(
rateKey, 0, now - window);
// 获取当前窗口内的请求数
Long count = redisTemplate.opsForZSet().zCard(rateKey);
if (count != null && count >= limit) {
return false;
}
// 记录本次请求
redisTemplate.opsForZSet().add(
rateKey, String.valueOf(now), now);
redisTemplate.expire(rateKey, window / 1000, TimeUnit.SECONDS);
return true;
}
}
ORM框架(2题)
34. MyBatis #{}和${}区别
xml
<!-- MyBatis映射文件示例 -->
<mapper namespace="com.example.UserMapper">
<!-- 使用#{},预编译,防止SQL注入 -->
<select id="selectUserById" resultType="User">
SELECT * FROM users WHERE id = #{id}
<!-- 编译为:SELECT * FROM users WHERE id = ? -->
<!-- 参数id会被预编译处理 -->
</select>
<!-- 使用${},字符串替换,有SQL注入风险 -->
<select id="selectUsersOrderBy" resultType="User">
SELECT * FROM users ORDER BY ${orderBy}
<!-- 编译为:SELECT * FROM users ORDER BY column_name -->
<!-- 直接替换,不预编译 -->
</select>
<!-- ${}的安全使用场景 -->
<select id="selectByTable" resultType="map">
SELECT * FROM ${tableName}
WHERE status = #{status}
<!-- 动态表名必须使用${} -->
</select>
</mapper>
安全示例:
java
// 危险示例:SQL注入
public List<User> findUsersByName(String name) {
String sql = "SELECT * FROM users WHERE name = '" + name + "'";
// 如果name = "admin' OR '1'='1"
// SQL变成:SELECT * FROM users WHERE name = 'admin' OR '1'='1'
return sqlSession.selectList("findUsers", sql);
}
// 安全示例:使用#{}
@Select("SELECT * FROM users WHERE name = #{name}")
List<User> findUsersByName(@Param("name") String name);
// ${}的正确使用场景:动态表名/列名(注意:tableName必须做白名单校验,防止注入)
@Select("SELECT * FROM ${tableName} WHERE id = #{id}")
User findById(@Param("tableName") String tableName, @Param("id") Long id);
// ${}与#{}结合使用
@Select("SELECT * FROM users ORDER BY ${orderBy} ${orderDir} LIMIT #{limit}")
List<User> findUsersWithOrder(@Param("orderBy") String orderBy,
@Param("orderDir") String orderDir,
@Param("limit") Integer limit);
35. Hibernate缓存机制
java
@Entity
@Table(name = "users")
@Cacheable // 启用二级缓存
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // 缓存策略
public class User {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private String name;
private String email;
@OneToMany(mappedBy = "user", cascade = CascadeType.ALL)
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
private List<Order> orders = new ArrayList<>();
// getters and setters
}
@Service
public class HibernateCacheService {
@Autowired
private SessionFactory sessionFactory;
// 一级缓存示例(Session级别)
public void firstLevelCacheDemo() {
Session session = sessionFactory.openSession();
try {
// 第一次查询,发送SQL
User user1 = session.get(User.class, 1L);
System.out.println("First query: " + user1.getName());
// 第二次查询同一对象,从一级缓存获取,不发送SQL
User user2 = session.get(User.class, 1L);
System.out.println("Second query: " + user2.getName());
// 修改对象,缓存同步更新
user1.setName("Updated Name");
session.update(user1);
// 查询时获取更新后的对象
User user3 = session.get(User.class, 1L);
System.out.println("After update: " + user3.getName());
} finally {
session.close(); // 关闭Session,清空一级缓存
}
}
// 二级缓存示例(SessionFactory级别)
public void secondLevelCacheDemo() {
Session session1 = sessionFactory.openSession();
Session session2 = sessionFactory.openSession();
try {
// Session1查询,发送SQL,存入二级缓存
User user1 = session1.get(User.class, 1L);
System.out.println("Session1 query: " + user1.getName());
// Session2查询同一对象,从二级缓存获取,不发送SQL
User user2 = session2.get(User.class, 1L);
System.out.println("Session2 query: " + user2.getName());
} finally {
session1.close();
session2.close();
}
}
// 查询缓存示例
public void queryCacheDemo() {
Session session = sessionFactory.openSession();
try {
// 第一次查询,发送SQL,结果存入查询缓存
Query<User> query1 = session.createQuery(
"FROM User WHERE name = :name", User.class);
query1.setParameter("name", "John");
query1.setCacheable(true); // 启用查询缓存
List<User> users1 = query1.getResultList();
// 第二次相同查询,从查询缓存获取,不发送SQL
Query<User> query2 = session.createQuery(
"FROM User WHERE name = :name", User.class);
query2.setParameter("name", "John");
query2.setCacheable(true);
List<User> users2 = query2.getResultList();
} finally {
session.close();
}
}
}
// Hibernate配置
@Configuration
public class HibernateConfig {
@Bean
public LocalSessionFactoryBean sessionFactory(DataSource dataSource) {
LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
sessionFactory.setDataSource(dataSource);
sessionFactory.setPackagesToScan("com.example.entity");
Properties hibernateProperties = new Properties();
hibernateProperties.put("hibernate.dialect",
"org.hibernate.dialect.MySQL8Dialect");
// 启用二级缓存
hibernateProperties.put("hibernate.cache.use_second_level_cache", "true");
hibernateProperties.put("hibernate.cache.use_query_cache", "true");
hibernateProperties.put("hibernate.cache.region.factory_class",
"org.hibernate.cache.ehcache.EhCacheRegionFactory");
sessionFactory.setHibernateProperties(hibernateProperties);
return sessionFactory;
}
}
缓存策略:
| 策略 | 描述 | 适用场景 |
|---|---|---|
| READ_ONLY | 只读缓存,不更新 | 从不修改的数据 |
| READ_WRITE | 读写缓存,使用锁 | 经常读偶尔写的数据 |
| NONSTRICT_READ_WRITE | 不严格读写,可能读到旧数据 | 数据更新不频繁 |
| TRANSACTIONAL | 事务缓存,JTA环境 | 需要强一致性的环境 |
三、框架、系统与分布式(15题)
Spring框架(5题)
36. Spring Bean生命周期
java
@Component
public class LifecycleBean implements
BeanNameAware, BeanFactoryAware, ApplicationContextAware,
InitializingBean, DisposableBean {
private String name;
public LifecycleBean() {
System.out.println("1. 构造函数执行");
}
@PostConstruct
public void postConstruct() {
System.out.println("4. @PostConstruct 执行");
}
@Override
public void setBeanName(String name) {
System.out.println("2. BeanNameAware.setBeanName: " + name);
this.name = name;
}
@Override
public void setBeanFactory(BeanFactory beanFactory)
throws BeansException {
System.out.println("3. BeanFactoryAware.setBeanFactory");
}
@Override
public void setApplicationContext(ApplicationContext applicationContext)
throws BeansException {
System.out.println("3. ApplicationContextAware.setApplicationContext");
}
@Override
public void afterPropertiesSet() throws Exception {
System.out.println("5. InitializingBean.afterPropertiesSet");
}
public void initMethod() {
System.out.println("6. 自定义init-method");
}
@PreDestroy
public void preDestroy() {
System.out.println("8. @PreDestroy 执行");
}
@Override
public void destroy() throws Exception {
System.out.println("9. DisposableBean.destroy");
}
public void destroyMethod() {
System.out.println("10. 自定义destroy-method");
}
}
// Bean后置处理器
@Component
public class CustomBeanPostProcessor implements BeanPostProcessor {
@Override
public Object postProcessBeforeInitialization(Object bean,
String beanName) throws BeansException {
if (bean instanceof LifecycleBean) {
System.out.println("BeanPostProcessor.postProcessBeforeInitialization: "
+ beanName);
}
return bean;
}
@Override
public Object postProcessAfterInitialization(Object bean,
String beanName) throws BeansException {
if (bean instanceof LifecycleBean) {
System.out.println("BeanPostProcessor.postProcessAfterInitialization: "
+ beanName);
}
return bean;
}
}
// 完整的Bean生命周期:
// 1. 实例化Bean
// 2. 填充属性
// 3. 调用Aware接口方法
// 4. BeanPostProcessor.postProcessBeforeInitialization
// 5. @PostConstruct
// 6. InitializingBean.afterPropertiesSet
// 7. 自定义init-method
// 8. BeanPostProcessor.postProcessAfterInitialization
// 9. Bean就绪,可以使用
// 10. @PreDestroy
// 11. DisposableBean.destroy
// 12. 自定义destroy-method
37. Spring AOP实现原理
java
// 1. 基于接口的JDK动态代理
public interface UserService {
void addUser(String name);
void deleteUser(Long id);
}
@Service
public class UserServiceImpl implements UserService {
@Override
public void addUser(String name) {
System.out.println("添加用户: " + name);
}
@Override
public void deleteUser(Long id) {
System.out.println("删除用户: " + id);
}
}
// JDK动态代理实现
public class JdkDynamicProxy implements InvocationHandler {
private Object target;
public JdkDynamicProxy(Object target) {
this.target = target;
}
public Object getProxy() {
return Proxy.newProxyInstance(
target.getClass().getClassLoader(),
target.getClass().getInterfaces(),
this
);
}
@Override
public Object invoke(Object proxy, Method method, Object[] args)
throws Throwable {
// 前置增强
System.out.println("Before method: " + method.getName());
// 执行目标方法
Object result = method.invoke(target, args);
// 后置增强
System.out.println("After method: " + method.getName());
return result;
}
}
// 2. 基于继承的CGLIB代理
@Service
public class ProductService {
public void addProduct(String name) {
System.out.println("添加产品: " + name);
}
public final void finalMethod() {
// final方法不能被CGLIB代理
}
}
// CGLIB代理实现
public class CglibProxy implements MethodInterceptor {
private Object target;
public CglibProxy(Object target) {
this.target = target;
}
public Object getProxy() {
Enhancer enhancer = new Enhancer();
enhancer.setSuperclass(target.getClass());
enhancer.setCallback(this);
return enhancer.create();
}
@Override
public Object intercept(Object obj, Method method, Object[] args,
MethodProxy proxy) throws Throwable {
System.out.println("CGLIB Before method: " + method.getName());
Object result = proxy.invokeSuper(obj, args);
System.out.println("CGLIB After method: " + method.getName());
return result;
}
}
// Spring AOP使用
@Aspect
@Component
public class LoggingAspect {
// 前置通知
@Before("execution(* com.example.service.*.*(..))")
public void logBefore(JoinPoint joinPoint) {
System.out.println("方法调用前: " + joinPoint.getSignature().getName());
}
// 后置通知
@AfterReturning(pointcut = "execution(* com.example.service.*.*(..))",
returning = "result")
public void logAfterReturning(JoinPoint joinPoint, Object result) {
System.out.println("方法返回后: " + result);
}
// 环绕通知
@Around("execution(* com.example.service.*.*(..))")
public Object logAround(ProceedingJoinPoint joinPoint) throws Throwable {
System.out.println("环绕前: " + joinPoint.getSignature().getName());
long start = System.currentTimeMillis();
Object result = joinPoint.proceed();
long elapsedTime = System.currentTimeMillis() - start;
System.out.println("环绕后,执行时间: " + elapsedTime + "ms");
return result;
}
// 异常通知
@AfterThrowing(pointcut = "execution(* com.example.service.*.*(..))",
throwing = "error")
public void logAfterThrowing(JoinPoint joinPoint, Throwable error) {
System.out.println("方法异常: " + error.getMessage());
}
}
// AOP配置
@Configuration
@EnableAspectJAutoProxy(proxyTargetClass = true) // true强制使用CGLIB;false时目标有接口则用JDK动态代理
public class AopConfig {
}
38. Spring事务传播行为
java
@Service
public class TransactionService {
@Autowired
private UserRepository userRepository;
@Autowired
private OrderRepository orderRepository;
@Autowired
private LogService logService;
// REQUIRED(默认):如果当前有事务,则加入;否则新建事务
@Transactional(propagation = Propagation.REQUIRED)
public void createUserAndOrder(User user, Order order) {
userRepository.save(user);
createOrder(order); // 调用内部方法,加入同一事务
}
@Transactional(propagation = Propagation.REQUIRED)
public void createOrder(Order order) {
orderRepository.save(order);
// 异常会回滚整个事务
if (order.getAmount() <= 0) {
throw new RuntimeException("金额必须大于0");
}
}
// REQUIRES_NEW:新建事务,挂起当前事务(如果存在)
@Transactional(propagation = Propagation.REQUIRED)
public void processOrder(Long orderId) {
// 主业务逻辑
updateOrderStatus(orderId, "PROCESSING");
try {
// 记录日志,独立事务
logService.auditLog(orderId, "开始处理");
} catch (Exception e) {
// 日志记录失败不影响主业务
System.err.println("日志记录失败: " + e.getMessage());
}
// 继续主业务
completeOrder(orderId);
}
// 注意:该方法应定义在LogService等另一个Bean中,通过代理调用才会开启新事务;
// 同类内部自调用不经过代理,REQUIRES_NEW不会生效
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void auditLog(Long orderId, String action) {
// 独立事务,即使失败也不回滚主业务
logRepository.save(new AuditLog(orderId, action));
// 这里抛异常不会影响processOrder
}
// NESTED:嵌套事务,Savepoint机制
@Transactional(propagation = Propagation.REQUIRED)
public void batchProcessOrders(List<Long> orderIds) {
for (Long orderId : orderIds) {
try {
// 嵌套事务,失败时回滚到Savepoint
// (同样注意:需通过代理对象调用传播行为才会生效,此处仅为示意)
processSingleOrder(orderId);
} catch (Exception e) {
// 单个订单处理失败,不影响其他订单
System.err.println("订单处理失败: " + orderId);
}
}
}
@Transactional(propagation = Propagation.NESTED)
public void processSingleOrder(Long orderId) {
// 嵌套事务,失败只回滚自己
updateOrderStatus(orderId, "PROCESSING");
if (checkInventory(orderId)) {
completeOrder(orderId);
} else {
throw new RuntimeException("库存不足");
}
}
// SUPPORTS:支持当前事务,如果没有则以非事务执行
@Transactional(propagation = Propagation.SUPPORTS)
public User getUser(Long id) {
// 如果调用方有事务,则加入;否则非事务执行
return userRepository.findById(id).orElse(null);
}
// NOT_SUPPORTED:非事务执行,挂起当前事务(如果存在)
@Transactional(propagation = Propagation.NOT_SUPPORTED)
public void generateReport() {
// 复杂的报表生成,不需要事务
// 也不受当前事务影响
}
// NEVER:非事务执行,如果当前有事务则抛出异常
@Transactional(propagation = Propagation.NEVER)
public void validateData() {
// 数据验证,不应该在事务中执行
}
// MANDATORY:必须存在事务,否则抛出异常
@Transactional(propagation = Propagation.MANDATORY)
public void updateCriticalData(Long id) {
// 关键数据更新,必须在事务中执行
userRepository.updateCriticalField(id, "newValue");
}
}
// 事务隔离级别
@Service
public class IsolationService {
@Transactional(isolation = Isolation.READ_COMMITTED)
public void readCommittedOperation() {
// 读取已提交的数据
// 避免脏读,可能出现不可重复读
}
@Transactional(isolation = Isolation.REPEATABLE_READ)
public void repeatableReadOperation() {
// 可重复读
// 避免脏读和不可重复读,可能出现幻读
}
@Transactional(isolation = Isolation.SERIALIZABLE)
public void serializableOperation() {
// 序列化
// 避免所有并发问题,性能最差
}
}
39. 循环依赖解决方案
java
// 循环依赖示例
@Service
public class ServiceA {
@Autowired
private ServiceB serviceB; // 依赖B
public void methodA() {
System.out.println("ServiceA.methodA");
serviceB.methodB();
}
}
@Service
public class ServiceB {
@Autowired
private ServiceA serviceA; // 依赖A,形成循环依赖
public void methodB() {
System.out.println("ServiceB.methodB");
serviceA.methodA();
}
}
// Spring三级缓存解决循环依赖
public class DefaultSingletonBeanRegistry {
// 一级缓存:存放完全初始化好的Bean
private final Map<String, Object> singletonObjects =
new ConcurrentHashMap<>(256);
// 二级缓存:存放早期Bean(未填充属性)
private final Map<String, Object> earlySingletonObjects =
new ConcurrentHashMap<>(16);
// 三级缓存:存放Bean工厂,用于创建早期引用
private final Map<String, ObjectFactory<?>> singletonFactories =
new ConcurrentHashMap<>(16);
// 正在创建中的Bean
private final Set<String> singletonsCurrentlyInCreation =
Collections.newSetFromMap(new ConcurrentHashMap<>(16));
protected Object getSingleton(String beanName, boolean allowEarlyReference) {
// 1. 从一级缓存获取
Object singletonObject = this.singletonObjects.get(beanName);
if (singletonObject == null && isSingletonCurrentlyInCreation(beanName)) {
synchronized (this.singletonObjects) {
// 2. 从二级缓存获取
singletonObject = this.earlySingletonObjects.get(beanName);
if (singletonObject == null && allowEarlyReference) {
// 3. 从三级缓存获取Bean工厂
ObjectFactory<?> singletonFactory =
this.singletonFactories.get(beanName);
if (singletonFactory != null) {
// 创建早期引用
singletonObject = singletonFactory.getObject();
// 放入二级缓存
this.earlySingletonObjects.put(beanName, singletonObject);
// 从三级缓存移除
this.singletonFactories.remove(beanName);
}
}
}
}
return singletonObject;
}
}
// 解决循环依赖的过程
// 1. 创建ServiceA
// 2. 实例化ServiceA(构造函数)
// 3. 将ServiceA的ObjectFactory放入三级缓存
// 4. 填充ServiceA属性(需要ServiceB)
// 5. 创建ServiceB
// 6. 实例化ServiceB
// 7. 将ServiceB的ObjectFactory放入三级缓存
// 8. 填充ServiceB属性(需要ServiceA)
// 9. 从三级缓存获取ServiceA的早期引用
// 10. ServiceB初始化完成,放入一级缓存
// 11. ServiceA继续初始化
// 12. ServiceA初始化完成,放入一级缓存
// 无法解决的循环依赖场景
@Service
public class ConstructorServiceA {
private final ConstructorServiceB serviceB;
@Autowired
public ConstructorServiceA(ConstructorServiceB serviceB) {
this.serviceB = serviceB; // 构造器注入的循环依赖无法解决
}
}
@Service
public class ConstructorServiceB {
private final ConstructorServiceA serviceA;
@Autowired
public ConstructorServiceB(ConstructorServiceA serviceA) {
this.serviceA = serviceA;
}
}
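// 补充示意:构造器注入的循环依赖可以通过@Lazy缓解,
// 被@Lazy标注的一方注入的是延迟解析的代理,首次调用时才真正获取目标Bean
// (LazyServiceA/LazyServiceB为示例假设的类名)
@Service
public class LazyServiceA {
private final LazyServiceB serviceB;
public LazyServiceA(@Lazy LazyServiceB serviceB) {
this.serviceB = serviceB;
}
}
@Service
public class LazyServiceB {
private final LazyServiceA serviceA;
public LazyServiceB(LazyServiceA serviceA) {
this.serviceA = serviceA;
}
}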
40. 自定义注解与AOP集成
java
// 自定义注解
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface TimeLog {
String value() default "";
boolean logArgs() default false;
boolean logResult() default false;
}
// 业务方法使用注解
@Service
public class BusinessService {
@TimeLog(value = "用户查询", logArgs = true, logResult = false)
public User getUserById(Long id) {
// 模拟耗时操作
try {
Thread.sleep(100);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
return new User(id, "用户" + id);
}
@TimeLog(value = "订单创建", logArgs = true, logResult = true)
public Order createOrder(OrderRequest request) {
try {
Thread.sleep(200);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
return new Order(UUID.randomUUID().toString(), request.getAmount());
}
}
// AOP切面实现
@Aspect
@Component
@Slf4j
public class TimeLogAspect {
private static final ThreadLocal<Long> startTimeThreadLocal =
new ThreadLocal<>();
// 切点:所有被@TimeLog注解的方法
@Pointcut("@annotation(com.example.annotation.TimeLog)")
public void timeLogPointcut() {}
// 环绕通知
@Around("timeLogPointcut()")
public Object logExecutionTime(ProceedingJoinPoint joinPoint)
throws Throwable {
// 获取注解信息
MethodSignature signature = (MethodSignature) joinPoint.getSignature();
Method method = signature.getMethod();
TimeLog timeLog = method.getAnnotation(TimeLog.class);
// 记录开始时间
long startTime = System.currentTimeMillis();
startTimeThreadLocal.set(startTime);
// 记录方法入参(如果配置)
if (timeLog.logArgs()) {
Object[] args = joinPoint.getArgs();
String methodName = joinPoint.getSignature().getName();
log.info("方法 {} 调用,参数: {}", methodName, Arrays.toString(args));
}
// 执行目标方法
Object result;
try {
result = joinPoint.proceed();
} catch (Throwable throwable) {
// 记录异常
long executionTime = System.currentTimeMillis() - startTime;
log.error("方法 {} 执行异常,耗时: {}ms,异常: {}",
timeLog.value(), executionTime, throwable.getMessage());
// 异常路径同样清理ThreadLocal,避免线程池线程复用带来脏数据
startTimeThreadLocal.remove();
throw throwable;
}
// 记录结束时间和执行耗时
long endTime = System.currentTimeMillis();
long executionTime = endTime - startTime;
// 记录方法结果(如果配置)
if (timeLog.logResult()) {
log.info("方法 {} 执行完成,耗时: {}ms,结果: {}",
timeLog.value(), executionTime, result);
} else {
log.info("方法 {} 执行完成,耗时: {}ms",
timeLog.value(), executionTime);
}
// 清理ThreadLocal
startTimeThreadLocal.remove();
return result;
}
// 前置通知:记录调用链
@Before("timeLogPointcut()")
public void logBefore(JoinPoint joinPoint) {
MethodSignature signature = (MethodSignature) joinPoint.getSignature();
TimeLog timeLog = signature.getMethod().getAnnotation(TimeLog.class);
log.debug("开始执行方法: {}", timeLog.value());
}
// 后置通知:统计成功率等
@AfterReturning(pointcut = "timeLogPointcut()", returning = "result")
public void logAfterReturning(JoinPoint joinPoint, Object result) {
// 可以在这里记录成功次数等指标
log.debug("方法执行成功");
}
// 异常通知:记录失败指标
@AfterThrowing(pointcut = "timeLogPointcut()", throwing = "error")
public void logAfterThrowing(JoinPoint joinPoint, Throwable error) {
log.error("方法执行失败: {}", error.getMessage());
}
}
// 监控指标收集(基于Micrometer;@Around通知必须声明在@Aspect类中才会生效)
@Aspect
@Component
public class MetricsCollector {
private final MeterRegistry meterRegistry;
public MetricsCollector(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
}
@Around("@annotation(timeLog)")
public Object collectMetrics(ProceedingJoinPoint joinPoint, TimeLog timeLog)
throws Throwable {
String methodName = joinPoint.getSignature().getName();
Timer.Sample sample = Timer.start(meterRegistry);
try {
Object result = joinPoint.proceed();
// 记录成功
sample.stop(Timer.builder("method.execution.time")
.tag("method", methodName)
.tag("status", "success")
.register(meterRegistry));
return result;
} catch (Exception e) {
// 记录失败
sample.stop(Timer.builder("method.execution.time")
.tag("method", methodName)
.tag("status", "error")
.register(meterRegistry));
throw e;
}
}
}
// 配置AOP自动代理
@Configuration
@EnableAspectJAutoProxy
@ComponentScan(basePackages = "com.example")
public class AppConfig {
// TimeLogAspect 与 MetricsCollector 已标注 @Component,由 @ComponentScan 扫描注册即可;
// 若再通过 @Bean 重复声明,会产生两个切面实例,导致同一方法被重复增强
}
系统设计(5题)
41. 接口幂等性
java
// 幂等性实现方案(为便于对照,把接口方法与服务逻辑写在同一个类中;实际工程应拆分为Controller与Service)
@Service
public class IdempotentService {
@Autowired
private RedisTemplate<String, String> redisTemplate;
// 方案1:Token机制(防止重复提交)
public String generateToken() {
String token = UUID.randomUUID().toString();
// 存储token,设置较短过期时间
redisTemplate.opsForValue().set(
"token:" + token, "1", 5, TimeUnit.MINUTES);
return token;
}
@PostMapping("/createOrder")
public Response createOrder(@RequestHeader("X-Token") String token,
@RequestBody OrderRequest request) {
// 校验并消费token:直接以delete的返回值判断,避免"先查再删"在并发下被两个请求同时通过
String key = "token:" + token;
Boolean deleted = redisTemplate.delete(key);
if (!Boolean.TRUE.equals(deleted)) {
return Response.error("重复提交或token已过期");
}
// 处理业务逻辑
return orderService.createOrder(request);
}
// 方案2:唯一索引(数据库层面)
@PostMapping("/createUser")
public Response createUser(@RequestBody UserRequest request) {
// 使用业务唯一标识(如手机号)作为幂等键
try {
userService.createUser(request);
return Response.success();
} catch (DuplicateKeyException e) {
// 捕获唯一索引冲突
return Response.error("用户已存在");
}
}
// 方案3:状态机(防止重复状态变更)
@Transactional
public Response updateOrderStatus(Long orderId, String newStatus) {
Order order = orderRepository.findById(orderId)
.orElseThrow(() -> new RuntimeException("订单不存在"));
// 检查状态流转是否合法
if (!canTransition(order.getStatus(), newStatus)) {
return Response.error("状态变更不合法");
}
// 使用乐观锁防止并发更新
int rows = orderRepository.updateStatus(
orderId, order.getStatus(), newStatus, order.getVersion());
if (rows == 0) {
// 更新失败,可能是并发修改
return Response.error("请重试");
}
return Response.success();
}
// 方案4:分布式锁 + 请求ID
public Response processWithRequestId(String requestId,
BusinessRequest request) {
// 使用请求ID作为幂等键
String lockKey = "req:" + requestId;
// 尝试设置锁,如果已存在则说明已处理
Boolean success = redisTemplate.opsForValue()
.setIfAbsent(lockKey, "processing", 1, TimeUnit.HOURS);
if (Boolean.FALSE.equals(success)) {
// 已处理过,返回之前的结果(或直接返回成功)
String result = redisTemplate.opsForValue()
.get("result:" + requestId);
return result != null ?
Response.success(result) : Response.error("请求处理中");
}
try {
// 执行业务逻辑
String result = businessService.process(request);
// 存储结果
redisTemplate.opsForValue().set(
"result:" + requestId, result, 24, TimeUnit.HOURS);
return Response.success(result);
} finally {
// 释放锁(或设置完成状态)
redisTemplate.opsForValue().set(
lockKey, "completed", 23, TimeUnit.HOURS);
}
}
// 方案5:消息队列幂等消费
@RabbitListener(queues = "order.create")
public void handleOrderCreate(OrderMessage message,
Channel channel,
// basicAck/basicNack会抛出受检的IOException,方法需声明throws
@Header(AmqpHeaders.DELIVERY_TAG) long tag) throws IOException {
String messageId = message.getMessageId();
// 检查是否已处理
if (isMessageProcessed(messageId)) {
// 已处理,直接ACK
channel.basicAck(tag, false);
return;
}
try {
// 处理订单创建
orderService.createOrder(message);
// 记录已处理的消息ID
markMessageAsProcessed(messageId);
// 确认消息
channel.basicAck(tag, false);
} catch (Exception e) {
// 处理失败,重试或进入死信队列
channel.basicNack(tag, false, true);
}
}
private boolean isMessageProcessed(String messageId) {
return redisTemplate.opsForValue()
.get("msg:" + messageId) != null;
}
private void markMessageAsProcessed(String messageId) {
redisTemplate.opsForValue().set(
"msg:" + messageId, "1", 7, TimeUnit.DAYS);
}
}
// 幂等注解
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Idempotent {
String key() default ""; // 幂等键表达式
long expire() default 3600; // 过期时间(秒)
String message() default "请勿重复操作";
}
// 幂等切面
@Aspect
@Component
public class IdempotentAspect {
@Autowired
private RedisTemplate<String, String> redisTemplate;
@Around("@annotation(idempotent)")
public Object around(ProceedingJoinPoint joinPoint, Idempotent idempotent)
throws Throwable {
// 解析幂等键
String key = resolveKey(idempotent.key(), joinPoint);
// 尝试获取锁
Boolean success = redisTemplate.opsForValue()
.setIfAbsent(key, "1", idempotent.expire(), TimeUnit.SECONDS);
if (Boolean.FALSE.equals(success)) {
throw new RuntimeException(idempotent.message());
}
try {
return joinPoint.proceed();
} finally {
// 可选:业务完成后删除key,或者等待自动过期
// redisTemplate.delete(key);
}
}
private String resolveKey(String keyExpression, JoinPoint joinPoint) {
// 实际应使用SpEL解析表达式
// 例如: #request.userId + ':' + #request.orderNo
// 此处仅为占位简化,直接取表达式hash;完整的SpEL解析见下方补充示意
return "idempotent:" + keyExpression.hashCode();
}
}
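// resolveKey 的一个可行实现示意:用Spring的SpEL从方法参数中解析幂等键
// (SpelKeyResolver为假设的辅助类名,表达式写法如 #request.userId + ':' + #request.orderNo)
public class SpelKeyResolver {
private final ExpressionParser parser = new SpelExpressionParser();
private final ParameterNameDiscoverer discoverer = new DefaultParameterNameDiscoverer();
public String resolve(String expression, ProceedingJoinPoint joinPoint) {
MethodSignature signature = (MethodSignature) joinPoint.getSignature();
String[] paramNames = discoverer.getParameterNames(signature.getMethod());
Object[] args = joinPoint.getArgs();
StandardEvaluationContext context = new StandardEvaluationContext();
if (paramNames != null) {
for (int i = 0; i < paramNames.length; i++) {
// 以 #参数名 的形式暴露给表达式
context.setVariable(paramNames[i], args[i]);
}
}
Object value = parser.parseExpression(expression).getValue(context);
return "idempotent:" + value;
}
}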
42. 秒杀系统详细设计
java
// 秒杀系统架构设计
@Component
public class SpikeSystemArchitecture {
// 1. 流量削峰层
@Service
public class TrafficShapingService {
// 入口限流
public boolean isRequestAllowed(String clientIp) {
String key = "rate:ip:" + clientIp;
Long count = redisTemplate.opsForValue()
.increment(key, 1);
if (count != null && count == 1) {
redisTemplate.expire(key, 1, TimeUnit.SECONDS);
}
return count != null && count <= 100; // 每秒100请求
}
// 排队机制
public String enqueue(Long userId, Long productId) {
String queueId = UUID.randomUUID().toString();
// 加入队列
redisTemplate.opsForList()
.leftPush("spike:queue:" + productId,
userId + ":" + queueId);
// 返回排队号
return queueId;
}
// 获取排队位置
public Long getQueuePosition(String queueId, Long productId) {
List<Object> queue = redisTemplate.opsForList()
.range("spike:queue:" + productId, 0, -1);
if (queue != null) {
for (int i = 0; i < queue.size(); i++) {
if (queue.get(i).toString().contains(queueId)) {
return (long) i;
}
}
}
return -1L;
}
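// 补充示意:上面isRequestAllowed的INCR与EXPIRE是两步操作,并非原子,
// 极端情况下可能留下永不过期的计数键;可用Lua脚本合并为一次原子操作
// (阈值100次/秒与窗口1秒沿用上文示例参数)
public boolean isRequestAllowedAtomic(String clientIp) {
String luaScript =
"local current = redis.call('INCR', KEYS[1]) " +
"if current == 1 then " +
" redis.call('EXPIRE', KEYS[1], ARGV[1]) " +
"end " +
"return current";
Long count = redisTemplate.execute(
new DefaultRedisScript<>(luaScript, Long.class),
Collections.singletonList("rate:ip:" + clientIp),
"1");
return count != null && count <= 100;
}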
}
// 2. 库存服务层
@Service
public class InventoryService {
// 预扣库存(Redis原子操作)
public boolean preDeductStock(Long productId, Integer quantity) {
String stockKey = "spike:stock:" + productId;
// 使用Lua脚本保证原子性
String luaScript =
"local stock = tonumber(redis.call('GET', KEYS[1])) " +
"if not stock or stock < tonumber(ARGV[1]) then " +
" return 0 " +
"end " +
"redis.call('DECRBY', KEYS[1], ARGV[1]) " +
"return 1";
Long result = redisTemplate.execute(
new DefaultRedisScript<>(luaScript, Long.class),
Collections.singletonList(stockKey),
quantity.toString()
);
return result != null && result == 1;
}
// 库存回滚
public void rollbackStock(Long productId, Integer quantity) {
String stockKey = "spike:stock:" + productId;
redisTemplate.opsForValue().increment(stockKey, quantity);
}
// 库存同步到数据库
@Async
public void syncStockToDB(Long productId) {
String stockKey = "spike:stock:" + productId;
Object stock = redisTemplate.opsForValue().get(stockKey);
if (stock != null) {
// 库存在Redis中以数字字符串存储(便于Lua脚本tonumber),回写数据库前解析为整数
productRepository.updateStock(productId, Integer.parseInt(stock.toString()));
}
}
}
// 3. 订单服务层
@Service
public class OrderService {
// 生成订单(异步)
@Async
public void createOrderAsync(SpikeOrder order) {
// 校验库存
if (!inventoryService.checkStock(order.getProductId())) {
throw new RuntimeException("库存不足");
}
// 创建订单
Order orderEntity = convertToOrderEntity(order);
orderRepository.save(orderEntity);
// 扣减数据库库存
productRepository.decrementStock(order.getProductId(), 1);
// 发送订单创建事件
eventPublisher.publishEvent(new OrderCreatedEvent(orderEntity));
}
// 订单状态补偿
@Scheduled(fixedDelay = 60000) // 每分钟执行
public void compensateOrders() {
// 查找超时未支付的订单
List<Order> timeoutOrders = orderRepository
.findTimeoutOrders(OrderStatus.CREATED, 30);
for (Order order : timeoutOrders) {
// 取消订单
order.setStatus(OrderStatus.CANCELLED);
orderRepository.save(order);
// 恢复库存
inventoryService.rollbackStock(order.getProductId(), 1);
}
}
}
// 4. 防刷服务层
@Service
public class AntiBrushService {
// 用户购买次数限制
public boolean checkPurchaseLimit(Long userId, Long productId) {
String key = "spike:limit:" + productId + ":" + userId;
Long count = redisTemplate.opsForValue().increment(key, 1);
if (count != null && count == 1) {
redisTemplate.expire(key, 24, TimeUnit.HOURS);
}
return count != null && count <= 1; // 每人限购1件
}
// IP限制
public boolean checkIpLimit(String ip, Long productId) {
String key = "spike:ip:" + productId + ":" + ip;
Long count = redisTemplate.opsForValue().increment(key, 1);
if (count != null && count == 1) {
redisTemplate.expire(key, 1, TimeUnit.HOURS);
}
return count != null && count <= 10; // 每个IP限购10件
}
// 设备指纹:同一设备只允许参与一次,用setIfAbsent把"检查+占位"合并为原子操作
public boolean checkDeviceFingerprint(String fingerprint,
Long productId) {
String key = "spike:device:" + productId + ":" + fingerprint;
Boolean first = redisTemplate.opsForValue()
.setIfAbsent(key, "1", 24, TimeUnit.HOURS);
return Boolean.TRUE.equals(first);
}
}
// 5. 降级熔断层
@Service
@Slf4j
public class CircuitBreakerService {
private final Map<String, CircuitBreaker> breakers =
new ConcurrentHashMap<>();
// 执行受保护的方法
public <T> T executeProtected(String serviceName,
Supplier<T> supplier,
Supplier<T> fallback) {
CircuitBreaker breaker = breakers.computeIfAbsent(
serviceName, k -> new CircuitBreaker());
if (breaker.isOpen()) {
log.warn("断路器已打开,执行降级逻辑: {}", serviceName);
return fallback.get();
}
try {
T result = supplier.get();
breaker.recordSuccess();
return result;
} catch (Exception e) {
breaker.recordFailure();
log.error("服务调用失败: {}", serviceName, e);
if (breaker.isOpen()) {
return fallback.get();
}
throw e;
}
}
// 简单的断路器实现
private static class CircuitBreaker {
private static final int FAILURE_THRESHOLD = 5;
private static final long TIMEOUT = 30000; // 30秒
private int failureCount = 0;
private long lastFailureTime = 0;
private State state = State.CLOSED;
enum State { CLOSED, OPEN, HALF_OPEN }
synchronized void recordSuccess() {
failureCount = 0;
state = State.CLOSED;
}
synchronized void recordFailure() {
failureCount++;
lastFailureTime = System.currentTimeMillis();
if (failureCount >= FAILURE_THRESHOLD) {
state = State.OPEN;
}
}
synchronized boolean isOpen() {
if (state == State.OPEN) {
// 检查是否应该进入半开状态
if (System.currentTimeMillis() - lastFailureTime > TIMEOUT) {
state = State.HALF_OPEN;
return false;
}
return true;
}
return false;
}
}
// 降级方法示例
public SpikeResult fallbackSpike(Long userId, Long productId) {
return SpikeResult.fail("系统繁忙,请稍后重试");
}
}
// 6. 监控告警层
@Service
public class MonitorService {
// 关键指标监控
public void monitorSpikeMetrics(Long productId) {
// QPS监控
Long qps = getCurrentQPS(productId);
if (qps > 10000) {
sendAlert("QPS过高: " + qps);
}
// 成功率监控
Double successRate = getSuccessRate(productId);
if (successRate < 0.8) {
sendAlert("成功率过低: " + successRate);
}
// 库存监控
Integer stock = getRemainingStock(productId);
if (stock < 100) {
sendAlert("库存不足: " + stock);
}
}
// 实时大屏数据
public DashboardData getDashboardData(Long productId) {
DashboardData data = new DashboardData();
data.setTotalRequests(getTotalRequests(productId));
data.setSuccessOrders(getSuccessOrders(productId));
data.setCurrentQPS(getCurrentQPS(productId));
data.setRemainingStock(getRemainingStock(productId));
data.setAvgResponseTime(getAvgResponseTime(productId));
return data;
}
}
}
// 秒杀入口接口
@RestController
@RequestMapping("/spike")
public class SpikeController {
@Autowired
private TrafficShapingService trafficShapingService;
@Autowired
private InventoryService inventoryService;
@Autowired
private AntiBrushService antiBrushService;
@Autowired
private CircuitBreakerService circuitBreakerService;
@Autowired
private OrderService orderService;
@PostMapping("/{productId}")
public Response spike(@PathVariable Long productId,
@RequestHeader("X-User-Id") Long userId,
@RequestHeader("X-Client-Ip") String clientIp) {
// 1. 流量削峰(入口限流)
if (!trafficShapingService.isRequestAllowed(clientIp)) {
return Response.error("请求过于频繁");
}
// 2. 防刷校验
if (!antiBrushService.checkPurchaseLimit(userId, productId)) {
return Response.error("您已购买过该商品");
}
if (!antiBrushService.checkIpLimit(clientIp, productId)) {
return Response.error("IP限制");
}
// 3. 使用断路器保护核心逻辑
return circuitBreakerService.executeProtected(
"spike-core",
() -> {
// 4. 预扣库存
if (!inventoryService.preDeductStock(productId, 1)) {
return Response.error("库存不足");
}
try {
// 5. 创建订单
SpikeOrder order = new SpikeOrder();
order.setUserId(userId);
order.setProductId(productId);
order.setOrderId(generateOrderId());
orderService.createOrderAsync(order);
return Response.success(order.getOrderId());
} catch (Exception e) {
// 6. 异常时回滚库存
inventoryService.rollbackStock(productId, 1);
throw e;
}
},
() -> Response.error("系统繁忙,请稍后重试")
);
}
// 秒杀结果查询
@GetMapping("/result/{orderId}")
public Response getSpikeResult(@PathVariable String orderId) {
Order order = orderRepository.findByOrderId(orderId);
if (order == null) {
return Response.error("订单不存在");
}
SpikeResult result = new SpikeResult();
result.setOrderId(orderId);
result.setStatus(order.getStatus());
result.setCreateTime(order.getCreateTime());
return Response.success(result);
}
}
// 数据预热脚本
@Component
public class DataWarmUp {
@PostConstruct
public void warmUp() {
// 预热商品库存到Redis
List<Product> spikeProducts = productRepository.findSpikeProducts();
for (Product product : spikeProducts) {
// 库存值与过期时间一次写入:以数字字符串存储便于Lua脚本tonumber,
// 同时避免先set后expire之间的空档以及活动结束后残留键
redisTemplate.opsForValue().set(
"spike:stock:" + product.getId(),
String.valueOf(product.getStock()),
2, TimeUnit.HOURS);
}
// 预热静态资源
warmUpStaticResources();
// 预热数据库连接池
warmUpConnectionPool();
}
}
43. 短URL系统设计
java
// 短URL系统核心组件
@Component
public class ShortUrlSystem {
// 1. 短码生成器
@Service
public class ShortCodeGenerator {
private static final String ALPHABET =
"0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
private static final int BASE = ALPHABET.length();
// 方法1:自增ID转62进制
public String encodeFromId(Long id) {
if (id == null || id <= 0) {
// 0或非法值直接返回首字符,避免返回空串
return String.valueOf(ALPHABET.charAt(0));
}
StringBuilder sb = new StringBuilder();
while (id > 0) {
sb.append(ALPHABET.charAt((int) (id % BASE)));
id /= BASE;
}
return sb.reverse().toString();
}
// 方法2:哈希算法(MD5取前6位)
public String encodeFromUrl(String longUrl) {
try {
MessageDigest md = MessageDigest.getInstance("MD5");
byte[] digest = md.digest(longUrl.getBytes());
// 取前6个字节转62进制
long hash = 0;
for (int i = 0; i < 6; i++) {
hash = (hash << 8) | (digest[i] & 0xFF);
}
return encodeFromId(hash);
} catch (NoSuchAlgorithmException e) {
throw new RuntimeException(e);
}
}
// 方法3:雪花算法生成ID
public String generateBySnowflake() {
SnowflakeIdWorker idWorker = new SnowflakeIdWorker(0, 0);
long id = idWorker.nextId();
return encodeFromId(id);
}
// 冲突检测与重试
public String generateUniqueCode(String longUrl) {
String shortCode;
int retryCount = 0;
do {
if (retryCount == 0) {
// 第一次使用URL哈希
shortCode = encodeFromUrl(longUrl);
} else {
// 冲突时添加随机盐
shortCode = encodeFromUrl(longUrl + System.currentTimeMillis());
}
retryCount++;
// 防止无限重试
if (retryCount > 10) {
throw new RuntimeException("生成短码失败");
}
} while (isCodeExists(shortCode));
return shortCode;
}
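// 补充示意:62进制解码,可把短码还原为自增ID(与encodeFromId互逆)
public long decodeToId(String shortCode) {
long id = 0;
for (char c : shortCode.toCharArray()) {
id = id * BASE + ALPHABET.indexOf(c);
}
return id;
}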
}
// 2. 存储服务
@Service
public class StorageService {
// MySQL存储结构
@Entity
@Table(name = "short_urls")
@Data
public class ShortUrl {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@Column(name = "short_code", unique = true)
private String shortCode;
@Column(name = "long_url", length = 2000)
private String longUrl;
@Column(name = "create_time")
private Date createTime;
@Column(name = "expire_time")
private Date expireTime;
@Column(name = "click_count")
private Integer clickCount = 0;
@Column(name = "user_id")
private Long userId;
@Column(name = "status")
private Integer status = 1; // 1:有效, 0:无效
}
// Redis缓存设计
public void cacheUrlMapping(String shortCode, String longUrl) {
// 缓存短码到长URL的映射
redisTemplate.opsForValue().set(
"short:url:" + shortCode,
longUrl,
24, TimeUnit.HOURS);
// 缓存长URL到短码的映射(用于去重)
redisTemplate.opsForValue().set(
"long:url:" + longUrl.hashCode(),
shortCode,
24, TimeUnit.HOURS);
}
// 分库分表策略
public String getTableName(String shortCode) {
// 根据短码hash分表
int hash = Math.abs(shortCode.hashCode());
int tableIndex = hash % 64;
return "short_urls_" + tableIndex;
}
}
// 3. 重定向服务
@Service
public class RedirectService {
// 301永久重定向 vs 302临时重定向
public HttpStatus getRedirectType(String shortCode) {
// 根据业务需求决定
// 301:浏览器/搜索引擎会缓存跳转,后续请求可能不再经过短链服务,利于SEO但不利于统计
// 302:每次都会回源短链服务,便于统计点击;本示例需要统计点击量,因此返回302
return HttpStatus.FOUND;
}
// 重定向处理
@GetMapping("/{shortCode}")
public void redirect(@PathVariable String shortCode,
HttpServletResponse response) {
// 从缓存获取
String longUrl = getFromCache(shortCode);
if (longUrl == null) {
// 从数据库获取
longUrl = getFromDatabase(shortCode);
if (longUrl == null) {
response.setStatus(404);
return;
}
// 回填缓存
cacheUrl(shortCode, longUrl);
}
// 统计点击
incrementClickCount(shortCode);
// 记录访问日志
logAccess(shortCode);
// 重定向
response.setStatus(getRedirectType(shortCode).value());
response.setHeader("Location", longUrl);
}
// 高频访问优化
public String getFromCache(String shortCode) {
// 一级缓存:本地缓存(热点数据)
String url = localCache.get(shortCode);
if (url != null) {
return url;
}
// 二级缓存:Redis
url = redisTemplate.opsForValue().get("short:url:" + shortCode);
if (url != null) {
// 回填本地缓存
localCache.put(shortCode, url);
}
return url;
}
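// localCache 的一个定义示意:这里用ConcurrentHashMap承载进程内热点映射;
// 生产环境建议换成带容量上限和过期策略的Caffeine(见下文PerformanceOptimization的配置)
private final Map<String, String> localCache = new ConcurrentHashMap<>();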
}
// 4. 统计服务
@Service
public class AnalyticsService {
// 实时统计点击量
public void incrementClickCount(String shortCode) {
// 使用Redis计数器
String key = "stats:click:" + shortCode;
redisTemplate.opsForValue().increment(key);
// 异步同步到数据库
asyncUpdateClickCount(shortCode);
}
// 访问日志记录
public void logAccess(String shortCode) {
AccessLog log = new AccessLog();
log.setShortCode(shortCode);
log.setAccessTime(new Date());
log.setIp(getClientIp());
log.setUserAgent(getUserAgent());
log.setReferer(getReferer());
// 发送到消息队列异步处理
kafkaTemplate.send("short-url-access-logs", log);
}
// 数据分析
public AnalyticsData getAnalytics(String shortCode, DateRange range) {
AnalyticsData data = new AnalyticsData();
// 总点击量
data.setTotalClicks(getTotalClicks(shortCode));
// 时间分布
data.setHourlyDistribution(getHourlyClicks(shortCode, range));
data.setDailyDistribution(getDailyClicks(shortCode, range));
// 来源分析
data.setRefererStats(getRefererStats(shortCode, range));
data.setDeviceStats(getDeviceStats(shortCode, range));
data.setGeoStats(getGeoStats(shortCode, range));
return data;
}
}
// 5. 管理服务
@Service
public class ManagementService {
// 创建短链
public ShortUrl createShortUrl(CreateRequest request) {
// 参数验证
validateRequest(request);
// 检查是否已存在
ShortUrl existing = findByLongUrl(request.getLongUrl());
if (existing != null) {
return existing;
}
// 生成短码
String shortCode = shortCodeGenerator.generateUniqueCode(
request.getLongUrl());
// 保存到数据库
ShortUrl shortUrl = new ShortUrl();
shortUrl.setShortCode(shortCode);
shortUrl.setLongUrl(request.getLongUrl());
shortUrl.setCreateTime(new Date());
shortUrl.setExpireTime(request.getExpireTime());
shortUrl.setUserId(request.getUserId());
shortUrl.setCustomDomain(request.getCustomDomain());
shortUrlRepository.save(shortUrl);
// 缓存映射
cacheUrlMapping(shortCode, request.getLongUrl());
return shortUrl;
}
// 批量创建
public BatchResult createBatch(List<CreateRequest> requests) {
BatchResult result = new BatchResult();
List<ShortUrl> success = new ArrayList<>();
List<ErrorItem> errors = new ArrayList<>();
for (int i = 0; i < requests.size(); i++) {
try {
ShortUrl shortUrl = createShortUrl(requests.get(i));
success.add(shortUrl);
} catch (Exception e) {
errors.add(new ErrorItem(i, e.getMessage()));
}
}
result.setSuccess(success);
result.setErrors(errors);
return result;
}
// 链接失效
public void disableUrl(String shortCode) {
// 更新数据库状态
shortUrlRepository.updateStatus(shortCode, 0);
// 删除缓存
redisTemplate.delete("short:url:" + shortCode);
}
// 链接恢复
public void enableUrl(String shortCode) {
// 更新数据库状态
shortUrlRepository.updateStatus(shortCode, 1);
// 重新缓存
ShortUrl shortUrl = shortUrlRepository.findByShortCode(shortCode);
if (shortUrl != null) {
cacheUrlMapping(shortCode, shortUrl.getLongUrl());
}
}
}
}
// 高性能优化方案
@Configuration
public class PerformanceOptimization {
// 1. 多级缓存
@Bean
public CacheManager cacheManager() {
CaffeineCacheManager cacheManager = new CaffeineCacheManager();
cacheManager.setCaffeine(Caffeine.newBuilder()
.maximumSize(10000)
.expireAfterWrite(10, TimeUnit.MINUTES)
.recordStats());
return cacheManager;
}
// 2. 连接池优化
@Bean
public DataSource dataSource() {
HikariDataSource dataSource = new HikariDataSource();
dataSource.setMaximumPoolSize(50);
dataSource.setMinimumIdle(10);
dataSource.setConnectionTimeout(3000);
dataSource.setIdleTimeout(600000);
dataSource.setMaxLifetime(1800000);
return dataSource;
}
// 3. 线程池配置
@Bean
public ThreadPoolTaskExecutor taskExecutor() {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(20);
executor.setMaxPoolSize(100);
executor.setQueueCapacity(500);
executor.setKeepAliveSeconds(60);
executor.setThreadNamePrefix("short-url-");
executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
return executor;
}
// 4. Redis管道优化批量操作
public List<Object> batchGet(List<String> keys) {
return redisTemplate.executePipelined(
(RedisCallback<Object>) connection -> {
for (String key : keys) {
connection.stringCommands().get(key.getBytes());
}
return null;
}
);
}
}
44. 实时排行榜系统
java
// 实时排行榜系统设计
@Component
public class RealTimeRankingSystem {
// 1. 数据模型设计
@Data
public class RankingItem {
private String userId;
private String userName;
private Double score; // 排名分数
private Long timestamp; // 最后更新时间
private Integer rank; // 当前排名
private Map<String, Object> extraData; // 扩展数据
}
// 2. 存储设计:Redis Sorted Set + MySQL
@Service
public class RankingStorage {
// Redis Sorted Set存储实时排名
public void updateScore(String rankingKey, String userId, double score) {
// 更新分数,如果成员不存在则添加
redisTemplate.opsForZSet().add(
rankingKey, userId, score);
// 可选:记录更新时间
redisTemplate.opsForHash().put(
"ranking:meta:" + userId,
"last_update",
String.valueOf(System.currentTimeMillis()));
}
// 批量更新分数
public void batchUpdateScores(String rankingKey,
Map<String, Double> scoreMap) {
redisTemplate.executePipelined((RedisCallback<Object>) connection -> {
for (Map.Entry<String, Double> entry : scoreMap.entrySet()) {
connection.zSetCommands().zAdd(
rankingKey.getBytes(),
entry.getValue(),
entry.getKey().getBytes());
}
return null;
});
}
// MySQL持久化存储
@Entity
@Table(name = "ranking_history")
@Data
public class RankingHistory {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@Column(name = "ranking_key")
private String rankingKey;
@Column(name = "user_id")
private String userId;
@Column(name = "score")
private Double score;
@Column(name = "rank")
private Integer rank;
@Column(name = "snapshot_time")
private Date snapshotTime;
@Column(name = "create_time")
private Date createTime;
}
}
// 3. 排名计算服务
@Service
public class RankingCalculator {
// 获取用户排名
public Long getUserRank(String rankingKey, String userId) {
// ZREVRANK获取降序排名(分数大的在前面)
Long rank = redisTemplate.opsForZSet()
.reverseRank(rankingKey, userId);
// Redis排名从0开始,转换为从1开始
return rank != null ? rank + 1 : null;
}
// 获取用户分数
public Double getUserScore(String rankingKey, String userId) {
return redisTemplate.opsForZSet().score(rankingKey, userId);
}
// 获取排名区间
public List<RankingItem> getRankRange(String rankingKey,
long start, long end) {
Set<ZSetOperations.TypedTuple<String>> tuples =
redisTemplate.opsForZSet()
.reverseRangeWithScores(rankingKey, start, end);
return convertToRankingItems(tuples, start);
}
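// convertToRankingItems 的一个实现示意:把ZSet结果转换为RankingItem并按偏移量补上名次
private List<RankingItem> convertToRankingItems(
Set<ZSetOperations.TypedTuple<String>> tuples, long startRank) {
List<RankingItem> items = new ArrayList<>();
if (tuples == null) {
return items;
}
long rank = startRank + 1; // Redis下标从0开始,名次从1开始
for (ZSetOperations.TypedTuple<String> tuple : tuples) {
RankingItem item = new RankingItem();
item.setUserId(tuple.getValue());
item.setScore(tuple.getScore());
item.setRank((int) rank++);
items.add(item);
}
return items;
}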
// 获取用户所在页的数据
public List<RankingItem> getUserPage(String rankingKey,
String userId,
int pageSize) {
Long userRank = getUserRank(rankingKey, userId);
if (userRank == null) {
return Collections.emptyList();
}
// 计算用户所在页的开始位置
long pageNum = (userRank - 1) / pageSize;
long start = pageNum * pageSize;
long end = start + pageSize - 1;
return getRankRange(rankingKey, start, end);
}
// 与前后的用户比较
public RankingContext getUserContext(String rankingKey,
String userId,
int contextSize) {
Long userRank = getUserRank(rankingKey, userId);
if (userRank == null) {
return null;
}
RankingContext context = new RankingContext();
context.setCurrentRank(userRank);
context.setCurrentScore(getUserScore(rankingKey, userId));
// 获取前面的用户
long beforeStart = Math.max(0, userRank - contextSize - 1);
long beforeEnd = userRank - 2;
if (beforeStart <= beforeEnd) {
context.setBeforeUsers(getRankRange(
rankingKey, beforeStart, beforeEnd));
}
// 获取后面的用户
long afterStart = userRank;
long afterEnd = userRank + contextSize - 1;
context.setAfterUsers(getRankRange(
rankingKey, afterStart, afterEnd));
return context;
}
}
// 4. 分数更新策略
@Service
public class ScoreUpdateStrategy {
// 策略1:直接覆盖(适合游戏积分)
public void updateByOverwrite(String rankingKey,
String userId,
double newScore) {
redisTemplate.opsForZSet().add(rankingKey, userId, newScore);
}
// 策略2:增量更新(适合每日签到)
public void updateByIncrement(String rankingKey,
String userId,
double delta) {
redisTemplate.opsForZSet().incrementScore(
rankingKey, userId, delta);
}
// 策略3:取最大值(适合最高分)
public void updateByMax(String rankingKey,
String userId,
double newScore) {
Double currentScore = redisTemplate.opsForZSet()
.score(rankingKey, userId);
if (currentScore == null || newScore > currentScore) {
redisTemplate.opsForZSet().add(rankingKey, userId, newScore);
}
}
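// 补充示意:上面"先读再写"在并发下可能把更高的分数覆盖回低分,
// 可用Lua脚本把比较与写入合并为原子操作(Redis 6.2+也可直接用 ZADD GT)
public void updateByMaxAtomic(String rankingKey, String userId, double newScore) {
String lua =
"local cur = redis.call('ZSCORE', KEYS[1], ARGV[1]) " +
"if (not cur) or tonumber(ARGV[2]) > tonumber(cur) then " +
" redis.call('ZADD', KEYS[1], ARGV[2], ARGV[1]) " +
"end " +
"return 1";
redisTemplate.execute(
new DefaultRedisScript<>(lua, Long.class),
Collections.singletonList(rankingKey),
userId, String.valueOf(newScore));
}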
// 策略4:衰减更新(适合热度排行)
public void updateWithDecay(String rankingKey,
String userId,
double increment) {
// 获取当前分数
Double currentScore = redisTemplate.opsForZSet()
.score(rankingKey, userId);
if (currentScore == null) {
currentScore = 0.0;
}
// 应用衰减
long lastUpdate = getLastUpdateTime(userId);
long now = System.currentTimeMillis();
long hoursPassed = (now - lastUpdate) / (1000 * 3600);
double decayedScore = currentScore * Math.pow(0.9, hoursPassed);
double newScore = decayedScore + increment;
// 更新分数和时间
redisTemplate.opsForZSet().add(rankingKey, userId, newScore);
updateLastUpdateTime(userId, now);
}
}
// 5. 多维度排行
@Service
public class MultiDimensionRanking {
// 多维度分数组合
public double calculateCompositeScore(Map<String, Double> dimensions,
Map<String, Double> weights) {
double totalScore = 0;
for (Map.Entry<String, Double> dim : dimensions.entrySet()) {
Double weight = weights.get(dim.getKey());
if (weight != null) {
totalScore += dim.getValue() * weight;
}
}
return totalScore;
}
// 创建复合排行榜
public void createCompositeRanking(String compositeKey,
List<String> dimensionKeys,
Map<String, Double> weights) {
// 使用ZUNIONSTORE创建复合排行榜(按维度顺序给出权重)
double[] weightArray = dimensionKeys.stream()
.mapToDouble(k -> weights.getOrDefault(k, 1.0))
.toArray();
redisTemplate.opsForZSet().unionAndStore(
dimensionKeys.get(0),
dimensionKeys.subList(1, dimensionKeys.size()),
compositeKey,
RedisZSetCommands.Aggregate.SUM,
RedisZSetCommands.Weights.of(weightArray));
// 设置过期时间
redisTemplate.expire(compositeKey, 7, TimeUnit.DAYS);
}
// 分维度排行榜
public Map<String, List<RankingItem>> getDimensionRankings(
String userId) {
Map<String, List<RankingItem>> result = new HashMap<>();
// 获取用户在每个维度的排名
for (String dimension : dimensions) {
String key = "ranking:" + dimension;
Long rank = rankingCalculator.getUserRank(key, userId);
Double score = rankingCalculator.getUserScore(key, userId);
if (rank != null && score != null) {
RankingItem item = new RankingItem();
item.setUserId(userId);
item.setScore(score);
item.setRank(rank);
result.computeIfAbsent(dimension, k -> new ArrayList<>())
.add(item);
}
}
return result;
}
}
// 6. 排行榜类型
@Service
public class RankingTypeService {
// 全球总榜
public List<RankingItem> getGlobalRanking(String type, int limit) {
String key = "ranking:global:" + type;
return rankingCalculator.getRankRange(key, 0, limit - 1);
}
// 好友榜
public List<RankingItem> getFriendRanking(String userId,
String type,
int limit) {
// 获取好友列表
Set<String> friendIds = getFriendIds(userId);
if (friendIds.isEmpty()) {
return Collections.emptyList();
}
// 创建临时排行榜
String tempKey = "ranking:temp:" + userId + ":" + System.currentTimeMillis();
String friendSetKey = tempKey + ":friends";
String sourceKey = "ranking:global:" + type;
// ZINTERSTORE参与运算的是Redis中的key而不是成员,
// 因此先把好友ID写入临时Set,再与全局榜求交集;
// 权重设为1和0,保证交集结果保留全局榜中的原始分数
redisTemplate.opsForSet().add(friendSetKey, friendIds.toArray(new String[0]));
redisTemplate.opsForZSet().intersectAndStore(
sourceKey, Collections.singletonList(friendSetKey), tempKey,
RedisZSetCommands.Aggregate.SUM,
RedisZSetCommands.Weights.of(1, 0));
// 获取排名
List<RankingItem> ranking = rankingCalculator
.getRankRange(tempKey, 0, limit - 1);
// 删除临时key
redisTemplate.delete(tempKey);
redisTemplate.delete(friendSetKey);
return ranking;
}
// 地区榜
public List<RankingItem> getRegionalRanking(String region,
String type,
int limit) {
String key = "ranking:region:" + region + ":" + type;
return rankingCalculator.getRankRange(key, 0, limit - 1);
}
// 时间周期榜(日榜、周榜、月榜)
public List<RankingItem> getPeriodRanking(String period,
String type,
int limit) {
String key = "ranking:" + period + ":" + type + ":" + getPeriodKey(period);
return rankingCalculator.getRankRange(key, 0, limit - 1);
}
private String getPeriodKey(String period) {
// 按周期生成key后缀:日榜yyyyMMdd、月榜yyyyMM;周榜可改用"年份+周数"
String pattern = "month".equals(period) ? "yyyyMM" : "yyyyMMdd";
return new SimpleDateFormat(pattern).format(new Date());
}
}
// 7. 实时更新与推送
@Service
public class RealTimeUpdateService {
// WebSocket推送排名变化
@Autowired
private SimpMessagingTemplate messagingTemplate;
// 分数更新时的实时推送
public void onScoreUpdated(String rankingKey,
String userId,
double newScore) {
// 计算排名变化
RankingUpdate update = calculateRankingUpdate(
rankingKey, userId, newScore);
if (update.getRankChanged() || update.getScoreChanged()) {
// 推送给用户
messagingTemplate.convertAndSend(
"/topic/ranking/" + userId,
update);
// 如果进入前十,全局广播
if (update.getNewRank() <= 10) {
messagingTemplate.convertAndSend(
"/topic/ranking/top10",
update);
}
}
}
// 定时刷新排行榜
@Scheduled(fixedRate = 60000) // 每分钟刷新
public void refreshRankings() {
// 刷新所有活跃的排行榜
Set<String> activeRankings = getActiveRankings();
for (String rankingKey : activeRankings) {
// 清理过期数据
cleanExpiredData(rankingKey);
// 重新计算排名
recalculateRanking(rankingKey);
// 生成排行榜快照
takeSnapshot(rankingKey);
}
}
}
}
// 高性能优化
@Configuration
public class RankingOptimization {
// 1. Redis分片
@Bean
public RedisConnectionFactory redisConnectionFactory() {
// 根据rankingKey进行分片(ShardingRedisConnectionFactory为自定义封装示意,Spring Data Redis并未提供该类)
Map<String, RedisConnectionFactory> factories = new HashMap<>();
factories.put("shard1", new LettuceConnectionFactory(
new RedisStandaloneConfiguration("redis1", 6379)));
factories.put("shard2", new LettuceConnectionFactory(
new RedisStandaloneConfiguration("redis2", 6379)));
return new ShardingRedisConnectionFactory(factories, key -> {
// 根据key决定使用哪个分片
int hash = Math.abs(key.hashCode());
return hash % 2 == 0 ? "shard1" : "shard2";
});
}
// 2. 本地缓存热点数据
@Bean
public LoadingCache<String, List<RankingItem>> topRankingCache() {
return Caffeine.newBuilder()
.maximumSize(100)
.expireAfterWrite(10, TimeUnit.SECONDS)
.refreshAfterWrite(5, TimeUnit.SECONDS)
.build(key -> {
// 从Redis加载数据
return rankingCalculator.getRankRange(key, 0, 99);
});
}
// 3. 异步持久化
@Async
public void asyncPersistRanking(String rankingKey) {
// 获取排行榜数据
List<RankingItem> items = rankingCalculator.getRankRange(
rankingKey, 0, 9999);
// 批量保存到MySQL
saveRankingSnapshot(rankingKey, items);
}
}
45. 分布式配置中心设计
java
// 分布式配置中心核心组件
@Component
public class ConfigCenterSystem {
// 1. 配置模型
@Data
public class ConfigItem {
private String namespace; // 命名空间
private String key; // 配置键
private String value; // 配置值
private String version; // 版本号
private String description; // 描述
private ConfigType type; // 类型:STRING, JSON, XML, YAML等
private String environment; // 环境:dev, test, prod
private String application; // 应用名
private boolean encrypted; // 是否加密
private Date createTime;
private Date updateTime;
private String operator; // 操作人
}
// 2. 存储设计
@Service
public class ConfigStorage {
// MySQL持久化存储
@Entity
@Table(name = "config_items",
indexes = {
@Index(name = "idx_ns_key", columnList = "namespace,config_key"),
@Index(name = "idx_app_env", columnList = "application,environment")
})
@Data
public class ConfigItemEntity {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@Column(name = "namespace", length = 100)
private String namespace;
@Column(name = "config_key", length = 200)
private String configKey;
@Column(name = "config_value", columnDefinition = "TEXT")
private String configValue;
@Column(name = "version", length = 50)
private String version;
@Column(name = "environment", length = 50)
private String environment;
@Column(name = "application", length = 100)
private String application;
@Column(name = "is_encrypted")
private Boolean encrypted = false;
@Column(name = "create_time")
private Date createTime;
@Column(name = "update_time")
private Date updateTime;
}
// Redis缓存配置
public void cacheConfig(String cacheKey, ConfigItem config) {
try {
String json = objectMapper.writeValueAsString(config);
redisTemplate.opsForValue().set(cacheKey, json);
} catch (JsonProcessingException e) {
throw new IllegalStateException("配置序列化失败", e);
}
// 发布配置变更通知
redisTemplate.convertAndSend("config:change", cacheKey);
}
// 获取配置(缓存优先)
public ConfigItem getConfig(String namespace,
String key,
String environment,
String application) {
String cacheKey = buildCacheKey(namespace, key, environment, application);
// 先从缓存获取
String cached = redisTemplate.opsForValue().get(cacheKey);
if (cached != null) {
try {
return objectMapper.readValue(cached, ConfigItem.class);
} catch (JsonProcessingException e) {
// 反序列化失败则当作缓存未命中,回落到数据库
}
}
// 缓存未命中,从数据库获取
ConfigItem config = configRepository.findByUniqueKey(
namespace, key, environment, application);
if (config != null) {
// 回填缓存
cacheConfig(cacheKey, config);
}
return config;
}
}
// 3. 配置推送服务
@Service
public class ConfigPushService {
@Autowired
private WebSocketHandler webSocketHandler;
// 长轮询配置变更
@GetMapping("/config/poll")
public DeferredResult<ConfigChange> pollConfigChange(
@RequestParam String clientId,
@RequestParam String namespace,
@RequestParam String environment,
@RequestParam String application) {
DeferredResult<ConfigChange> deferredResult =
new DeferredResult<>(30000L); // 30秒超时
// 监听配置变更
ConfigListener listener = new ConfigListener() {
@Override
public void onConfigChanged(ConfigChange change) {
if (matches(change, namespace, environment, application)) {
deferredResult.setResult(change);
}
}
};
// 注册监听器
configListenerRegistry.register(clientId, listener);
// 超时处理
deferredResult.onTimeout(() -> {
configListenerRegistry.unregister(clientId);
deferredResult.setResult(new ConfigChange()); // 返回空变更
});
return deferredResult;
}
// WebSocket实时推送
@Component
public class ConfigWebSocketHandler extends TextWebSocketHandler {
private final Map<String, WebSocketSession> sessions =
new ConcurrentHashMap<>();
@Override
public void afterConnectionEstablished(WebSocketSession session) {
String clientId = extractClientId(session);
sessions.put(clientId, session);
// 发送当前配置
sendCurrentConfigs(clientId, session);
}
@Override
protected void handleTextMessage(WebSocketSession session,
TextMessage message) {
// 处理客户端消息(如订阅配置)
handleClientMessage(session, message.getPayload());
}
// 推送配置变更
public void pushConfigChange(String clientId, ConfigChange change) {
WebSocketSession session = sessions.get(clientId);
if (session == null || !session.isOpen()) {
return;
}
try {
String json = objectMapper.writeValueAsString(change);
session.sendMessage(new TextMessage(json));
} catch (IOException e) {
// 推送失败仅影响单个客户端,记录后可由客户端重连时补拉配置
}
}
}
// HTTP长连接
@GetMapping(value = "/config/stream",
produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent<ConfigChange>> streamConfigChanges(
@RequestParam String clientId,
@RequestParam String namespace) {
return Flux.create(sink -> {
ConfigListener listener = change -> {
if (change.getNamespace().equals(namespace)) {
sink.next(ServerSentEvent.builder(change).build());
}
};
configListenerRegistry.register(clientId, listener);
sink.onDispose(() -> {
configListenerRegistry.unregister(clientId);
});
});
}
}
// 4. 配置监听与刷新
@Service
public class ConfigRefreshService {
// Spring Cloud Config客户端兼容
@ConfigurationProperties
@RefreshScope
@Component
public class DynamicConfig {
@Value("${config.center.url}")
private String configCenterUrl;
// 自动刷新的配置项
@Value("${app.name:default}")
private String appName;
@Value("${app.version:1.0.0}")
private String appVersion;
// 通过@RefreshScope注解,这些配置可以动态更新
public void refresh() {
// 配置刷新逻辑
}
}
// 配置变更监听器
@Component
public class ConfigChangeListener {
// 监听Redis发布订阅
@RedisListener(channel = "config:change")
public void onConfigChange(String cacheKey) {
// 解析缓存键
ConfigKey key = parseConfigKey(cacheKey);
// 通知相关应用
notifyApplications(key);
// 刷新本地缓存
refreshLocalCache(key);
// 记录变更日志
logConfigChange(key);
}
// 应用级配置刷新
private void notifyApplications(ConfigKey key) {
// 获取订阅该配置的应用
List<ApplicationInstance> instances =
discoveryClient.getInstances(key.getApplication());
for (ApplicationInstance instance : instances) {
// 发送HTTP通知
restTemplate.postForEntity(
instance.getUri() + "/actuator/refresh",
null,
Void.class);
}
}
}
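// 补充说明:上面的 @RedisListener 并非Spring自带注解,此处假定为项目内的自定义封装;
// 标准做法是注册 RedisMessageListenerContainer 订阅 config:change 频道,示意如下
@Bean
public RedisMessageListenerContainer configChangeContainer(
RedisConnectionFactory connectionFactory,
ConfigChangeListener configChangeListener) {
RedisMessageListenerContainer container = new RedisMessageListenerContainer();
container.setConnectionFactory(connectionFactory);
container.addMessageListener(
(message, pattern) -> configChangeListener.onConfigChange(
new String(message.getBody())),
new ChannelTopic("config:change"));
return container;
}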
// 本地配置缓存
@Component
public class LocalConfigCache {
private final Map<String, ConfigItem> cache =
new ConcurrentHashMap<>();
private final ScheduledExecutorService scheduler =
Executors.newScheduledThreadPool(1);
@PostConstruct
public void init() {
// 定时刷新缓存
scheduler.scheduleAtFixedRate(this::refreshCache,
0, 30, TimeUnit.SECONDS);
}
// 获取配置(本地缓存优先)
public ConfigItem getConfig(String key) {
ConfigItem config = cache.get(key);
if (config == null || isExpired(config)) {
// 从远程获取并更新缓存
config = fetchFromRemote(key);
if (config != null) {
cache.put(key, config);
}
}
return config;
}
// 批量获取配置
public Map<String, ConfigItem> batchGetConfigs(List<String> keys) {
Map<String, ConfigItem> result = new HashMap<>();
List<String> missingKeys = new ArrayList<>();
// 先查本地缓存
for (String key : keys) {
ConfigItem config = cache.get(key);
if (config != null && !isExpired(config)) {
result.put(key, config);
} else {
missingKeys.add(key);
}
}
// 批量获取缺失的配置
if (!missingKeys.isEmpty()) {
Map<String, ConfigItem> remoteConfigs =
batchFetchFromRemote(missingKeys);
result.putAll(remoteConfigs);
// 更新本地缓存
cache.putAll(remoteConfigs);
}
return result;
}
}
}
// 5. 版本管理与回滚
@Service
public class VersionManagementService {
// 配置版本管理
@Entity
@Table(name = "config_versions")
@Data
public class ConfigVersion {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@Column(name = "namespace", length = 100)
private String namespace;
@Column(name = "config_key", length = 200)
private String configKey;
@Column(name = "config_value", columnDefinition = "TEXT")
private String configValue;
@Column(name = "version", length = 50)
private String version;
@Column(name = "environment", length = 50)
private String environment;
@Column(name = "application", length = 100)
private String application;
@Column(name = "operator", length = 100)
private String operator;
@Column(name = "operation", length = 50)
private String operation; // CREATE, UPDATE, DELETE, ROLLBACK
@Column(name = "previous_version", length = 50)
private String previousVersion;
@Column(name = "create_time")
private Date createTime;
@Column(name = "comment", length = 500)
private String comment;
}
// 创建新版本
@Transactional
public ConfigItem createVersion(ConfigItem config, String operator, String comment) {
// 生成版本号
String version = generateVersion(config);
// 保存版本记录
ConfigVersion versionRecord = new ConfigVersion();
versionRecord.setNamespace(config.getNamespace());
versionRecord.setConfigKey(config.getKey());
versionRecord.setConfigValue(config.getValue());
versionRecord.setVersion(version);
versionRecord.setEnvironment(config.getEnvironment());
versionRecord.setApplication(config.getApplication());
versionRecord.setOperator(operator);
versionRecord.setOperation("CREATE");
versionRecord.setComment(comment);
versionRecord.setCreateTime(new Date());
configVersionRepository.save(versionRecord);
// 更新当前配置
config.setVersion(version);
configRepository.save(config);
return config;
}
// 版本回滚
@Transactional
public ConfigItem rollbackToVersion(String namespace,
String key,
String environment,
String application,
String targetVersion,
String operator) {
// 查找目标版本
ConfigVersion target = configVersionRepository
.findByUniqueKeyAndVersion(namespace, key, environment,
application, targetVersion);
if (target == null) {
throw new RuntimeException("目标版本不存在");
}
// 创建回滚版本
ConfigItem current = configRepository.findByUniqueKey(
namespace, key, environment, application);
ConfigVersion rollbackVersion = new ConfigVersion();
rollbackVersion.setNamespace(namespace);
rollbackVersion.setConfigKey(key);
rollbackVersion.setConfigValue(target.getConfigValue());
rollbackVersion.setVersion(generateVersion(current));
rollbackVersion.setEnvironment(environment);
rollbackVersion.setApplication(application);
rollbackVersion.setOperator(operator);
rollbackVersion.setOperation("ROLLBACK");
rollbackVersion.setPreviousVersion(current.getVersion());
rollbackVersion.setComment("回滚到版本: " + targetVersion);
rollbackVersion.setCreateTime(new Date());
configVersionRepository.save(rollbackVersion);
// 更新配置
ConfigItem config = new ConfigItem();
config.setNamespace(namespace);
config.setKey(key);
config.setValue(target.getConfigValue());
config.setVersion(rollbackVersion.getVersion());
config.setEnvironment(environment);
config.setApplication(application);
config.setUpdateTime(new Date());
configRepository.save(config);
// 发布变更通知
configPushService.publishChange(config);
return config;
}
// 版本比较
public VersionDiff compareVersions(String namespace,
String key,
String environment,
String application,
String version1,
String version2) {
ConfigVersion v1 = configVersionRepository
.findByUniqueKeyAndVersion(namespace, key, environment,
application, version1);
ConfigVersion v2 = configVersionRepository
.findByUniqueKeyAndVersion(namespace, key, environment,
application, version2);
VersionDiff diff = new VersionDiff();
diff.setVersion1(version1);
diff.setVersion2(version2);
if (v1 != null && v2 != null) {
diff.setValue1(v1.getConfigValue());
diff.setValue2(v2.getConfigValue());
diff.setDiff(calculateDiff(v1.getConfigValue(), v2.getConfigValue()));
}
return diff;
}
// 版本历史查询
public Page<ConfigVersion> getVersionHistory(String namespace,
String key,
String environment,
String application,
Pageable pageable) {
return configVersionRepository.findHistoryByUniqueKey(
namespace, key, environment, application, pageable);
}
}
// 6. 权限与审计
@Service
public class SecurityAuditService {
// 权限控制
@Entity
@Table(name = "config_permissions")
@Data
public class ConfigPermission {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@Column(name = "namespace", length = 100)
private String namespace;
@Column(name = "user_id", length = 100)
private String userId;
@Column(name = "role", length = 50)
private String role; // VIEWER, EDITOR, ADMIN
@Column(name = "create_time")
private Date createTime;
@Column(name = "expire_time")
private Date expireTime;
}
// 权限验证
public boolean hasPermission(String userId,
String namespace,
String action) {
// 检查用户权限
ConfigPermission permission = permissionRepository
.findByUserAndNamespace(userId, namespace);
if (permission == null) {
return false;
}
// 检查权限级别
return canPerformAction(permission.getRole(), action);
}
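// canPerformAction 的一个简单示意:按角色分级放行(具体权限矩阵按业务调整,此处仅为假设)
private boolean canPerformAction(String role, String action) {
switch (role) {
case "ADMIN":
return true; // 管理员可执行全部操作
case "EDITOR":
return !"DELETE".equals(action); // 编辑者不允许删除
case "VIEWER":
return "READ".equals(action); // 只读
default:
return false;
}
}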
// 操作审计
@Entity
@Table(name = "config_audit_logs")
@Data
public class AuditLog {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@Column(name = "operator", length = 100)
private String operator;
@Column(name = "operation", length = 50)
private String operation; // CREATE, READ, UPDATE, DELETE
@Column(name = "namespace", length = 100)
private String namespace;
@Column(name = "config_key", length = 200)
private String configKey;
@Column(name = "old_value", columnDefinition = "TEXT")
private String oldValue;
@Column(name = "new_value", columnDefinition = "TEXT")
private String newValue;
@Column(name = "ip_address", length = 50)
private String ipAddress;
@Column(name = "user_agent", length = 500)
private String userAgent;
@Column(name = "result", length = 50)
private String result; // SUCCESS, FAILURE
@Column(name = "error_message", columnDefinition = "TEXT")
private String errorMessage;
@Column(name = "operation_time")
private Date operationTime;
}
// 记录审计日志
@Aspect
@Component
public class AuditAspect {
@Around("@annotation(auditable)")
public Object audit(ProceedingJoinPoint joinPoint, Auditable auditable)
throws Throwable {
AuditLog log = new AuditLog();
log.setOperationTime(new Date());
log.setOperation(auditable.value());
// 获取操作人信息
Authentication auth = SecurityContextHolder.getContext()
.getAuthentication();
if (auth != null) {
log.setOperator(auth.getName());
}
// 获取请求信息
HttpServletRequest request =
((ServletRequestAttributes) RequestContextHolder
.getRequestAttributes()).getRequest();
log.setIpAddress(request.getRemoteAddr());
log.setUserAgent(request.getHeader("User-Agent"));
try {
// 执行业务方法
Object result = joinPoint.proceed();
log.setResult("SUCCESS");
// 记录操作详情
recordOperationDetails(joinPoint, log);
return result;
} catch (Exception e) {
log.setResult("FAILURE");
log.setErrorMessage(e.getMessage());
throw e;
} finally {
// 保存审计日志
auditLogRepository.save(log);
}
}
}
}
}
// 客户端SDK
@Component
public class ConfigClientSDK {
private final ConfigCenterClient client;
private final LocalConfigCache cache;
// 初始化配置
@PostConstruct
public void init() {
// 拉取初始配置
pullConfigs();
// 启动配置监听
startConfigListener();
}
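// pullConfigs / startConfigListener 的最小示意
// (getAllConfigs、subscribeAll 为假设的 ConfigCenterClient 方法)
private void pullConfigs() {
// 启动时全量拉取一次,写入本地缓存
for (ConfigItem item : client.getAllConfigs()) {
cache.putConfig(item.getKey(), item);
}
}
private void startConfigListener() {
// 订阅全部配置变更,收到后刷新本地缓存;监听通道可基于上文的长轮询或WebSocket
client.subscribeAll((key, newValue) -> cache.updateConfig(key, newValue));
}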
// 获取配置值
public String getValue(String key, String defaultValue) {
ConfigItem config = cache.getConfig(key);
if (config != null) {
return config.getValue();
}
// 尝试从远程获取
config = client.getConfig(key);
if (config != null) {
cache.putConfig(key, config);
return config.getValue();
}
return defaultValue;
}
// 监听配置变更
public void addChangeListener(String key, ConfigChangeListener listener) {
client.subscribe(key, (newValue) -> {
// 更新本地缓存
cache.updateConfig(key, newValue);
// 通知监听器
listener.onChange(newValue);
});
}
// 配置自动刷新(Spring集成)
@ConfigurationProperties
@RefreshScope
@Component
public class AppConfig {
@Value("${app.name:#{null}}")
private String appName;
@Value("${app.version:#{null}}")
private String appVersion;
@Value("${server.port:8080}")
private Integer serverPort;
// 配置变更时自动刷新
@PostConstruct
public void refresh() {
// 重新初始化相关组件
reinitializeComponents();
}
}
}
这份详细的Java后端开发面试题解答涵盖了50个核心问题的深度解析。每个问题都提供了:
- 概念解释:清晰的理论说明
- 代码示例:实际的Java代码实现
- 最佳实践:生产环境中的经验总结
- 对比分析:不同方案的优缺点比较
- 应用场景:何时使用哪种方案
这些内容不仅可以帮助准备面试,更是Java后端开发工程师需要掌握的核心知识体系。建议结合实际项目经验深入理解,并在实践中不断巩固和提高。