ZooKeeper Distributed Locks in Practice (Ready to Use)
Goal: give you an implementation you can drop straight into a project, covering environment setup, dependencies, two approaches (Curator, recommended, plus a raw ZooKeeper version), code structure, and caveats.
Use cases: inventory deduction, order number generation, order ID de-duplication, message idempotency, and so on.
Remember to like and bookmark 😁😁😁
1. Prerequisites
- You have a ZooKeeper cluster with at least 3 nodes; assume the addresses are:
  ```text
  192.168.1.10:2181,192.168.1.11:2181,192.168.1.12:2181
  ```
- Your application can reach ZK, i.e. the network is open and the port is not blocked.
- You are using Java/Spring Boot; the examples below use Maven.
- We standardize the lock root path as:
  ```text
  /distributed-locks
  ```
  It is best to create this node manually with zkCli first:
  ```bash
  zkCli.sh -server 192.168.1.10:2181 create /distributed-locks ""
  ```
2. Recommended Approach: Use Apache Curator for the Lock
Curator is a high-level wrapper around ZooKeeper that already takes care of the fiddly details for you: queueing, watching the predecessor node, session reconnection, and release on failure. Prefer it in production.
2.1 Maven Dependencies
```xml
<dependencies>
    <!-- Curator Framework -->
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-framework</artifactId>
        <version>5.6.0</version>
    </dependency>
    <!-- Curator Recipes (contains the distributed lock implementations) -->
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-recipes</artifactId>
        <version>5.6.0</version>
    </dependency>
</dependencies>
```
Pick versions to match your project, but keep the framework and recipes versions consistent.
2.2 Creating the Curator Client (Spring Boot Style)
Create a configuration class, for example CuratorConfig.java:
```java
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CuratorConfig {

    /**
     * Create a Curator client.
     */
    @Bean(initMethod = "start", destroyMethod = "close")
    public CuratorFramework curatorFramework() {
        // ZK connect string
        String connectString = "192.168.1.10:2181,192.168.1.11:2181,192.168.1.12:2181";
        // Retry policy: 1 second initial back-off, at most 3 retries
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
        return CuratorFrameworkFactory.builder()
                .connectString(connectString)
                .sessionTimeoutMs(60_000)
                .connectionTimeoutMs(15_000)
                .retryPolicy(retryPolicy)
                .namespace("") // no namespace here, so the real lock paths stay visible
                .build();
    }
}
```
With this in place, Spring Boot connects to ZK on startup.
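If you would rather not hard-code the connect string, here is a small optional variant of the same bean. This is only a sketch: the property name `zk.connect-string` is an assumption, so define it yourself in application.yml.

```java
// Optional variant of the curatorFramework() bean above.
// Assumption: application.yml defines a property named zk.connect-string.
// Requires: import org.springframework.beans.factory.annotation.Value;
@Bean(initMethod = "start", destroyMethod = "close")
public CuratorFramework curatorFramework(@Value("${zk.connect-string}") String connectString) {
    return CuratorFrameworkFactory.builder()
            .connectString(connectString)
            .sessionTimeoutMs(60_000)
            .connectionTimeoutMs(15_000)
            .retryPolicy(new ExponentialBackoffRetry(1000, 3))
            .build();
}
```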
2.3 Wrapping a Distributed Lock Utility Class
Instead of writing new InterProcessMutex(...) in every piece of business code, wrap it once. Create ZkDistributedLock.java:
```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;

import java.util.concurrent.TimeUnit;

/**
 * Distributed lock based on ZooKeeper + Curator.
 */
public class ZkDistributedLock {

    private final InterProcessMutex mutex;

    /**
     * @param client   Curator client
     * @param lockPath lock path, e.g. /distributed-locks/order-no
     */
    public ZkDistributedLock(CuratorFramework client, String lockPath) {
        // InterProcessMutex is a reentrant lock
        this.mutex = new InterProcessMutex(client, lockPath);
    }

    /**
     * Acquire the lock, blocking until it is available.
     */
    public void lock() {
        try {
            this.mutex.acquire();
        } catch (Exception e) {
            throw new RuntimeException("acquire lock error", e);
        }
    }

    /**
     * Acquire the lock with a timeout.
     *
     * @param time timeout
     * @param unit time unit
     * @return true on success, false on timeout
     */
    public boolean tryLock(long time, TimeUnit unit) {
        try {
            return this.mutex.acquire(time, unit);
        } catch (Exception e) {
            throw new RuntimeException("acquire lock with timeout error", e);
        }
    }

    /**
     * Release the lock.
     */
    public void unlock() {
        try {
            if (this.mutex.isAcquiredInThisProcess()) {
                this.mutex.release();
            }
        } catch (Exception e) {
            throw new RuntimeException("release lock error", e);
        }
    }
}
```
2.4 How Do You Use It in Real Business Code?
Say you have an ordering service where stock deduction for the same SKU must be serialized. Write a Service:
```java
import org.apache.curator.framework.CuratorFramework;
import org.springframework.stereotype.Service;

import java.util.concurrent.TimeUnit;

@Service
public class StockService {

    private final CuratorFramework curatorFramework;

    public StockService(CuratorFramework curatorFramework) {
        this.curatorFramework = curatorFramework;
    }

    /**
     * Stock deduction example.
     */
    public void decreaseStock(Long skuId) {
        // One independent lock per SKU keeps unrelated requests parallel
        String lockPath = "/distributed-locks/stock/sku-" + skuId;
        ZkDistributedLock lock = new ZkDistributedLock(curatorFramework, lockPath);

        boolean locked = lock.tryLock(5, TimeUnit.SECONDS);
        if (!locked) {
            // You could also throw a business exception here
            throw new RuntimeException("failed to acquire distributed lock, sku=" + skuId);
        }
        try {
            // ====== critical section start ======
            // TODO: query stock, deduct, persist
            // Put your real business logic here
            System.out.println("deducting stock... sku=" + skuId);
            // ====== critical section end ======
        } finally {
            lock.unlock();
        }
    }
}
```
Key points:
- Split lock paths by business and by ID so that not every request piles up on a single lock.
- Use `tryLock` with a timeout so callers cannot block forever.
- Always release the lock in `finally` (see the optional template sketch below).
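If you want to make the try/finally pattern hard to forget, one optional refinement is a small template that runs business code inside the lock. This is only a sketch, not part of the article's code; the class and method names are made up:

```java
import org.apache.curator.framework.CuratorFramework;

import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

/**
 * Hypothetical helper: run an action inside a ZK lock so callers cannot forget the finally block.
 */
public class ZkLockTemplate {

    private final CuratorFramework client;

    public ZkLockTemplate(CuratorFramework client) {
        this.client = client;
    }

    public <T> T executeWithLock(String lockPath, long timeout, TimeUnit unit, Supplier<T> action) {
        ZkDistributedLock lock = new ZkDistributedLock(client, lockPath);
        if (!lock.tryLock(timeout, unit)) {
            throw new IllegalStateException("could not acquire lock: " + lockPath);
        }
        try {
            return action.get();   // critical section
        } finally {
            lock.unlock();         // always released, even if the action throws
        }
    }
}
```

A caller would then write something like `template.executeWithLock(lockPath, 5, TimeUnit.SECONDS, () -> doDeduct(skuId))` instead of managing the lock by hand.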
3. Don't Want Curator? A Raw ZooKeeper Implementation
Sometimes you cannot add a dependency, or you want to see the underlying mechanics. In that case, write a lock yourself based on "ephemeral sequential nodes + watching the predecessor". The version below is usable as-is.
3.1 Maven Dependency (Raw ZooKeeper)
```xml
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.9.2</version>
</dependency>
```
3.2 Creating the ZooKeeper Client
```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

import java.io.IOException;
import java.util.concurrent.CountDownLatch;

public class ZkClientFactory {

    private static final String CONNECT_STRING = "192.168.1.10:2181,192.168.1.11:2181,192.168.1.12:2181";
    private static final int SESSION_TIMEOUT = 60_000;

    public static ZooKeeper create() {
        CountDownLatch latch = new CountDownLatch(1);
        try {
            ZooKeeper zooKeeper = new ZooKeeper(CONNECT_STRING, SESSION_TIMEOUT, new Watcher() {
                @Override
                public void process(WatchedEvent event) {
                    // Release the latch once the session is connected
                    if (event.getState() == Event.KeeperState.SyncConnected) {
                        latch.countDown();
                    }
                }
            });
            // Block until the connection is established
            latch.await();
            return zooKeeper;
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```
3.3 The Distributed Lock Implementation (Project-Ready)
```java
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;

import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

/**
 * Raw ZooKeeper distributed lock.
 * Principle: ephemeral sequential nodes + watching the predecessor node.
 */
public class ZkDistributedLockRaw {

    private static final String ROOT_LOCK_PATH = "/distributed-locks";

    private final ZooKeeper zooKeeper;
    private final String lockName;   // e.g. "order-no" / "stock/sku-1001"
    private String currentNodePath;  // full path of the node created by this client

    public ZkDistributedLockRaw(ZooKeeper zooKeeper, String lockName) {
        this.zooKeeper = zooKeeper;
        this.lockName = lockName;
    }

    /**
     * Acquire the lock, blocking until it is held.
     */
    public void lock() {
        try {
            // 1. Make sure the business lock directory exists
            String businessLockPath = ROOT_LOCK_PATH + "/" + lockName;
            ensurePathExists(businessLockPath);

            // 2. Create an ephemeral sequential node
            String nodePath = zooKeeper.create(
                    businessLockPath + "/lock-",
                    new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE,
                    CreateMode.EPHEMERAL_SEQUENTIAL
            );
            this.currentNodePath = nodePath;

            // 3. Try to obtain the lock
            attemptLock(businessLockPath);
        } catch (KeeperException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    /**
     * Release the lock.
     */
    public void unlock() {
        if (this.currentNodePath == null) {
            return;
        }
        try {
            zooKeeper.delete(this.currentNodePath, -1);
            this.currentNodePath = null;
        } catch (InterruptedException | KeeperException e) {
            // Log it, but do not break the main flow
            e.printStackTrace();
        }
    }

    private void ensurePathExists(String path) throws KeeperException, InterruptedException {
        // Create the path segment by segment so multi-level lock names such as
        // "stock/sku-1001" also work. Concurrent clients may race on creation,
        // hence the NodeExistsException handling.
        StringBuilder current = new StringBuilder();
        for (String segment : path.substring(1).split("/")) {
            current.append("/").append(segment);
            Stat stat = zooKeeper.exists(current.toString(), false);
            if (stat == null) {
                try {
                    zooKeeper.create(current.toString(), new byte[0],
                            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
                } catch (KeeperException.NodeExistsException ignore) {
                    // another client created it first, which is fine
                }
            }
        }
    }

    private void attemptLock(String businessLockPath) throws KeeperException, InterruptedException {
        while (true) {
            // Fetch and sort all child nodes
            List<String> children = zooKeeper.getChildren(businessLockPath, false);
            Collections.sort(children);

            // Name of our own node
            String currentNodeName = this.currentNodePath.substring(businessLockPath.length() + 1);
            int index = children.indexOf(currentNodeName);
            if (index == -1) {
                throw new RuntimeException("current node not found in children, maybe deleted");
            }
            if (index == 0) {
                // We are first in line: lock acquired
                return;
            }

            // Not first: watch the node immediately before ours
            String prevNodeName = children.get(index - 1);
            String prevNodePath = businessLockPath + "/" + prevNodeName;

            CountDownLatch latch = new CountDownLatch(1);
            Stat stat = zooKeeper.exists(prevNodePath, new Watcher() {
                @Override
                public void process(WatchedEvent event) {
                    if (event.getType() == Event.EventType.NodeDeleted) {
                        latch.countDown();
                    }
                }
            });
            // The predecessor may have just disappeared; if so, loop again immediately
            if (stat == null) {
                continue;
            }
            // Wait for the predecessor to be deleted
            latch.await();
            // Then loop again and re-check whether we are now the smallest node
        }
    }

    public static void main(String[] args) throws IOException {
        ZooKeeper zk = ZkClientFactory.create();
        ZkDistributedLockRaw lock = new ZkDistributedLockRaw(zk, "demo-lock");

        lock.lock();
        try {
            System.out.println("Got the lock, running business logic");
            // do something
        } finally {
            lock.unlock();
        }
    }
}
```
This version runs as-is; a small smoke-test sketch follows the list. The logic is:
- Create the business lock directory
- Create an ephemeral sequential node
- If the lock is not yours yet, watch the predecessor node
- When the predecessor is deleted, try again
- Delete your own node in `finally`
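To watch the queueing behavior locally, a minimal smoke test might look like the sketch below. It is an assumption built on the ZkClientFactory and ZkDistributedLockRaw classes above; each thread opens its own session to simulate a separate client:

```java
import org.apache.zookeeper.ZooKeeper;

/**
 * Hypothetical smoke test: three "clients" compete for the same lock name and
 * should print their messages strictly one at a time.
 */
public class ZkRawLockDemo {

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            final int clientId = i;
            new Thread(() -> {
                ZooKeeper zk = ZkClientFactory.create();   // one session per simulated client
                ZkDistributedLockRaw lock = new ZkDistributedLockRaw(zk, "demo-lock");
                lock.lock();
                try {
                    System.out.println("client " + clientId + " holds the lock");
                    Thread.sleep(1000);                     // simulate some work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    lock.unlock();
                }
            }).start();
        }
    }
}
```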
4. How to Partition Locks in a Project
Don't let every piece of business share a single lock like /distributed-locks/mylock; that serializes everything. Partition it along these lines:
- Order number generation: `/distributed-locks/order-no`
- Inventory: `/distributed-locks/stock/sku-{id}`
- Coupon redemption: `/distributed-locks/coupon/{couponId}`
- Scheduled job mutual exclusion: `/distributed-locks/job/{jobName}`
With "different business, different path", the locks stay fine-grained (a small path-helper sketch follows).
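To keep these paths consistent across services, a tiny helper like the one below can be handy. The class and method names are illustrative, not part of the article's code:

```java
/**
 * Hypothetical helper that centralizes lock path construction,
 * so every service builds paths the same way.
 */
public final class LockPaths {

    private static final String ROOT = "/distributed-locks";

    private LockPaths() {
    }

    public static String orderNo() {
        return ROOT + "/order-no";
    }

    public static String stock(long skuId) {
        return ROOT + "/stock/sku-" + skuId;
    }

    public static String coupon(String couponId) {
        return ROOT + "/coupon/" + couponId;
    }

    public static String job(String jobName) {
        return ROOT + "/job/" + jobName;
    }
}
```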
5. Things to Watch in Production
- Always release in `finally`: whether you use Curator or the raw client, wrap the release in try/finally.
- Rebuild on session expiry: ZK is session-based, and ephemeral nodes die with the session. Curator handles this for you; with the raw client you must detect `KeeperState.Expired` yourself and rebuild (see the sketch after this list).
- Don't build one big lock: if the lock paths are badly designed and three endpoints share one lock, you have serialized yourself.
- A lock only controls concurrency; it does not guarantee business success: things that need a transaction still need a transaction, a lock is not a transaction.
- Watch ZK load under high concurrency: ZK is not suited to workloads that acquire and release tens of thousands of times per second; design lock granularity carefully and fall back to a Redis lock if necessary.
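For the session-expiry point above, a minimal sketch with the raw client could look like this. It is an assumption, not code from the article, and Curator users do not need it; note that any lock held before expiry is gone with its ephemeral node and must be re-acquired:

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

import java.io.IOException;

/**
 * Hypothetical holder that rebuilds the raw ZooKeeper handle when the session expires.
 */
public class ExpiryAwareZkHolder implements Watcher {

    private static final String CONNECT_STRING =
            "192.168.1.10:2181,192.168.1.11:2181,192.168.1.12:2181";

    private volatile ZooKeeper zooKeeper;

    public ExpiryAwareZkHolder() throws IOException {
        this.zooKeeper = new ZooKeeper(CONNECT_STRING, 60_000, this);
    }

    public ZooKeeper get() {
        return zooKeeper;
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getState() == Event.KeeperState.Expired) {
            try {
                // The old session and all its ephemeral lock nodes are gone: open a new session.
                this.zooKeeper = new ZooKeeper(CONNECT_STRING, 60_000, this);
            } catch (IOException e) {
                throw new RuntimeException("failed to rebuild ZooKeeper session", e);
            }
        }
    }
}
```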
6. One-Sentence Summary
- Want it fast: use Curator's `InterProcessMutex`
- Want to understand it: read the raw "ephemeral sequential node + watch predecessor" version
- Want it stable: per-business lock paths + release in `finally` + a timeout as a safety net