High-Availability Database Design for a Shopping-Guide App: Read/Write Splitting and Sharding in a Cashback System
Hi everyone, I'm 阿可, a developer on 省赚客APP! 省赚客APP (juwatech.cn) is a shopping-guide cashback platform with daily active users on the order of a million, so its core data, such as users, orders, and commission records, sits under heavy write and query pressure. To keep the system highly available with low latency, we run a hybrid architecture at the MySQL layer: primary/replica read/write splitting combined with database and table sharding via ShardingSphere. This article walks through the key design decisions and their implementation with real code.
Read/Write Splitting: Dynamic Routing with ShardingSphere-JDBC
We use ShardingSphere-JDBC for transparent read/write splitting: writes are routed to the primary, while reads are load-balanced across multiple replicas:
```yaml
# application.yml
spring:
  shardingsphere:
    datasource:
      names: ds-master,ds-slave0,ds-slave1
      ds-master:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://master.db.juwatech.cn:3306/juwatech_commission
        username: root
        password: "***"
      ds-slave0:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://slave0.db.juwatech.cn:3306/juwatech_commission
        username: root
        password: "***"
      ds-slave1:
        # similar to ds-slave0
    rules:
      readwrite-splitting:
        data-sources:
          rw_ds:
            write-data-source-name: ds-master
            read-data-source-names: ds-slave0,ds-slave1
            load-balancer-name: round-robin
        load-balancers:
          round-robin:
            type: ROUND_ROBIN
```
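During development it is worth confirming which data source each statement actually hits. ShardingSphere can log both the logical SQL and the physical SQL it was routed to when SQL logging is switched on; in the 5.x Spring Boot starter the property is spelled roughly as below (older 4.x versions call it sql.show):

```yaml
spring:
  shardingsphere:
    props:
      sql-show: true
```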
Business code needs no awareness of the underlying topology:
```java
package juwatech.cn.service;

@Service
public class CommissionService {

    @Autowired
    private CommissionMapper commissionMapper;

    // Writes are automatically routed to the primary
    @Transactional
    public void createCommission(Commission record) {
        commissionMapper.insert(record);
    }

    // Reads are automatically routed to a replica (unless inside a transaction)
    public List<Commission> getCommissionByUser(Long userId) {
        return commissionMapper.selectByUserId(userId);
    }
}
```
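The "unless inside a transaction" caveat is what makes read-your-own-write work: once a transaction is open, ShardingSphere keeps every statement in it, reads included, on the primary, so data written a moment earlier is immediately visible. A minimal sketch of that pattern (createAndReturn and selectById are illustrative names, not part of the original service):

```java
// Both the insert and the follow-up read run on ds-master, because they share one transaction
@Transactional
public Commission createAndReturn(Commission record) {
    commissionMapper.insert(record);
    return commissionMapper.selectById(record.getId());
}
```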

Database and Table Sharding: Splitting the Commission Table by User ID Hash
The commission record table commission_record has grown past 100 million rows. It is split across 4 databases by user_id % 4 (the sharded primaries ds-master-0 through ds-master-3 below), and within each database across 16 tables by order_id % 16:
```yaml
spring:
  shardingsphere:
    rules:
      sharding:
        tables:
          commission_record:
            actual-data-nodes: ds-master-$->{0..3}.commission_record_$->{0..15}
            table-strategy:
              standard:
                sharding-column: order_id
                sharding-algorithm-name: table-inline
            database-strategy:
              standard:
                sharding-column: user_id
                sharding-algorithm-name: db-inline
        sharding-algorithms:
          db-inline:
            type: INLINE
            props:
              algorithm-expression: ds-master-$->{user_id % 4}
          table-inline:
            type: INLINE
            props:
              algorithm-expression: commission_record_$->{order_id % 16}
```
The entity class and mapper stay unchanged:
```java
package juwatech.cn.entity;

@TableName("commission_record")
public class CommissionRecord {
    private Long id;
    private Long userId;      // database sharding key
    private String orderId;   // table sharding key (stored as a string; must hold a numeric value so it can be converted to a long for the modulo)
    private BigDecimal amount;
    // getters and setters omitted
}
```
```java
package juwatech.cn.mapper;

@Mapper
public interface CommissionRecordMapper {

    @Insert("INSERT INTO commission_record (id, user_id, order_id, amount) VALUES (#{id}, #{userId}, #{orderId}, #{amount})")
    int insert(CommissionRecord record);

    // Note: the WHERE clause must carry both user_id and order_id for the query to be routed to a single shard
    @Select("SELECT * FROM commission_record WHERE user_id = #{userId} AND order_id = #{orderId}")
    CommissionRecord selectByUserAndOrder(@Param("userId") Long userId, @Param("orderId") String orderId);
}
```
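When only part of the sharding key is available, routing widens accordingly: a condition on user_id alone pins the query to one database but fans it out to all 16 commission_record_* tables there, and a query with neither key is broadcast to all 64 physical tables. A hypothetical mapper method for the partial-key case (selectByUserId is illustrative, not from the original code):

```java
// Routed to a single database by user_id, executed against all 16 tables in it,
// with the results merged by ShardingSphere
@Select("SELECT * FROM commission_record WHERE user_id = #{userId}")
List<CommissionRecord> selectByUserId(@Param("userId") Long userId);
```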
Globally Unique ID Generation: Snowflake Algorithm to Avoid Collisions
After sharding, database auto-increment primary keys can no longer guarantee global uniqueness, so we generate distributed IDs with a lightly adapted Snowflake algorithm:
```java
package juwatech.cn.util;

@Component
public class SnowflakeIdGenerator {

    private final long datacenterId; // 0-31, read from configuration
    private final long machineId;    // 0-31, read from configuration
    private long sequence = 0L;
    private long lastTimestamp = -1L;

    // Property names here are placeholders; wire them to your own configuration keys
    public SnowflakeIdGenerator(@Value("${snowflake.datacenter-id:0}") long datacenterId,
                                @Value("${snowflake.machine-id:0}") long machineId) {
        this.datacenterId = datacenterId;
        this.machineId = machineId;
    }

    public synchronized long nextId() {
        long timestamp = System.currentTimeMillis();
        if (timestamp < lastTimestamp) {
            throw new RuntimeException("Clock moved backwards");
        }
        if (timestamp == lastTimestamp) {
            sequence = (sequence + 1) & 4095L; // 12-bit sequence: up to 4096 IDs per millisecond
            if (sequence == 0) {
                timestamp = waitNextMillis(lastTimestamp);
            }
        } else {
            sequence = 0L;
        }
        lastTimestamp = timestamp;
        // 41-bit timestamp (custom epoch 2023-01-01) | 5-bit datacenter | 5-bit machine | 12-bit sequence
        return ((timestamp - 1672502400000L) << 22)
                | (datacenterId << 17)
                | (machineId << 12)
                | sequence;
    }

    private long waitNextMillis(long lastTimestamp) {
        long timestamp = System.currentTimeMillis();
        while (timestamp <= lastTimestamp) {
            timestamp = System.currentTimeMillis();
        }
        return timestamp;
    }
}
```
Assign the ID before inserting:
```java
CommissionRecord record = new CommissionRecord();
record.setId(snowflakeIdGenerator.nextId());
record.setUserId(userId);
record.setOrderId(orderId);
commissionRecordMapper.insert(record);
```
Cross-Shard Query Optimization: Asynchronous Parallelism plus Result Merging
For the admin console's global statistics, we pre-aggregate with a scheduled asynchronous job instead of scanning all shards in real time:
```java
@Scheduled(cron = "0 0 1 * * ?")
public void aggregateDailyCommission() {
    // Aggregate yesterday's commissions into per-shard partial sums
    LocalDate day = LocalDate.now().minusDays(1);
    String start = day + " 00:00:00";
    String end = day.plusDays(1) + " 00:00:00";
    List<CompletableFuture<Void>> futures = new ArrayList<>();
    for (int db = 0; db < 4; db++) {
        for (int tbl = 0; tbl < 16; tbl++) {
            final int finalDb = db;
            final int finalTbl = tbl;
            CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
                // Uses a plain JdbcTemplate against the physical instance (not the sharded data source),
                // so physical schema and table names can be addressed directly
                String sql = "INSERT INTO daily_commission_summary (date, total_amount) " +
                        "SELECT ?, SUM(amount) FROM `ds-master-" + finalDb + "`.commission_record_" + finalTbl +
                        " WHERE create_time >= ? AND create_time < ?";
                jdbcTemplate.update(sql, day.toString(), start, end);
            }, executor);
            futures.add(future);
        }
    }
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
}
```
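The merge half of the job is then trivial: the global figure is just the sum of the 64 per-shard rows. A minimal sketch, assuming daily_commission_summary lives in an unsharded reporting database (the date literal is only a sample value):

```java
// Merge step: global daily total = sum of the per-shard partial sums
BigDecimal total = jdbcTemplate.queryForObject(
        "SELECT SUM(total_amount) FROM daily_commission_summary WHERE date = ?",
        BigDecimal.class, "2025-04-05");
```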
Data Consistency Guarantees: Forced Master Reads and Binlog Compensation
To prevent replication lag from surfacing stale reads, critical paths (such as the balance check before a withdrawal) are forced to read from the primary:
```java
public class MasterRouteHintManager {
    public static void forceMaster() {
        // Route every statement on the current thread to the primary data source
        HintManager.getInstance().setWriteRouteOnly();
    }
    public static void clear() {
        // HintManager is thread-local; clear it after the query so the hint does not leak to later requests
        HintManager.clear();
    }
}
// Usage
public BigDecimal getAvailableCommission(Long userId) {
    MasterRouteHintManager.forceMaster();
    try {
        return commissionMapper.selectAvailableByUser(userId);
    } finally {
        MasterRouteHintManager.clear();
    }
}
```
In addition, Canal subscribes to the MySQL binlog and syncs commission changes into Elasticsearch for complex queries, giving us redundant replicas of the data.
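A minimal sketch of such a Canal consumer, assuming a Canal server reachable at canal.juwatech.cn:11111 with the default example destination and a hypothetical esIndexService that wraps the Elasticsearch writes (these names are assumptions, not taken from the production code):

```java
public void syncCommissionChangesToEs() throws Exception {
    CanalConnector connector = CanalConnectors.newSingleConnector(
            new InetSocketAddress("canal.juwatech.cn", 11111), "example", "", "");
    connector.connect();
    connector.subscribe(".*\\.commission_record_.*"); // only the sharded commission tables
    while (!Thread.currentThread().isInterrupted()) {
        Message message = connector.getWithoutAck(1000);
        long batchId = message.getId();
        if (batchId == -1 || message.getEntries().isEmpty()) {
            Thread.sleep(500); // nothing new in the binlog yet
        } else {
            for (CanalEntry.Entry entry : message.getEntries()) {
                if (entry.getEntryType() != CanalEntry.EntryType.ROWDATA) {
                    continue; // skip transaction begin/end events
                }
                CanalEntry.RowChange rowChange = CanalEntry.RowChange.parseFrom(entry.getStoreValue());
                for (CanalEntry.RowData rowData : rowChange.getRowDatasList()) {
                    // Index the after-image of each changed row into Elasticsearch
                    esIndexService.index(rowData.getAfterColumnsList());
                }
            }
        }
        connector.ack(batchId); // confirm the batch so Canal can advance its consumption cursor
    }
}
```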
Copyright of this article belongs to the 聚娃科技 省赚客APP development team. Please credit the source when republishing!