
MySQL Optimization in Practice: From Principles to Production

Introduction

MySQL is one of the most widely used relational databases, and in real production systems it regularly runs into performance bottlenecks. This guide works through MySQL optimization best practices across several dimensions, including index optimization, SQL optimization and architecture optimization, combining practical scenarios with Java code examples.


1. MySQL Architecture Overview

1.1 Overall Architecture

+------------------------------------------------------------------+
|                      MySQL Server Architecture                    |
+------------------------------------------------------------------+
|                                                                   |
|   Client connection layer                                         |
|   +----------------------------------------------------------+   |
|   | Connection pool | Auth | Thread mgmt | Conn handling | Cache| |
|   +----------------------------------------------------------+   |
|                              |                                    |
|   SQL layer                  v                                    |
|   +----------------------------------------------------------+   |
|   |  Parser  ->  Preprocessor  ->  Optimizer  ->  Executor    |   |
|   +----------------------------------------------------------+   |
|                              |                                    |
|   Storage engine layer       v                                    |
|   +----------------------------------------------------------+   |
|   |   InnoDB   |   MyISAM   |   Memory   |   Archive   | ... |   |
|   +----------------------------------------------------------+   |
|                              |                                    |
|   File system layer          v                                    |
|   +----------------------------------------------------------+   |
|   |  Data files  |  Log files  |  Config files  |  Socket file |  |
|   +----------------------------------------------------------+   |
|                                                                   |
+------------------------------------------------------------------+

1.2 InnoDB Storage Engine Architecture

+------------------------------------------------------------------+
|                      InnoDB Architecture                          |
+------------------------------------------------------------------+
|                                                                   |
|   In-memory structures                                            |
|   +-------------------------+  +-----------------------------+   |
|   |     Buffer Pool         |  |     Log Buffer              |   |
|   |  +-------------------+  |  |     (redo log buffer)       |   |
|   |  | Data / index pages|  |  +-----------------------------+   |
|   |  +-------------------+  |                                    |
|   |  |  Change Buffer    |  |  +-----------------------------+   |
|   |  +-------------------+  |  |   Adaptive Hash Index       |   |
|   +-------------------------+  +-----------------------------+   |
|                                                                   |
|   On-disk structures                                              |
|   +-------------------------+  +-----------------------------+   |
|   |   System Tablespace     |  |     Redo Log Files          |   |
|   |   (ibdata1)             |  |     (ib_logfile0/1)         |   |
|   +-------------------------+  +-----------------------------+   |
|   +-------------------------+  +-----------------------------+   |
|   |   File-Per-Table        |  |     Undo Tablespaces        |   |
|   |   (.ibd files)          |  |                             |   |
|   +-------------------------+  +-----------------------------+   |
|                                                                   |
+------------------------------------------------------------------+

2. Index Optimization

2.1 Index Data Structures

B+ tree index structure:

                    [  15  |  28  |  45  ]        <- root node
                   /        |        \
                  /         |         \
   [3|6|9|12]           [18|21|25]          [32|38|42]    <- internal nodes
    /  |  \              /  |  \              /  |  \
   v   v   v            v   v   v            v   v   v
  [1,2,3] [4,5,6]...  [16,17,18]...       [30,31,32]...   <- leaf nodes
     |                    |                    |
     +--------------------+--------------------+          <- doubly linked list

Key properties:
1. Non-leaf nodes store only keys, never row data
2. Leaf nodes store all the keys plus the data (for secondary indexes, the primary key)
3. Leaf nodes are chained with a doubly linked list, which makes range scans cheap
4. Lookup cost is a stable O(log n)

2.2 Clustered vs. Secondary Indexes

Clustered index (primary key index):
+--------+-----------+--------+-----------+
|   ID   |   Name    |  Age   |   City    |
+--------+-----------+--------+-----------+
|   1    | Zhang San |   25   | Beijing   |    <- full data rows
|   2    | Li Si     |   30   | Shanghai  |
|   3    | Wang Wu   |   28   | Guangzhou |
+--------+-----------+--------+-----------+

Secondary (non-clustered) index:
+-----------+--------+
|   Name    |   ID   |    <- stores only the indexed column + primary key
+-----------+--------+
| Li Si     |   2    |
| Wang Wu   |   3    |
| Zhang San |   1    |
+-----------+--------+
       |
       v
  Back-to-table lookup: use the ID to probe the clustered index again for the full row

2.3 Index Design Principles

sql
-- 1. Prefer high-selectivity columns (selectivity > 0.1)
SELECT COUNT(DISTINCT column) / COUNT(*) AS selectivity FROM table;

-- 2. Composite indexes follow the leftmost-prefix rule
CREATE INDEX idx_a_b_c ON table(a, b, c);
-- Uses the index:   WHERE a=1 / WHERE a=1 AND b=2 / WHERE a=1 AND b=2 AND c=3
-- Cannot use it:    WHERE b=2 / WHERE c=3 / WHERE b=2 AND c=3

-- 3. Covering indexes avoid back-to-table lookups
-- Every column the query needs is in the index
CREATE INDEX idx_name_age ON user(name, age);
SELECT name, age FROM user WHERE name = 'Zhang San';  -- covering index, no extra lookup

-- 4. Prefix indexes save space
CREATE INDEX idx_email ON user(email(10));  -- index only the first 10 characters

2.4 Scenarios Where an Index Is Not Used

sql
-- Test table and indexes
CREATE TABLE `user` (
    `id` BIGINT PRIMARY KEY AUTO_INCREMENT,
    `name` VARCHAR(50),
    `age` INT,
    `email` VARCHAR(100),
    `status` TINYINT,
    `create_time` DATETIME,
    INDEX idx_name(name),
    INDEX idx_age(age),
    INDEX idx_status(status),
    INDEX idx_create_time(create_time)
);

-- 1. Applying a function to the indexed column
EXPLAIN SELECT * FROM user WHERE YEAR(create_time) = 2024;  -- index not used
EXPLAIN SELECT * FROM user WHERE create_time >= '2024-01-01'
                             AND create_time < '2025-01-01';  -- index used

-- 2. Implicit type conversion (see the Java sketch after this block for how this
--    sneaks in from application code)
EXPLAIN SELECT * FROM user WHERE name = 123;    -- name is VARCHAR, index not used
EXPLAIN SELECT * FROM user WHERE name = '123';  -- index used

-- 3. Leading-wildcard LIKE
EXPLAIN SELECT * FROM user WHERE name LIKE '%zhang';   -- index not used
EXPLAIN SELECT * FROM user WHERE name LIKE 'zhang%';   -- index used

-- 4. OR mixing indexed and unindexed columns
EXPLAIN SELECT * FROM user WHERE name = 'Zhang San' OR email = 'test@test.com';
-- email has no index, so the whole predicate falls back to a scan

-- 5. NOT IN / NOT EXISTS
EXPLAIN SELECT * FROM user WHERE status NOT IN (1, 2);  -- may not use the index

-- 6. != / <> operators
EXPLAIN SELECT * FROM user WHERE status != 1;  -- may not use the index
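
The implicit-conversion trap in case 2 usually originates in application code, when the Java parameter type does not match the column type. A minimal, illustrative sketch (the UserQueryDao class is hypothetical; the user table and idx_name index are the ones created above):

java
import org.springframework.jdbc.core.JdbcTemplate;

import java.util.List;
import java.util.Map;

public class UserQueryDao {

    private final JdbcTemplate jdbcTemplate;

    public UserQueryDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public List<Map<String, Object>> findByName(Object name) {
        // If the caller passes a Long/Integer here, MySQL compares the VARCHAR column
        // against a number, casts name on every row, and idx_name cannot be used.
        // Binding the value as a String keeps the comparison index-friendly.
        return jdbcTemplate.queryForList(
                "SELECT id, name, age FROM user WHERE name = ?",
                String.valueOf(name));
    }
}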

2.5 Java Example: Index Analysis Utility

java
import lombok.Data;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.Map;

@Slf4j
@Component
public class IndexAnalyzer {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    /**
     * Analyze a table's index metadata
     */
    public List<Map<String, Object>> analyzeIndexUsage(String schema, String tableName) {
        String sql = """
            SELECT
                index_name,
                column_name,
                cardinality,
                sub_part,
                nullable
            FROM information_schema.statistics
            WHERE table_schema = ? AND table_name = ?
            ORDER BY index_name, seq_in_index
            """;
        return jdbcTemplate.queryForList(sql, schema, tableName);
    }

    /**
     * Find columns that are covered by more than one index (candidate duplicate indexes)
     */
    public List<Map<String, Object>> findDuplicateIndexes(String schema) {
        String sql = """
            SELECT
                t.table_name,
                GROUP_CONCAT(index_name) AS duplicate_indexes,
                GROUP_CONCAT(column_name ORDER BY seq_in_index) AS columns
            FROM information_schema.statistics t
            WHERE table_schema = ?
            GROUP BY table_name, column_name
            HAVING COUNT(*) > 1
            """;
        return jdbcTemplate.queryForList(sql, schema);
    }

    /**
     * Calculate a column's selectivity (distinct values / total rows).
     * Table and column names are interpolated into the SQL, so pass only
     * trusted identifiers here.
     */
    public double calculateSelectivity(String tableName, String columnName) {
        String sql = String.format(
            "SELECT COUNT(DISTINCT %s) / COUNT(*) AS selectivity FROM %s",
            columnName, tableName
        );
        Double selectivity = jdbcTemplate.queryForObject(sql, Double.class);
        return selectivity != null ? selectivity : 0.0;
    }

    /**
     * Find unused indexes (based on the sys schema)
     */
    public List<Map<String, Object>> findUnusedIndexes(String schema) {
        String sql = """
            SELECT
                object_schema,
                object_name,
                index_name
            FROM sys.schema_unused_indexes
            WHERE object_schema = ?
            """;
        return jdbcTemplate.queryForList(sql, schema);
    }

    /**
     * Surface expensive statements from performance_schema (candidates for index work)
     */
    public List<Map<String, Object>> analyzeSlowQueryIndexes() {
        String sql = """
            SELECT
                digest_text,
                count_star AS exec_count,
                avg_timer_wait / 1000000000 AS avg_time_ms,
                sum_rows_examined / count_star AS avg_rows_examined,
                sum_rows_sent / count_star AS avg_rows_sent
            FROM performance_schema.events_statements_summary_by_digest
            WHERE schema_name IS NOT NULL
              AND avg_timer_wait > 1000000000
            ORDER BY avg_timer_wait DESC
            LIMIT 20
            """;
        return jdbcTemplate.queryForList(sql);
    }
}

3. SQL Optimization

3.1 Analyzing Execution Plans

sql
EXPLAIN SELECT * FROM user WHERE name = 'Zhang San';

+----+-------------+-------+------+---------------+----------+---------+-------+------+-------+
| id | select_type | table | type | possible_keys | key      | key_len | ref   | rows | Extra |
+----+-------------+-------+------+---------------+----------+---------+-------+------+-------+
| 1  | SIMPLE      | user  | ref  | idx_name      | idx_name | 153     | const | 1    | NULL  |
+----+-------------+-------+------+---------------+----------+---------+-------+------+-------+

Key columns explained:
+------------+------------------------------------------------------------+
|   Column   |                        Meaning                             |
+------------+------------------------------------------------------------+
| type       | Access type (best to worst):                               |
|            | system > const > eq_ref > ref > range > index > ALL        |
+------------+------------------------------------------------------------+
| key        | Index actually used                                        |
+------------+------------------------------------------------------------+
| rows       | Estimated number of rows scanned                           |
+------------+------------------------------------------------------------+
| Extra      | Additional information:                                    |
|            | Using index     - covering index                           |
|            | Using where     - WHERE filtering applied                  |
|            | Using temporary - a temporary table is used                |
|            | Using filesort  - extra sort pass (worth optimizing)       |
+------------+------------------------------------------------------------+

3.2 Common SQL Optimization Techniques

sql
-- 1. Pagination
-- Problem: deep offsets are slow
SELECT * FROM user ORDER BY id LIMIT 1000000, 10;

-- Option 1: deferred join
SELECT u.* FROM user u
INNER JOIN (SELECT id FROM user ORDER BY id LIMIT 1000000, 10) t
ON u.id = t.id;

-- Option 2: keyset pagination, remember the last id returned by the previous page
SELECT * FROM user WHERE id > 1000000 ORDER BY id LIMIT 10;

-- 2. COUNT optimization
-- Problem: counting a large range examines many rows even with an index
SELECT COUNT(*) FROM user WHERE status = 1;

-- Option: keep the counter elsewhere or accept an approximation
-- e.g. maintain a Redis counter, or use the EXPLAIN row estimate

-- 3. JOIN optimization
-- Let the small table drive the big one
SELECT * FROM small_table s
INNER JOIN big_table b ON s.id = b.small_id;

-- Make sure the join column is indexed
CREATE INDEX idx_small_id ON big_table(small_id);

-- 4. Rewrite subqueries as JOINs
-- Problem
SELECT * FROM orders
WHERE user_id IN (SELECT id FROM user WHERE status = 1);

-- Optimized
SELECT o.* FROM orders o
INNER JOIN user u ON o.user_id = u.id
WHERE u.status = 1;

-- 5. Avoid SELECT *
-- Problem
SELECT * FROM user WHERE id = 1;

-- Optimized: select only the columns you need
SELECT id, name, age FROM user WHERE id = 1;

-- 6. Batch operations (a Java batching sketch follows after this block)
-- Problem: inserting rows one by one in a loop
INSERT INTO user (name, age) VALUES ('Zhang San', 25);
INSERT INTO user (name, age) VALUES ('Li Si', 30);

-- Optimized: multi-value insert
INSERT INTO user (name, age) VALUES
    ('Zhang San', 25),
    ('Li Si', 30),
    ('Wang Wu', 28);

-- 7. Prefer UNION ALL over UNION
-- UNION deduplicates (extra sort/temporary work); UNION ALL does not
SELECT name FROM user WHERE age > 30
UNION ALL
SELECT name FROM user WHERE status = 1;
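
On the Java side, tip 6 typically means using JDBC batching instead of one statement per row. A minimal sketch with JdbcTemplate (UserBatchWriter is a made-up class name; adding rewriteBatchedStatements=true to the JDBC URL lets the MySQL driver collapse the batch into multi-value INSERTs):

java
import org.springframework.jdbc.core.JdbcTemplate;

import java.util.List;

public class UserBatchWriter {

    private final JdbcTemplate jdbcTemplate;

    public UserBatchWriter(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /**
     * Insert users in one JDBC batch rather than one round trip per row.
     * Each Object[] holds {name, age}.
     */
    public int[] insertBatch(List<Object[]> nameAndAgeRows) {
        return jdbcTemplate.batchUpdate(
                "INSERT INTO user (name, age) VALUES (?, ?)",
                nameAndAgeRows);
    }
}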

3.3 Java Example: SQL Execution Analyzer

java
import lombok.Data;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.Map;

@Slf4j
@Component
public class SqlAnalyzer {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    /**
     * Run EXPLAIN on a SQL statement
     */
    public List<Map<String, Object>> explainSql(String sql) {
        String explainSql = "EXPLAIN " + sql;
        return jdbcTemplate.queryForList(explainSql);
    }

    /**
     * Get the execution plan in JSON format
     */
    public String explainJsonFormat(String sql) {
        String explainSql = "EXPLAIN FORMAT=JSON " + sql;
        return jdbcTemplate.queryForObject(explainSql, String.class);
    }

    /**
     * Analyze the execution plan and produce optimization suggestions
     */
    public SqlAnalysisResult analyzeSql(String sql) {
        List<Map<String, Object>> explainResult = explainSql(sql);
        SqlAnalysisResult result = new SqlAnalysisResult();
        result.setOriginalSql(sql);
        result.setExplainResult(explainResult);

        for (Map<String, Object> row : explainResult) {
            String type = (String) row.get("type");
            String extra = (String) row.get("Extra");
            // the JDBC driver may return rows as Long or BigInteger, so go through Number
            Number rowsValue = (Number) row.get("rows");
            Long rows = rowsValue != null ? rowsValue.longValue() : null;

            // full table scan
            if ("ALL".equals(type)) {
                result.addSuggestion("Full table scan detected; consider adding a suitable index");
            }

            // filesort
            if (extra != null && extra.contains("Using filesort")) {
                result.addSuggestion("Filesort detected; optimize ORDER BY or add an index that matches the sort");
            }

            // temporary table
            if (extra != null && extra.contains("Using temporary")) {
                result.addSuggestion("Temporary table used; this may hurt performance");
            }

            // rows examined
            if (rows != null && rows > 10000) {
                result.addSuggestion("Large estimated scan (" + rows + " rows); tighten the query conditions or indexes");
            }
        }
        }

        return result;
    }

    @Data
    public static class SqlAnalysisResult {
        private String originalSql;
        private List<Map<String, Object>> explainResult;
        private List<String> suggestions = new java.util.ArrayList<>();

        public void addSuggestion(String suggestion) {
            suggestions.add(suggestion);
        }
    }
}

3.4 MyBatis-Plus Pagination Optimization

java
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.baomidou.mybatisplus.core.metadata.IPage;
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

import java.util.List;

@Mapper
public interface UserMapper extends BaseMapper<User> {

    /**
     * Plain pagination (performs poorly for deep pages)
     */
    IPage<User> selectPageNormal(Page<User> page);

    /**
     * Optimized pagination: deferred join
     */
    @Select("""
        SELECT u.* FROM user u
        INNER JOIN (
            SELECT id FROM user
            WHERE status = #{status}
            ORDER BY id
            LIMIT #{offset}, #{limit}
        ) t ON u.id = t.id
        """)
    List<User> selectPageOptimized(@Param("status") Integer status,
                                    @Param("offset") long offset,
                                    @Param("limit") long limit);

    /**
     * Cursor (keyset) pagination - recommended
     */
    @Select("""
        SELECT * FROM user
        WHERE id > #{lastId} AND status = #{status}
        ORDER BY id
        LIMIT #{limit}
        """)
    List<User> selectPageByCursor(@Param("lastId") Long lastId,
                                   @Param("status") Integer status,
                                   @Param("limit") long limit);
}
java
@Service
@Slf4j
public class UserService {

    @Autowired
    private UserMapper userMapper;

    /**
     * Cursor-based (keyset) pagination
     */
    public PageResult<User> queryByCursor(Long lastId, Integer status, int pageSize) {
        // a null lastId means start from the beginning
        if (lastId == null) {
            lastId = 0L;
        }

        List<User> users = userMapper.selectPageByCursor(lastId, status, pageSize + 1);

        PageResult<User> result = new PageResult<>();

        if (users.size() > pageSize) {
            // there is a next page
            result.setHasNext(true);
            users = users.subList(0, pageSize);
            result.setNextCursor(users.get(users.size() - 1).getId());
        } else {
            result.setHasNext(false);
        }

        result.setData(users);
        return result;
    }

    @Data
    public static class PageResult<T> {
        private List<T> data;
        private boolean hasNext;
        private Long nextCursor;
    }
}

4. Connection Pool Optimization

4.1 How a Connection Pool Works

+------------------------------------------------------------------+
|                  How a Connection Pool Works                      |
+------------------------------------------------------------------+
|                                                                   |
|   Application                                                     |
|      |                                                            |
|      v                                                            |
|   +------------------+                                            |
|   | Connection pool  |                                            |
|   | +----+ +----+    |        +------------------+                |
|   | |conn| |conn|    | <----> |   MySQL Server   |                |
|   | +----+ +----+    |        +------------------+                |
|   | +----+ +----+    |                                            |
|   | |conn| |idle|    |                                            |
|   | +----+ +----+    |                                            |
|   +------------------+                                            |
|        |                                                          |
|   +----+----+                                                     |
|   v         v                                                     |
| borrow     return                                                 |
| connection connection                                             |
|                                                                   |
+------------------------------------------------------------------+

4.2 HikariCP Configuration Tuning

yaml
# application.yml
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      # pool name
      pool-name: HikariPool-Master

      # minimum number of idle connections
      minimum-idle: 10

      # pool size; a common starting point is (core count * 2) + effective spindle count,
      # typically somewhere between 20 and 50
      maximum-pool-size: 20

      # idle connection timeout (ms)
      idle-timeout: 600000

      # maximum connection lifetime (ms)
      max-lifetime: 1800000

      # timeout for obtaining a connection (ms)
      connection-timeout: 30000

      # connection test query (only needed for drivers without JDBC4 isValid support)
      connection-test-query: SELECT 1

      # auto commit
      auto-commit: true

      # connection initialization SQL
      connection-init-sql: SET NAMES utf8mb4
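
Outside Spring Boot the same settings can be applied through HikariCP's Java API. A sketch with placeholder URL and credentials, mirroring the YAML above:

java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class MasterDataSourceFactory {

    public static HikariDataSource build() {
        HikariConfig config = new HikariConfig();
        config.setPoolName("HikariPool-Master");
        config.setJdbcUrl("jdbc:mysql://localhost:3306/db0");   // placeholder
        config.setUsername("root");                              // placeholder
        config.setPassword("root");                              // placeholder
        config.setMinimumIdle(10);
        config.setMaximumPoolSize(20);
        config.setIdleTimeout(600_000);        // 10 minutes
        config.setMaxLifetime(1_800_000);      // 30 minutes
        config.setConnectionTimeout(30_000);   // 30 seconds
        config.setConnectionInitSql("SET NAMES utf8mb4");
        return new HikariDataSource(config);
    }
}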

4.3 Connection Pool Monitoring

java
import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.HikariPoolMXBean;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import javax.sql.DataSource;

@Slf4j
@Component
public class ConnectionPoolMonitor {

    @Autowired
    private DataSource dataSource;

    /**
     * Log connection pool status once per minute
     */
    @Scheduled(fixedRate = 60000)
    public void monitorPool() {
        if (dataSource instanceof HikariDataSource) {
            HikariDataSource hikariDataSource = (HikariDataSource) dataSource;
            HikariPoolMXBean poolMXBean = hikariDataSource.getHikariPoolMXBean();

            log.info("=== HikariCP 连接池状态 ===");
            log.info("活跃连接数: {}", poolMXBean.getActiveConnections());
            log.info("空闲连接数: {}", poolMXBean.getIdleConnections());
            log.info("总连接数: {}", poolMXBean.getTotalConnections());
            log.info("等待连接的线程数: {}", poolMXBean.getThreadsAwaitingConnection());

            // 告警:等待连接的线程数过多
            if (poolMXBean.getThreadsAwaitingConnection() > 5) {
                log.warn("等待连接的线程数过多,可能需要增加连接池大小!");
            }

            // alert: pool usage too high
            double usageRate = (double) poolMXBean.getActiveConnections()
                    / hikariDataSource.getMaximumPoolSize();
            if (usageRate > 0.8) {
                log.warn("Connection pool usage above 80%: {}%", usageRate * 100);
            }
        }
    }
}

5. Lock Optimization

5.1 InnoDB Lock Types

+------------------------------------------------------------------+
|                        InnoDB Lock Types                          |
+------------------------------------------------------------------+
|                                                                   |
|   Row locks:                                                      |
|   +----------------------+------------------------------------+  |
|   |  Shared lock (S)     |  SELECT ... LOCK IN SHARE MODE     |  |
|   |                      |  (FOR SHARE in MySQL 8.0)          |  |
|   +----------------------+------------------------------------+  |
|   |  Exclusive lock (X)  |  SELECT ... FOR UPDATE / DML       |  |
|   +----------------------+------------------------------------+  |
|                                                                   |
|   Intention locks:                                                |
|   +----------------------+------------------------------------+  |
|   |  Intention shared    |  transaction intends to take S     |  |
|   |  (IS)                |  locks on individual rows          |  |
|   +----------------------+------------------------------------+  |
|   |  Intention exclusive |  transaction intends to take X     |  |
|   |  (IX)                |  locks on individual rows          |  |
|   +----------------------+------------------------------------+  |
|                                                                   |
|   Gap locks & next-key locks (prevent phantom reads):             |
|   +----------------------+------------------------------------+  |
|   |  Gap Lock            |  locks the gap between index records| |
|   +----------------------+------------------------------------+  |
|   |  Next-Key Lock       |  record lock + gap lock            |  |
|   +----------------------+------------------------------------+  |
|                                                                   |
+------------------------------------------------------------------+

5.2 Deadlock Analysis and Prevention

sql
-- Current lock waits (MySQL 5.7; in MySQL 8.0 use performance_schema.data_lock_waits)
SELECT * FROM information_schema.INNODB_LOCK_WAITS;

-- Current locks (MySQL 5.7; in MySQL 8.0 use performance_schema.data_locks)
SELECT * FROM information_schema.INNODB_LOCKS;

-- Details of the most recent deadlock
SHOW ENGINE INNODB STATUS;

-- Deadlock prevention principles (a retry-on-deadlock sketch in Java follows below):
-- 1. Access tables and rows in a fixed order
-- 2. Split big transactions into smaller ones
-- 3. Set a sensible lock wait timeout (innodb_lock_wait_timeout)
-- 4. Use a lower isolation level where the business allows it
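
Principle 1 (a fixed locking order) removes most deadlocks structurally; for the ones that still slip through, InnoDB rolls back one victim transaction, and the usual pattern on the Java side is to retry it. A hedged sketch built around Spring's DeadlockLoserDataAccessException (DeadlockRetryExecutor is a made-up helper; the supplier should wrap a @Transactional method so every attempt runs in a fresh transaction):

java
import org.springframework.dao.DeadlockLoserDataAccessException;

import java.util.function.Supplier;

public class DeadlockRetryExecutor {

    private static final int MAX_RETRIES = 3;

    public <T> T executeWithRetry(Supplier<T> transactionalOperation) {
        for (int attempt = 1; ; attempt++) {
            try {
                // each call should open its own transaction; the deadlock victim
                // was already rolled back by InnoDB, so re-running it is safe
                return transactionalOperation.get();
            } catch (DeadlockLoserDataAccessException e) {
                if (attempt >= MAX_RETRIES) {
                    throw e;
                }
                // small backoff before the next attempt
                try {
                    Thread.sleep(50L * attempt);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
    }
}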

5.3 Java Example: Optimistic Locking

java
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import com.baomidou.mybatisplus.annotation.Version;
import lombok.Data;

import java.math.BigDecimal;

@Data
@TableName("product")
public class Product {
    @TableId(type = IdType.AUTO)
    private Long id;
    private String name;
    private Integer stock;
    private BigDecimal price;

    @Version
    private Integer version;  // optimistic-lock version number
}
java
@Service
@Slf4j
public class ProductService {

    @Autowired
    private ProductMapper productMapper;

    /**
     * Decrease stock with optimistic locking.
     * Note: the @Version field only takes effect when MyBatis-Plus's
     * optimistic locker interceptor is registered.
     */
    public boolean decreaseStock(Long productId, int quantity) {
        int maxRetry = 3;
        int retry = 0;

        while (retry < maxRetry) {
            // read the current stock (and version)
            Product product = productMapper.selectById(productId);
            if (product == null) {
                throw new RuntimeException("Product not found");
            }

            if (product.getStock() < quantity) {
                throw new RuntimeException("Insufficient stock");
            }

            // optimistic-lock update: succeeds only if the version is unchanged
            product.setStock(product.getStock() - quantity);
            int affected = productMapper.updateById(product);

            if (affected > 0) {
                log.info("Stock decreased, productId: {}, quantity: {}", productId, quantity);
                return true;
            }

            // the update lost the race, retry
            retry++;
            log.warn("Stock update conflict, retry #{}", retry);

            try {
                Thread.sleep(100 * retry);  // back off before retrying
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        throw new RuntimeException("Failed to decrease stock, please retry later");
    }

    /**
     * Decrease stock with pessimistic locking
     */
    @Transactional
    public boolean decreaseStockPessimistic(Long productId, int quantity) {
        // SELECT ... FOR UPDATE takes an exclusive row lock
        Product product = productMapper.selectForUpdate(productId);

        if (product == null) {
            throw new RuntimeException("Product not found");
        }

        if (product.getStock() < quantity) {
            throw new RuntimeException("Insufficient stock");
        }
        }

        product.setStock(product.getStock() - quantity);
        productMapper.updateById(product);

        return true;
    }
}
java
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

@Mapper
public interface ProductMapper extends BaseMapper<Product> {

    @Select("SELECT * FROM product WHERE id = #{id} FOR UPDATE")
    Product selectForUpdate(@Param("id") Long id);
}
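
Besides the version-column and FOR UPDATE approaches above, a common variant is to fold the check into a single conditional UPDATE, so the read-modify-write race disappears entirely. A sketch as an extra mapper method (decreaseStockAtomically is a name invented for this example):

java
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Update;

@Mapper
public interface ProductStockMapper {

    /**
     * Returns 1 when the stock was decremented, 0 when there was not enough stock.
     * The WHERE clause makes the check and the update one atomic statement,
     * so no separate SELECT or version column is needed.
     */
    @Update("UPDATE product SET stock = stock - #{quantity} " +
            "WHERE id = #{id} AND stock >= #{quantity}")
    int decreaseStockAtomically(@Param("id") Long id, @Param("quantity") int quantity);
}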

6. Slow Query Optimization

6.1 Slow Query Log Configuration

sql
-- Check the slow query settings
SHOW VARIABLES LIKE 'slow_query%';
SHOW VARIABLES LIKE 'long_query_time';

-- Enable the slow query log
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;  -- log queries slower than 1 second
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';

-- my.cnf settings
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
log_queries_not_using_indexes = 1

6.2 Slow Query Analysis Tools

bash
# Analyze the log with mysqldumpslow
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log

# Options:
# -s: sort order (t: query time, c: count, r: rows returned)
# -t: show the top N entries

6.3 Monitoring Slow SQL from Java

java
import lombok.extern.slf4j.Slf4j;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
@Slf4j
public class SlowSqlMonitorAspect {

    private static final long SLOW_SQL_THRESHOLD = 1000;  // 1 second

    @Around("execution(* com.example..mapper.*Mapper.*(..))")
    public Object monitorSql(ProceedingJoinPoint joinPoint) throws Throwable {
        long startTime = System.currentTimeMillis();
        String methodName = joinPoint.getSignature().toShortString();

        try {
            return joinPoint.proceed();
        } finally {
            long executionTime = System.currentTimeMillis() - startTime;

            if (executionTime > SLOW_SQL_THRESHOLD) {
                log.warn("Slow SQL warning - method: {}, time: {}ms, args: {}",
                        methodName, executionTime, joinPoint.getArgs());

                // optionally push an alert notification
                sendSlowSqlAlert(methodName, executionTime);
            }
        }
    }

    private void sendSlowSqlAlert(String method, long time) {
        // send a DingTalk / email alert here
    }
}

6.4 SQL Analysis with P6Spy

xml
<!-- pom.xml -->
<dependency>
    <groupId>p6spy</groupId>
    <artifactId>p6spy</artifactId>
    <version>3.9.1</version>
</dependency>
properties
# spy.properties
modulelist=com.baomidou.mybatisplus.extension.p6spy.MybatisPlusLogFactory,com.p6spy.engine.outage.P6OutageFactory
logMessageFormat=com.baomidou.mybatisplus.extension.p6spy.P6SpyLogger
appender=com.p6spy.engine.spy.appender.Slf4JLogger
deregisterdrivers=true
useprefix=true
excludecategories=info,debug,result,resultset
dateformat=yyyy-MM-dd HH:mm:ss
outagedetection=true
outagedetectioninterval=2

7. Sharding: Splitting Databases and Tables

7.1 Sharding Strategies

+------------------------------------------------------------------+
|                      Sharding Strategies                          |
+------------------------------------------------------------------+
|                                                                   |
|   Vertical split:                                                 |
|   +-----------+              +----------+  +----------+           |
|   | big schema|    ---->     | user tbl |  | order tbl|           |
|   | users     |              +----------+  +----------+           |
|   | orders    |              | user DB  |  | order DB |           |
|   | products  |              +----------+  +----------+           |
|   +-----------+                                                   |
|                                                                   |
|   Horizontal split:                                               |
|   +-----------+              +----------+  +----------+           |
|   | order tbl |    ---->     | order_0  |  | order_1  |           |
|   | 10M rows  |              +----------+  +----------+           |
|   +-----------+              | 5M rows  |  | 5M rows  |           |
|                              +----------+  +----------+           |
|                                                                   |
|   Shard-key strategies:                                           |
|   +------------------+--------------------------------------+     |
|   |  Hash / modulo   |  user_id % 4 = shard number          |     |
|   +------------------+--------------------------------------+     |
|   |  Range           |  0-10M -> shard 0, 10M-20M -> shard 1|     |
|   +------------------+--------------------------------------+     |
|   |  Time-based      |  one table per month / year          |     |
|   +------------------+--------------------------------------+     |
|                                                                   |
+------------------------------------------------------------------+
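
Before reaching for middleware, it helps to see how little the hash/modulo strategy itself involves. A toy routing helper (4 shards is an arbitrary number for the example; real sharding middleware additionally handles cross-shard queries, resharding and distributed transactions):

java
public class OrderShardRouter {

    private static final int SHARD_COUNT = 4;   // example value

    /**
     * Returns the physical table an order belongs to, e.g. t_order_0 .. t_order_3.
     */
    public static String routeTable(long userId) {
        long shard = Math.floorMod(userId, (long) SHARD_COUNT);
        return "t_order_" + shard;
    }
}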

7.2 ShardingSphere Configuration

yaml
# application.yml
spring:
  shardingsphere:
    datasource:
      names: ds0,ds1
      ds0:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://localhost:3306/db0
        username: root
        password: root
      ds1:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://localhost:3306/db1
        username: root
        password: root

    rules:
      sharding:
        tables:
          t_order:
            actual-data-nodes: ds$->{0..1}.t_order_$->{0..1}
            database-strategy:
              standard:
                sharding-column: user_id
                sharding-algorithm-name: database-inline
            table-strategy:
              standard:
                sharding-column: order_id
                sharding-algorithm-name: table-inline
            key-generate-strategy:
              column: order_id
              key-generator-name: snowflake

        sharding-algorithms:
          database-inline:
            type: INLINE
            props:
              algorithm-expression: ds$->{user_id % 2}
          table-inline:
            type: INLINE
            props:
              algorithm-expression: t_order_$->{order_id % 2}

        key-generators:
          snowflake:
            type: SNOWFLAKE

    props:
      sql-show: true

7.3 Distributed ID Generation

java
/**
 * Snowflake ID generator
 */
public class SnowflakeIdGenerator {

    // custom epoch (2024-01-01)
    private final long START_TIMESTAMP = 1704067200000L;

    // bit widths of each part
    private final long SEQUENCE_BIT = 12;   // sequence number
    private final long MACHINE_BIT = 5;     // machine id
    private final long DATACENTER_BIT = 5;  // datacenter id

    // maximum values
    private final long MAX_SEQUENCE = ~(-1L << SEQUENCE_BIT);
    private final long MAX_MACHINE = ~(-1L << MACHINE_BIT);
    private final long MAX_DATACENTER = ~(-1L << DATACENTER_BIT);

    // bit shifts
    private final long MACHINE_SHIFT = SEQUENCE_BIT;
    private final long DATACENTER_SHIFT = SEQUENCE_BIT + MACHINE_BIT;
    private final long TIMESTAMP_SHIFT = SEQUENCE_BIT + MACHINE_BIT + DATACENTER_BIT;

    private final long datacenterId;
    private final long machineId;
    private long sequence = 0L;
    private long lastTimestamp = -1L;

    public SnowflakeIdGenerator(long datacenterId, long machineId) {
        if (datacenterId > MAX_DATACENTER || datacenterId < 0) {
            throw new IllegalArgumentException("Datacenter ID out of range");
        }
        if (machineId > MAX_MACHINE || machineId < 0) {
            throw new IllegalArgumentException("Machine ID out of range");
        }
        this.datacenterId = datacenterId;
        this.machineId = machineId;
    }

    public synchronized long nextId() {
        long currentTimestamp = System.currentTimeMillis();

        if (currentTimestamp < lastTimestamp) {
            throw new RuntimeException("Clock moved backwards; refusing to generate ID");
        }

        if (currentTimestamp == lastTimestamp) {
            sequence = (sequence + 1) & MAX_SEQUENCE;
            if (sequence == 0) {
                // sequence exhausted within this millisecond, wait for the next one
                currentTimestamp = waitNextMillis(lastTimestamp);
            }
        } else {
            sequence = 0L;
        }

        lastTimestamp = currentTimestamp;

        return ((currentTimestamp - START_TIMESTAMP) << TIMESTAMP_SHIFT)
                | (datacenterId << DATACENTER_SHIFT)
                | (machineId << MACHINE_SHIFT)
                | sequence;
    }

    private long waitNextMillis(long lastTimestamp) {
        long timestamp = System.currentTimeMillis();
        while (timestamp <= lastTimestamp) {
            timestamp = System.currentTimeMillis();
        }
        return timestamp;
    }
}
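
A quick usage sketch: create one generator per process (the datacenterId/machineId values of 1 and 1 below are placeholders; in practice they come from configuration or the host identity) and call nextId() wherever an order_id is needed:

java
public class SnowflakeDemo {

    public static void main(String[] args) {
        // placeholder node identity; must be unique per running instance
        SnowflakeIdGenerator generator = new SnowflakeIdGenerator(1, 1);

        for (int i = 0; i < 3; i++) {
            System.out.println(generator.nextId());   // roughly time-ordered 64-bit IDs
        }
    }
}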

8. Read/Write Splitting

8.1 Read/Write Splitting Architecture

+------------------------------------------------------------------+
|                  Read/Write Splitting Architecture                |
+------------------------------------------------------------------+
|                                                                   |
|                        Application                                |
|                           |                                       |
|                           v                                       |
|                    +---------------+                              |
|                    |  DataSource   |                              |
|                    |  router       |                              |
|                    +-------+-------+                              |
|                           |                                       |
|              +------------+------------+                          |
|              |                         |                          |
|              v                         v                          |
|        +----------+             +----------+                      |
|        |  Master  |  -------->  |  Slave   |                      |
|        |  writes  | replication |  reads   |                      |
|        +----------+             +----------+                      |
|                                      |                            |
|                                      v                            |
|                                 +----------+                      |
|                                 |  Slave   |                      |
|                                 |  reads   |                      |
|                                 +----------+                      |
|                                                                   |
+------------------------------------------------------------------+

8.2 Dynamic DataSource Implementation

java
/**
 * Data source type enum
 */
public enum DataSourceType {
    MASTER,
    SLAVE
}

/**
 * Holds the selected data source type for the current thread
 */
public class DataSourceContextHolder {
    private static final ThreadLocal<DataSourceType> CONTEXT = new ThreadLocal<>();

    public static void setDataSourceType(DataSourceType type) {
        CONTEXT.set(type);
    }

    public static DataSourceType getDataSourceType() {
        return CONTEXT.get() == null ? DataSourceType.MASTER : CONTEXT.get();
    }

    public static void clear() {
        CONTEXT.remove();
    }
}

/**
 * Routing data source (extends Spring's AbstractRoutingDataSource,
 * org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource)
 */
public class DynamicDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return DataSourceContextHolder.getDataSourceType();
    }
}
java
/**
 * Marker annotation: methods annotated with @ReadOnly are routed to a slave
 */
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface ReadOnly {
}

/**
 * Aspect that switches to the slave data source for @ReadOnly methods
 */
@Aspect
@Component
@Order(-1)  // must run before the transaction advice
public class DataSourceAspect {

    @Around("@annotation(readOnly)")
    public Object around(ProceedingJoinPoint point, ReadOnly readOnly) throws Throwable {
        try {
            DataSourceContextHolder.setDataSourceType(DataSourceType.SLAVE);
            return point.proceed();
        } finally {
            DataSourceContextHolder.clear();
        }
    }
}
java
@Service
public class UserService {

    @Autowired
    private UserMapper userMapper;

    /**
     * Write operation - uses the master
     */
    @Transactional
    public void createUser(User user) {
        userMapper.insert(user);
    }

    /**
     * Read operation - uses a slave
     */
    @ReadOnly
    public User getUser(Long id) {
        return userMapper.selectById(id);
    }

    /**
     * Read operation - uses a slave
     */
    @ReadOnly
    public List<User> listUsers(Integer status) {
        return userMapper.selectByStatus(status);
    }
}
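
For the routing above to take effect, the DynamicDataSource still has to be registered as the primary DataSource with the master and slave pools as its targets. A minimal wiring sketch (the spring.datasource.master / spring.datasource.slave property prefixes and bean names are assumptions made for this example):

java
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

import javax.sql.DataSource;
import java.util.HashMap;
import java.util.Map;

@Configuration
public class DataSourceConfig {

    @Bean
    @ConfigurationProperties("spring.datasource.master")   // assumed property prefix
    public DataSource masterDataSource() {
        return new HikariDataSource();
    }

    @Bean
    @ConfigurationProperties("spring.datasource.slave")    // assumed property prefix
    public DataSource slaveDataSource() {
        return new HikariDataSource();
    }

    @Bean
    @Primary
    public DataSource dynamicDataSource(DataSource masterDataSource, DataSource slaveDataSource) {
        Map<Object, Object> targets = new HashMap<>();
        targets.put(DataSourceType.MASTER, masterDataSource);
        targets.put(DataSourceType.SLAVE, slaveDataSource);

        DynamicDataSource routing = new DynamicDataSource();
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(masterDataSource);  // fall back to the master
        return routing;
    }
}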

9. MySQL Parameter Tuning

9.1 Key Parameter Settings

ini
# my.cnf

[mysqld]
# ==================== Basics ====================
server-id = 1
port = 3306
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci

# ==================== Connections ====================
max_connections = 500              # maximum number of connections
max_connect_errors = 100           # maximum connection errors per host
wait_timeout = 28800               # non-interactive connection timeout (8 hours)
interactive_timeout = 28800        # interactive connection timeout

# ==================== InnoDB ====================
innodb_buffer_pool_size = 8G       # buffer pool size (60-80% of physical RAM on a dedicated host)
innodb_buffer_pool_instances = 8   # buffer pool instances
innodb_log_file_size = 1G          # redo log file size
innodb_log_buffer_size = 64M       # redo log buffer size
innodb_flush_log_at_trx_commit = 1 # flush-on-commit policy
innodb_flush_method = O_DIRECT     # flush method
innodb_file_per_table = 1          # one tablespace file per table
innodb_io_capacity = 2000          # background IO capacity
innodb_read_io_threads = 8         # read IO threads
innodb_write_io_threads = 8        # write IO threads

# ==================== Query cache (removed in MySQL 8.0) ====================
# query_cache_type = 0             # disable the query cache (pre-8.0 only)

# ==================== Temporary tables ====================
tmp_table_size = 64M               # in-memory temporary table size
max_heap_table_size = 64M          # MEMORY engine table size

# ==================== Sort and join buffers ====================
sort_buffer_size = 4M              # sort buffer
join_buffer_size = 4M              # join buffer
read_buffer_size = 4M              # sequential read buffer
read_rnd_buffer_size = 8M          # random read buffer

# ==================== Logging ====================
slow_query_log = 1                 # enable the slow query log
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1                # slow query threshold (seconds)
log_queries_not_using_indexes = 1  # also log queries not using indexes

# ==================== Binary log ====================
log_bin = mysql-bin                # enable the binlog
binlog_format = ROW                # binlog format
expire_logs_days = 7               # binlog retention in days (8.0 prefers binlog_expire_logs_seconds)
max_binlog_size = 500M             # maximum size per binlog file

9.2 Monitoring the Effect of Tuning

java
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

@Component
@Slf4j
public class MySqlStatusMonitor {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    /**
     * Collect key status metrics
     */
    public Map<String, Object> getKeyStatus() {
        Map<String, Object> status = new HashMap<>();

        // connection counts
        status.put("threads_connected", getVariable("Threads_connected"));
        status.put("threads_running", getVariable("Threads_running"));
        status.put("max_used_connections", getVariable("Max_used_connections"));

        // buffer pool hit rate
        Long reads = getLongVariable("Innodb_buffer_pool_reads");
        Long readRequests = getLongVariable("Innodb_buffer_pool_read_requests");
        if (readRequests > 0) {
            double hitRate = (1 - (double) reads / readRequests) * 100;
            status.put("buffer_pool_hit_rate", String.format("%.2f%%", hitRate));
        }

        // slow query counter
        status.put("slow_queries", getVariable("Slow_queries"));

        // raw counters behind QPS / TPS
        status.put("questions", getVariable("Questions"));
        status.put("com_commit", getVariable("Com_commit"));
        status.put("com_rollback", getVariable("Com_rollback"));

        return status;
    }

    private String getVariable(String name) {
        String sql = "SHOW GLOBAL STATUS LIKE ?";
        List<Map<String, Object>> result = jdbcTemplate.queryForList(sql, name);
        if (!result.isEmpty()) {
            return result.get(0).get("Value").toString();
        }
        return "0";
    }

    private Long getLongVariable(String name) {
        return Long.parseLong(getVariable(name));
    }

    /**
     * Periodic check with alerting
     */
    @Scheduled(fixedRate = 60000)
    public void checkAndAlert() {
        Map<String, Object> status = getKeyStatus();

        // connection count alert
        int threadsConnected = Integer.parseInt(status.get("threads_connected").toString());
        if (threadsConnected > 400) {
            log.warn("MySQL connection count is high: {}", threadsConnected);
        }

        // buffer pool hit rate alert (the key may be absent right after startup)
        Object hitRateObj = status.get("buffer_pool_hit_rate");
        if (hitRateObj != null) {
            double hitRate = Double.parseDouble(hitRateObj.toString().replace("%", ""));
            if (hitRate < 95) {
                log.warn("InnoDB buffer pool hit rate is low: {}", hitRateObj);
            }
        }

        log.info("MySQL status: {}", status);
    }
}

10. Summary and Best Practices

10.1 Optimization Checklist

+------------------------------------------------------------------+
|                    MySQL Optimization Checklist                   |
+------------------------------------------------------------------+
|                                                                   |
|  Index optimization:                                              |
|  [ ] WHERE, JOIN and ORDER BY columns are indexed                 |
|  [ ] Index-invalidation patterns are avoided                      |
|  [ ] Covering indexes are used to avoid back-to-table lookups     |
|  [ ] Redundant indexes are cleaned up regularly                   |
|                                                                   |
|  SQL optimization:                                                |
|  [ ] No SELECT *                                                  |
|  [ ] Pagination uses cursors or deferred joins                    |
|  [ ] Batch operations instead of row-by-row loops                 |
|  [ ] Subqueries rewritten as JOINs where appropriate              |
|                                                                   |
|  Architecture:                                                    |
|  [ ] Read/write splitting                                         |
|  [ ] Sharding when data volume requires it                        |
|  [ ] Caching for hot data                                         |
|                                                                   |
|  Parameters:                                                      |
|  [ ] innodb_buffer_pool_size sized sensibly                       |
|  [ ] Slow query log enabled                                       |
|  [ ] Connection pool parameters tuned                             |
|                                                                   |
|  Monitoring and alerting:                                         |
|  [ ] Connections, QPS and slow queries monitored                  |
|  [ ] Buffer pool hit rate monitored                               |
|  [ ] Sensible alert thresholds configured                         |
|                                                                   |
+------------------------------------------------------------------+

10.2 Optimization Principles

  1. Locate first, optimize second: use EXPLAIN, the slow query log and monitoring tools to find the bottleneck
  2. Avoid premature optimization: get the functionality right first, then tune performance
  3. Let the data speak: compare measurements before and after to verify each optimization
  4. Think holistically: consider SQL, indexes, architecture and parameters together
  5. Monitor continuously: build a solid monitoring setup so problems surface early

10.3 Common Misconceptions

Misconception                        Better practice
Adding indexes blindly               Analyze the SQL first, then add targeted indexes
"The more indexes, the better"       Indexes slow down writes; keep the count under control
Looking at single statements only    Consider concurrency and overall load as well
Ignoring lock behaviour              Watch lock waits and deadlocks
Sharding too early                   Optimize the single instance first; split only when necessary
