MyBatis Advanced Features and Performance Optimization: A Practical Guide from Beginner to Expert


As a seasoned backend engineer, I know how central MyBatis is to enterprise applications. In this article I will take you deep into MyBatis's advanced features and share performance-tuning techniques drawn from real projects. Ready? Let's begin this technical journey!




⚡ Caching Mechanisms in Depth: Make Your Application Fly

First-Level Cache (SqlSession Scope): A Performance Tool Enabled by Default

The first-level cache is enabled by default and lives for the duration of a SqlSession. When you run the same query twice within one SqlSession, MyBatis returns the result straight from the cache instead of hitting the database again.

java
// First-level cache example
SqlSession sqlSession = sqlSessionFactory.openSession();
UserMapper userMapper = sqlSession.getMapper(UserMapper.class);

// First query: hits the database
User user1 = userMapper.selectById(1);
// Second query: served from the first-level cache
User user2 = userMapper.selectById(1);

// user1 == user2 is true: the very same object is returned
sqlSession.close();

💡 Pro Tips:

  • The first-level cache is cleared automatically after any insert, update, or delete
  • You can clear it manually with sqlSession.clearCache()
  • In distributed environments, watch out for stale reads caused by the first-level cache

How the First-Level Cache Works

The first-level cache is a HashMap-backed PerpetualCache held by the BaseExecutor:

java
// Simplified first-level cache lookup from the MyBatis source
public abstract class BaseExecutor implements Executor {
    protected PerpetualCache localCache;
    
    @Override
    public <E> List<E> query(MappedStatement ms, Object parameter, 
                             RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
        BoundSql boundSql = ms.getBoundSql(parameter);
        CacheKey key = createCacheKey(ms, parameter, rowBounds, boundSql);
        return query(ms, parameter, rowBounds, resultHandler, key, boundSql);
    }
}
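The CacheKey built here decides whether two queries "look the same". As a rough, runnable sketch (not MyBatis's actual CacheKey class, which accumulates an incremental multiplier hash), you can think of the key as the tuple of statement id, paging bounds, SQL text, and parameter values:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CacheKeyDemo {
    // Simplified stand-in for MyBatis's CacheKey: equal inputs produce equal keys
    static List<Object> cacheKey(String statementId, int offset, int limit,
                                 String sql, Object... params) {
        List<Object> parts = new ArrayList<>(Arrays.asList(statementId, offset, limit, sql));
        parts.addAll(Arrays.asList(params));
        return parts;
    }

    public static void main(String[] args) {
        List<Object> k1 = cacheKey("UserMapper.selectById", 0, Integer.MAX_VALUE,
                "SELECT * FROM user WHERE id = ?", 1);
        List<Object> k2 = cacheKey("UserMapper.selectById", 0, Integer.MAX_VALUE,
                "SELECT * FROM user WHERE id = ?", 1);
        List<Object> k3 = cacheKey("UserMapper.selectById", 0, Integer.MAX_VALUE,
                "SELECT * FROM user WHERE id = ?", 2);
        if (!k1.equals(k2)) throw new AssertionError(); // same query: cache hit
        if (k1.equals(k3)) throw new AssertionError();  // different parameter: cache miss
        System.out.println("cache key demo ok");
    }
}
```

This is why changing any bound parameter, or even the RowBounds, yields a different key and therefore a cache miss.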
When the First-Level Cache Is Invalidated
java
@Test
public void testFirstLevelCacheInvalidation() {
    SqlSession sqlSession = sqlSessionFactory.openSession();
    UserMapper userMapper = sqlSession.getMapper(UserMapper.class);
    
    // First query
    User user1 = userMapper.selectById(1);
    
    // Any update clears the whole first-level cache, even for other rows
    userMapper.updateUser(new User(2, "Zhang San", "zhangsan@example.com"));
    
    // Querying again goes back to the database
    User user2 = userMapper.selectById(1);
    
    // user1 != user2 because the cached entry was evicted
    assertNotSame(user1, user2);
}

Second-Level Cache (Mapper Scope): Sharing Data Across SqlSessions

The second-level cache is scoped to a mapper namespace and can be shared across multiple SqlSessions. Enabling it takes two steps:

xml
<!-- Enable the second-level cache in mybatis-config.xml -->
<settings>
    <setting name="cacheEnabled" value="true"/>
</settings>

<!-- Configure the cache in the Mapper.xml -->
<cache eviction="LRU" flushInterval="60000" size="512" readOnly="true"/>
java
// Cached entities must implement Serializable
public class User implements Serializable {
    private static final long serialVersionUID = 1L;
    // ... fields and accessors
}
Second-Level Cache Parameters
xml
<!-- Full second-level cache configuration.
     eviction:      eviction policy (LRU, FIFO, SOFT, WEAK)
     flushInterval: flush interval in milliseconds
     size:          maximum number of cached objects
     readOnly:      true returns the shared instance; false returns a deserialized copy
     blocking:      block concurrent reads of a missing key until it is populated
     type:          optional custom Cache implementation -->
<cache
    eviction="LRU"
    flushInterval="60000"
    size="512"
    readOnly="false"
    blocking="true"
    type="org.mybatis.caches.ehcache.EhcacheCache"/>
Second-Level Cache Caveats
java
// Using the second-level cache correctly
@Test
public void testSecondLevelCache() {
    // First SqlSession
    SqlSession sqlSession1 = sqlSessionFactory.openSession();
    UserMapper userMapper1 = sqlSession1.getMapper(UserMapper.class);
    User user1 = userMapper1.selectById(1);
    sqlSession1.commit(); // entries reach the second-level cache only on commit/close
    sqlSession1.close();
    
    // Second SqlSession
    SqlSession sqlSession2 = sqlSessionFactory.openSession();
    UserMapper userMapper2 = sqlSession2.getMapper(UserMapper.class);
    User user2 = userMapper2.selectById(1); // served from the second-level cache
    sqlSession2.close();
    
    // Same data, but a different object when readOnly=false (a copy is deserialized)
    assertEquals(user1.getName(), user2.getName());
}

Custom Cache Implementations: Build Your Own Caching Strategy

When the built-in caches are not enough, you can implement the Cache interface yourself:

java
public class CustomCache implements Cache {
    private static final Logger log = LoggerFactory.getLogger(CustomCache.class);
    private final String id;
    private final Map<Object, Object> cache = new ConcurrentHashMap<>();
    private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    
    public CustomCache(String id) {
        this.id = id;
    }
    
    @Override
    public String getId() {
        return id;
    }
    
    @Override
    public void putObject(Object key, Object value) {
        readWriteLock.writeLock().lock();
        try {
            cache.put(key, value);
            // Log cache writes for debugging
            log.debug("Cache PUT: key={}, value={}", key, value);
        } finally {
            readWriteLock.writeLock().unlock();
        }
    }
    
    @Override
    public Object getObject(Object key) {
        readWriteLock.readLock().lock();
        try {
            Object value = cache.get(key);
            log.debug("Cache GET: key={}, hit={}", key, value != null);
            return value;
        } finally {
            readWriteLock.readLock().unlock();
        }
    }
    
    @Override
    public Object removeObject(Object key) {
        readWriteLock.writeLock().lock();
        try {
            return cache.remove(key);
        } finally {
            readWriteLock.writeLock().unlock();
        }
    }
    
    @Override
    public void clear() {
        readWriteLock.writeLock().lock();
        try {
            cache.clear();
            log.info("Cache cleared for: {}", id);
        } finally {
            readWriteLock.writeLock().unlock();
        }
    }
    
    @Override
    public int getSize() {
        return cache.size();
    }
    
    @Override
    public ReadWriteLock getReadWriteLock() {
        return readWriteLock;
    }
}
An LRU Cache Built on LinkedHashMap
java
public class LRUCache implements Cache {
    private final String id;
    private final LinkedHashMap<Object, Object> cache;
    private final int maxSize;
    
    public LRUCache(String id) {
        this.id = id;
        this.maxSize = 1000; // default capacity
        // accessOrder=true turns LinkedHashMap into an LRU structure
        this.cache = new LinkedHashMap<Object, Object>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
                return size() > maxSize;
            }
        };
    }
    
    // ... remaining Cache interface methods
}

Eviction Policies: Managing the Cache Lifecycle Intelligently

MyBatis ships four eviction policies:

  • LRU (Least Recently Used): evicts the entry that has gone unused the longest
  • FIFO (First In First Out): evicts entries in insertion order
  • SOFT: evicts entries based on soft references and garbage-collector pressure
  • WEAK: evicts entries based on weak references and garbage-collector state

How an Eviction Policy Works
java
// Illustrative LRU bookkeeping (MyBatis's real LruCache decorator delegates to
// an access-ordered LinkedHashMap rather than a hand-rolled list)
public class LruEvictionPolicy {
    private final LinkedList<Object> keyList = new LinkedList<>();
    
    public void recordAccess(Object key) {
        // Move the accessed key to the head of the list
        keyList.remove(key);
        keyList.addFirst(key);
    }
    
    public Object pollLastEntry() {
        // Remove the tail: the least recently used key
        return keyList.removeLast();
    }
}
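The access-ordered LinkedHashMap trick used by the LRUCache above can be observed directly in plain Java. This standalone snippet shows the least recently used entry being evicted:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    public static void main(String[] args) {
        final int maxSize = 3;
        // accessOrder=true makes iteration order = least recently used first
        Map<String, String> cache = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > maxSize;
            }
        };
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.get("a");      // touch "a": it becomes the most recently used entry
        cache.put("d", "4"); // capacity exceeded: evicts "b", the LRU entry
        if (cache.containsKey("b")) throw new AssertionError();
        if (!cache.keySet().toString().equals("[c, a, d]")) throw new AssertionError();
        System.out.println(cache.keySet()); // [c, a, d]
    }
}
```

Because the read on "a" refreshed its position, the subsequent insert evicted "b" instead.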

Integrating Redis: Distributed Caching Done Right

In a distributed deployment, Redis is the usual backing store for the second-level cache:

java
// Note: MyBatis instantiates the cache itself through the (String id) constructor,
// so the RedisTemplate is pulled from the Spring context rather than injected
public class RedisCache implements Cache {
    private static final Logger log = LoggerFactory.getLogger(RedisCache.class);
    private final RedisTemplate<String, Object> redisTemplate;
    private final String id;
    private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    
    public RedisCache(String id) {
        this.id = id;
        this.redisTemplate = SpringContextHolder.getBean(RedisTemplate.class);
    }
    
    @Override
    public void putObject(Object key, Object value) {
        try {
            String redisKey = getKey(key);
            redisTemplate.opsForValue().set(redisKey, value, 30, TimeUnit.MINUTES);
            // A TTL keeps abandoned entries from accumulating in Redis
            log.debug("Redis cache put: {}", redisKey);
        } catch (Exception e) {
            log.error("Redis cache put error", e);
        }
    }
    
    @Override
    public Object getObject(Object key) {
        try {
            String redisKey = getKey(key);
            Object value = redisTemplate.opsForValue().get(redisKey);
            log.debug("Redis cache get: {}, hit: {}", redisKey, value != null);
            return value;
        } catch (Exception e) {
            log.error("Redis cache get error", e);
            return null;
        }
    }
    
    @Override
    public Object removeObject(Object key) {
        try {
            String redisKey = getKey(key);
            Object value = redisTemplate.opsForValue().get(redisKey);
            redisTemplate.delete(redisKey);
            return value;
        } catch (Exception e) {
            log.error("Redis cache remove error", e);
            return null;
        }
    }
    
    @Override
    public void clear() {
        try {
            Set<String> keys = redisTemplate.keys(id + ":*");
            if (keys != null && !keys.isEmpty()) {
                redisTemplate.delete(keys);
            }
            log.info("Redis cache cleared for: {}", id);
        } catch (Exception e) {
            log.error("Redis cache clear error", e);
        }
    }
    
    private String getKey(Object key) {
        return id + ":" + DigestUtils.md5Hex(key.toString());
    }
    
    @Override
    public int getSize() {
        return 0; // size is not tracked for a remote cache
    }
    
    @Override
    public ReadWriteLock getReadWriteLock() {
        return readWriteLock;
    }
}
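The getKey() helper above relies on commons-codec's DigestUtils. Here is a dependency-free sketch of the same namespace-plus-digest idea using the JDK's MessageDigest (class and method names are illustrative, not part of MyBatis):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class RedisKeyDemo {
    // Same shape as getKey() above: "<namespace>:<md5-of-cache-key>"
    static String redisKey(String namespace, Object cacheKey) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(cacheKey.toString().getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return namespace + ":" + hex;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }

    public static void main(String[] args) {
        String k1 = redisKey("UserMapper", "select-1");
        String k2 = redisKey("UserMapper", "select-1");
        if (!k1.equals(k2)) throw new AssertionError();               // deterministic
        if (!k1.startsWith("UserMapper:")) throw new AssertionError(); // namespaced
        if (k1.length() != "UserMapper:".length() + 32) throw new AssertionError(); // 32 hex chars
        System.out.println(k1);
    }
}
```

Hashing the CacheKey keeps Redis key lengths bounded, while the namespace prefix makes the clear() pattern (delete `id + ":*"`) possible.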
Tuning the Redis Cache Configuration
yaml
# Redis settings in application.yml
spring:
  redis:
    host: localhost
    port: 6379
    database: 0
    timeout: 2000ms
    lettuce:
      pool:
        max-active: 20
        max-idle: 10
        min-idle: 5
        max-wait: 2000ms
  # Spring cache settings
  cache:
    type: redis
    redis:
      time-to-live: 1800000  # 30 minutes
      cache-null-values: false
      key-prefix: "mybatis:cache:"
Cache Warm-Up
java
@Component
public class CacheWarmupService {
    
    private static final Logger log = LoggerFactory.getLogger(CacheWarmupService.class);
    
    @Autowired
    private UserMapper userMapper;
    
    @PostConstruct
    public void warmupCache() {
        log.info("Starting cache warm-up...");
        
        // Pre-load hot data
        List<Long> hotUserIds = getHotUserIds();
        for (Long userId : hotUserIds) {
            userMapper.selectById(userId);
        }
        
        log.info("Cache warm-up finished, entries loaded: {}", hotUserIds.size());
    }
    
    private List<Long> getHotUserIds() {
        // Fetch hot user IDs from statistics or configuration
        return Arrays.asList(1L, 2L, 3L, 4L, 5L);
    }
}

🎯 Performance Tuning in Practice: Double Your Application's Throughput

SQL Execution Analysis and Tuning: Finding the Bottleneck

MyBatis's SQL execution logging helps pinpoint performance problems:

xml
<!-- Enable SQL execution logging -->
<settings>
    <setting name="logImpl" value="STDOUT_LOGGING"/>
    <!-- or route through SLF4J -->
    <!-- <setting name="logImpl" value="SLF4J"/> -->
</settings>
java
// Caching hints when using @Select annotations
@Select("SELECT * FROM user WHERE age > #{age}")
@Options(useCache = true, flushCache = Options.FlushCachePolicy.FALSE)
List<User> selectUsersByAge(@Param("age") int age);
A Custom SQL Execution-Time Monitor
java
@Intercepts({
    @Signature(type = Executor.class, method = "query", 
               args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class})
})
public class SqlExecutionTimeInterceptor implements Interceptor {
    
    private static final Logger log = LoggerFactory.getLogger(SqlExecutionTimeInterceptor.class);
    
    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        long startTime = System.currentTimeMillis();
        
        try {
            Object result = invocation.proceed();
            long endTime = System.currentTimeMillis();
            long executionTime = endTime - startTime;
            
            // Log any SQL whose execution time exceeds the threshold
            if (executionTime > 1000) { // 1-second threshold
                MappedStatement ms = (MappedStatement) invocation.getArgs()[0];
                log.warn("Slow SQL detected - took {}ms, SQL ID: {}", executionTime, ms.getId());
            }
            
            return result;
        } catch (Exception e) {
            log.error("SQL execution failed", e);
            throw e;
        }
    }
    
    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }
}
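Stripped of the MyBatis plumbing, the interceptor's core is just "time a call, warn over a threshold". A minimal runnable version of that logic (the SlowCallDetector name and the 50 ms threshold are made up for the demo):

```java
import java.util.function.Supplier;

public class SlowCallDetector {
    // Core of the interceptor: time a call and warn when it exceeds a threshold
    static <T> T timed(String id, long thresholdMs, Supplier<T> task) {
        long start = System.nanoTime();
        try {
            return task.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMs > thresholdMs) {
                System.out.println("slow call " + id + ": " + elapsedMs + "ms");
            }
        }
    }

    public static void main(String[] args) {
        String result = timed("demo.query", 50, () -> {
            try {
                Thread.sleep(120); // simulate a slow query
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            return "done";
        });
        if (!"done".equals(result)) throw new AssertionError();
    }
}
```

Using try/finally, as the interceptor does, guarantees the timing is recorded even when the statement throws.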
Wiring the Monitoring Plugin In
java
@Configuration
public class MyBatisConfig {
    
    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSource dataSource) throws Exception {
        SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(dataSource);
        
        // Register the monitoring plugin
        sessionFactory.setPlugins(new Interceptor[]{
            new SqlExecutionTimeInterceptor(),
            new PageInterceptor() // pagination plugin
        });
        
        return sessionFactory.getObject();
    }
}

Batch Operations: Moving Past One-Row-at-a-Time SQL

Batching is one of the most effective performance levers:

xml
<!-- Optimized batch insert -->
<insert id="batchInsert" parameterType="list">
    INSERT INTO user (name, email, age) VALUES
    <foreach collection="list" item="user" separator=",">
        (#{user.name}, #{user.email}, #{user.age})
    </foreach>
</insert>

<!-- Batch update via multiple statements (MySQL needs allowMultiQueries=true) -->
<update id="batchUpdate" parameterType="list">
    <foreach collection="list" item="user" separator=";">
        UPDATE user SET name = #{user.name}, email = #{user.email} 
        WHERE id = #{user.id}
    </foreach>
</update>

<!-- Batch update in a single statement using CASE WHEN -->
<update id="batchUpdateWithCase" parameterType="list">
    UPDATE user SET 
    name = CASE id
        <foreach collection="list" item="user">
            WHEN #{user.id} THEN #{user.name}
        </foreach>
    END,
    email = CASE id
        <foreach collection="list" item="user">
            WHEN #{user.id} THEN #{user.email}
        </foreach>
    END
    WHERE id IN
    <foreach collection="list" item="user" open="(" separator="," close=")">
        #{user.id}
    </foreach>
</update>
java
// Batch operations on the Java side
@Service
public class UserService {
    
    @Autowired
    private UserMapper userMapper;
    
    @Transactional
    public void batchSaveUsers(List<User> users) {
        if (users == null || users.isEmpty()) {
            return;
        }
        
        // Process in chunks to bound both statement length and session memory
        int batchSize = 1000;
        for (int i = 0; i < users.size(); i += batchSize) {
            int end = Math.min(i + batchSize, users.size());
            List<User> batch = users.subList(i, end);
            userMapper.batchInsert(batch);
            // If you manage the SqlSession yourself, clear the first-level cache
            // between chunks (sqlSession.clearCache()) to cap memory use
        }
    }
    
    @Transactional
    public void batchUpdateUsers(List<User> users) {
        if (users == null || users.isEmpty()) {
            return;
        }
        
        // CASE WHEN batching updates many rows in one statement
        int batchSize = 500; // use smaller chunks for updates
        for (int i = 0; i < users.size(); i += batchSize) {
            int end = Math.min(i + batchSize, users.size());
            List<User> batch = users.subList(i, end);
            userMapper.batchUpdateWithCase(batch);
        }
    }
}
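The chunking loop inside batchSaveUsers can be factored into a reusable helper. This standalone sketch verifies the batch boundaries it produces:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPartition {
    // Split a list into consecutive sub-lists of at most batchSize elements
    static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 2500; i++) ids.add(i);

        List<List<Integer>> batches = partition(ids, 1000);
        if (batches.size() != 3) throw new AssertionError();       // 1000 + 1000 + 500
        if (batches.get(0).size() != 1000) throw new AssertionError();
        if (batches.get(2).size() != 500) throw new AssertionError();
        System.out.println("batches=" + batches.size());
    }
}
```

Note that subList returns views of the original list, so partitioning allocates no element copies; each view is handed to batchInsert as-is.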
Benchmarking Batch vs. Single-Row Operations
java
@Test
public void testBatchPerformance() {
    List<User> users = generateTestUsers(10000);
    
    // Single-row inserts
    long startTime = System.currentTimeMillis();
    for (User user : users) {
        userMapper.insert(user);
    }
    long singleInsertTime = System.currentTimeMillis() - startTime;
    
    // Batch insert
    startTime = System.currentTimeMillis();
    userMapper.batchInsert(users);
    long batchInsertTime = System.currentTimeMillis() - startTime;
    
    log.info("Single-row inserts took: {}ms", singleInsertTime);
    log.info("Batch insert took: {}ms", batchInsertTime);
    log.info("Speed-up: {}x", (double) singleInsertTime / batchInsertTime);
}

Pagination Best Practices: Handling Large Result Sets Efficiently

The PageHelper plugin makes pagination straightforward:

java
// Paging with PageHelper
@Service
public class UserService {
    
    public PageInfo<User> getUsersByPage(int pageNum, int pageSize) {
        // Set the paging parameters for the next query
        PageHelper.startPage(pageNum, pageSize);
        
        // Run the query
        List<User> users = userMapper.selectAllUsers();
        
        // Wrap the result with paging metadata
        return new PageInfo<>(users);
    }
    
    // Custom pagination that skips the count query
    public List<User> getUsersWithLimit(int offset, int limit) {
        return userMapper.selectUsersWithLimit(offset, limit);
    }
    
    // Cursor (keyset) pagination for very large tables
    public List<User> getUsersByCursor(Long lastId, int limit) {
        return userMapper.selectUsersByCursor(lastId, limit);
    }
}
xml
<!-- Cursor (keyset) pagination -->
<select id="selectUsersByCursor" resultType="User">
    SELECT * FROM user 
    <where>
        <if test="lastId != null">
            id > #{lastId}
        </if>
    </where>
    ORDER BY id ASC
    LIMIT #{limit}
</select>

<!-- Deep pagination via a covering-index subquery -->
<select id="selectUsersOptimized" resultType="User">
    SELECT u.* FROM user u
    INNER JOIN (
        SELECT id FROM user 
        ORDER BY id 
        LIMIT #{offset}, #{limit}
    ) t ON u.id = t.id
</select>
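Keyset pagination as used by selectUsersByCursor can be simulated in memory to make the contract clear: each page starts strictly after the last id previously seen, so no OFFSET scan is needed. A runnable sketch, where the in-memory "table" stands in for the real query:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class CursorPagingDemo {
    // Simulated keyset page: WHERE id > lastId ORDER BY id LIMIT pageSize
    static List<Long> fetchPage(List<Long> table, Long lastId, int limit) {
        return table.stream()
                .filter(id -> lastId == null || id > lastId)
                .sorted()
                .limit(limit)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Long> table = LongStream.rangeClosed(1, 95).boxed().collect(Collectors.toList());
        Long lastId = null;
        int pages = 0, rows = 0;
        while (true) {
            List<Long> page = fetchPage(table, lastId, 10);
            if (page.isEmpty()) break;
            pages++;
            rows += page.size();
            lastId = page.get(page.size() - 1); // next cursor = last id seen
        }
        if (pages != 10 || rows != 95) throw new AssertionError(); // 9 full pages + 1 partial
        System.out.println(pages + " pages, " + rows + " rows");
    }
}
```

The trade-off: cursor paging cannot jump to an arbitrary page number, but every page costs the same regardless of depth, which is exactly what the next section's deep-pagination strategy exploits.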
Strategies for Deep Pagination
java
@Service
public class OptimizedPagingService {
    
    // Deep pagination: switch to the covering-index subquery past a threshold
    public PageInfo<User> getDeepPageUsers(int pageNum, int pageSize) {
        if (pageNum > 100) { // deep-page threshold
            int offset = (pageNum - 1) * pageSize;
            List<User> users = userMapper.selectUsersOptimized(offset, pageSize);
            
            // Build the PageInfo by hand to skip the count query
            PageInfo<User> pageInfo = new PageInfo<>();
            pageInfo.setList(users);
            pageInfo.setPageNum(pageNum);
            pageInfo.setPageSize(pageSize);
            pageInfo.setHasNextPage(users.size() == pageSize);
            
            return pageInfo;
        } else {
            // Regular pagination
            PageHelper.startPage(pageNum, pageSize);
            List<User> users = userMapper.selectAllUsers();
            return new PageInfo<>(users);
        }
    }
    
    // Cache the total so repeated count queries are avoided
    @Cacheable(value = "userCount", key = "'total'")
    public long getTotalUserCount() {
        return userMapper.countUsers();
    }
}

Connection Pool Tuning: Sizing Database Connections Sensibly

yaml
# Connection pool settings in application.yml
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      # Core settings
      minimum-idle: 10                    # minimum idle connections
      maximum-pool-size: 20               # maximum pool size
      idle-timeout: 300000                # idle connection timeout (5 minutes)
      max-lifetime: 1800000               # maximum connection lifetime (30 minutes)
      connection-timeout: 30000           # connection acquisition timeout (30 seconds)
      
      # Performance settings
      validation-timeout: 5000            # connection validation timeout
      leak-detection-threshold: 60000     # leak detection threshold (1 minute)
      initialization-fail-timeout: 1      # fail fast if the pool cannot start
      
      # Connection testing (usually unnecessary with JDBC4 drivers)
      connection-test-query: SELECT 1
      
      # Miscellaneous
      pool-name: "HikariCP-MyBatis"       # pool name
      auto-commit: true                   # auto-commit
      read-only: false                    # read-only mode
      
      # MySQL driver properties
      data-source-properties:
        cachePrepStmts: true              # cache prepared statements
        prepStmtCacheSize: 250            # prepared-statement cache size
        prepStmtCacheSqlLimit: 2048       # max SQL length eligible for the cache
        useServerPrepStmts: true          # server-side prepared statements
        useLocalSessionState: true        # trust locally tracked session state
        rewriteBatchedStatements: true    # rewrite batches into multi-row statements
        cacheResultSetMetadata: true      # cache result-set metadata
        cacheServerConfiguration: true    # cache server configuration
        elideSetAutoCommits: true         # skip redundant autocommit calls
        maintainTimeStats: false          # disable internal timing statistics
Monitoring and Tuning the Pool
java
@Component
public class HikariPoolMonitor {
    
    @Autowired
    private HikariDataSource dataSource;
    
    @Scheduled(fixedRate = 60000) // check once a minute
    public void monitorConnectionPool() {
        HikariPoolMXBean poolMXBean = dataSource.getHikariPoolMXBean();
        
        log.info("Pool stats - active: {}, idle: {}, waiting: {}, total: {}",
                poolMXBean.getActiveConnections(),
                poolMXBean.getIdleConnections(),
                poolMXBean.getThreadsAwaitingConnection(),
                poolMXBean.getTotalConnections());
        
        // Alert on high pool utilization
        int activeConnections = poolMXBean.getActiveConnections();
        int maxPoolSize = dataSource.getMaximumPoolSize();
        double usageRate = (double) activeConnections / maxPoolSize;
        
        if (usageRate > 0.8) {
            // SLF4J has no printf-style placeholders, so pre-format the number
            log.warn("Pool utilization high: {}%, consider enlarging the pool",
                    String.format("%.2f", usageRate * 100));
        }
    }
}
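The alert condition reduces to a small pure function that is easy to unit-test. A sketch assuming the same 80% threshold (class and method names are illustrative):

```java
import java.util.Locale;

public class PoolUsageAlert {
    // The utilization check from monitorConnectionPool(), isolated
    static boolean shouldAlert(int active, int maxPoolSize, double threshold) {
        return (double) active / maxPoolSize > threshold;
    }

    public static void main(String[] args) {
        if (shouldAlert(10, 20, 0.8)) throw new AssertionError();   // 50% usage: no alert
        if (!shouldAlert(17, 20, 0.8)) throw new AssertionError();  // 85% usage: alert

        // SLF4J-safe formatting: build the percentage string before logging
        String pct = String.format(Locale.ROOT, "%.2f", 17 / 20.0 * 100);
        if (!pct.equals("85.00")) throw new AssertionError();
        System.out.println("usage " + pct + "%");
    }
}
```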

Lazy Loading: Fetch Associations Only When Needed

Used judiciously, lazy loading is a significant performance win:

xml
<!-- Enable lazy loading -->
<settings>
    <setting name="lazyLoadingEnabled" value="true"/>
    <setting name="aggressiveLazyLoading" value="false"/>
    <!-- Methods that trigger loading of all lazy properties -->
    <setting name="lazyLoadTriggerMethods" value="equals,clone,hashCode,toString"/>
</settings>

<!-- Lazy loading on association mappings -->
<resultMap id="UserResultMap" type="User">
    <id property="id" column="id"/>
    <result property="name" column="name"/>
    <result property="email" column="email"/>
    
    <!-- One-to-many, loaded lazily -->
    <collection property="orders" ofType="Order" 
                select="selectOrdersByUserId" 
                column="id" 
                fetchType="lazy"/>
    
    <!-- One-to-one, loaded lazily -->
    <association property="profile" javaType="UserProfile"
                 select="selectProfileByUserId"
                 column="id"
                 fetchType="lazy"/>
</resultMap>

<!-- Association queries -->
<select id="selectOrdersByUserId" resultType="Order">
    SELECT * FROM orders WHERE user_id = #{userId}
</select>

<select id="selectProfileByUserId" resultType="UserProfile">
    SELECT * FROM user_profile WHERE user_id = #{userId}
</select>
java
// Associated data is loaded only on first access
@Test
public void testLazyLoading() {
    User user = userMapper.selectById(1);
    
    // Neither orders nor profile has been loaded yet
    log.info("User: {}", user.getName());
    
    // First access to orders fires its lazy query
    List<Order> orders = user.getOrders();
    log.info("Order count: {}", orders.size());
    
    // First access to profile fires its lazy query
    UserProfile profile = user.getProfile();
    log.info("Profile: {}", profile.getBio());
}
Lazy-Loading Optimization Tips
java
// Batch assembly instead of per-row lazy loads
@Service
public class UserService {
    
    // Avoid the N+1 problem: two queries plus in-memory assembly
    public List<User> getUsersWithOrdersOptimized(List<Long> userIds) {
        // Query the users first
        List<User> users = userMapper.selectByIds(userIds);
        
        // Then fetch all their orders in one query
        List<Order> allOrders = orderMapper.selectByUserIds(userIds);
        
        // Stitch the associations together in memory
        Map<Long, List<Order>> orderMap = allOrders.stream()
                .collect(Collectors.groupingBy(Order::getUserId));
        
        users.forEach(user -> {
            List<Order> userOrders = orderMap.getOrDefault(user.getId(), new ArrayList<>());
            user.setOrders(userOrders);
        });
        
        return users;
    }
}
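The groupingBy assembly step above is plain java.util.stream code. Here it is in isolation, with a tiny hypothetical Order holder standing in for the real entity:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class AssembleDemo {
    static class Order {
        final long userId;
        final String item;
        Order(long userId, String item) {
            this.userId = userId;
            this.item = item;
        }
    }

    public static void main(String[] args) {
        // One bulk query's worth of rows, replacing N per-user lazy queries
        List<Order> allOrders = List.of(
                new Order(1L, "book"), new Order(2L, "pen"), new Order(1L, "lamp"));

        // Group in memory by user id, then attach each group to its user
        Map<Long, List<Order>> byUser = allOrders.stream()
                .collect(Collectors.groupingBy(o -> o.userId));

        if (byUser.get(1L).size() != 2) throw new AssertionError();
        if (!byUser.getOrDefault(3L, List.of()).isEmpty()) throw new AssertionError();
        System.out.println("grouped orders for " + byUser.size() + " users");
    }
}
```

The getOrDefault fallback matters: users with no orders still get an empty list rather than a null, mirroring the service code above.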
Conditional Lazy Loading
xml
<!-- Vary the mapping by user type; the profile itself stays lazy. Note that a
     discriminator must be a direct child of resultMap, not of a select-based
     association -->
<resultMap id="UserResultMapConditional" type="User">
    <id property="id" column="id"/>
    <result property="name" column="name"/>
    
    <!-- Both columns are passed through to the lazy profile query -->
    <association property="profile" javaType="UserProfile"
                 select="selectProfileByUserId"
                 column="{userId=id,userType=user_type}"
                 fetchType="lazy"/>
    
    <discriminator javaType="string" column="user_type">
        <case value="VIP" resultType="VipUser"/>
        <case value="NORMAL" resultType="NormalUser"/>
    </discriminator>
</resultMap>

🔧 Advanced Tuning Techniques

Result-Set Processing Optimizations

java
// Process a large result set with a ResultHandler instead of a List
@Mapper
public interface UserMapper {
    
    void selectAllUsersWithHandler(ResultHandler<User> handler);
}
xml
<!-- fetchSize=Integer.MIN_VALUE enables row streaming on MySQL -->
<select id="selectAllUsersWithHandler" resultType="User" fetchSize="-2147483648">
    SELECT * FROM user
</select>
java
// Stream a large export without holding all rows in memory
@Service
public class UserExportService {
    
    public void exportUsers(OutputStream outputStream) {
        try (PrintWriter writer = new PrintWriter(outputStream)) {
            writer.println("ID,Name,Email,Age"); // CSV header
            
            userMapper.selectAllUsersWithHandler(resultContext -> {
                User user = resultContext.getResultObject();
                writer.printf("%d,%s,%s,%d%n", 
                    user.getId(), user.getName(), user.getEmail(), user.getAge());
                
                // Flush every 1000 rows
                if (resultContext.getResultCount() % 1000 == 0) {
                    writer.flush();
                }
            });
        }
    }
}
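The flush-every-N-rows pattern is independent of MyBatis. This standalone sketch streams rows through a writer the same way (names are illustrative):

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.io.Writer;
import java.util.List;

public class StreamingCsv {
    // Push-based export: rows are written as they arrive, never held as a full list
    static void export(Iterable<String[]> rows, Writer out) {
        PrintWriter writer = new PrintWriter(out);
        writer.println("ID,Name");
        int count = 0;
        for (String[] row : rows) {
            writer.println(row[0] + "," + row[1]);
            if (++count % 1000 == 0) {
                writer.flush(); // periodic flush bounds buffered memory
            }
        }
        writer.flush();
    }

    public static void main(String[] args) {
        StringWriter sw = new StringWriter();
        export(List.of(new String[]{"1", "Alice"}, new String[]{"2", "Bob"}), sw);

        String[] lines = sw.toString().split("\\R");
        if (lines.length != 3) throw new AssertionError();        // header + 2 rows
        if (!lines[1].equals("1,Alice")) throw new AssertionError();
        System.out.println("csv rows: " + (lines.length - 1));
    }
}
```

Combined with a streaming fetchSize, memory usage stays flat no matter how many rows the table holds.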

Dynamic SQL Optimization

xml
<!-- Before: dynamic SQL that is easy to write but hard on indexes -->
<select id="selectUsersBadExample" resultType="User">
    SELECT * FROM user
    <where>
        <if test="name != null and name != ''">
            AND name LIKE CONCAT('%', #{name}, '%')
        </if>
        <if test="email != null and email != ''">
            AND email = #{email}
        </if>
        <if test="ageRange != null">
            AND age BETWEEN #{ageRange.min} AND #{ageRange.max}
        </if>
    </where>
    ORDER BY create_time DESC
</select>

<!-- After: ordered and written with indexes in mind -->
<select id="selectUsersOptimized" resultType="User">
    SELECT * FROM user
    <where>
        <!-- Exact matches first: they can use indexes -->
        <if test="email != null and email != ''">
            AND email = #{email}
        </if>
        <if test="ageRange != null">
            AND age BETWEEN #{ageRange.min} AND #{ageRange.max}
        </if>
        <!-- Fuzzy matching last -->
        <if test="name != null and name != ''">
            <choose>
                <when test="name.length() >= 3">
                    AND name LIKE CONCAT(#{name}, '%') -- prefix match can use an index
                </when>
                <otherwise>
                    AND name = #{name} -- exact match for short strings
                </otherwise>
            </choose>
        </if>
    </where>
    ORDER BY 
    <choose>
        <when test="orderBy != null and orderBy == 'name'">
            name ASC
        </when>
        <when test="orderBy != null and orderBy == 'age'">
            age DESC
        </when>
        <otherwise>
            create_time DESC
        </otherwise>
    </choose>
    <if test="limit != null and limit > 0">
        LIMIT #{limit}
    </if>
</select>

Type Handler Optimization

java
// Custom type handler that maps a JSON column to a typed object
@MappedTypes(UserPreferences.class)
@MappedJdbcTypes(JdbcType.VARCHAR)
public class UserPreferencesTypeHandler extends BaseTypeHandler<UserPreferences> {
    
    private static final Logger log = LoggerFactory.getLogger(UserPreferencesTypeHandler.class);
    private static final ObjectMapper objectMapper = new ObjectMapper();
    
    @Override
    public void setNonNullParameter(PreparedStatement ps, int i, 
                                   UserPreferences parameter, JdbcType jdbcType) throws SQLException {
        try {
            ps.setString(i, objectMapper.writeValueAsString(parameter));
        } catch (JsonProcessingException e) {
            throw new SQLException("Error converting UserPreferences to JSON", e);
        }
    }
    
    @Override
    public UserPreferences getNullableResult(ResultSet rs, String columnName) throws SQLException {
        String json = rs.getString(columnName);
        return parseJson(json);
    }
    
    @Override
    public UserPreferences getNullableResult(ResultSet rs, int columnIndex) throws SQLException {
        String json = rs.getString(columnIndex);
        return parseJson(json);
    }
    
    @Override
    public UserPreferences getNullableResult(CallableStatement cs, int columnIndex) throws SQLException {
        String json = cs.getString(columnIndex);
        return parseJson(json);
    }
    
    private UserPreferences parseJson(String json) {
        if (json == null || json.trim().isEmpty()) {
            return null;
        }
        try {
            return objectMapper.readValue(json, UserPreferences.class);
        } catch (JsonProcessingException e) {
            log.warn("Error parsing JSON to UserPreferences: {}", json, e);
            return null;
        }
    }
}

🔧 Summary: Golden Rules of Performance Optimization

1. The Five Golden Rules

  1. Cache first: use the first- and second-level caches wisely, and choose Redis in distributed environments
  2. Batch everything: never loop over single-row SQL when a batch operation will do
  3. Always paginate: large result sets must be paged, never loaded in one go
  4. Tune the pool: size connection-pool parameters to your actual workload
  5. Load lazily: fetch associations on demand to avoid shipping unneeded data

💡 Closing Thoughts

Mastering MyBatis's advanced features and performance tuning is an ongoing process of learning and practice. In real projects, choose optimization strategies that fit your specific business scenario and data profile. Remember: there is no silver bullet, only the solution that fits best.

I hope this article helps you go further with MyBatis! If you have any questions or ideas, I'd love to hear them in the comments.


Follow me for more backend deep dives, and let's keep growing together on this technical journey! 🚀
