🚀 MyBatis Performance Optimization Best Practices: A Complete Tuning Guide from SQL to Connection Pools
⚡ 1. Performance Optimization Overview
💡 Analyzing MyBatis Performance Bottlenecks
MyBatis performance bottlenecks typically fall into four areas: SQL execution efficiency, database connection management, data read/write volume, and caching strategy. The most common root causes are missing indexes, the N+1 query problem, and overly complex join queries.
Optimization goals:
🚀 Reduce database query latency
📉 Cut down network and disk I/O
💾 Use memory more efficiently
🔄 Improve concurrent throughput
🗃️ 2. SQL and Query Optimization
💡 Core SQL Optimization Strategies
Index optimization in practice. Scenario: the users table is filtered by status and sorted by creation time.
-- Original query: full table scan plus filesort
SELECT * FROM users WHERE status = 1 ORDER BY create_time DESC;

-- Add a composite index covering the filter and sort columns
CREATE INDEX idx_status_createtime ON users(status, create_time DESC);

-- Select only the columns you need and cap the result set
SELECT id, name, email FROM users
WHERE status = 1
ORDER BY create_time DESC
LIMIT 100;
Index optimization guidelines:

| Scenario | Index strategy | Effect |
| --- | --- | --- |
| Equality query | Single-column index | Fast lookups |
| Range query | Put the range column last in the composite index | Keeps the index usable |
| Sorting | Index the sort column | Avoids filesort |
| Multi-condition query | Composite index | Enables covering-index reads (see the sketch below) |
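To make the covering-index row above concrete, here is a hedged sketch using MyBatis annotation mappers; UserSummary is a hypothetical DTO and the method name is illustrative rather than taken from the original:

public interface UserMapper {
    // With the composite index on (status, create_time), this statement filters, sorts,
    // and returns only columns present in the index (InnoDB secondary indexes also carry
    // the primary key), so MySQL can answer it without reading the table rows.
    @Select("SELECT id, status, create_time FROM users " +
            "WHERE status = #{status} ORDER BY create_time DESC LIMIT 100")
    List<UserSummary> selectActiveUserSummaries(@Param("status") int status);
}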
Solving the N+1 Query Problem. The problem:
public List<User> getUsersWithOrders() {
    List<User> users = userMapper.selectAllUsers();          // 1 query
    for (User user : users) {
        // N additional queries -- one per user
        List<Order> orders = orderMapper.selectByUserId(user.getId());
        user.setOrders(orders);
    }
    return users;
}
Solution: fetch users and their orders in a single join and map the rows with a collection result map.
<select id="selectUsersWithOrders" resultMap="userWithOrdersMap">
    SELECT u.*, o.id AS order_id, o.amount, o.create_time AS order_time
    FROM users u
    LEFT JOIN orders o ON u.id = o.user_id
    WHERE u.status = 1
</select>

<resultMap id="userWithOrdersMap" type="User">
    <id property="id" column="id"/>
    <result property="name" column="name"/>
    <collection property="orders" ofType="Order">
        <id property="id" column="order_id"/>
        <result property="amount" column="amount"/>
        <result property="createTime" column="order_time"/>
    </collection>
</resultMap>
Lazy loading configuration
<settings>
    <setting name="lazyLoadingEnabled" value="true"/>
    <setting name="aggressiveLazyLoading" value="false"/>
    <setting name="lazyLoadTriggerMethods" value=""/>
</settings>

<resultMap id="userLazyMap" type="User">
    <collection property="orders" ofType="Order"
                select="selectOrdersByUserId" column="id"
                fetchType="lazy"/>
</resultMap>
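A minimal usage sketch of how this lazy mapping behaves at runtime; selectUserLazy is a hypothetical mapper statement mapped with userLazyMap and is not defined above:

public User printUserThenOrders(Long id) {
    User user = userMapper.selectUserLazy(id);     // executes only the users query
    System.out.println(user.getName());            // still no orders query (aggressiveLazyLoading = false)
    System.out.println(user.getOrders().size());   // first access to orders triggers selectOrdersByUserId
    return user;
}

Leaving lazyLoadTriggerMethods empty also prevents equals/clone/hashCode/toString from accidentally loading every lazy association.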
📊 SQL Optimization Results

| Strategy | Before | After | Improvement |
| --- | --- | --- | --- |
| Index optimization | 1200 ms | 150 ms | ~8x |
| N+1 elimination | N+1 queries | 1 query | ~Nx |
| Lazy loading | Everything loaded eagerly | Loaded on demand | 2-5x |
📦 3. Batch Operation Optimization
💡 Batch Insert Performance Comparison
Batch operations with BatchExecutor
public void batchInsertUsers(List<User> users) {
    SqlSession sqlSession = sqlSessionFactory.openSession(ExecutorType.BATCH);
    try {
        UserMapper mapper = sqlSession.getMapper(UserMapper.class);
        for (int i = 0; i < users.size(); i++) {
            mapper.insertUser(users.get(i));
            if (i % 1000 == 0 && i > 0) {
                // Flush every 1000 rows to keep the JDBC batch and local cache small
                sqlSession.commit();
                sqlSession.clearCache();
            }
        }
        sqlSession.commit();
    } finally {
        sqlSession.close();
    }
}
public void batchUpdateUsers(List<User> users) {
    try (SqlSession sqlSession = sqlSessionFactory.openSession(ExecutorType.BATCH)) {
        UserMapper mapper = sqlSession.getMapper(UserMapper.class);
        for (User user : users) {
            mapper.updateUser(user);
        }
        sqlSession.commit();
    }
}
Batch insert with foreach
<insert id="batchInsertUsers">
    INSERT INTO users (name, email, status) VALUES
    <foreach item="user" collection="list" separator=",">
        (#{user.name}, #{user.email}, #{user.status})
    </foreach>
</insert>

<!-- Multi-statement update: on MySQL this requires allowMultiQueries=true in the JDBC URL -->
<update id="batchUpdateUsers">
    <foreach item="user" collection="list" separator=";">
        UPDATE users
        SET name = #{user.name}, email = #{user.email}
        WHERE id = #{user.id}
    </foreach>
</update>
public void batchInsertUsers(List<User> users) {
    int batchSize = 1000;  // cap each INSERT to avoid oversized SQL statements
    for (int i = 0; i < users.size(); i += batchSize) {
        List<User> batch = users.subList(i, Math.min(i + batchSize, users.size()));
        userMapper.batchInsertUsers(batch);
    }
}
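For completeness, a sketch of the mapper interface these snippets assume; the method names mirror the XML statement ids above, but the exact interface is not shown in the original:

public interface UserMapper {
    int insertUser(User user);
    int updateUser(User user);
    // A single List parameter is exposed to <foreach> under the default name "list"
    int batchInsertUsers(List<User> users);
    int batchUpdateUsers(List<User> users);
}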
📊 Batch Operation Performance Data

| Approach | Time for 1,000 rows | Memory usage | Best fit |
| --- | --- | --- | --- |
| Row-by-row insert | 15 s | Low | Real-time single-record writes |
| BatchExecutor | 1.5 s | Medium | Medium-sized batches |
| foreach batch | 0.8 s | High | Bulk data imports |
🏊 4. Connection Pool Tuning
💡 Optimized HikariCP Configuration
spring:
  datasource:
    hikari:
      maximum-pool-size: 20
      minimum-idle: 5
      connection-timeout: 30000
      idle-timeout: 600000
      max-lifetime: 1800000
      connection-init-sql: SELECT 1
      connection-test-query: SELECT 1   # only needed for drivers without JDBC4 Connection.isValid()
      validation-timeout: 3000
      register-mbeans: true
      leak-detection-threshold: 60000
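When MyBatis is used without Spring Boot, the same pool settings can be applied programmatically; a minimal sketch, with a placeholder JDBC URL and credentials:

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/app");   // placeholder
config.setUsername("app");                               // placeholder
config.setPassword("secret");                            // placeholder
config.setMaximumPoolSize(20);
config.setMinimumIdle(5);
config.setConnectionTimeout(30_000);
config.setIdleTimeout(600_000);
config.setMaxLifetime(1_800_000);
config.setLeakDetectionThreshold(60_000);
HikariDataSource dataSource = new HikariDataSource(config);  // wire this into the MyBatis Environment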
🔧 Connection Pool Parameter Tuning Guide

| Parameter | Suggested value | Meaning | Affects |
| --- | --- | --- | --- |
| maximum-pool-size | CPU cores * 2 + 1 | Maximum number of connections | Concurrency |
| minimum-idle | maximum-pool-size / 2 | Minimum idle connections | Response latency |
| connection-timeout | 30000 ms | Timeout for acquiring a connection | Resilience under load |
| idle-timeout | 600000 ms | Idle connection eviction timeout | Resource reclamation |
| max-lifetime | 1800000 ms | Maximum connection lifetime | Connection freshness |
📈 Connection Pool Monitoring Configuration

@Bean
public MeterRegistryCustomizer<MeterRegistry> metricsCustomizer(DataSource dataSource) {
    return registry -> {
        if (dataSource instanceof HikariDataSource) {
            HikariDataSource hikariDataSource = (HikariDataSource) dataSource;
            // Bind HikariCP pool metrics (active/idle/pending connections) to Micrometer.
            // Spring Boot's actuator normally registers these automatically; the explicit
            // binding is only needed when that auto-configuration is absent.
            // The pool name (e.g. "app-datasource") can be set via spring.datasource.hikari.pool-name.
            hikariDataSource.setMetricsTrackerFactory(new MicrometerMetricsTrackerFactory(registry));
        }
    };
}
management:
  endpoints:
    web:
      exposure:
        include: health,metrics,info
  endpoint:
    health:
      show-details: always
      probes:
        enabled: true
💾 5. Cache Strategy Optimization
💡 Multi-Level Cache Architecture
Query flow: a query first checks the first-level (session) cache; on a miss it falls through to the second-level cache; on another miss it queries the database, writes the result back into the caches, and returns it.
Second-level cache configuration
<cache
    eviction="LRU"
    flushInterval="60000"
    size="1024"
    readOnly="true"
    blocking="false"/>

<select id="selectUserById" resultType="User" useCache="true" flushCache="false">
    SELECT * FROM users WHERE id = #{id}
</select>
Redis distributed cache integration
@Configuration
@EnableCaching
public class RedisConfig extends CachingConfigurerSupport {

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);

        Jackson2JsonRedisSerializer<Object> serializer =
                new Jackson2JsonRedisSerializer<>(Object.class);
        ObjectMapper mapper = new ObjectMapper();
        mapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        mapper.activateDefaultTyping(mapper.getPolymorphicTypeValidator(),
                ObjectMapper.DefaultTyping.NON_FINAL);
        serializer.setObjectMapper(mapper);

        template.setValueSerializer(serializer);
        template.setKeySerializer(new StringRedisSerializer());
        return template;
    }

    @Bean
    public CacheManager cacheManager(RedisConnectionFactory factory) {
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofHours(1))
                .disableCachingNullValues()
                .serializeValuesWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new GenericJackson2JsonRedisSerializer()));
        return RedisCacheManager.builder(factory)
                .cacheDefaults(config)
                .build();
    }
}
Cache usage example
@Service
public class UserService {

    @Cacheable(value = "users", key = "#id")
    public User getUserById(Long id) {
        return userMapper.selectById(id);
    }

    @CacheEvict(value = "users", key = "#user.id")
    public void updateUser(User user) {
        userMapper.updateUser(user);
    }

    @Caching(evict = {
        @CacheEvict(value = "users", key = "#user.id"),
        @CacheEvict(value = "user-list", allEntries = true)
    })
    public void updateUserWithCache(User user) {
        userMapper.updateUser(user);
    }
}
📊 Cache Performance Comparison

| Cache level | Average response time | Best fit | Caveats |
| --- | --- | --- | --- |
| First-level cache | 0.1 ms | Repeated queries within one session | Beware when data must be fresh |
| Second-level cache | 0.5 ms | Data shared across sessions | Cache consistency must be handled |
| Redis cache | 1.2 ms | Distributed deployments | Network overhead must be considered |
🎯 6. Putting It Together: Practice and Monitoring
💡 End-to-End Performance Monitoring
@Aspect
@Component
@Slf4j
public class SqlPerformanceAspect {

    private final ThreadLocal<Long> startTime = new ThreadLocal<>();

    @Around("execution(* com.example.mapper.*.*(..))")
    public Object monitorSqlPerformance(ProceedingJoinPoint joinPoint) throws Throwable {
        startTime.set(System.currentTimeMillis());
        try {
            return joinPoint.proceed();
        } finally {
            long cost = System.currentTimeMillis() - startTime.get();
            String methodName = joinPoint.getSignature().getName();
            if (cost > 1000) {
                log.warn("Slow SQL detected: {} - {}ms", methodName, cost);
                Metrics.counter("sql.slow.query").increment();
            }
            Metrics.timer("sql.execute.time").record(cost, TimeUnit.MILLISECONDS);
            startTime.remove();
        }
    }
}
🛡️ Production Configuration Template
mybatis:
  configuration:
    cache-enabled: true
    lazy-loading-enabled: true
    aggressive-lazy-loading: false
    multiple-result-sets-enabled: true
    use-column-label: true
    use-generated-keys: true
    default-executor-type: REUSE
    default-statement-timeout: 30
    map-underscore-to-camel-case: true
    local-cache-scope: SESSION

spring:
  datasource:
    hikari:
      maximum-pool-size: 20
      minimum-idle: 5
      connection-timeout: 30000
      idle-timeout: 600000
      max-lifetime: 1800000
      leak-detection-threshold: 60000
📈 Performance Monitoring Metrics

| Metric | Alert threshold | Action |
| --- | --- | --- |
| SQL execution time | > 1000 ms | Optimize the SQL or add indexes |
| Connection pool wait time | > 500 ms | Resize the pool |
| Cache hit rate | < 80% | Tune the caching strategy |
| Batch operation time | > 2x expected | Adjust batch size or strategy |
🔚 Summary and Next Steps
📚 Key Takeaways
SQL optimization: indexes, eliminating N+1 queries, using join queries judiciously
Batch operations: BatchExecutor and foreach batch inserts
Connection pool: HikariCP parameter tuning and monitoring
Caching: multi-level caches plus distributed cache integration
🚀 Keep Optimizing
The optimization loop: monitor → analyze → optimize → verify → repeat