Example: batch-inserting multiple rows with a single INSERT
1. Define the SQL in the mapper file. Here `your_table` is the table being inserted into, `column1, column2, ...` are its column names, and `field1, field2, ...` are the corresponding property names on the table's entity class. The SQL generated at execution time looks like:
INSERT INTO your_table (column1, column2, ...) VALUES (value1, value2, ...), (...)
XML
<insert id="batchInsert">
    INSERT INTO your_table (column1, column2, ...)
    VALUES
    <foreach collection="list" item="item" index="index" separator=",">
        (#{item.field1}, #{item.field2}, ...)
    </foreach>
</insert>
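The Java side of this mapper might be sketched as follows (the interface and entity names are placeholders matching the XML above; what matters is that the method name matches the `id` of the `<insert>` element, and that the entity's property names match `#{item.field1}` and `#{item.field2}`):

```java
import java.util.List;

// Placeholder entity; the getters are what MyBatis reads via #{item.field1} etc.
class YourEntity {
    private String field1;
    private String field2;
    public String getField1() { return field1; }
    public String getField2() { return field2; }
}

interface YourMapper {
    // Bound to <insert id="batchInsert"> in the mapper XML; a single List
    // parameter is exposed to the XML under MyBatis's default name "list".
    int batchInsert(List<YourEntity> entities);
}
```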
2. Invoke it from application code:
java
List<YourEntity> entities = new ArrayList<>();
// populate the entities list
YourMapper mapper = sqlSession.getMapper(YourMapper.class);
mapper.batchInsert(entities);
sqlSession.commit();
After this code runs, only one SQL statement is sent to the database: INSERT INTO your_table (column1, column2, ...) VALUES (value1, value2, ...), (...)
If a single insert carries too much data (say 1,000,000 rows), it can run into the network packet size limit, or into a database-side cap on the accepted packet size (in MySQL this is governed by the max_allowed_packet parameter, whose default is 4 MB), and the driver throws an exception such as com.mysql.jdbc.PacketTooBigException.
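As a rough sanity check (a sketch only; 512 bytes is an assumed average size of one rendered VALUES tuple, not a measured figure), you can estimate how many rows fit under the default 4 MB packet:

```java
public class PacketBudget {
    public static void main(String[] args) {
        long maxAllowedPacket = 4L * 1024 * 1024; // MySQL's default max_allowed_packet: 4 MB
        long bytesPerRow = 512;                   // assumed average bytes per rendered row tuple
        long roughRowLimit = maxAllowedPacket / bytesPerRow;
        System.out.println(roughRowLimit);        // 8192 rows before the packet overflows
    }
}
```

A million-row insert is far past this budget, which is why the driver rejects it with PacketTooBigException.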
Fixing the oversized single insert:
The fix is to insert in batches, e.g. 200 rows at a time.
java
public void insertLargeNumberOfRecords(List<YourEntity> entities) {
    int batchSize = 200;
    // Ceiling division: number of chunks needed to cover the whole list
    int numberOfBatches = (entities.size() + batchSize - 1) / batchSize;
    for (int i = 0; i < numberOfBatches; ++i) {
        int start = i * batchSize;
        int end = Math.min(start + batchSize, entities.size());
        List<YourEntity> batchEntities = entities.subList(start, end);
        // One multi-row INSERT per chunk; the method matches <insert id="batchInsert">
        yourMapper.batchInsert(batchEntities);
    }
}
Example: batch delete
The mapper file looks like this:
XML
<!-- Define the batch-delete statement in the mapper XML -->
<delete id="batchDelete" parameterType="list">
    DELETE FROM your_table WHERE id IN
    <foreach item="item" collection="list" open="(" separator="," close=")">
        #{item}
    </foreach>
</delete>
If a single delete covers too many rows, apply the same chunking approach described in [Fixing the oversized single insert] above.
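Mirroring the insert fix, the delete can be chunked as well. In this sketch, `deleteBatch` stands in for `yourMapper::batchDelete` (the real mapper call), so the chunking logic can be shown on its own:

```java
import java.util.List;
import java.util.function.Consumer;

class ChunkedDelete {
    // Issues one batchDelete call per chunk of at most batchSize ids
    // and returns the number of DELETE statements sent.
    static int deleteInChunks(List<Long> ids, int batchSize, Consumer<List<Long>> deleteBatch) {
        int chunks = 0;
        for (int start = 0; start < ids.size(); start += batchSize) {
            int end = Math.min(start + batchSize, ids.size());
            deleteBatch.accept(ids.subList(start, end));
            chunks++;
        }
        return chunks;
    }
}
```

With 450 ids and a batch size of 200, this sends three DELETE statements (covering 200 + 200 + 50 ids).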