baomidou / mybatis-plus

A powerful enhanced toolkit of MyBatis to simplify development
https://baomidou.com
Apache License 2.0

Batch insert: when a property of some objects in a batch is intermittently null, the INSERT INTO statement is repeatedly re-prepared, making insertion much slower #6248

Closed sgps000 closed 2 months ago

sgps000 commented 2 months ago

Environment: JDK 17, mybatis-plus 3.5.7, Dameng (DM) database, driver com.dameng:DmJdbcDriver18:8.1.1.193

Scenario 1: I simulated this by setting hospital to null on every other record (a minimal repro sketch follows the log excerpt below). Partial log:

==> Parameters: 229(String), null, null, 沈**(String), null, 230000(String), null, 1(Integer), 1(Integer), 2023-03-31 18:13:50.979(Timestamp), 2023-10-31 15:08:51.218(Timestamp), 1698736131218(Long)
==>  Preparing: INSERT INTO center_doctor ( id, archive_code_list, sign_code_list, name, org_code_list, team_code_list, status, u, ctime, utime, utimestamp ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )
==> Parameters: 230(String), null, null, 周**(String), null, null, 1(Integer), 1(Integer), 2023-03-31 18:13:50.981(Timestamp), 2023-10-31 15:08:51.218(Timestamp), 1698736131218(Long)
==>  Preparing: INSERT INTO center_doctor ( id, archive_code_list, sign_code_list, name, org_code_list, hospital, team_code_list, status, u, ctime, utime, utimestamp ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )
==> Parameters: 231(String), null, null, 李**(String), null, 232000(String), null, 1(Integer), 1(Integer), 2023-03-31 18:13:50.983(Timestamp), 2023-10-31 15:08:51.218(Timestamp), 1698736131218(Long)
==>  Preparing: INSERT INTO center_doctor ( id, archive_code_list, sign_code_list, name, org_code_list, team_code_list, status, u, ctime, utime, utimestamp ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )
==> Parameters: 232(String), null, null, 叶**(String), null, null, 1(Integer), 1(Integer), 2023-03-31 18:13:50.984(Timestamp), 2023-10-31 15:08:51.218(Timestamp), 1698736131218(Long)
==>  Preparing: INSERT INTO center_doctor ( id, archive_code_list, sign_code_list, name, org_code_list, hospital, team_code_list, status, u, ctime, utime, utimestamp ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )
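
For reference, a minimal sketch of how the simulation above might look. CenterDoctor and its setters are hypothetical names derived from the column list in the log, not the project's actual code:

```java
import java.util.ArrayList;
import java.util.List;

import com.baomidou.mybatisplus.extension.service.IService;

// Hypothetical repro of Scenario 1; CenterDoctor and its setters are assumed from the log above.
public void simulateAlternatingNulls(IService<CenterDoctor> centerDoctorService) {
    List<CenterDoctor> doctors = new ArrayList<>();
    for (int i = 0; i < 1_000; i++) {
        CenterDoctor d = new CenterDoctor();
        d.setId(String.valueOf(229 + i));
        d.setName("doctor-" + i);
        d.setStatus(1);
        // hospital is null on every other record, which changes the generated column list
        d.setHospital(i % 2 == 0 ? "230000" : null);
        doctors.add(d);
    }
    // With the default insert strategy (which skips null fields), adjacent records produce
    // different SQL text, so the batch keeps re-issuing "Preparing: INSERT INTO ..." lines.
    centerDoctorService.saveBatch(doctors);
}
```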

Scenario 2: The real-world case: batch inserting (batch insert-or-update) 4000+ records with the fields above. About half of the fields may or may not have a value, with no pattern; only a few required fields are guaranteed to be set. Inserting into an empty table took 20+ seconds. Log shown below:

[log screenshot]

Scenario 3: I then simulated a full insert of 10,000 records where every field was either set in all records or null in all records, so no extra "Preparing: INSERT INTO" appears partway through the log; this took just over 1 second. After that, I anonymized the real-world data and re-tested; it took less than 1 second. Log shown below:

[log screenshot]

In this run a few objects were still occasionally missing field values, but the behavior from Scenario 2 did not occur.

Scenario 4: After changing how the data was anonymized, it became very slow again. Log shown below:

[log screenshot]
Preparing: INSERT INTO center_doctor ( id, archive_code_list, sign_code_list, name, org_code_list, team_code_list, status, u, ctime, utime, utimestamp ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )

Scenario 1 shows that the insert statement skips null-valued columns. Looking at the implementation, batch insert (and batch insert-or-update) issues sqlSession.update once per record and then flushes each batch with sqlSession.flushStatements(). I previously used serviceImpl.saveBatch on 3.5.6 and switched to mapper.insert on 3.5.7; the result is the same. Questions: why are null values skipped? Are there other cases where the prepared insert statement gets split like this? Is there a configuration that forces all columns to be included in the insert?
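
My understanding of why this hurts so much (happy to be corrected): a JDBC batch can only accumulate on one PreparedStatement while the SQL text stays identical, and MyBatis's batch executor appears to start a new statement whenever the SQL changes. A plain-JDBC sketch of the fast path, using the table and column names from the log (CenterDoctor is the same hypothetical entity as above):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Types;
import java.util.List;

// Fast path: one SQL string for every row, so a single PreparedStatement accumulates
// the whole batch. When the column list changes per row, a new statement has to be
// prepared each time, which is what the repeated "Preparing:" lines show.
void insertWithFixedColumnList(Connection conn, List<CenterDoctor> doctors) throws Exception {
    String sql = "INSERT INTO center_doctor (id, name, hospital, status) VALUES (?, ?, ?, ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        for (CenterDoctor d : doctors) {
            ps.setString(1, d.getId());
            ps.setString(2, d.getName());
            if (d.getHospital() != null) {
                ps.setString(3, d.getHospital());
            } else {
                ps.setNull(3, Types.VARCHAR); // null is simply bound; the SQL text is unchanged
            }
            ps.setInt(4, d.getStatus());
            ps.addBatch();
        }
        ps.executeBatch();
    }
}
```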

miemieYaho commented 2 months ago

See the field annotation in the docs; configure it to ignore the null check on insert.
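
Something along these lines (a sketch; the field names are taken from the log in this issue, adjust to your entity):

```java
import com.baomidou.mybatisplus.annotation.FieldStrategy;
import com.baomidou.mybatisplus.annotation.TableField;
import com.baomidou.mybatisplus.annotation.TableName;

@TableName("center_doctor")
public class CenterDoctor {

    private String id;
    private String name;

    // Always include this column in the generated INSERT, even when the value is null,
    // so the column list (and the prepared statement) stays the same for every record.
    @TableField(value = "hospital", insertStrategy = FieldStrategy.ALWAYS)
    private String hospital;

    // other fields, getters and setters omitted
}
```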

sgps000 commented 2 months ago

See the field annotation in the docs; configure it to ignore the null check on insert.

I'm not using the @TableField annotation anywhere. I added dbConfig.setInsertStrategy(FieldStrategy.ALWAYS); in the global configuration and it works now.
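
For anyone else hitting this, the global setting looks roughly like the sketch below. It assumes the Spring Boot starter (mybatis-plus-boot-starter); with plain MyBatis-Plus the same FieldStrategy.ALWAYS can be set on GlobalConfig.DbConfig directly.

```java
import com.baomidou.mybatisplus.annotation.FieldStrategy;
import com.baomidou.mybatisplus.autoconfigure.MybatisPlusPropertiesCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MybatisPlusConfig {

    // Force every column into the generated INSERT even when its value is null,
    // so the statement text is identical for all records in a batch.
    @Bean
    public MybatisPlusPropertiesCustomizer insertStrategyCustomizer() {
        return properties -> properties.getGlobalConfig()
                .getDbConfig()
                .setInsertStrategy(FieldStrategy.ALWAYS);
    }
}
```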

weijie0 commented 3 weeks ago

Hit the same problem, marking this for reference.