spring-projects-issues opened 6 years ago
Anton Reshetnikov commented
+1 for
- Use batched statements for inserts (and updates)
I was going to create a separate issue for batched JDBC operations in Spring Data JDBC, but found this issue.
Jens Schauder commented
Anton Reshetnikov: batch inserts will have to wait until we have proper support for ID generation strategies, because the current one (via SERIAL/AUTOINCREMENT in the database) does not work with batches. See https://github.com/spring-projects/spring-framework/issues/6530
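For illustration of what "batched statements" buys: rows are grouped into fixed-size batches so each batch costs a single database round trip. Below is a minimal pure-Java sketch of only the grouping step (JdbcTemplate#batchUpdate works along these lines internally); BatchSketch and chunk are hypothetical names, not Spring API.

```java
import java.util.ArrayList;
import java.util.List;

class BatchSketch {

    /** Splits rows into batches of at most batchSize elements each. */
    static <T> List<List<T>> chunk(List<T> rows, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            // each sublist would be flushed as one batched statement
            batches.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return batches;
    }
}
```

Note this only helps once the IDs of the inserted rows are known up front, which is exactly why the ID generation strategy blocks it.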
+1 for the 1st option and 3.1.
I think the 1st option should be the default behavior, or perhaps switched on by an annotation on the Aggregate Root. The 2nd option could be the behavior when the developer defines which rows must be updated, which inserted, and which deleted. Similar to the isNew function, there could be a changeOperation (or changeType) function on child aggregate elements, implemented from some interface (like Persistable), returning one of "INSERT", "UPDATE", "DELETE", or "NONE" (meaning no operation should be performed on that reference collection element). Example:
class AggregateRoot implements Persistable {
    // ...
    @UpdateStrategy("UPSERT")
    Set<ChildAggregate> aggregates;
}

class ChildAggregate implements SomeInterface {
    // ...
    public String changeType() {
        return "UPDATE";
    }
}
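To make the proposal above concrete, here is a hedged, self-contained sketch of how a persistence layer could consume such a per-element change type: group the children by their declared operation and then issue one (ideally batched) statement per group. All names here (ChangeType, Changeable, ChangePlanner) are hypothetical, not Spring Data JDBC API.

```java
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

// Explicit change type per child element, as suggested above.
enum ChangeType { INSERT, UPDATE, DELETE, NONE }

interface Changeable {
    ChangeType changeType();
}

class ChangePlanner {

    /** Groups children by declared change type; NONE elements are skipped. */
    static <T extends Changeable> Map<ChangeType, List<T>> plan(List<T> children) {
        Map<ChangeType, List<T>> plan = new EnumMap<>(ChangeType.class);
        for (T child : children) {
            ChangeType type = child.changeType();
            if (type != ChangeType.NONE) {
                plan.computeIfAbsent(type, k -> new ArrayList<>()).add(child);
            }
        }
        return plan;
    }
}
```

Each resulting group could then be sent as a single batched INSERT, UPDATE, or DELETE, instead of the delete-all/insert-all cycle.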
This is an Epic gathering various approaches to improve the writing behaviour of Spring Data JDBC.
If we have an Aggregate Root A referencing entities B, on update the following SQL statements get executed:
- delete all Bs referencing the A
- insert all Bs
We should improve this in multiple ways:
1. Use upserts for Bs and only delete those no longer present
2. Use batched statements for inserts (and updates). This is implemented.
3. Determine the actual changes to the Bs. There are various possibilities how to do this.
Also, performance tests would be nice and probably a good first step.
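As a hedged illustration of the upsert option: one common shape is a PostgreSQL-style INSERT ... ON CONFLICT DO UPDATE statement (syntax varies per database, e.g. MERGE on SQL Server/Oracle, ON DUPLICATE KEY UPDATE on MySQL). The helper below, which only builds the SQL string, is a hypothetical sketch, not Spring Data JDBC API.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.stream.Collectors;

class UpsertSketch {

    /** Builds a PostgreSQL-style upsert for one row of the child table. */
    static String upsertSql(String table, String keyColumn, String... valueColumns) {
        String columns = keyColumn + ", " + String.join(", ", valueColumns);
        // one placeholder for the key plus one per value column
        String placeholders = String.join(", ",
                Collections.nCopies(valueColumns.length + 1, "?"));
        String updates = Arrays.stream(valueColumns)
                .map(c -> c + " = EXCLUDED." + c)
                .collect(Collectors.joining(", "));
        return "INSERT INTO " + table + " (" + columns + ") VALUES (" + placeholders
                + ") ON CONFLICT (" + keyColumn + ") DO UPDATE SET " + updates;
    }
}
```

With such a statement, existing Bs are updated in place and only the Bs no longer present need a targeted DELETE, instead of deleting and re-inserting the whole collection.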
Create separate issues to work on any of these. Use this issue to discuss these and other strategies to improve persisting performance.
Jens Schauder opened this ticket as DATAJDBC-210