Closed rajkamal0610 closed 1 year ago
Hello @rajkamal0610 ,
Is there a reason why you use a small batch size (`BatchSize = 100`)? The default is 10,000 for PostgreSQL in our library.
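For reference, here is a minimal sketch of where that option is set on the BulkInsert call (the context and entity names below are hypothetical placeholders, not your actual types):

```csharp
using System.Linq;

// Sketch only; MyContext and EntitySimple are hypothetical placeholders.
using (var context = new MyContext())
{
    var items = Enumerable.Range(0, 6000)
        .Select(i => new EntitySimple { Name = $"Row {i}" })
        .ToList();

    // Explicitly overriding the batch size; leaving BatchSize unset would
    // fall back to the library default (10,000 for PostgreSQL).
    context.BulkInsert(items, options => options.BatchSize = 100);
}
```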
Best Regards,
Jon
Hi @JonathanMagnan ,
No specific reason; we tried batch sizes of 100, 1,000, 10,000, and 100,000, but the result was the same.
In the code we generate 6,000 records; with or without a batch size, the processing time is about 4 seconds.
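A minimal sketch of the kind of timing test we ran (the context and entity names here are placeholders, not our real types):

```csharp
using System;
using System.Diagnostics;
using System.Linq;

// Sketch only; MyContext and Reading are placeholder names.
var records = Enumerable.Range(0, 6000)
    .Select(i => new Reading { Value = i, RecordedAt = DateTime.UtcNow })
    .ToList();

var sw = Stopwatch.StartNew();
using (var context = new MyContext())
{
    // Roughly the same elapsed time with BatchSize = 100, 1,000, 10,000, 100,000, or unset.
    context.BulkInsert(records, options => options.BatchSize = 100);
}
sw.Stop();
Console.WriteLine($"Inserted {records.Count} rows in {sw.Elapsed.TotalSeconds:F1}s");
```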
Any update regarding this, @JonathanMagnan?
Hello @rajkamal0610 ,
My developer did some performance tests, and the issue seems more related to the uuid_generate_v4() function being slow. For example, here are some statistics on 6,000 inserted rows:

- `"UnqSerialNo" uuid NOT NULL DEFAULT uuid_generate_v4()` was taking 2.8s
- `"ID" integer NOT NULL DEFAULT nextval('"entitysimples_id_seq"')` was taking 0.3s

Obviously, one is a GUID and the other is an identity, but you can already see there is a huge performance difference that, unfortunately, we don't control in this case.
One way to improve the performance is to no longer let the PostgreSQL database manage the GUID when using bulk operations, but instead generate it on the application side and insert the value directly.
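As a rough sketch (assuming the UnqSerialNo column maps to a Guid property on your entity; the context and entity names below are placeholders), that would look like this:

```csharp
using System;
using System.Linq;

// Sketch only; MyContext and EntitySimple are placeholder names.
// The GUID is generated client-side, so the bulk insert sends the value
// directly and the uuid_generate_v4() default is never evaluated per row.
var rows = Enumerable.Range(0, 6000)
    .Select(i => new EntitySimple
    {
        UnqSerialNo = Guid.NewGuid()
        // ... other properties ...
    })
    .ToList();

using (var context = new MyContext())
{
    context.BulkInsert(rows);
}
```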
Let me know if that explains the cause and the possible solution correctly.
Best Regards,
Jon
Hi @JonathanMagnan,
Thanks for your effort.
Yes, as you said, the cause of the slowness is the uuid column. We will make the change either by generating it on the application side or by removing that column.
Hi @JonathanMagnan,
We are facing slowness while inserting data into Postgres (it is basically time-series data, and continuous inserts happen on the table based on a scheduling period). I have placed the sample code and table structure here. Need your help.