Open firebird-automations opened 16 years ago
Commented by: @aafemt
In ISQL it is already implemented as bulk_insert. In DSQL it is less effective than array DML. What's the point?
Commented by: Gustavo Torres (gusta1308)
I want an easy way to send multiple rows (around 500 or fewer) in one transaction.
Commented by: @livius2
This is already an implemented feature: use EXECUTE BLOCK with many inserts. Example:

```sql
execute block as
begin
  insert into t(a, b, c) values (1, 2, 3);
  insert into t(a, b, c) values (4, 5, 6);
  insert into t(a, b, c) values (7, 8, 9);
  ........
end
```
Commented by: @dyemanov
Karol, the requested multi-value INSERT statement is a standard feature, so I believe the request is perfectly valid regardless of our EXECUTE BLOCK feature (which can be used as a temporary workaround though).
Commented by: Sean Leyne (seanleyne)
Edited the subject to correct the terminology (rows vs. values), clarify the requirement (single command vs. complex SELECT UNION / inline PSQL [i.e. the EXECUTE BLOCK solution]), and identify the engines which support the single-command approach: PostgreSQL and MySQL (MS SQL does not support it at all; Oracle has an odd variation).
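For illustration, the "odd variation" presumably refers to Oracle's INSERT ALL form; a hedged sketch (table t and the values are made up):

```sql
-- Oracle's INSERT ALL: each row gets its own INTO ... VALUES clause,
-- and the single SELECT at the end drives the statement once.
INSERT ALL
  INTO t (a, b, c) VALUES (1, 2, 3)
  INTO t (a, b, c) VALUES (4, 5, 6)
SELECT * FROM dual;
```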
Commented by: Eli Godoy (agamenon)
I have 200-250 fields. Do I need to put ALL fields in each insert?
Commented by: Stefan Heymann (stefanheymann)
> What's the point?

A smaller command that has to go over the network. Imagine you have a table with only one or two columns and want to insert many rows. The resulting command is much smaller. And it probably is faster for the server to process, because it doesn't have to do the parsing for the "into" part every time.
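To make the size argument concrete, a hedged sketch (table and values are illustrative; the multi-row form is the requested syntax, not something Firebird accepts at the time of this thread):

```sql
-- Today: the INSERT header is repeated for every row sent over the wire.
INSERT INTO log_entry (id, msg) VALUES (1, 'start');
INSERT INTO log_entry (id, msg) VALUES (2, 'step');
INSERT INTO log_entry (id, msg) VALUES (3, 'done');

-- Requested: one statement, the header is transmitted and parsed once.
INSERT INTO log_entry (id, msg) VALUES (1, 'start'), (2, 'step'), (3, 'done');
```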
Commented by: Tommi Prami (mwaltari)
I think this would be very good. Less stuff to write when inserting from some text file, etc.
No need to repeat INSERT INTO Table (a, b, c) for all rows. It is quite an old request and most likely missed the train for FB 4.0 already; maybe the next bigger release.
Commented by: @sim1984
It seems to me that this is part of another, more general ticket: CORE3880.
Commented by: @dyemanov
At least they should share the same underlying implementation (constant multi-row record source), I'd say.
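If that shared implementation takes the shape of a standard row-value (VALUES) constructor, it could presumably also back derived-table queries like this sketch (SQL-standard syntax shown for illustration only; Firebird does not support it at the time of this thread, and the names are made up):

```sql
-- A constant multi-row record source used as a derived table
-- (SQL-standard table value constructor).
SELECT v.id, v.name
FROM (VALUES (1, 'one'), (2, 'two'), (3, 'three')) AS v (id, name);
```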
Commented by: @pavel-zotov
Gustavo,
there is an undocumented command in ISQL; its syntax is like this:

```
set bulk_insert [ update or ] insert into ... [ matching (...) ]
```
It runs more than 3x faster than using a prepared statement in a loop and passing parameters for each row (I compared using the Python FDB driver, cursor.executemany( [data_array] ); the ratio may differ for other drivers).
```
set bulk_insert update or insert into td_full (id, tk, dt, tm, dir, pri, lot, cnt) values (?, ?, ?, ?, ?, ?, ?, ?) matching (id);
(718457928, 2, '2010-01-11', '10:29:59', -1, 53.75000000, 100, 10)
(718457935, 21, '2010-01-11', '10:29:59', -1, 1.17500000, 1000, 6500)
(718457942, 21, '2010-01-11', '10:29:59', -1, 1.17500000, 1000, 6500)
stop
```
NB-1: "set bulk_insert update or insert ..." must be written in one line.
NB-2: dates must be written in the format YYYY-MM-DD.
PS
Note also that a connection using the xnet or wnet protocol will run significantly faster than one using inet.
For a text file of ~53 MB (1'080'980 rows) I got:
* xnet: 8518 ms; 8543 ms; 8624 ms
* wnet: 9235 ms; 9360 ms; 9391 ms
* inet: 12675 ms; 12663 ms; 12668 ms
Commented by: Sean Leyne (seanleyne)
Pavel,
A "Documentation" ticket/issue should be created for this "hidden" feature to be documented.
Separately, does the command support the full standard INSERT statement syntax for columns/fields [i.e. GEN_ID(), CAST and (SELECT xxx FROM yyyy)]?
Commented by: @asfernandes
It's an incomplete hack, and it's ISQL-only.
It was created only for internal message scripts.
Better to keep it undocumented.
Are there any plans to support this?
Related/duplicate of #4217
Submitted by: Gustavo Torres (gusta1308)
Votes: 20
I suggest adding INSERT with multiple value rows:

```sql
INSERT INTO Table (a, b, c) values (1, 2, 3), (4, 5, 6);
```

to improve INSERT performance.