Closed: gizmo93 closed this issue 4 years ago
Have you looked at using https://zillow.github.io/ctds/cursor.html#ctds.Cursor.executemany
Yes, but compared to the pyodbc implementation with fast_executemany=True, executemany is much slower. It takes forever to insert a million rows, and executemany issues a lot of batch requests against the SQL Server, which is why we prefer bulk inserts.
Sorry I haven't responded sooner. I think this is easily addressable if ctds simply ignores IDENTITY columns and never passes anything for them. Does that seem reasonable? It could also raise a warning if the caller attempts to specify the identity column in a dict row.
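The behavior proposed above could look roughly like the sketch below; `strip_identity_columns` is a hypothetical helper for illustration, not part of the ctds API:

```python
import warnings

def strip_identity_columns(rows, identity_columns):
    """Drop IDENTITY columns from dict rows, warning when a caller supplies one.

    Hypothetical helper sketching the proposed behavior; not actual ctds code.
    """
    cleaned = []
    for row in rows:
        # Warn if the caller explicitly passed a value for an identity column.
        supplied = identity_columns & row.keys()
        if supplied:
            warnings.warn(
                'ignoring IDENTITY column(s): {}'.format(', '.join(sorted(supplied))),
                UserWarning,
            )
        cleaned.append({k: v for k, v in row.items() if k not in identity_columns})
    return cleaned

rows = [{'Id': None, 'Name': 'alice'}, {'Name': 'bob'}]
print(strip_identity_columns(rows, {'Id'}))
# → [{'Name': 'alice'}, {'Name': 'bob'}] (plus a UserWarning for the first row)
```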
Should be fixed in 1.11.0
Having a table like this:
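(The original table definition did not survive in this copy of the thread; the following is a hypothetical stand-in with an IDENTITY column, which is the relevant detail.)

```sql
-- Hypothetical stand-in for the original table: Id is an IDENTITY column.
CREATE TABLE dbo.Example (
    Id INT IDENTITY(1, 1) PRIMARY KEY,
    Name VARCHAR(50) NOT NULL
);
```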
and trying to insert data using bulk insert from a dataset like the following:
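(The original sample dataset is likewise missing here; a hypothetical stand-in would be dict rows that either carry `Id: None` or omit the key entirely.)

```python
# Hypothetical stand-in for the lost example data: dict rows with no usable Id value.
rows = [
    {'Id': None, 'Name': 'alice'},  # explicit None for the identity column
    {'Name': 'bob'},                # identity column omitted entirely
]
```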
leads to problems, because ctds tries to insert the Id column with a NULL value instead of simply ignoring it and letting the SQL Server do its work (incrementing the column). Maybe it would be better, in the dict case, to build the INSERT query using only the dict's keys as column names.
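The suggestion in the last sentence can be sketched as plain Python; the table name, bracket quoting, and placeholder style below are illustrative, not ctds internals (ctds builds its statements in C):

```python
def build_insert(table, row):
    """Build an INSERT statement using only the dict's keys as column names.

    Sketch of the suggested fix: an omitted identity column never appears
    in the generated statement, so SQL Server assigns the value itself.
    """
    columns = sorted(row)
    column_list = ', '.join('[{}]'.format(c) for c in columns)
    placeholders = ', '.join(':{}'.format(i) for i in range(len(columns)))
    return 'INSERT INTO [{}] ({}) VALUES ({})'.format(table, column_list, placeholders)

# Id is omitted from the dict, so it is omitted from the statement.
print(build_insert('MyTable', {'Name': 'alice'}))
# → INSERT INTO [MyTable] ([Name]) VALUES (:0)
```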