Closed: CaptainCsaba closed this issue 1 year ago
Is the only way to solve this to send all my input objects in a JSON array and then use OPENJSON? Can't the same result be achieved by normal methods?
Regular executemany() simply calls execute() in a loop internally, ignoring the result of each call. If you need the results, you can call execute() in a loop yourself and gather them in your script.
Note that fast_executemany is optimised for bulk insertion only, so it may not be applicable to your use case.
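A minimal sketch of the execute-in-a-loop pattern described above, using the standard-library sqlite3 module as a stand-in for pyodbc (the table and column names are illustrative assumptions; the pattern is the same with a pyodbc cursor and a stored-procedure call):

```python
import sqlite3

# sqlite3 stands in for pyodbc here; the table name is an assumption.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE items (name TEXT)")

params_list = [("a",), ("b",), ("c",)]
results = []
for params in params_list:
    # One execute() per parameter set, so each result set can be fetched.
    cur.execute("INSERT INTO items (name) VALUES (?)", params)
    # Stand-in for the stored procedure's final SELECT.
    cur.execute("SELECT last_insert_rowid(), 'OK'")
    results.append(cur.fetchone())
conn.commit()
print(results)  # one (rowid, status) tuple per input row
```

The key point is that each call produces its own result set, which is fetched before the next call begins, so nothing is discarded.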
.executemany() is not specifically designed to return result sets. PEP 249 says:
Use of this method for an operation which produces one or more result sets constitutes undefined behavior, and the implementation is permitted (but not required) to raise an exception when it detects that a result set has been created by an invocation of the operation.
So if you want result set(s) you need to use a separate .execute() call for each set of parameters.
Thank you for the explanation; it is much clearer now what is happening. I just have one more question, as I am unfamiliar with how SQL handles such situations. If I need to insert a lot of rows and I decide to use .execute(), but I want to improve performance, how bad an idea is it to use either threads or Python's async methods to run X .execute() calls simultaneously? And if it is not a bad idea, and ordering is not important, what is a safe number of simultaneous calls? Would each of these require a separate connection and cursor?
Depending on what the bottleneck is, you will have to experiment to find what works best for your situation. The ODBC Driver for SQL Server can definitely handle being called from multiple threads simultaneously, although you may find most benefit with multiple connections.
Thank you very much. I got my answers. The thread can be closed.
I have the following stored procedure. I create some variables at the beginning that I always select at the end, as they indicate whether the procedure succeeded. So I want to insert, then have the select return a row at the end. I also call this single stored procedure with executemany on multiple objects. Running the stored procedure in SQL Server Management Studio works fine and returns the desired select at the end no matter what. Calling the same from pyodbc only returns the select if I run execute or executemany on exactly 1 object.

Ideally I would like to receive a response for every object I pass to executemany with fast_executemany set to True. If I set fast_executemany to True I get nothing back, just 'Previous SQL was not a query'. I have SET NOCOUNT ON in the procedure and I have also set autocommit on when creating the connection. No matter what I do, .fetchall() gives back 'pyodbc.ProgrammingError: No results. Previous SQL was not a query.'

The rows get inserted correctly and everything works fine, but I am not getting back the return values in Python. Removing fast_executemany = True makes the function return exactly one row of results, but the rest are left out. If I call executemany on many rows I would expect to get back a result for each insert. This leaves me with calling execute on rows one by one, which is not ideal as I am working with many thousands of rows.

How can I use executemany with fast_executemany set to True on multiple objects, with all of them returning a value at the end in an array?
As I have stated, the inserts happen, but I get no result back for any of them from the select at the end.
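The behaviour described in this thread can be seen even with the standard-library sqlite3 module standing in for pyodbc: executemany() is a bulk-insert path that produces no per-row result sets, and any per-row information has to come from separate queries. This sketch (table and column names are assumptions) shows the bulk insert followed by a single SELECT that retrieves the inserted rows in one result set:

```python
import sqlite3

# sqlite3 stands in for pyodbc; the table is an illustrative assumption.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE items (name TEXT)")

rows = [("a",), ("b",), ("c",)]
# executemany() inserts all rows in one call but returns no result sets.
cur.executemany("INSERT INTO items (name) VALUES (?)", rows)
# Per-row information must come from a separate query afterwards.
cur.execute("SELECT rowid, name FROM items ORDER BY rowid")
inserted = cur.fetchall()
print(inserted)
```

Whether a follow-up query like this can replace the stored procedure's final SELECT depends on the procedure's logic; if each call's status is genuinely needed, the execute()-in-a-loop approach from earlier in the thread is the supported route.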