zmeng3 / pyodbc

Automatically exported from code.google.com/p/pyodbc
MIT No Attribution

executemany with SQL Server MERGE statement behaviour incorrect with duplicated keys #372

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. Create a table in SQL Server:
CREATE TABLE Table_1(
    colA varchar(50),
    colB varchar(50)
)
2. Run this merge statement to perform an upsert on the new table
SQL = """merge into table_1 as t
USING (
    VALUES 
    (?,?)
) as s (colA,colB)
ON t.colA = s.colA
WHEN MATCHED THEN
    UPDATE SET colB = s.colB
WHEN NOT MATCHED THEN
    INSERT (colA,colB)
    VALUES (s.colA,s.colB)
    ;
"""
data = (('a','1'),('a','2'))
cursor.executemany(SQL, data)

What is the expected output? What do you see instead?
I expect this to return a SQL error:
"The MERGE statement attempted to UPDATE or DELETE the same row more than once. 
This happens when a target row matches more than one source row. A MERGE 
statement cannot UPDATE/DELETE the same row of the target table multiple times. 
Refine the ON clause to ensure a target row matches at most one source row, or 
use the GROUP BY clause to group the source rows."

Instead, the query succeeds leaving a single row in the table.
colA    colB
a   2

What version of the product are you using? On what operating system?
pyodbc 3.0.7 with Python 2.7 on Windows

Please provide any additional information below.
It appears that the MERGE statement is being executed once per row rather than 
as a set. I understand that this behaviour is allowed by the Python DB-API; 
however, in the case of a MERGE statement, the result of performing multiple 
single-row merges may not be the same as performing a single set-based merge.
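A possible workaround (not from the original report, a hedged sketch): instead of executemany, which pyodbc may run as one MERGE per parameter row, build a single MERGE whose source is a multi-row VALUES list and execute it once. SQL Server then sees all source rows as one set, so duplicated keys raise the expected error. Table and column names follow the report; the `cnxn` connection object is an assumed placeholder.

```python
def build_set_merge(rows):
    """Return (sql, flat_params) for one MERGE over all rows as a set.

    rows: sequence of (colA, colB) tuples.
    """
    # One "(?, ?)" placeholder pair per source row.
    placeholders = ",\n    ".join("(?, ?)" for _ in rows)
    sql = """merge into table_1 as t
USING (
    VALUES
    {0}
) as s (colA, colB)
ON t.colA = s.colA
WHEN MATCHED THEN
    UPDATE SET colB = s.colB
WHEN NOT MATCHED THEN
    INSERT (colA, colB)
    VALUES (s.colA, s.colB);
""".format(placeholders)
    # Flatten the rows into the positional parameter list pyodbc expects.
    flat_params = [value for row in rows for value in row]
    return sql, flat_params

# Usage (assumes an open pyodbc connection `cnxn`):
# sql, params = build_set_merge([('a', '1'), ('a', '2')])
# cnxn.cursor().execute(sql, params)  # duplicate source keys now error
```

With a single execute, the duplicate 'a' keys hit the target in one statement, so SQL Server reports "The MERGE statement attempted to UPDATE or DELETE the same row more than once" instead of silently applying the rows one at a time.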

Original issue reported on code.google.com by ajdo...@gmail.com on 26 Jun 2014 at 1:38

GoogleCodeExporter commented 8 years ago
Sorry, left out a step. You need to INSERT INTO Table_1 (colA,colB) VALUES 
('a','1') before the MERGE as well

Original comment by ajdo...@gmail.com on 26 Jun 2014 at 1:48