djhenderson / pyodbc

Automatically exported from code.google.com/p/pyodbc
MIT No Attribution

When the column is a bigint, and the client is a 64-bit python, 1*10^12 gets truncated and is negative #149

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. a = 1000000000000
2. c.execute("insert into bigint_tbl values (?)", a)
3. db.commit()
4. select * from bigint_tbl

What is the expected output? What do you see instead?
Expected output is:
  1000000000000
Actual output
  -727379968L
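
A minimal, runnable sketch of the steps above. The connection string, the
FreeTDS driver name, and the single-bigint-column layout of bigint_tbl are
assumptions; the report does not give them.

import pyodbc

# Assumed DSN-less connection; substitute whatever driver and server apply.
db = pyodbc.connect("DRIVER={FreeTDS};SERVER=example;PORT=1433;"
                    "DATABASE=test;UID=user;PWD=secret")
c = db.cursor()

# Assumed table layout: a single bigint column.
c.execute("create table bigint_tbl (val bigint)")

a = 1000000000000                                     # step 1
c.execute("insert into bigint_tbl values (?)", a)     # step 2
db.commit()                                           # step 3

c.execute("select * from bigint_tbl")                 # step 4
print(c.fetchone()[0])   # expected 1000000000000, reported -727379968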

What version of the product are you using? On what operating system?
2.1.7 and 2.1.8, on Solaris 10 x86 (so far)

Please provide any additional information below.

Python 2.6.5 (r265:79063, Nov 23 2010, 04:04:44)
[GCC 4.3.3] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.maxint
9223372036854775807

374:1> select @@VERSION
374:2> go

        ------------------------------------------------------------

        Microsoft SQL Server 2008 R2 (RTM) - 10.50.1720.0 (X64)
        Jun 12 2010 01:34:59
        Copyright (c) Microsoft Corporation
        Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: )

(1 row affected)

Unsurprisingly, the answer is to use 1000000000000L, but it seems that when the 
native int is 4 bits, this is unnecessary.
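
A sketch of the reported workaround, reusing the cursor and connection from the
repro sketch above (Python 2 syntax, matching the reporter's Python 2.6):

# Suffixing L forces a Python long; per the report this avoids the truncation,
# presumably because the parameter is then bound as a 64-bit value.
a = 1000000000000L
c.execute("insert into bigint_tbl values (?)", a)
db.commit()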

Original issue reported on code.google.com by pedri...@gmail.com on 5 Jan 2011 at 9:05

GoogleCodeExporter commented 9 years ago
That last sentence should read: when the native int is 8 bytes on Python
[etc.]

Original comment by pedri...@gmail.com on 5 Jan 2011 at 10:37

GoogleCodeExporter commented 9 years ago
Attached is a patch to fix this issue.  The problem was that SQL_C_LONG was
being used for a C variable of type long, but SQL_C_LONG actually refers to a
32-bit integer regardless of sizeof(long).  Microsoft named the constant that
way because their compilers always define long as a 32-bit integer.

[Reference: 
http://mailman.unixodbc.org/pipermail/unixodbc-dev/2005-March/000398.html]
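
As an illustration of that truncation (not part of the attached patch): keeping
only the low 32 bits of the value and reinterpreting them as a signed 32-bit
integer reproduces the -727379968 from the report.

import struct

value = 1000000000000
low32 = value & 0xFFFFFFFF      # only the low 32 bits fit in a 32-bit SQL_C_LONG buffer
truncated, = struct.unpack("<i", struct.pack("<I", low32))
print(truncated)                # -727379968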

Original comment by lukedell...@gmail.com on 15 Jan 2011 at 3:33

Attachments: