Open thouis opened 12 years ago
Comment in Trac by @dwf, 2010-02-26

Comment in Trac by @pv, 2010-02-26
I think this is a casting issue: the function is defined to accept only int32 (= NPY_LONG on 32-bit), but int64 (= NPY_LONG_LONG) is for some reason not automatically cast to the smaller integer.
In other contexts, NumPy does downcast automatically without warnings:

```python
>>> x = numpy.array([1, 2, 3], dtype='int32')
>>> x[:] = numpy.asarray([212312312333, 3, 4], dtype='int64')
>>> x
array([1858914829, 3, 4])
```

so probably it should do it also here.
Another related issue is that on 64-bit, if you define a ufunc to accept only NPY_INT, it won't accept NPY_LONG but instead fails with a similar error.
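To illustrate why these are distinct cases (general C/NumPy facts, not part of the original report): `np.intc` is the C `int` type behind NPY_INT, while C `long` backs NPY_LONG. On LP64 platforms such as 64-bit Linux the two differ in size, so they carry different type numbers and a ufunc loop registered for only one of them will not match the other:

```python
import numpy as np

# np.intc is C 'int' (NPY_INT): 4 bytes on mainstream platforms.
# C 'long' (NPY_LONG) is 8 bytes on LP64 systems, so the two are
# distinct NumPy type numbers, and a ufunc loop registered only for
# NPY_INT will not directly match a 'long'-typed array.
print(np.dtype(np.intc).itemsize)   # 4 on common platforms
print(np.dtype(np.intc).num)        # NPY_INT's slot in NumPy's type enum (5)
```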
Comment in Trac by @dwf, 2010-02-26
I see. What is the preferred way to cast to the native NPY_LONG type? Using np.int / np.int0? And is there an intrinsic reason why 32-bit machines should only accept 32-bit integers as the 'n' argument here?
Comment in Trac by @pv, 2010-02-26
Using flags NPY_ALIGNED|NPY_FORCECAST in PyArray_FROM_OTF should do it.
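A Python-level sketch of what that forced cast does (NPY_FORCECAST itself is a C-API flag passed to PyArray_FROM_OTF, so an unchecked `astype()` is only an analogue; the numbers reuse the assignment example earlier in the thread):

```python
import numpy as np

# NPY_FORCECAST tells PyArray_FROM_OTF to cast the input to the
# requested dtype unconditionally, even when that narrows the type.
# The Python analogue is astype() with its default unsafe casting,
# which wraps out-of-range values modulo 2**32:
x64 = np.array([212312312333, 3, 4], dtype=np.int64)
x32 = x64.astype(np.int32)   # forced downcast
print(x32[0])                # 1858914829, i.e. 212312312333 % 2**32
```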
Original ticket http://projects.scipy.org/numpy/ticket/1413. Reported 2010-02-26 by @dwf, assigned to unknown.
Reported to the list by James Bergstra, confirmed by me:
produces:
It seems to be not only 32-bit specific but x86-specific. On a ppc machine, 32-bit mode, it behaves as expected:
So it smells a bit like an endianness bug/problem with the definition of NPY_LONG.

I can confirm the bug on OS X/Intel 32-bit and Linux x86-32 (both 1.3.0 and the most recent svn trunk), as well as its absence on Linux x86-64. The problem seems to be with this line in [source:trunk/numpy/random/mtrand/mtrand.pyx mtrand.pyx], line 3306 in the trunk: