I think the culprit is the ndarray_new_ndarray function here: https://github.com/v923z/micropython-ulab/blob/6fcfeda58da8632bb7774858a9bf974afe65d5dd/code/ndarray.c#L612-L616, which assigns to the len member of the ndarray structure: https://github.com/v923z/micropython-ulab/blob/6fcfeda58da8632bb7774858a9bf974afe65d5dd/code/ndarray.h#L141-L152. That member is declared as size_t. Unless we change that, we're going to run into the issue that you raised here.
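As a self-contained illustration (a toy sketch, not the actual ulab code), this is how a product of shape dimensions silently wraps around when it is accumulated in a size_t field:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the relevant part of ndarray_obj_t: len is a size_t. */
typedef struct {
    size_t len;       /* number of elements */
    uint8_t itemsize; /* bytes per element */
} toy_ndarray;

int main(void) {
    /* 2**31 * 2**33 = 2**64 elements: on a 64-bit build the product wraps to 0. */
    size_t shape[2] = { (size_t)1 << 31, (size_t)1 << 33 };
    toy_ndarray a = { .len = 1, .itemsize = 8 };
    for (int i = 0; i < 2; i++) {
        a.len *= shape[i]; /* unchecked multiplication wraps modulo 2**64 */
    }
    printf("len = %zu\n", a.len); /* prints 0, so a later allocation "succeeds" */
    return 0;
}
```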
However, I wonder whether this is a problem on MCUs. I see that more and more people use micropython and ulab on PCs, but do you expect someone to try to allocate GBs of RAM for an application on an MCU?
What we could still do is check the length in the ndarray_new_ndarray function and gracefully bail out if it is too large.
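To make the idea concrete, here is one possible shape of such a check. This is a minimal sketch rather than a patch against ulab; the helper name and the exact error path (e.g. raising MemoryError via MicroPython's mp_raise_msg) are assumptions:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: compute the total byte count for a shape, refusing to
 * proceed if either the element count or the byte count would overflow size_t. */
static bool ndarray_checked_nbytes(const size_t *shape, uint8_t ndim,
                                   uint8_t itemsize, size_t *nbytes) {
    size_t len = 1;
    for (uint8_t i = 0; i < ndim; i++) {
        if (shape[i] != 0 && len > SIZE_MAX / shape[i]) {
            return false; /* element count overflows size_t */
        }
        len *= shape[i];
    }
    if (itemsize != 0 && len > SIZE_MAX / itemsize) {
        return false; /* byte count overflows size_t */
    }
    *nbytes = len * itemsize;
    return true;
}
```

ndarray_new_ndarray could then raise a MemoryError (or ValueError) when the helper returns false, instead of handing a wrapped-around size to the allocator.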
**Describe the bug**
This was a case of "huh, I wonder" rather than something that actually affects a program I want to run. These behaviors are specific to the 64-bit unix port; different numbers would provoke similar behaviors on 32-bit embedded arm builds.
**Expected behavior**
MemoryError, as it's infeasible to allocate an array with 2**62 elements.
**Additional context**
Probably the arithmetic on the array size overflows the value type it's performed on, a common problem in C programs. Another case that gives an odd error (the actual allocation of 0x20000001800000028 bytes [a 66-bit value] probably gets turned into a requested allocation of 0x1800000028 bytes [the low 64 bits], which fails, but the failure message only prints 0x28 [the low 32 bits]):
The difference may be that not all of the low 64 bits of the size are zero in this case. Different behaviors would be seen on 32-bit platforms, etc.
Another case where allocation of 0x20000000000000028 bytes probably gets turned into an erroneously successful allocation of 40 bytes:
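For reference, a small C program reproduces the wrap-around arithmetic behind that number. The 2**62 element count, 8-byte itemsize, and 40-byte fixed overhead are inferred from the figures quoted above, not taken from ulab's source:

```c
#include <stddef.h>
#include <stdio.h>

int main(void) {
    size_t nelem = (size_t)1 << 62; /* 2**62 requested elements */
    size_t itemsize = 8;            /* 8 bytes per element */
    size_t header = 40;             /* fixed overhead implied by the numbers above */
    /* 2**62 * 8 + 40 == 0x20000000000000028, which does not fit in 64 bits;
     * only the low 64 bits survive, and those are just 40. */
    size_t request = nelem * itemsize + header;
    printf("requested bytes after wrap-around: %zu\n", request); /* prints 40 */
    return 0;
}
```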