I'm not sure whether this should be construed as a bug in the documentation or
in the implementation itself ... When allocating shared memory using
sharedctypes.RawArray (and potentially others), the memory is not always
initialised to zero, even though the docs explicitly say that RawArray returns
a zeroed array.
This seems to occur only when one array is deleted and a new one is allocated,
and only for relatively large arrays; in that case the new array contains
values from the previous one. I assume this is because, while memory on the
heap is initially zeroed, no re-zeroing occurs when a segment of the heap is
reused.
A simple example which reproduces it on my machine is as follows:
from multiprocessing import sharedctypes
a = sharedctypes.RawArray('i', 10000)
for i in range(10000):
    a[i] = i
print a[1000] ## 1000
del(a)
b = sharedctypes.RawArray('i', 10000)
print b[1000] ## 1000
I ran the commands manually in the console, which I suspect might be
necessary to give the Python garbage collector enough time to clean up a
before b is allocated.
I'm using multiprocessing 2.6.2.1 and Python 2.5.2 on Ubuntu 8.04 x64.
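In the meantime, one possible workaround is to overwrite the buffer with zero bytes yourself immediately after allocation, using the standard ctypes.memset call. The helper function below is my own sketch, not part of the library; it simply wraps RawArray so that a reused (and therefore dirty) heap segment can't leak old values:

```python
import ctypes
from multiprocessing import sharedctypes

def zeroed_raw_array(typecode, size):
    """Allocate a RawArray and explicitly zero it, guarding against
    reused heap segments that may still hold old data."""
    arr = sharedctypes.RawArray(typecode, size)
    # Overwrite the whole underlying buffer with zero bytes.
    ctypes.memset(arr, 0, ctypes.sizeof(arr))
    return arr

# Dirty an array, free it, then check a fresh one really is zeroed.
a = zeroed_raw_array('i', 10000)
for i in range(10000):
    a[i] = i
del a
b = zeroed_raw_array('i', 10000)
assert all(b[i] == 0 for i in range(10000))
```

This only papers over the symptom, of course; the fix proper would be for the allocator to re-zero reused blocks (or for the docs to stop promising zeroed memory).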
Original issue reported on code.google.com by david.ba...@gmail.com on 13 Mar 2010 at 12:39