soarpenguin / python-multiprocessing

Automatically exported from code.google.com/p/python-multiprocessing

Shared memory not being initialised to zero #25

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
I'm not sure whether this should be construed as a bug in the documentation or 
in the implementation itself. When allocating shared memory using 
sharedctypes.RawArray (and potentially others), the memory is not always 
initialised to zero, even though the docs explicitly say that RawArray returns 
a zeroed array.

This seems to occur only when one array is deleted and a new one allocated, 
and only for relatively large arrays. In that case the new array contains 
values from the previous one. I assume this is because, whilst the memory in 
the heap is initially zeroed, no re-zeroing occurs when a segment of the heap 
is reused.

A simple example which reproduces it on my machine is as follows:

from multiprocessing import sharedctypes

a = sharedctypes.RawArray('i', 10000)
for i in range(10000):
    a[i] = i

print a[1000]  ## 1000
del a          ## release the first array
b = sharedctypes.RawArray('i', 10000)
print b[1000]  ## 1000 -- expected 0; the old value leaks through

I ran the commands manually in the console, which I suspect might be 
necessary to give the Python garbage collector enough time to clean up a 
before b is allocated.

I'm using multiprocessing 2.6.2.1 and Python 2.5.2 on Ubuntu 8.04 x64.

Original issue reported on code.google.com by david.ba...@gmail.com on 13 Mar 2010 at 12:39