GYHHAHA opened 2 years ago
Test code usage: launch 4 processes and have each one perform a sum over the same NumPy array backed by `shared_memory`.
```python
import time
from functools import partial
from multiprocessing import Pool, shared_memory

import numpy as np


def f(shape, dtype, name, n):
    # Attach to the existing segment by name; this maps the same pages,
    # it does not copy the array into the worker.
    my_sm = shared_memory.SharedMemory(name=name)
    arr = np.ndarray(shape=shape, dtype=dtype, buffer=my_sm.buf)
    time.sleep(n)
    arr.sum()
    time.sleep(n)
    my_sm.close()


if __name__ == "__main__":
    p = Pool(4)
    arr = np.random.rand(int(5e8))
    shm = shared_memory.SharedMemory(create=True, size=arr.nbytes)
    shm_arr = np.ndarray(shape=arr.shape, dtype=arr.dtype, buffer=shm.buf)
    shm_arr[:] = arr[:]
    del arr
    f_ = partial(f, shm_arr.shape, shm_arr.dtype, shm.name)
    p.map(f_, [10, 10, 10, 10])
    # Release the segment once all workers are done.
    shm.close()
    shm.unlink()
```
Issue: the Windows system memory monitor reports a different figure than memory_profiler, and I believe the former is correct. It seems the shared array is counted once per process, so the total is overcounted. Thanks for your attention to this.
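The discrepancy comes down to which per-process metric is summed: RSS counts every resident page a process has mapped, including pages shared with sibling processes, while USS (unique set size) counts only the pages private to that process. A hedged sketch of inspecting both for the current process, assuming the third-party `psutil` package is installed:

```python
import os

import psutil  # third-party: pip install psutil

proc = psutil.Process(os.getpid())
info = proc.memory_full_info()
# rss counts every resident mapped page, including pages shared with
# other processes, so summing rss across 4 workers counts the shared
# array 4 times. uss counts only pages unique to this process.
print(f"rss={info.rss} bytes, uss={info.uss} bytes")
```

Summing USS (or PSS on Linux, which splits shared pages evenly among their mappers) across the workers gives a total much closer to what the system monitor reports.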
Has this issue been solved? I am stuck with the same problem. Is there a flag I can switch so that a shared variable is counted only once?
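For anyone verifying that the overcount is a measurement artifact rather than real duplication: the segment really is shared, not copied per process. A minimal stdlib-only sketch (no NumPy needed) showing that a write in a child process is visible to the parent through the same buffer:

```python
from multiprocessing import Process, shared_memory


def child(name):
    # Attach to the parent's segment by name and write into it;
    # both processes are mapping the same physical pages.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 42
    shm.close()


def demo():
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        shm.buf[0] = 0
        p = Process(target=child, args=(shm.name,))
        p.start()
        p.join()
        # The child's write is visible here because no copy was made.
        return shm.buf[0]
    finally:
        shm.close()
        shm.unlink()


if __name__ == "__main__":
    print(demo())
```

Since only one physical copy of the pages exists, any profiler that sums per-process RSS will report roughly N times the real footprint for N attached processes.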