After using heapy/guppy on my daemon application to try to locate memory
problems, I found that the futures work-item objects were being held as
references, which was preventing the data I had passed to my worker
function from being garbage collected.
Searching the web, I found an issue that was logged very recently
(last month) against the futures library in Python 3:
http://bugs.python.org/issue16284
That issue should provide all the details (plus the fix), which simply needs
to be backported. I made some of the changes locally just to test it out, and
my memory usage has noticeably improved.
What steps will reproduce the problem?
1. Use guppy/heapy and start a monitor on a daemon-like process
2. Have your daemon process (or just a while-True main thread) submit work
to a ThreadPoolExecutor
3. Submit a couple of jobs to warm up the memory, then reset the heap
reference point [hp.setref()]
4. Check the heap [hp.heap()]
5. Run another job through the executor and check the heap again
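The retention described in the steps above can also be observed without heapy, using a weak reference to the submitted data. This is a minimal sketch (the `Payload` class and `work` function are illustrative, not part of the library): on a library with the issue16284 fix applied, the payload is collectable once the job is done; on futures 2.1.3 the retained work item keeps it alive.

```python
import gc
import weakref
from concurrent.futures import ThreadPoolExecutor

class Payload:
    """Stand-in for the large data passed to the worker function."""
    def __init__(self):
        self.data = [0] * 100_000

def work(payload):
    # Return something small so the Future's result does not pin the payload.
    return len(payload.data)

executor = ThreadPoolExecutor(max_workers=1)
payload = Payload()
ref = weakref.ref(payload)

future = executor.submit(work, payload)
assert future.result() == 100_000

# Drop our own references; only the executor's internal work item
# could be keeping the payload alive now.
del payload, future
executor.shutdown(wait=True)
gc.collect()

print("payload released:", ref() is None)
```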
What is the expected output? What do you see instead?
I expected memory usage to return to its previous value after each job was
done, but it grew on each subsequent job instead. Digging in, the main
culprits turned out to be objects that were still referenced by the futures
work item.
What version of the product are you using? On what operating system?
futures 2.1.3
Please provide any additional information below.
Again, it looks like someone else has already found this very problem in the
concurrent.futures library in Python 3 and fixed it; you can find all the
details here: http://bugs.python.org/issue16284
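The mechanism behind the upstream fix can be sketched in a few lines. This is a simplified stand-in for the library's worker loop, not the actual code from thread.py: the finished work item is a local variable in the worker thread, so while the thread blocks waiting for the next job, that local keeps the previous job's data alive. Dropping the reference after running the item is the essence of the fix.

```python
import gc
import queue
import threading
import time
import weakref

class WorkItem:
    """Illustrative stand-in for the futures library's internal work item."""
    def __init__(self, payload):
        self.payload = payload
    def run(self):
        pass  # a real work item would invoke the user's function here

work_queue = queue.Queue()

def worker():
    # Simplified stand-in for the worker-thread loop.
    while True:
        work_item = work_queue.get(block=True)
        if work_item is None:
            return  # shutdown sentinel
        work_item.run()
        # The essence of the fix: drop the local reference before blocking
        # on the next get(); otherwise the finished item (and everything
        # it references) stays alive until the next job arrives.
        del work_item

thread = threading.Thread(target=worker)
thread.start()

item = WorkItem([0] * 100_000)
ref = weakref.ref(item)
work_queue.put(item)
del item

time.sleep(0.5)  # let the worker finish the item and block on the next get()
gc.collect()
print("work item released:", ref() is None)

work_queue.put(None)  # shut the worker down
thread.join()
```

Without the `del work_item` line, the final check reports the item as still alive even though its job completed long ago, which matches the growth pattern seen in the heap.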
Original issue reported on code.google.com by tmashin...@hotmail.com on 15 Nov 2012 at 7:16