Closed: westofpluto closed this issue 7 years ago
Have you been able to figure out, with the memory_profiler
library, what objects are leaking?
Do you run your code in multiple threads?
So far I do not see obvious issues with your pysnmp code, however I have a few suggestions:

- Look into SnmpEngine instance unloading to free up resources
- Use a single SnmpEngine instance for all your SNMP queries -- not only would it be more efficient, it might also work around possible bugs causing the memory leak

Can you please elaborate on this answer? How exactly would I do "SnmpEngine instance unloading to free up resources"? Do you just mean that I should call del snmpEngine after I use it, or is it more involved than that?
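For intuition, the behavior being described can be modeled with a plain-Python stand-in (FakeEngine below is a hypothetical illustration, not pysnmp's actual SnmpEngine): the engine accumulates per-target state in an internal cache, so merely finishing a query, or dropping one reference with del, does not release that state; an explicit unconfigure-style call does.

```python
class FakeEngine:
    """Stand-in for SnmpEngine: the real one caches per-target state internally."""

    def __init__(self):
        self.cache = {}

    def query(self, target):
        # Per-target state is retained in the cache after the query completes.
        self.cache.setdefault(target, object())
        return 'response'

    def unconfigure(self):
        # Analogous to dropping the engine's LCD caches.
        self.cache.clear()


engine = FakeEngine()
for host in ('host-a', 'host-b', 'host-c'):
    engine.query(host)

assert len(engine.cache) == 3   # grows with each distinct target polled
engine.unconfigure()
assert len(engine.cache) == 0   # explicit cleanup releases the cached state
```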
Yes we run the code in multiple threads.
Let me revise my original answer then. Assuming you are calling your SNMPGet code from a thread and you are not doing SNMP I/O asynchronously, my suggestion is to use something like this:
from pysnmp.hlapi import *
from pysnmp.hlapi.lcd import CommandGeneratorLcdConfigurator

class SNMPGet:
    """Instances of this class must be thread-local, not shared between threads."""

    def __init__(self):
        self._snmpEngine = SnmpEngine()

    def _internal_execute(self, tmp_object_identifiers):
        gen = getCmd(self._snmpEngine,
                     CommunityData('public'),
                     UdpTransportTarget(('demo.snmplabs.com', 161)),
                     ContextData(),
                     *[ObjectType(ObjectIdentity(oid))
                       for oid in tmp_object_identifiers])
        errorIndication, errorStatus, errorIndex, varBinds = next(gen)
        return errorIndication, errorStatus, errorIndex, varBinds

    def cleanup(self):
        """Running a SnmpEngine instance against an ever increasing set of
        targets may grow memory. To drop the SnmpEngine's caches you can
        periodically call this method. The downside is a slight drop in
        performance, as the hot caches will have to be repopulated.
        """
        CommandGeneratorLcdConfigurator().unconfigure(self._snmpEngine)
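The "must be thread-local" requirement in the docstring can be enforced with the standard library's threading.local, so each worker thread lazily gets its own engine. This is a generic sketch; the EngineHolder name and its factory argument are illustrative, not part of pysnmp (a plain object stands in for SnmpEngine in the demonstration):

```python
import threading

class EngineHolder:
    """Hypothetical helper: lazily creates one engine per thread."""

    def __init__(self, factory):
        self._local = threading.local()
        self._factory = factory   # e.g. pysnmp.hlapi.SnmpEngine

    def get(self):
        # threading.local attributes are visible only to the setting thread,
        # so each thread triggers its own factory call exactly once.
        if not hasattr(self._local, "engine"):
            self._local.engine = self._factory()
        return self._local.engine


# Demonstration with a plain object standing in for SnmpEngine:
holder = EngineHolder(object)
assert holder.get() is holder.get()          # same thread -> same engine

seen = []
worker = threading.Thread(target=lambda: seen.append(holder.get()))
worker.start()
worker.join()
assert seen[0] is not holder.get()           # other thread -> its own engine
```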
As a side note, if you are fetching many OIDs from many targets, async I/O may give you better performance compared to the multi-threaded approach.
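For reference, a single-engine asynchronous variant might look roughly like the sketch below, based on pysnmp 4.x's pysnmp.hlapi.asyncore callback API. The host list, community string, and OID are placeholders, and this outline is not exercised against a live agent, so treat it as an untested starting point rather than a definitive implementation:

```python
def async_poll(targets, oid='1.3.6.1.2.1.1.1.0'):
    """Queue one GET per target on a single SnmpEngine, then run the dispatcher."""
    # Imported inside the function so the sketch can be read (and the module
    # imported) without pysnmp installed.
    from pysnmp.hlapi.asyncore import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd)

    results = {}

    def cbFun(snmpEngine, sendRequestHandle, errorIndication,
              errorStatus, errorIndex, varBinds, cbCtx):
        # cbCtx carries the target host this response belongs to.
        results[cbCtx] = (errorIndication, errorStatus, varBinds)

    snmpEngine = SnmpEngine()
    for host in targets:
        getCmd(snmpEngine,
               CommunityData('public'),
               UdpTransportTarget((host, 161)),
               ContextData(),
               ObjectType(ObjectIdentity(oid)),
               cbFun=cbFun, cbCtx=host)

    # All queued requests share one engine and one event loop pass.
    snmpEngine.transportDispatcher.runDispatcher()
    return results
```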
If the leak shows up with the later pysnmp versions -- please reopen this issue. Thank you!
We are using v4.3.2 and we are seeing memory steadily increase when using AsynCommandGenerator. The code is like what is shown below. This is an SNMPGet class that we create and run for each set of OIDs we have. If we have a long-running process that does this over and over, that process eventually consumes all the machine memory. We are using the memory_profiler library and the @profile decorator as described here: https://pypi.python.org/pypi/memory_profiler
Are we using AsynCommandGenerator incorrectly? How can we make the memory growth go away in a long-running process?
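For anyone reproducing this measurement, the memory_profiler setup mentioned above is just a decorator that prints a line-by-line memory report when the function runs. A minimal self-contained sketch (poll_once is a placeholder for one polling round, not the reporter's actual code, and a no-op fallback is included so the snippet runs even where memory_profiler is not installed):

```python
try:
    from memory_profiler import profile   # pip install memory_profiler
except ImportError:
    def profile(func):                    # no-op fallback for this sketch
        return func

@profile
def poll_once():
    # Placeholder for one round of SNMP polling; allocates a dict so the
    # line-by-line memory report has something to show.
    results = {i: object() for i in range(100000)}
    return len(results)

print(poll_once())
```

Running the decorated function repeatedly from a long-lived process and comparing successive reports is what reveals whether memory is retained between rounds.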