I have noticed a memory leak in Netscan. For the moment, I have not managed to find where it occurs.
As a consequence, on a very large scope (e.g. more than 150,000,000 ports), the Netscan process is killed by the oom_reaper after a while.

How to reproduce?
Launch a Netscan portscan on a large scope and monitor the RAM: you will notice it gradually increasing.
Env.: Docker or from scripts, Debian, Elasticsearch enabled.

Hints
With pdb and objgraph, taking two snapshots, we can see that tuple objects are created without being destroyed (a minimal sketch of the snapshot procedure follows the listings below):

Snapshot 1:
function 10489
dict 7023
tuple 5639
cell 5024
weakref 2030
type 1359
wrapper_descriptor 1303
getset_descriptor 1194
method_descriptor 1162
builtin_function_or_method 1126
list 540
ModuleSpec 536
module 535
member_descriptor 498
SourceFileLoader 466
property 345
_UnionGenericAlias 336
classmethod 336
set 265
_GenericAlias 170

Snapshot 2 (same command, waiting longer):
function 10489
dict 7023
tuple 5812
cell 5024
weakref 2030
type 1359
wrapper_descriptor 1303
getset_descriptor 1194
method_descriptor 1162
builtin_function_or_method 1126
list 541
ModuleSpec 536
module 535
member_descriptor 498
SourceFileLoader 466
property 345
_UnionGenericAlias 336
classmethod 336
set 265
_GenericAlias 170
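
For reference, the snapshots above can be reproduced with something like the following minimal sketch, assuming the objgraph package is installed and that this runs inside the scanning process (where exactly to attach pdb in Netscan is not shown here):

```python
# Minimal sketch, assuming objgraph is installed and this code runs inside
# the scanning process (e.g. from a pdb.set_trace() breakpoint).
import time
import objgraph

objgraph.show_most_common_types(limit=20)   # snapshot 1
time.sleep(300)                             # let the scan keep running
objgraph.show_most_common_types(limit=20)   # snapshot 2

# show_growth() prints only the types whose count increased between two
# calls, which points at the growing tuples directly.
objgraph.show_growth()                      # record a baseline
time.sleep(300)
objgraph.show_growth()                      # print only the deltas
```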
This is complex to analyze; there is a large variety of tuple usage ...
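
Given that variety, one way to narrow the search is to sample a handful of live tuples and print what is keeping each of them alive; a sketch under the same assumptions (objgraph installed, run from a breakpoint in the scanning process):

```python
# Sketch: sample a few live tuples and print the chain of referrers that
# keeps each of them alive, back to the nearest module-level owner.
import gc
import random
import objgraph

gc.collect()                                   # drop collectable garbage first
tuples = objgraph.by_type('tuple')             # every tuple still alive
for obj in random.sample(tuples, min(5, len(tuples))):
    chain = objgraph.find_backref_chain(obj, objgraph.is_proper_module)
    print(' <- '.join(type(o).__name__ for o in chain))
```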
Edit: I think there may be "something" with the DB worker queue management in db.py. I think the queues / objects / processes in the multiprocessing manager keep growing over time without any cleanup. I am not sure about that; I will continue to investigate.
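
To make that suspicion concrete, the snippet below is a hypothetical, stripped-down reconstruction of the kind of pattern suspected (names and numbers are invented; the real db.py is not shown): if producers outrun the DB worker on an unbounded manager queue, every pending result stays alive in the manager process until the scan ends. Bounding the queue or draining it periodically would be the usual mitigations.

```python
# Hypothetical reconstruction of the suspected pattern: results are pushed
# onto an unbounded multiprocessing manager queue faster than the DB worker
# drains them, so the backlog accumulates in the manager process.
import multiprocessing
import time


def slow_db_worker(queue):
    # Simulated DB writer that is slower than the producer.
    while True:
        item = queue.get()
        if item is None:              # sentinel: stop the worker
            break
        time.sleep(0.001)             # pretend to index the result


def main():
    manager = multiprocessing.Manager()
    # Unbounded: the backlog can grow without limit. A bounded queue,
    # e.g. manager.Queue(maxsize=10000), would make producers block
    # instead of piling results up in memory.
    queue = manager.Queue()

    worker = multiprocessing.Process(target=slow_db_worker, args=(queue,))
    worker.start()

    # Producer: pushes fake results much faster than the worker consumes.
    for port in range(20000):
        queue.put(("192.0.2.1", port, "open"))
        if port % 5000 == 0:
            print("backlog:", queue.qsize())   # keeps growing

    queue.put(None)                   # tell the worker to finish
    worker.join()


if __name__ == "__main__":
    main()
```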