hegusung / netscan

Network scanner
MIT License

Memory leak / overhead #47

Open cosad3s opened 1 year ago

cosad3s commented 1 year ago

I have noticed a memory leak in Netscan. For the moment, I have not managed to find where it occurs.

As a consequence, on a very large scope (e.g. >150,000,000 ports), the Netscan process is killed by the oom_reaper after a while:

[79022.153825] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init.scope,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/session-55.scope,task=python3,pid=3839128,uid=0
[79022.154057] Out of memory: Killed process 3839128 (python3) total-vm:5686720kB, anon-rss:5448640kB, file-rss:0kB, shmem-rss:8kB, UID:0 pgtables:10748kB oom_score_adj:0
[79022.555182] oom_reaper: reaped process 3839128 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:8kB

How to reproduce?

Launch a Netscan portscan on a large scope and monitor the RAM: you will notice it gradually increases (see the monitoring sketch below).
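For the monitoring part, here is a minimal sketch I used; it assumes psutil is installed (pip install psutil), which is not part of Netscan itself:

# monitor_rss.py - minimal sketch, logs the RSS of the scanner process over time
import sys
import time

import psutil

def monitor(pid, interval=10):
    """Print the resident set size of the target process every `interval` seconds."""
    proc = psutil.Process(pid)
    while proc.is_running():
        rss_mb = proc.memory_info().rss / (1024 * 1024)
        print(f"{time.strftime('%H:%M:%S')} RSS: {rss_mb:.1f} MiB")
        time.sleep(interval)

if __name__ == "__main__":
    monitor(int(sys.argv[1]))  # usage: python3 monitor_rss.py <netscan pid>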

Environment: Docker or from the scripts, Debian, Elasticsearch enabled.

Hints

Using pdb and objgraph to take two snapshots, we can see that tuple objects are created without being destroyed:

python3 -m pdb ./scripts/portscan.py -H ./ips.txt -p1-65535 -w 25
(Pdb) r
# wait
(Pdb) import objgraph
(Pdb) objgraph.show_most_common_types(limit=20)

Snapshot 1:

function                   10489
dict                       7023
tuple                      5639
cell                       5024
weakref                    2030
type                       1359
wrapper_descriptor         1303
getset_descriptor          1194
method_descriptor          1162
builtin_function_or_method 1126
list                       540
ModuleSpec                 536
module                     535
member_descriptor          498
SourceFileLoader           466
property                   345
_UnionGenericAlias         336
classmethod                336
set                        265
_GenericAlias              170

Snapshot 2 (same command, waiting longer):

function                   10489
dict                       7023
tuple                      5812
cell                       5024
weakref                    2030
type                       1359
wrapper_descriptor         1303
getset_descriptor          1194
method_descriptor          1162
builtin_function_or_method 1126
list                       541
ModuleSpec                 536
module                     535
member_descriptor          498
SourceFileLoader           466
property                   345
_UnionGenericAlias         336
classmethod                336
set                        265
_GenericAlias              170
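
For convenience, objgraph can also print the delta directly between two calls, which avoids diffing the snapshots by hand (hypothetical session, same idea as above):

(Pdb) import objgraph
(Pdb) objgraph.show_growth(limit=20)   # first call: records the baseline counts
# wait
(Pdb) objgraph.show_growth(limit=20)   # second call: only types whose count grew are printed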

This is complex to analyze, since tuples are used in a wide variety of places...
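One way to narrow it down could be to sample some of the live tuples and look at what keeps them alive. A hedged sketch, still from within pdb (show_backrefs needs graphviz to render the image, and the file name is arbitrary):

(Pdb) import objgraph, random
(Pdb) tuples = objgraph.by_type('tuple')
(Pdb) objgraph.show_backrefs(random.sample(tuples, 5), max_depth=4, filename='tuple_backrefs.png')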

Edit: I think there may be "something" with the DB worker queue management in db.py. The queue / objects / processes in the multiprocessing manager seem to grow over time without any cleanup. I am not sure about that; I will continue to investigate.
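To illustrate the suspected pattern (purely a hypothetical sketch, not Netscan's actual db.py code): if scan results are put on the queue faster than the DB worker consumes them, the queue itself grows without bound. Bounding the queue would apply backpressure on the producers instead:

# hypothetical sketch of the suspected unbounded-queue pattern, not actual db.py code
import multiprocessing

def db_worker(queue):
    # consumer: drains the queue and would write each entry to the database
    while True:
        item = queue.get()
        if item is None:          # sentinel: shut down cleanly
            break
        # ... send item to Elasticsearch here ...

if __name__ == "__main__":
    # maxsize bounds the queue so producers block instead of growing RAM without limit
    queue = multiprocessing.Queue(maxsize=10000)
    worker = multiprocessing.Process(target=db_worker, args=(queue,))
    worker.start()

    for port in range(1, 65536):          # producer: scan results
        queue.put(("192.0.2.1", port))    # blocks when the queue is full

    queue.put(None)                       # tell the worker to stop
    worker.join()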

hegusung commented 11 months ago

Thanks for the feedback, I will investigate this.