secdev / scapy

Scapy: the Python-based interactive packet manipulation program & library.
https://scapy.net
GNU General Public License v2.0

Zombie tcpdump Process After Using sniff() on pcap with Filter #4512

Open romanhu opened 2 weeks ago

romanhu commented 2 weeks ago

Brief description

When using Scapy's sniff() function to read from a pcap file, a zombie tcpdump process is left behind if the filter argument is set to a non-None value. This issue occurs consistently when a BPF filter is applied during packet sniffing, leading to potential resource leakage and system instability due to the accumulation of zombie processes.

Scapy version

2.5

Python version

3.11

Operating system

Linux Debian 6.1.99-1

Additional environment information

No response

How to reproduce

  1. Start scapy interactive console
  2. Call sniff(offline="/path/to/any.pcap", filter="")
  3. In another terminal execute ps x | grep tcpdump (a scripted version of these steps is sketched after this list)
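
Below is a minimal, scripted version of the steps above, in case it is easier to reproduce outside the interactive console. The pcap path "capture.pcap" is a placeholder for any capture file, and the second half is simply the ps x | grep tcpdump check from step 3 expressed in Python.

import subprocess

from scapy.all import sniff

# Step 2: reading a pcap with a non-None filter makes Scapy spawn a tcpdump child.
pkts = sniff(offline="capture.pcap", filter="")
print(f"sniff() returned {len(pkts)} packets")

# Step 3: equivalent of running `ps x | grep tcpdump` in another terminal.
ps = subprocess.run(["ps", "x"], capture_output=True, text=True)
for line in ps.stdout.splitlines():
    if "tcpdump" in line and "defunct" in line:
        print("zombie tcpdump process:", line)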

Actual result

The ps command lists the tcpdump process with a <defunct> status, indicating that the process has become a zombie. This status means the process has completed execution but remains in the process table because its parent process has not yet read its exit status.

Expected result

ps should not list any tcpdump processes once sniff() has finished reading the pcap file

Related resources

No response

p-l- commented 2 weeks ago

Thanks for this report. You're right. Also, Python reports this (at least Python 3.12):

/usr/lib/python3.12/subprocess.py:1127: ResourceWarning: subprocess 123456 is still running
  _warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback

p-l- commented 2 weeks ago

One option to patch this would be to prevent tcpdump() from being used with getfd=True unless it is wrapped in a with pattern, so the spawned process always gets cleaned up.
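
For illustration only, this is roughly what such a with pattern could look like; it is not Scapy's actual implementation, and the helper name tcpdump_fd, the pcap path and the tcpdump arguments are assumptions made up for this sketch. The point is that the context manager always closes the pipe and wait()s on the child, so no <defunct> entry is left behind.

import subprocess
from contextlib import contextmanager

@contextmanager
def tcpdump_fd(pcap_path, bpf_filter):
    # Spawn tcpdump to apply the BPF filter to a pcap and stream the result to stdout.
    proc = subprocess.Popen(
        ["tcpdump", "-r", pcap_path, "-w", "-", bpf_filter],
        stdout=subprocess.PIPE,
    )
    try:
        yield proc.stdout  # caller reads the filtered packets from this fd
    finally:
        proc.stdout.close()
        proc.wait()  # reap the child so it never shows up as a zombie

# Usage: the file descriptor is only valid inside the with block,
# and the tcpdump process is reaped when the block exits.
with tcpdump_fd("capture.pcap", "tcp") as fd:
    data = fd.read()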