damonlynch / showinfilemanager

Platform independent Python module to open the system file manager and select files in it
MIT License

`_launch_file_manager` doesn't clean up its processes #4

Open Lx opened 1 year ago

Lx commented 1 year ago

I have warnings and tracemalloc enabled in my project, which causes _launch_file_manager to emit this message on stderr:

/.../lib/python3.10/subprocess.py:1067: ResourceWarning: subprocess 40902 is still running
  _warn("subprocess %s is still running" % self.pid,
Object allocated at (most recent call last):
  File "/.../lib/python3.10/site-packages/showinfm/showinfm.py", lineno 425
    proc = subprocess.Popen(args)

From the Python v3.6+ docs for subprocess.Popen:

Changed in version 3.6: Popen destructor now emits a ResourceWarning warning if the child process is still running.

The docs give no information on why; thankfully though, the relevant source code in the Popen destructor sheds a little more light:

class Popen:
    ...
    def __del__(self, _maxsize=sys.maxsize, _warn=warnings.warn):
        ...
        if self.returncode is None:
            # Not reading subprocess exit status creates a zombie process which
            # is only destroyed at the parent python process exit
            _warn("subprocess %s is still running" % self.pid,
                  ResourceWarning, source=self)
        ...

Indeed, in my case, an open child process hangs around until my main Python process exits.
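
For anyone wanting to see the same behaviour outside showinfm, here is a minimal standalone sketch (the sleep command is just a stand-in for a file manager process that outlives the script) that triggers the identical warning:

import gc
import subprocess
import tracemalloc
import warnings

# ResourceWarning is ignored by default outside dev mode, so enable it,
# and start tracemalloc so the "Object allocated at" traceback is shown.
warnings.simplefilter("always", ResourceWarning)
tracemalloc.start()

# Start a child that keeps running, then drop the only reference to it.
# Popen.__del__ sees returncode is None and emits the ResourceWarning.
proc = subprocess.Popen(["sleep", "60"])
del proc
gc.collect()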


showinfm should probably clean up its child processes somehow.

Lx commented 1 year ago

If you're okay with bringing in one small dependency (detach3k), then swapping out the subprocess.Popen call here:

https://github.com/damonlynch/showinfilemanager/blob/f9cd5e0273d8924e4faa3c80d06f9a512b6cf78a/src/showinfm/showinfm.py#L425

with detach.call resolves this perfectly for me:

proc = detach.call(args)

Looking at the source for detach.call, it just forks, spawns (using subprocess.Popen with null stdin/stdout/stderr streams), and _exits the fork. Popen would probably still complain at destruction if not for the immediate _exit, but hey, it stops the warnings from being emitted, and I have no child processes hanging around (presumably thanks to the null I/O streams).
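
In other words, the mechanism boils down to something like this stdlib-only sketch (simplified and not the actual detach.call code; detached_call is a made-up name, and the use of os.fork makes it POSIX-only):

import os
import subprocess

def detached_call(args):
    pid = os.fork()
    if pid == 0:
        # Intermediate child: spawn the real command with null stdio,
        # then exit immediately without running Python cleanup. The
        # grandchild is orphaned and gets reaped by init when it exits.
        subprocess.Popen(
            args,
            stdin=subprocess.DEVNULL,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        os._exit(0)
    # Parent: reap the short-lived intermediate child straight away.
    os.waitpid(pid, 0)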


detach.call docs for your interest:

call(args, stdout=None, stderr=None, stdin=None, daemonize=False, preexec_fn=None, shell=False, cwd=None, env=None)

Run an external command in a separate process and detach it from the current process. Excepting stdout, stderr, and stdin all file descriptors are closed after forking. If daemonize is True then the parent process exits. All stdio is redirected to os.devnull unless specified. The preexec_fn, shell, cwd, and env parameters are the same as their Popen counterparts. Return the PID of the child process if not daemonized.

damonlynch commented 1 year ago

Hello, thanks for your research into this issue. Unfortunately, I developed a serious typing injury while coding early this year (I'm dictating this using a voice recognition program). Avoiding doing any more coding until I can recover has not proved possible, but nonetheless, I would like very much to avoid doing it unless it is absolutely necessary. Is it possible for you to investigate making the change using the external library you suggest, and then testing it on all platforms (Windows, Linux, and macOS)?

jannschu commented 1 year ago

The mentioned library relies on os.fork(), which is not supported on Windows (see the os.fork() documentation). The library uses a common trick, a double fork, to create an orphan process that is then handled/reaped by the init process.

The warning can be ignored if it is produced upon terminating the main process, in which case all child processes are eventually reaped and no resources leak. But currently it seems there is indeed a resource leak, because zombie processes accumulate until the parent exits.

After a bit of research I think this solution would work:

import subprocess

def run(
    args, shell=False, close_fds=True, stdin=None, stdout=None, stderr=None, **kwargs
):
    process = subprocess.Popen(
        args,
        shell=shell,
        close_fds=close_fds,
        stdin=stdin,
        stdout=stdout,
        stderr=stderr,
        **kwargs,
    )
    # We keep a reference to the process to be able to reap it.
    # This avoids a memory leak (zombie processes) for processes that
    # terminate before their parent.
    run.processes.append(process)
    run.processes = [p for p in run.processes if p.poll() is None]
    return process

run.processes = []

This should avoid the leak, but there might still be harmless warnings upon termination. Unless a lot of processes are started this way, polling them all every time should not be a performance issue.

Maybe that can be directly implemented inside _launch_file_manager because it is a bit specific.
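
For illustration, usage inside _launch_file_manager would then look roughly like this (the xdg-open command line is just a placeholder for whatever arguments the function currently builds):

# Placeholder arguments; _launch_file_manager would pass its real command line.
proc = run(["xdg-open", "/some/path"])
# proc is still a normal subprocess.Popen object, so callers can poll or
# wait on it if they want to.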

jannschu commented 1 year ago

Although a bit hacky, we could even silence the warning upon interpreter termination (adding to my previous snippet):

import atexit

def _silence_zombie_process_warning():
    for process in run.processes:
        # monkey patch to silence Popen warning
        process.returncode = 0

atexit.register(_silence_zombie_process_warning)

By the way, I think P_DETACH is identical to P_NOWAIT and would give us no warning but still a zombie process. Starting the process in a thread, or waiting for it in one, should work, I think.
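
For completeness, the thread idea could look something like this sketch (run_and_reap is a hypothetical helper, not existing showinfm code):

import subprocess
import threading

def run_and_reap(args):
    process = subprocess.Popen(args)
    # The daemon thread blocks in wait(), so the child is reaped as soon
    # as it exits and never lingers as a zombie; being a daemon thread,
    # it does not keep the interpreter alive at shutdown.
    threading.Thread(target=process.wait, daemon=True).start()
    return process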

jannschu commented 1 year ago

@damonlynch I read on your website (very impressive, by the way!) that your injury might last a while. Would you be interested in a pull request with this? I could test it on macOS and Windows.

ADevAJ commented 9 months ago

Any updates on this issue?

apiszcz commented 9 months ago

If the call returned an object to control the process (including termination, which would close the window), that would be of interest.
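
For what it's worth, the Popen handle that _launch_file_manager already creates would give that kind of control if it were returned to the caller. A rough sketch of what such an API might look like (open_in_file_manager is a hypothetical name, not showinfm's current interface):

import subprocess

def open_in_file_manager(args):
    # Hypothetical sketch: hand the Popen object back so the caller can
    # poll, wait on, or terminate the launched process.
    return subprocess.Popen(args)

# Caller side:
# proc = open_in_file_manager(["xdg-open", "/some/path"])
# proc.terminate()  # whether this actually closes the window depends on the file manager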