python / cpython

The Python programming language
https://www.python.org

Allow registering at-fork handlers #60704

Closed tiran closed 7 years ago

tiran commented 11 years ago
BPO 16500
Nosy @malemburg, @Yhg1s, @birkenfeld, @gpshead, @jcea, @amauryfa, @pitrou, @vstinner, @tiran, @asvetlov, @socketpair, @serhiy-storchaka, @1st1, @ajdavis
PRs
  • python/cpython#1715
  • python/cpython#1834
  • python/cpython#1841
  • python/cpython#1843
  • python/cpython#3516
  • python/cpython#3519
Files
  • pure-python-atfork.patch

Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields: ```python assignee = 'https://github.com/gpshead' closed_at = created_at = labels = ['extension-modules', 'interpreter-core', 'type-feature', '3.7'] title = 'Allow registering at-fork handlers' updated_at = user = 'https://github.com/tiran' ``` bugs.python.org fields: ```python activity = actor = 'vstinner' assignee = 'gregory.p.smith' closed = True closed_date = closer = 'gregory.p.smith' components = ['Extension Modules', 'Interpreter Core'] creation = creator = 'christian.heimes' dependencies = [] files = ['28044'] hgrepos = [] issue_num = 16500 keywords = ['patch', 'needs review'] message_count = 58.0 messages = ['175878', '175892', '175967', '175972', '175973', '175974', '175975', '175980', '175997', '176002', '176004', '176019', '176020', '176022', '179838', '179888', '179927', '179945', '179949', '200767', '200774', '200777', '200778', '200779', '200780', '200789', '200797', '200826', '200843', '201892', '201899', '201909', '201912', '266590', '294141', '294145', '294146', '294147', '294148', '294149', '294159', '294597', '294629', '294633', '294634', '294635', '294639', '294640', '294641', '294642', '294651', '294702', '294703', '294705', '294711', '294724', '301983', '302006'] nosy_count = 23.0 nosy_names = ['lemburg', 'twouters', 'georg.brandl', 'gregory.p.smith', 'jcea', 'amaury.forgeotdarc', 'pitrou', 'vstinner', 'christian.heimes', 'grahamd', 'Arfrever', 'ionelmc', 'asvetlov', 'neologix', 'socketpair', 'sbt', 'aliles', 'serhiy.storchaka', 'yselivanov', 'DLitz', 'emptysquare', 'xupeng', 'rpcope1'] pr_nums = ['1715', '1834', '1841', '1843', '3516', '3519'] priority = 'normal' resolution = 'fixed' stage = 'resolved' status = 'closed' superseder = None type = 'enhancement' url = 'https://bugs.python.org/issue16500' versions = ['Python 3.7'] ```

    tiran commented 11 years ago

    I propose the addition of an 'afterfork' module. The module shall fulfill a task similar to that of the 'atexit' module, except that it handles process forks instead of process shutdown.

    The 'afterfork' module shall allow libraries to register callbacks that are executed after fork() inside the child process, as soon as possible. Python already has a function that must be called by C code: PyOS_AfterFork(). The 'afterfork' callbacks are called as the last step of PyOS_AfterFork().

    Use case example: The tempfile module has a specialized RNG that re-initializes itself after fork() by comparing os.getpid() to an instance variable every time the RNG is accessed. The check can be replaced with an afterfork callback.

    Open questions: How should the afterfork() module handle exceptions that are raised by callbacks?

    Implementation: I'm going to use as much code from atexitmodule.c as possible. I'm going to copy common code to a template file and include the template from atexitmodule.c and afterforkmodule.c with some preprocessor tricks.
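
    A minimal pure-Python sketch of what such a registry could look like (names here are placeholders from this proposal, not an existing API; the open question about exceptions is ignored):

        # afterfork.py -- illustrative sketch only; the module and helper names
        # follow the proposal above and do not exist yet.
        _callbacks = []

        def register(func, *args, **kwargs):
            """Register func to run in the child process right after fork()."""
            _callbacks.append((func, args, kwargs))
            return func

        def _run_afterfork_callbacks():
            # Would be invoked as the last step of PyOS_AfterFork() in the child.
            for func, args, kwargs in _callbacks:
                func(*args, **kwargs)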

    e26428b1-70cf-4e9f-ae3c-9ef0478633fb commented 11 years ago

    pthread_atfork() allows the registering of three types of callbacks:

    1) prepare callbacks, which are called before the fork,
    2) parent callbacks, which are called in the parent after the fork,
    3) child callbacks, which are called in the child after the fork.

    I think all three should be supported.

    I also think that a recursive "fork lock" should be introduced which is held during the fork. This can be acquired around critical sections during which forks must not occur.
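
    For illustration, a sketch of how such a fork lock could be used (the names are hypothetical; the attached patch may differ):

        import os
        import threading

        fork_lock = threading.RLock()   # recursive, so nested critical sections work

        def locked_fork():
            # fork() only happens while the lock is held, so a critical section
            # that has acquired fork_lock cannot be interleaved with a fork.
            with fork_lock:
                return os.fork()

        # library code protecting a critical section:
        #     with fork_lock:
        #         r, w = os.pipe()
        #         os.set_inheritable(w, True)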

    This is more or less a duplicate of bpo-6923. See also bpo-6721.

    tiran commented 11 years ago

    Thanks Richard!

    My first reaction was YAGNI but after I read the two tickets I now understand the need for three different hooks. I suggest that we implement our own hooks like the http://linux.die.net/man/3/pthread_atfork function, especially the order of function calls:

    The parent and child fork handlers shall be called in the order in which they were established by calls to pthread_atfork(). The prepare fork handlers shall be called in the opposite order.

    I'd like to focus on three hooks + the Python API and leave the usage of the hooks to other developers.

    Proposal:

    I'm not yet sure how to implement the Python API. I could either implement six methods:

      atfork.register_before_fork(callable, *args, **kwargs)
      atfork.register_after_fork_child(callable, *args, **kwargs)
      atfork.register_after_fork_parent(callable, *args, **kwargs)
      atfork.unregister_before_fork(callable)
      atfork.unregister_after_fork_child(callable)
      atfork.unregister_after_fork_parent(callable)

    or two:

      atfork.register(prepare=None, parent=None, child=None, *args, **kwargs)
      atfork.unregister(prepare=None, parent=None, child=None)
    e26428b1-70cf-4e9f-ae3c-9ef0478633fb commented 11 years ago

    Note that Gregory P. Smith has written

    http://code.google.com/p/python-atfork/

    I also started a pure python patch but did not get round to posting it. (It also implements the fork lock idea.) I'll attach it here.

    How do you intend to handle the propagation of exceptions? I decided that after

        atfork.atfork(prepare1, parent1, child1)
        atfork.atfork(prepare2, parent2, child2)
        ...
        atfork.atfork(prepareN, parentN, childN)

    calling "pid = os.fork()" should be equivalent to

        pid = None
        prepareN()
        try:
            ...
                prepare2()
                try:
                    prepare1()
                    try:
                        pid = posix.fork()
                    finally:
                        parent1() if pid != 0 else child1()
                finally:
                    parent2() if pid != 0 else child2()
            ...
        finally:
            parentN() if pid != 0 else childN()
    gpshead commented 11 years ago

    I would not allow exceptions to propagate. No caller is expecting them.

    gpshead commented 11 years ago

    pthread_atfork() cannot be used to implement this. Another non-python thread started by a C extension module or the C application that is embedding Python within it is always free to call fork() on its own with zero knowledge that Python even exists at all. It's guaranteed that fork will be called while the Python GIL is held in this situation which would cause any pre-fork thing registered by Python to deadlock.

    At best, this can be implemented manually, as we do with some of the before- and after-fork stuff today, but it must come with the caveat that it cannot guarantee these things are actually called around anything other than direct os.fork() calls from Python code, or extremely Python-aware C extension modules that may call fork() (very rare; most C & C++ libraries an extension module may be using assume that they've got the run of the house). I.e.: this problem is unsolvable unless you control 100% of the code being used by your entire user application.


    tiran commented 11 years ago

    Meh! Exception handling takes all the fun out of the API and is going to make it MUCH more complicated. pthread_atfork() ignores error handling for a good reason. It's going to be hard to get it right. :/

    IFF we are going to walk the hard and rocky road of exception handling, then we are going to need at least four hooks and a register function that takes four callables as arguments: register(prepare, error, parent, child). Each prepare() call pushes an error handler onto a stack. In case of an exception in a prepare handler, the error stack is popped until all error handlers have been called. This approach allows a prepare handler to actually prevent a fork() call from succeeding.

    The parent and child hooks are always called no matter what. Exceptions are recorded and a warning is emitted when at least one hook fails. We might raise an exception, but it has to be a special exception that carries information about whether fork() succeeded, whether the code is running in the child or the parent, and the child's PID.

    I fear it's going to be *really* hard to get everything right.

    Gregory made a good point, too. We can't rely on pthread_atfork() as we are unable to predict how third-party code is using fork(): "Take cover, dead locks ahead!" :) A cooperative design of the C API with three functions is my preferred way, too. PyOS_AfterForkParent() should take an argument to signal a failed fork() call.

    9a91b5d9-3571-4515-baf6-e38227828e99 commented 11 years ago

    On 2012/11/20, Christian Heimes wrote:

    IFF we are going to walk the hard and rocky road of exception handling, then we are going to need at least four hooks and a register function that takes four callables as arguments: register(prepare, error, parent, child). Each prepare() call pushes an error handler onto a stack. In case of an exception in a prepare handler, the error stack is popped until all error handlers have been called. This approach allows a prepare handler to actually prevent a fork() call from succeeding.

    FWIW, PyPy already has a notion of fork hooks: https://bitbucket.org/pypy/pypy/src/b4e4017909bac6c102fbc883ac8d2e42fa41553b/pypy/module/posix/interp_posix.py?at=default#cl-682

    Various subsystems (threads cleanup, import lock, threading.local...) register their hook functions.

    You may want to experiment from there :-) A new "atfork" module would be easy to implement.

    e26428b1-70cf-4e9f-ae3c-9ef0478633fb commented 11 years ago

    IFF we are going to walk the hard and rocky road of exception handling, then we are going to need at least four hooks and a register function that takes four callables as arguments: register(prepare, error, parent, child). Each prepare() call pushes an error handler onto a stack. In case of an exception in a prepare handler, the error stack is popped until all error handlers have been called. This approach allows a prepare handler to actually prevent a fork() call from succeeding.

    I think there are two main options if a prepare callback fails:

    1) The fork should not occur and the exception should be raised
    2) The fork should occur and the exception should only be printed

    I favour option 1 since, if they want, users can always wrap their prepare callbacks with

        try:
            ...
        except:
            sys.excepthook(*sys.exc_info())

    With option 1 I don't see why error callbacks are necessary. Just unwind the stack of imaginary try...finally... clauses and let any exceptions propagate out using exception chaining if necessary. This is what pure-python-atfork.patch does. Note, however, that if the fork succeeds then any subsequent exception is only printed.

    tiran commented 11 years ago

    Amaury: PyPy doesn't handle exceptions in hooks. Is there a reason why PyPy goes for the simplistic approach?

    Richard: An error callback has the benefit that the API can notify the hooks that some error has occurred. We may not need it, though.

    I can think of six exception scenarios that must be handled:

    (1) exception in a prepare hook -> don't call the remaining prepare hooks, run all related parent hooks in FILO order, prevent fork() call
    (2) exception in parent hook during the handling of (1) -> print exception, continue with next parent hook
    (3) exception in fork() call -> run parent hooks in FILO order
    (4) exception in parent hook during the handling of (3) -> print exception, continue with next parent hook
    (5) exception in parent hook when fork() has succeeded -> print exception, continue with next parent hook
    (6) exception in child hook when fork() has succeeded -> print exception, continue with next child hook

    Do you agree?
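
    A rough sketch of a fork wrapper following these six rules (illustrative only; the registry layout and names are invented):

        import os
        import sys

        _hooks = []   # hypothetical registry of (prepare, parent, child) triples

        def _report(callbacks):
            # scenarios (2), (4), (5) and (6): print the exception, keep going
            for cb in callbacks:
                try:
                    cb()
                except BaseException:
                    sys.excepthook(*sys.exc_info())

        def fork_with_hooks():
            prepared = []                        # triples whose prepare hook ran
            try:
                for triple in reversed(_hooks):  # prepare hooks in FILO order
                    triple[0]()                  # scenario (1): abort the fork
                    prepared.append(triple)
                pid = os.fork()                  # scenario (3): may raise OSError
            except BaseException:
                _report(t[1] for t in prepared)  # run the related parent hooks
                raise
            _report(t[2] if pid == 0 else t[1] for t in _hooks)  # (5) and (6)
            return pid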

    amauryfa commented 11 years ago

    PyPy doesn't handle exceptions in hooks. Is there a reason why PyPy goes for the simplistic approach?

    Probably because nobody thought about it. At the moment, there is only one 'before' hook and one 'parent' hook (so the FILO order is simple), and three 'child' hooks. And if the _PyImport_ReleaseLock call fails, you'd better not ignore the error...

    gpshead commented 11 years ago

    I think you are solving a non-problem if you want to expose exceptions from such hooks. Nobody needs it.

    pitrou commented 11 years ago

    I think you are solving a non-problem if you want to expose exceptions from such hooks. Nobody needs it.

    Agreed.

    tiran commented 11 years ago

    Your suggestion is that the hooks are called as:

    for hook in hooks:
        try:
            hook()
        except:
            try:
                sys.excepthook(*sys.exc_info())
            except:
                pass

    That makes the implementation much easier. :)

    vstinner commented 11 years ago

    "The tempfile module has a specialized RNG that re-initialized the RNG after fork() by comparing os.getpid() to an instance variable every time the RNG is accessed. The check can be replaced with an afterfork callback."

    By the way, OpenSSL expects its PRNG to be reseeded somehow (by calling RAND_add) after a fork. I wrote a patch for OpenSSL, but I don't remember if I sent it to OpenSSL. https://bitbucket.org/haypo/hasard/src/4a1be69a47eb1b2ec7ca95a341d4ca953a77f8c6/patches/openssl_rand_fork.patch?at=default

    Reseeding the tempfile PRNG is useless (but would spend CPU/memory, or hang until we have enough entropy?) if tempfile is not used after the fork. I like the current approach.

    --

    I'm not saying that a new atfork module would not help, just that the specific case of tempfile should be discussed :-) I like the idea of a generic module to call code after fork.

    birkenfeld commented 11 years ago

    Might make sense to put this in atexit.atfork() to avoid small-module inflation?

    vstinner commented 11 years ago

    Might make sense to put this in atexit.atfork() to avoid small-module inflation?

    It sounds strange to mix "at exit" and "at fork" in the same module. Both are very different.


    malemburg commented 11 years ago

    On 13.01.2013 00:37, STINNER Victor wrote:

    By the way, OpenSSL expects its PRNG to be reseeded somehow (by calling RAND_add) after a fork. I wrote a patch for OpenSSL, but I don't remember if I sent it to OpenSSL. https://bitbucket.org/haypo/hasard/src/4a1be69a47eb1b2ec7ca95a341d4ca953a77f8c6/patches/openssl_rand_fork.patch?at=default

    Apparently not, and according to this thread, they don't think this is an OpenSSL problem to solve:

    http://openssl.6102.n7.nabble.com/recycled-pids-causes-PRNG-to-repeat-td41669.html

    Note that you don't have to reseed the RNG, just make sure that the two forked processes use different sequences. Simply adding some extra data in each process will suffice, e.g. by adding the PID of the new process to the RNG pool. This is certainly doable without any major CPU overhead :-)
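
    For example, a child hook along those lines could be as small as this (a sketch using the existing ssl.RAND_add() binding; how the hook gets registered is whatever this issue ends up providing):

        import os
        import ssl

        def _diverge_openssl_prng():
            # Mix the child's PID and a few urandom bytes into OpenSSL's pool so
            # the parent's and child's PRNG streams differ; cheap and non-blocking.
            ssl.RAND_add(str(os.getpid()).encode() + os.urandom(16), 0.0)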

    pitrou commented 11 years ago

    It sounds strange to mix "at exit" and "at fork" in the same module. Both are very different.

    That's true. The sys module would probably be the right place for both functionalities.

    tiran commented 10 years ago

    Richard, do you have time to get your patch ready for 3.4?

    e26428b1-70cf-4e9f-ae3c-9ef0478633fb commented 10 years ago

    Richard, do you have time to get your patch ready for 3.4?

    Yes. But we don't seem to have consensus on how to handle exceptions. The main question is whether a failed prepare callback should prevent the fork from happening, or just be printed.

    52212e46-f037-482c-a15d-38b4efafa0d3 commented 10 years ago

    The main question is whether a failed prepare callback should prevent the fork from happening

    Yes, I think an exception should prevent the fork from happening.

    It's fail-safe for the PRNG case (you can guarantee that a fork won't occur without properly re-seeding a PRNG), and it makes bugs easier to catch in unit testing.

    tiran commented 10 years ago

    +1 for exception prevents fork

    79528080-9d85-4d18-8a2a-8b1f07640dd7 commented 10 years ago

    I have a couple of random remarks:

    • now that FDs are non-inheritable by default, fork locks around subprocess and multiprocessing shouldn't be necessary anymore? What other use cases does the fork-lock have?

    • the current implementation keeps hard-references to the functions passed: so if one isn't careful, you can end up easily with a lot of objects kept alive just because of those references, which can be a problem

    vstinner commented 10 years ago

    "now that FDs are non-inheritable by default, fork locks around subprocess and multiprocessing shouldn't be necessary anymore? What other use cases does the fork-lock have?"

    Well, on Windows, it's still not possible to inherit only one handle. If you temporarily mark the handle as inheritable (os.set_handle_inheritable), it might be inherited by a different child process if another Python thread spawns a process... It's probably unlikely, but it's one of the use cases for such a lock :-)
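
    For illustration, a sketch of that corner case with a hypothetical spawn lock (os.set_handle_inheritable() and subprocess are used as they exist; the lock itself is the part this issue would add):

        import os
        import subprocess
        import threading

        spawn_lock = threading.Lock()   # hypothetical process-creation lock

        def spawn_with_handle(cmd, handle):
            # Without the lock, another thread spawning a process at the same
            # time could inherit `handle` while it is temporarily inheritable.
            with spawn_lock:
                os.set_handle_inheritable(handle, True)
                try:
                    return subprocess.Popen(cmd, close_fds=False)
                finally:
                    os.set_handle_inheritable(handle, False)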

    e26428b1-70cf-4e9f-ae3c-9ef0478633fb commented 10 years ago
    • now that FDs are non-inheritable by default, fork locks around subprocess and multiprocessing shouldn't be necessary anymore? What other use cases does the fork-lock have?

    CLOEXEC fds will still be inherited by forked children.

    • the current implementation keeps hard-references to the functions passed: so if one isn't careful, you can end up easily with a lot of objects kept alive just because of those references, which can be a problem

    True, but you could make the same complaint about atexit.register().

    One can fairly easily create something like weakref.finalize which uses atfork but is smart about not creating hard refs.
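
    For example, something along these lines (a sketch; `register` stands in for whichever registration function this issue ends up adding):

        import weakref

        def register_weak_after_fork(obj, method_name, register):
            # Keep only a weak reference: once obj is garbage collected, the
            # hook silently becomes a no-op and obj is not kept alive.
            ref = weakref.ref(obj)

            def _hook():
                target = ref()
                if target is not None:
                    getattr(target, method_name)()

            register(_hook)
            return _hook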

    79528080-9d85-4d18-8a2a-8b1f07640dd7 commented 10 years ago

    Richard Oudkerk added the comment:

    - now that FDs are non-inheritable by default, fork locks around subprocess and multiprocessing shouldn't be necessary anymore? What other use cases does the fork-lock have?

    CLOEXEC fds will still be inherited by forked children.

    Hum, right, I was thinking only about subprocess-created children (where an exec follows immediately).

    - the current implementation keeps hard-references to the functions passed: so if one isn't careful, you can end up easily with a lot of objects kept alive just because of those references, which can be a problem

    True, but you could make the same complaint about atexit.register().

    Yeah, but atexit is usually used for process-lifetime resources (I mean, there's 'exit' in the name). One of the main use cases for atfork hooks would be the numerous stdlib objects which have locks (or are locks themselves): most such objects have arbitrary lifetimes (e.g. logging, events, open files, etc.).

    The risk of leaks is IMO much greater.

    pitrou commented 10 years ago

    One of the main use cases for atfork hooks would be the numerous stdlib objects which have locks (or are locks themselves): most such objects have arbitrary lifetimes (e.g. logging, events, open files, etc.). The risk of leaks is IMO much greater.

    Well it is customary for callback-based APIs to hold strong references to their callbacks. If a library wants to avoid leaks, it should register a single callback which will then walk the current "live" resources and protect them. (i.e. the resource lifetime issue should be solved by library or application code, not by the atfork module)
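
    For instance, a library could keep a WeakSet of its lock-holding objects and register a single hook that walks it (a sketch; the registration call itself is left out):

        import threading
        import weakref

        _live = weakref.WeakSet()    # filled in by the library's constructors

        def _reinit_locks_in_child():
            # One registered callback walks whatever is still alive and gives
            # each object a fresh, unlocked lock in the child process.
            for obj in list(_live):
                obj._lock = threading.Lock()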

    By the way, +0 to raising and aborting the fork when there's an exception. The only annoyance I can think about is a buggy library which would prevent the user from forking.

    79528080-9d85-4d18-8a2a-8b1f07640dd7 commented 10 years ago

    Well it is customary for callback-based APIs to hold strong references to their callbacks. If a library wants to avoid leaks, it should register a single callback which will then walk the current "live" resources and protect them.

    I guess that the customary usage makes sense. I'd just like a note in the doc, if possible ;-)

    e26428b1-70cf-4e9f-ae3c-9ef0478633fb commented 10 years ago

    Given PEP-446 (fds are now CLOEXEC by default) I prepared an updated patch where the fork lock is undocumented and subprocess no longer uses the fork lock. (I did not want to encourage the mixing of threads with fork() without exec() by exposing the fork lock just for that case.)

    But I found that a test for the leaking of fds to a subprocess started with close_fds=False was somewhat regularly failing because the creation of CLOEXEC pipe fds is not atomic -- the GIL is not held while calling pipe().

    It seems that PEP-446 does not really make the fork lock redundant for processes started using fork+exec.

    So now I don't know whether the fork lock should be made public. Thoughts?

    79528080-9d85-4d18-8a2a-8b1f07640dd7 commented 10 years ago

    Given PEP-446 (fds are now CLOEXEC by default) I prepared an updated patch where the fork lock is undocumented and subprocess no longer uses the fork lock. (I did not want to encourage the mixing of threads with fork() without exec() by exposing the fork lock just for that case.)

    But I found that a test for the leaking of fds to a subprocess started with close_fds=False was somewhat regularly failing because the creation of CLOEXEC pipe fds is not atomic -- the GIL is not held while calling pipe().

    Was it on Linux? If yes, was it on an old kernel/libc? (I just want to make sure that pipe2() is indeed used if available).

    It seems that PEP-446 does not really make the fork lock redundant for processes started using fork+exec.

    So now I don't know whether the fork lock should be made public. Thoughts?

    IMO it should be kept private for now: I think it's really only useful in some corner cases, and if it turns out to be useful, we can expose it later.

    vstinner commented 10 years ago

    PEP-446 does not offer any guarantee of atomicity when clearing the inheritable flag.

    e26428b1-70cf-4e9f-ae3c-9ef0478633fb commented 10 years ago

    It is a recent kernel and does support pipe2().

    After some debugging it appears that a pipe handle created in Popen.__init__() was being leaked to a forked process, preventing Popen.__init__() from completing before the forked process did.

    Previously the test passed because Popen.__init__() acquired the fork lock.

    01e27b45-90f2-4c74-9e5e-7e7e54c3d78e commented 8 years ago

    Also I have to add: maybe you could also add automatic file descriptor closing on fork. We already have O_CLOEXEC, but we have no O_CLOFORK. So maybe it would be useful to add such a flag?

    pitrou commented 7 years ago

    I've posted https://github.com/python/cpython/pull/1715 which adds a single os.register_at_fork function to allow registering at-fork handlers. Comments welcome.

    serhiy-storchaka commented 7 years ago

    atexit.register() has a different signature and allows passing arbitrary positional and keyword arguments to the registered function. This is incompatible with the "when" argument. If we want to support registering arguments with the function, we need either three registering functions, or to make the "when" parameter the first and positional-only.

    pitrou commented 7 years ago

    API-wise, I went for the minimal route. This avoids any discussion of adding a separate module for a tiny functionality that is only going to be used by a couple of libraries (and probably no application code).

    Comparisons with atexit are not really relevant, IMO, since the use cases are very different.

    As for passing explicit arguments to the callable, people can use a lambda or functools.partial. I don't want to complicate the C implementation with matters that are not really important.

    serhiy-storchaka commented 7 years ago

    I agree that with lambdas and functools.partial the support of arguments is not needed. Unless someone has good reasons for supporting explicit passing of arguments I'm fine with your design.

    If user code manually calls fork() and the (now deprecated) PyOS_AfterFork(), it runs the "child" functions, but not the "before" and "parent" functions. Is it worth emitting a runtime warning if the "before" or "parent" lists are not empty? Or even immediately exiting the child with an error?

    pitrou commented 7 years ago

    I don't think exiting would be a good idea at all. I'm not sure about emitting a warning: the problem is that the people seeing the warning (plain users) can't do anything about it; worse, they probably will not even understand what it is about, and will get confused.

    The package maintainer should see a deprecation warning when compiling code using PyOS_AfterFork() (thanks to the Py_DEPRECATED attribute).

    Of course, I'll also update the docs to add a deprecation once the design is settled.

    serhiy-storchaka commented 7 years ago

    PyOS_AfterFork() is called by one piece of code, but the "before" and "parent" handlers are registered by other code (likely code in another library). The author of the program that uses both libraries (the one that uses PyOS_AfterFork() and the one that registers handlers) can notice the warning and report the bug in the first library. I think silently skipping registered handlers would be worse.

    Let's allow the user to control the behavior by setting the warnings filter. If the corresponding warnings are ignored, nothing happens; if they are errors, the child exits; otherwise the warning message is output on stderr as for other warnings.

    pitrou commented 7 years ago

    If the corresponding warnings are ignored, nothing happens; if they are errors, the child exits,

    Right now PyOS_AfterFork() doesn't return an error code. It is not obvious how the caller would react: simply print the error? raise a fatal error? Something else?

    The only third-party use of PyOS_AfterFork() I found is in uwsgi (and I'm not sure they're using it correctly, since I don't know if the parent process is using Python at all...).

    pitrou commented 7 years ago

    New changeset 346cbd351ee0dd3ab9cb9f0e4cb625556707877e by Antoine Pitrou in branch 'master': bpo-16500: Allow registering at-fork handlers (#1715) https://github.com/python/cpython/commit/346cbd351ee0dd3ab9cb9f0e4cb625556707877e

    serhiy-storchaka commented 7 years ago

    Could you please update the documentation Antoine?

    serhiy-storchaka commented 7 years ago

    Can threading._after_fork() be rewritten using the new API?

    pitrou commented 7 years ago

    New changeset f7ecfac0c15f0c43ef5e6c0081eb3a059af9f074 by Antoine Pitrou in branch 'master': Doc nits for bpo-16500 (#1841) https://github.com/python/cpython/commit/f7ecfac0c15f0c43ef5e6c0081eb3a059af9f074

    pitrou commented 7 years ago

    On 28/05/2017 at 11:22, Serhiy Storchaka wrote:

    Can threading._after_fork() be rewritten using the new API?

    It should be possible indeed. Let me see.
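
    Roughly, with the keyword-only signature the function eventually settled on, the registration boils down to a couple of lines at the bottom of Lib/threading.py (a sketch from memory; the actual change is the commit referenced below):

        import os as _os

        if hasattr(_os, "register_at_fork"):
            _os.register_at_fork(after_in_child=_after_fork)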

    serhiy-storchaka commented 7 years ago

    Can multiprocessing.util.register_after_fork() be rewritten using the new API?

    pitrou commented 7 years ago

    Can multiprocessing.util.register_after_fork() be rewritten using the new API?

    It wouldn't benefit much from it, and there might be timing issues given the comments in BaseProcess._bootstrap():

                old_process = _current_process
                _current_process = self
                try:
                    util._finalizer_registry.clear()
                    util._run_after_forkers()
                finally:
                    # delay finalization of the old process object until after
                    # _run_after_forkers() is executed
                    del old_process
    pitrou commented 7 years ago

    New changeset 4a8bcdf79cdb3684743fe1268de62ee88bada439 by Antoine Pitrou in branch 'master': bpo-16500: Use register_at_fork() in the threading module (#1843) https://github.com/python/cpython/commit/4a8bcdf79cdb3684743fe1268de62ee88bada439

    serhiy-storchaka commented 7 years ago

    In PR 1834 Gregory proposes an alternate API:

        os.register_at_fork(*, before=None, after_in_parent=None, after_in_child=None)

    Maybe open a new issue for this?
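
    For reference, a minimal usage sketch of that keyword-only form, which is the shape os.register_at_fork() ended up with in Python 3.7 (POSIX-only example):

        import os

        def before():
            print("about to fork in pid", os.getpid())

        def after_in_parent():
            print("parent resumes, pid", os.getpid())

        def after_in_child():
            print("child starts, pid", os.getpid())

        os.register_at_fork(before=before,
                            after_in_parent=after_in_parent,
                            after_in_child=after_in_child)

        pid = os.fork()
        if pid == 0:
            os._exit(0)
        os.waitpid(pid, 0)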