bsdci / libioc

A Python library to manage jails with ioc{age,ell}
https://bsd.ci/libioc

Slow stop command #679

Closed urosgruber closed 5 years ago

urosgruber commented 5 years ago

I'm noticing on one of our nodes that the stop command waits for something, which is why stopping can take a few seconds to complete. I'm not sure if this is connected, but when doing ioc console myjail the CLI also hangs for a few seconds. I'm really struggling to find the cause. Please check the stop/start calls and their timing. One more thing to consider: there is no service to be started or stopped; service -e shows:

/etc/rc.d/cleanvar
/etc/rc.d/newsyslog
/etc/rc.d/syslogd
/etc/rc.d/virecover
/etc/rc.d/motd
# time ioc start ioc/abruptive-firefly-4GBLKB
[+] JailResolverConfig@ioc/abruptive-firefly-4GBLKB: OK [0.001s]
[+] JailDependantsStart@ioc/abruptive-firefly-4GBLKB: No dependant jails [0.0s]
[+] JailLaunch@ioc/abruptive-firefly-4GBLKB: OK [0.235s]
abruptive-firefly-4GBLKB running as JID 39
0.300u 0.218s 0:00.41 124.3%    22+167k 3486+16io 1pf+0w
# time ioc stop ioc/abruptive-firefly-4GBLKB
[+] JailDestroy@ioc/abruptive-firefly-4GBLKB: OK [5.91s]
abruptive-firefly-4GBLKB stopped
4.345u 1.850s 0:06.13 100.9%    6+167k 1656+5io 0pf+0w
# time ioc start ioc/abruptive-firefly-4GBLKB
[+] JailResolverConfig@ioc/abruptive-firefly-4GBLKB: OK [0.001s]
[+] JailDependantsStart@ioc/abruptive-firefly-4GBLKB: No dependant jails [0.0s]
[+] JailLaunch@ioc/abruptive-firefly-4GBLKB: OK [0.236s]
abruptive-firefly-4GBLKB running as JID 40
0.310u 0.209s 0:00.41 124.3%    28+172k 3486+16io 1pf+0w
# time ioc stop ioc/abruptive-firefly-4GBLKB
[+] JailDestroy@ioc/abruptive-firefly-4GBLKB: OK [5.854s]
abruptive-firefly-4GBLKB stopped
4.265u 1.873s 0:06.07 100.9%    6+168k 1656+5io 0pf+0w
# time ioc start ioc/abruptive-firefly-4GBLKB
[+] JailResolverConfig@ioc/abruptive-firefly-4GBLKB: OK [0.001s]
[+] JailDependantsStart@ioc/abruptive-firefly-4GBLKB: No dependant jails [0.0s]
[+] JailLaunch@ioc/abruptive-firefly-4GBLKB: OK [0.236s]
abruptive-firefly-4GBLKB running as JID 41
0.304u 0.216s 0:00.41 124.3%    22+169k 3486+16io 1pf+0w
# time ioc stop ioc/abruptive-firefly-4GBLKB
[+] JailDestroy@ioc/abruptive-firefly-4GBLKB: OK [5.493s]
abruptive-firefly-4GBLKB stopped
4.099u 1.674s 0:05.71 100.8%    5+167k 1656+5io 0pf+0w

I've noticed *python3.6 at about 93% WCPU during this slowness, so it looks like something with Python after all.

gronke commented 5 years ago

The stop command will be refactored to utilize py-jail.

urosgruber commented 5 years ago

@gronke any timeframe for when I'll be able to test whether there is a performance improvement? And another question: why would the speed differ between physical servers?

gronke commented 5 years ago

Why would the speed differ between physical servers?

There should not be any differences.

@urosgruber have you tried to run ioc -d spam stop ioc/abruptive-firefly-4GBLKB? Maybe you can spot at which step it hangs.

any timeframe for when I'll be able to test whether there is a performance improvement?

Unfortunately I'm occupied with other tasks right now. Refactoring JailGenerator.stop will be the next step in this project.

gronke commented 5 years ago

@urosgruber PR #683 utilizes py-jail to stop jails with libc.jail_remove. If it does not already solve your issue, it will help us a lot in finding out where it slows down.
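
For illustration only (this is not the py-jail API, just a rough sketch of the underlying libc call it wraps; the helper name is made up):

import ctypes
import ctypes.util
import os

# FreeBSD's libc exposes jail_remove(2): int jail_remove(int jid);
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def remove_jail(jid):
    # Removes the jail identified by its JID, killing any remaining processes.
    if libc.jail_remove(ctypes.c_int(jid)) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))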

To test the feature before it is available in the CLI, run it from the source directory:

cd /usr/local/src/ioc
git checkout master
git pull
cd .libioc
git fetch
git checkout origin/enhancement/jail-stop-libc
cd ..

python3.6 . stop myjail

The make install command of ioc would automatically check out the libioc submodule, but the combination of the latest ioc CLI and a libioc testing branch can be installed individually:

make -C /usr/local/src/libioc install
make -C /usr/local/src/ioc install-ioc

gronke commented 5 years ago

Reading the issue message body again reminds me of an issue where the hostname was not pointing to 127.0.0.1 in /etc/hosts. @urosgruber, you could manually put the hostname in the jail's hosts file and see if the issue persists.
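
For example, an /etc/hosts line inside the jail of the following form (the last name is a placeholder for the jail's actual hostname):

127.0.0.1   localhost localhost.my.domain myjail-hostname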

urosgruber commented 5 years ago

@gronke I have 127.0.0.1 localhost localhost.my.domain in the jail's /etc/hosts file, if that is what you mean. I'll try this PR today.

urosgruber commented 5 years ago

Here is the output, if it helps:

root:~/ioc # python3.6 . stop ioc/abruptive-firefly-4GBLKB
[+] JailStop@ioc/abruptive-firefly-4GBLKB: OK [5.296s]
  [+] JailHookPrestop@ioc/abruptive-firefly-4GBLKB: OK [0.006s]
  [+] JailHookStop@ioc/abruptive-firefly-4GBLKB: OK [5.191s]
  [+] JailRemove@ioc/abruptive-firefly-4GBLKB: OK [0.0s]
  [+] JailHookPoststop@ioc/abruptive-firefly-4GBLKB: SKIPPED [0.0s]
  [+] TeardownJailMounts@ioc/abruptive-firefly-4GBLKB: OK [0.062s]
  [+] JailResourceLimitAction@ioc/abruptive-firefly-4GBLKB: SKIPPED [0.0s]
abruptive-firefly-4GBLKB stopped
root:~/ioc # python3.6 . start ioc/abruptive-firefly-4GBLKB
[+] JailResolverConfig@ioc/abruptive-firefly-4GBLKB: OK [0.001s]
[+] JailDependantsStart@ioc/abruptive-firefly-4GBLKB: No dependant jails [0.0s]
[+] JailLaunch@ioc/abruptive-firefly-4GBLKB: OK [0.288s]
abruptive-firefly-4GBLKB running as JID 13
root:~/ioc # python3.6 . stop ioc/abruptive-firefly-4GBLKB
[+] JailStop@ioc/abruptive-firefly-4GBLKB: OK [6.822s]
  [+] JailHookPrestop@ioc/abruptive-firefly-4GBLKB: OK [0.006s]
  [+] JailHookStop@ioc/abruptive-firefly-4GBLKB: OK [6.716s]
  [+] JailRemove@ioc/abruptive-firefly-4GBLKB: OK [0.0s]
  [+] JailHookPoststop@ioc/abruptive-firefly-4GBLKB: SKIPPED [0.0s]
  [+] TeardownJailMounts@ioc/abruptive-firefly-4GBLKB: OK [0.062s]
  [+] JailResourceLimitAction@ioc/abruptive-firefly-4GBLKB: SKIPPED [0.0s]
abruptive-firefly-4GBLKB stopped

gronke commented 5 years ago

The JailHookStop causes the delay. You could run it as python3.6 . -d spam stop ioc/abruptive-firefly-4GBLKB to see the stop command output. (Yeah, the empty lines caused by the event stack updating are annoying 😞.) Another way to observe what's going on when stopping a jail is to manually run the rc script:

python3.6 . exec ioc/abruptive-firefly-4GBLKB /bin/sh /etc/rc.shutdown

urosgruber commented 5 years ago

Hmmm, I think this new code behaves kind of funky. Here is the output from the master code first:

root:~/ioc # ioc -d spam stop ioc/abruptive-firefly-4GBLKB
Setting fstab auto-creation placeholder
Adding line to fstab: /accounts/597e8310-d52f-11e8-9f8b-f2801f1b9fd1/public /ioc/jails/abruptive-firefly-4GBLKB/root/data   nullfs  rw  0   0
fstab loaded from /ioc/jails/abruptive-firefly-4GBLKB/fstab
Clearing resource limits
[+] JailDestroy@ioc/abruptive-firefly-4GBLKB: OK [5.498s]
Writing jail.conf file to /ioc/jails/abruptive-firefly-4GBLKB/launch-scripts/jail.conf
Executing (interactive): /usr/sbin/jail -v -r -f /ioc/jails/abruptive-firefly-4GBLKB/launch-scripts/jail.conf ioc-abruptive-firefly-4GBLKB
  ioc-abruptive-firefly-4GBLKB: run command: /bin/sh /ioc/jails/abruptive-firefly-4GBLKB/launch-scripts/prestop.sh
  ioc-abruptive-firefly-4GBLKB: run command in jail as root: /bin/sh -c [ -f /.iocage/stop.sh ] || exit 0; . /.iocage/stop.sh
  .
  Terminated
  ioc-abruptive-firefly-4GBLKB: sent SIGTERM to: 15110
  ioc-abruptive-firefly-4GBLKB: removed
  ioc-abruptive-firefly-4GBLKB: run command: /bin/sh /ioc/jails/abruptive-firefly-4GBLKB/launch-scripts/poststop.sh
abruptive-firefly-4GBLKB stopped

It was waiting on the sent SIGTERM to: 15110 step.

But using the new version, here is what is strange:

root:~/ioc # python3.6 . -d spam stop ioc/abruptive-firefly-4GBLKB
[-] JailStop@ioc/abruptive-firefly-4GBLKB: ...
  [-] JailHookPrestop@ioc/abruptive-firefly-4GBLKB: ...
  [+] JailHookPrestop@ioc/abruptive-firefly-4GBLKB: OK [0.006s]

  [+] JailHookStop@ioc/abruptive-firefly-4GBLKB: OK [0.057s]
Executing (interactive): /usr/sbin/jexec 39 /bin/sh -c /bin/sh /etc/rc.shutdown
  .
  [+] JailRemove@ioc/abruptive-firefly-4GBLKB: OK [0.0s]
  [+] JailHookPoststop@ioc/abruptive-firefly-4GBLKB: SKIPPED [0.0s]
  [-] TeardownJailMounts@ioc/abruptive-firefly-4GBLKB: ...
Setting fstab auto-creation placeholder
[+] JailStop@ioc/abruptive-firefly-4GBLKB: OK [0.163s]
fstab loaded from /ioc/jails/abruptive-firefly-4GBLKB/fstab
Executing: /sbin/umount -f /ioc/jails/abruptive-firefly-4GBLKB/root/bin /ioc/jails/abruptive-firefly-4GBLKB/root/boot /ioc/jails/abruptive-firefly-4GBLKB/root/lib /ioc/jails/abruptive-firefly-4GBLKB/root/libexec /ioc/jails/abruptive-firefly-4GBLKB/root/rescue /ioc/jails/abruptive-firefly-4GBLKB/root/sbin /ioc/jails/abruptive-firefly-4GBLKB/root/usr/bin /ioc/jails/abruptive-firefly-4GBLKB/root/usr/include /ioc/jails/abruptive-firefly-4GBLKB/root/usr/lib /ioc/jails/abruptive-firefly-4GBLKB/root/usr/libexec /ioc/jails/abruptive-firefly-4GBLKB/root/usr/sbin /ioc/jails/abruptive-firefly-4GBLKB/root/usr/share /ioc/jails/abruptive-firefly-4GBLKB/root/usr/libdata /ioc/jails/abruptive-firefly-4GBLKB/root/usr/lib32 /ioc/jails/abruptive-firefly-4GBLKB/root/.iocage /ioc/jails/abruptive-firefly-4GBLKB/root/data /ioc/jails/abruptive-firefly-4GBLKB/root/dev/fd /ioc/jails/abruptive-firefly-4GBLKB/root/dev /ioc/jails/abruptive-firefly-4GBLKB/root/proc /ioc/jails/abruptive-firefly-4GBLKB/root/tmp

  [+] TeardownJailMounts@ioc/abruptive-firefly-4GBLKB: OK [0.061s]
4GBLKB/root/libexec /ioc/jails/abruptive-firefly-4GBLKB/root/rescue /ioc/jails/abruptive-firefly-4GBLKB/root/sbin /ioc/jails/abruptive-firefly-4GBLKB/root/usr/bin /ioc/jails/abruptive-firefly-4GBLKB/root/usr/include /ioc/jails/abruptive-firefly-4GBLKB/root/usr/lib /ioc/jails/abruptive-firefly-4GBLKB/root/usr/libexec /ioc/jails/abruptive-firefly-4GBLKB/root/usr/sbin /ioc/jails/abruptive-firefly-4GBLKB/root/usr/share /ioc/jails/abruptive-firefly-4GBLKB/root/usr/libdata /ioc/jails/abruptive-firefly-4GBLKB/root/usr/lib32 /ioc/jails/abruptive-firefly-4GBLKB/root/.iocage /ioc/jails/abruptive-firefly-4GBLKB/root/data /ioc/jails/abruptive-firefly-4GBLKB/root/dev/fd /ioc/jails/abruptive-firefly-4GBLKB/root/dev /ioc/jails/abruptive-firefly-4GBLKB/root/proc /ioc/jails/abruptive-firefly-4GBLKB/root/tmp
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/dev/fd: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/proc: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/tmp: not a file system root directory
Executing: /sbin/umount -f -a -F /ioc/jails/abruptive-firefly-4GBLKB/fstab

Command exited with 1: /sbin/umount -f -a -F /ioc/jails/abruptive-firefly-4GBLKB/fstab
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/data: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/.iocage: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/usr/lib32: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/usr/libdata: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/usr/share: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/usr/sbin: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/usr/libexec: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/usr/lib: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/usr/include: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/usr/bin: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/sbin: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/rescue: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/libexec: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/lib: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/boot: not a file system root directory
    umount: /ioc/jails/abruptive-firefly-4GBLKB/root/bin: not a file system root directory
  [+] JailResourceLimitAction@ioc/abruptive-firefly-4GBLKB: SKIPPED [0.0s]
abruptive-firefly-4GBLKB stopped
root:~/ioc # ioc start ioc/abruptive-firefly-4GBLKB
abruptive-firefly-4GBLKB is already running - skipping start
No jails were started: ioc/abruptive-firefly-4GBLKB template=no,-
root:~/ioc # ioc start ioc/abruptive-firefly-4GBLKB
abruptive-firefly-4GBLKB is already running - skipping start
No jails were started: ioc/abruptive-firefly-4GBLKB template=no,-
root:~/ioc # ioc start ioc/abruptive-firefly-4GBLKB
abruptive-firefly-4GBLKB is already running - skipping start
No jails were started: ioc/abruptive-firefly-4GBLKB template=no,-
root:~/ioc # ioc start ioc/abruptive-firefly-4GBLKB
[+] JailResolverConfig@ioc/abruptive-firefly-4GBLKB: OK [0.001s]
[+] JailDependantsStart@ioc/abruptive-firefly-4GBLKB: No dependant jails [0.0s]
[+] JailLaunch@ioc/abruptive-firefly-4GBLKB: OK [0.238s]
abruptive-firefly-4GBLKB running as JID 40

So it looks like stop is executed quickly, but internally the jail is still not stopped and can't be started right away; it took 2-3 seconds before I was able to start it again. Is this intentional?

If there is a service started (sshd, nginx, etc.) then it waits for it to stop, as it should.

One other thought, not sure if it's even possible: imagine you need to stop 50 jails at once and the stop is done one by one, so each needs to wait for the jail's services to stop. That could take 5-10 minutes to finish. Would it be possible to start and stop asynchronously? I can open a separate ticket for that if it's worth doing.

gronke commented 5 years ago

So it looks like stop is executed quickly, but internally the jail is still not stopped and can't be started right away; it took 2-3 seconds before I was able to start it again. Is this intentional?

I guess jail_remove() just puts it in dying state. Will poke @fabianfreyer for hints.

One other thought, not sure if it's even possible: imagine you need to stop 50 jails at once and the stop is done one by one, so each needs to wait for the jail's services to stop. That could take 5-10 minutes to finish. Would it be possible to start and stop asynchronously? I can open a separate ticket for that if it's worth doing.

This is possible using libioc. Since jail state is queried using py-jail (#663), Python threading can be used to invoke tasks in parallel. Multiple CLI commands can be executed simultaneously, so that the jails shut down in parallel.
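
A minimal sketch of that idea (a hypothetical helper, assuming the ioc CLI is installed; the jail names are placeholders):

import subprocess
from concurrent.futures import ThreadPoolExecutor

def stop_jail(name):
    # Each worker forks one `ioc stop` CLI invocation.
    return subprocess.run(["ioc", "stop", name]).returncode

jail_names = ["jail-a", "jail-b", "jail-c"]  # placeholder names
with ThreadPoolExecutor(max_workers=8) as pool:
    exit_codes = dict(zip(jail_names, pool.map(stop_jail, jail_names)))
print(exit_codes)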

Just as an idea, but we still want to squeeze out the best performance possible in the long run. I'm holding back some performance optimizations to stay focused on the interface design. Taking advantage of parallelization is definitely a low-hanging fruit. While writing this, I notice that the modified stop command is an invitation for parallelization:

  1. jail removal
  2. unmounting NullFS mounts from the jail's fstab (and basedirs)
  3. network teardown
  4. removal of resource limits
  5. firewall rule removal

@urosgruber You will either see results or hear back on this once I have done some research on how to deal with multi-threaded generators in a way that does not make the code harder to read.

igalic commented 5 years ago

We still need to build a DAG of dependent jails, for those which depend on each other, and cannot be shut down in parallel

gronke commented 5 years ago

We still need to build a DAG of dependent jails, for those which depend on each other, and cannot be shut down in parallel

That logic already exists and can be fitted for parallelization. 👍
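
As an illustration of the ordering constraint (a sketch with made-up helpers, not the existing libioc logic): a jail may only stop once no remaining jail depends on it, while jails within the same level can stop in parallel.

from concurrent.futures import ThreadPoolExecutor

def stop_in_dependency_order(depends_on, stop_one):
    # depends_on: dict mapping jail name -> set of jail names it depends on.
    # stop_one:   callable that stops a single jail by name.
    remaining = {name: set(deps) for name, deps in depends_on.items()}
    while remaining:
        still_needed = set().union(*remaining.values())
        # Jails that no still-running jail depends on may stop now.
        level = [name for name in remaining if name not in still_needed]
        if not level:
            raise RuntimeError("dependency cycle between jails")
        with ThreadPoolExecutor() as pool:
            list(pool.map(stop_one, level))
        for name in level:
            del remaining[name]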

gronke commented 5 years ago

So it looks like stop is executed quickly, but internally the jail is still not stopped and can't be started right away; it took 2-3 seconds before I was able to start it again. Is this intentional?

I guess jail_remove() just puts it in dying state. Will poke @fabianfreyer for hints.

A jail can be in the DYING state, and after all teardown operations have finished we will need to wait for it to finally terminate.
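
A simple way to wait for that (a hypothetical helper, not libioc code) is to poll jls(8) with dying jails included until the jail is gone:

import subprocess
import time

def wait_until_removed(jail_identifier, timeout=30.0, interval=0.1):
    # `jls -d` also lists dying jails; a non-zero exit code means the jail
    # is neither active nor dying anymore.
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = subprocess.run(
            ["/usr/sbin/jls", "-d", "-j", jail_identifier],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode != 0:
            return True
        time.sleep(interval)
    return False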

fabianfreyer commented 5 years ago

As long as you don't have fixed JIDs, you should be able to restart the jail while the old one is DYING, iirc the check for name collisions ignores dying jails.

igalic commented 5 years ago

what are fixed JIDs?

gronke commented 5 years ago

what are fixed JIDs?

@igalic when you start a jail with a given jid (e.g. jail -c persist path=/rescue jid=123).

As long as you don't have fixed JIDs, you should be able to restart the jail while the old one is DYING, iirc the check for name collisions ignores dying jails.

The current issue is that the NullFS release mounts cannot be unmounted after jail removal (and after waiting for it to leave the dying state). As a consequence, the ZFS datasets they are mounted on top of cannot be deleted. This issue occurred after changing to the following:

  1. Jail removal with libc.jail_remove() instead of forking jail -r
     a. awaiting until libc.jail_get(..., JAIL_DYING) fails or returns 0
  2. Replacing umount {path} with libc.unmount({path})

https://cirrus-ci.com/task/5661599502172160 shows the following:

tests/test_Jail.py::TestNullFSBasejail::test_can_be_started ERROR        [ 61%]
==================================== ERRORS ====================================
_________ ERROR at teardown of TestNullFSBasejail.test_can_be_started __________
tp = <class 'libzfs.ZFSException'>, value = None, tb = None
    def reraise(tp, value, tb=None):
        try:
            if value is None:
                value = tp()
            if value.__traceback__ is not tb:
                raise value.with_traceback(tb)
>           raise value
/usr/local/lib/python3.6/site-packages/six.py:693: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/local/lib/python3.6/site-packages/six.py:693: in reraise
    raise value
/usr/local/lib/python3.6/site-packages/six.py:693: in reraise
    raise value
tests/conftest.py:203: in new_jail
    new_jail.destroy()
tests/libioc/Jail.py:2350: in destroy
    return list(JailGenerator.destroy(self, force=force))
tests/libioc/Jail.py:1092: in destroy
    raise e
tests/libioc/Jail.py:1089: in destroy
    self.zfs.delete_dataset_recursive(self.dataset)
tests/libioc/ZFS.py:96: in delete_dataset_recursive
    self.delete_dataset_recursive(child)
tests/libioc/ZFS.py:101: in delete_dataset_recursive
    dataset.umount()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
>   ???
E   libzfs.ZFSException: Device busy
libzfs.pyx:2547: ZFSException
--------------------------- Captured stdout teardown ---------------------------
/.ioc-test-12.0-RELEASE/libioc-test/jails/new-jail-2935/root/bin unmount failed
DEBUG DEBUG
b'USER     CMD          PID   FD MOUNT      INUM MODE         SZ|DV R/W NAME\n'
/.ioc-test-12.0-RELEASE/libioc-test/jails/new-jail-2935/root/boot unmount failed
DEBUG DEBUG
b'USER     CMD          PID   FD MOUNT      INUM MODE         SZ|DV R/W NAME\n'
/.ioc-test-12.0-RELEASE/libioc-test/jails/new-jail-2935/root/lib unmount failed
DEBUG DEBUG
b'USER     CMD          PID   FD MOUNT      INUM MODE         SZ|DV R/W NAME\n'
/.ioc-test-12.0-RELEASE/libioc-test/jails/new-jail-2935/root/libexec unmount failed
DEBUG DEBUG
b'USER     CMD          PID   FD MOUNT      INUM MODE         SZ|DV R/W NAME\n'
/.ioc-test-12.0-RELEASE/libioc-test/jails/new-jail-2935/root/rescue unmount failed
DEBUG DEBUG
b'USER     CMD          PID   FD MOUNT      INUM MODE         SZ|DV R/W NAME\n'
/.ioc-test-12.0-RELEASE/libioc-test/jails/new-jail-2935/root/sbin unmount failed
DEBUG DEBUG
b'USER     CMD          PID   FD MOUNT      INUM MODE         SZ|DV R/W NAME\n'
/.ioc-test-12.0-RELEASE/libioc-test/jails/new-jail-2935/root/.iocage unmount failed
DEBUG DEBUG
b'USER     CMD          PID   FD MOUNT      INUM MODE         SZ|DV R/W NAME\n'
===================== 8 passed, 1 error in 136.83 seconds ======================
*** Error code 1

The first step is to jail_remove() and wait for its death. https://github.com/bsdci/libioc/blob/64dc7e74a9317b579e5c6d957e884d7752636ac3/libioc/Jail.py#L1493-L1496

Then the following code runs to remove the jail's mountpoints. https://github.com/bsdci/libioc/blob/64dc7e74a9317b579e5c6d957e884d7752636ac3/libioc/Jail.py#L2093-L2107

gronke commented 5 years ago

Calling libc unmount with the MNT_FORCE flag has resolved the issue on Cirrus CI. @urosgruber you might want to give this branch another try.
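
For reference, a rough sketch of what a forced libc unmount boils down to (not the libioc implementation; the MNT_FORCE value is taken from FreeBSD's <sys/mount.h> and should be verified against the local headers):

import ctypes
import ctypes.util
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
MNT_FORCE = 0x00080000  # "force unmount" flag from <sys/mount.h>

def force_unmount(path):
    # unmount(2): int unmount(const char *dir, int flags);
    if libc.unmount(path.encode("utf-8"), MNT_FORCE) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err), path)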

gronke commented 5 years ago

The PR was already a step forward. @urosgruber Please re-open the issue if there is still a performance problem on your system.

urosgruber commented 5 years ago

Right now the only slowdown is that inner processes (nginx etc.) are sometimes "waiting for PIDS", and ioc waits for this, thus blocking other jails from stopping. But the other issues seem to be resolved. Should I open a new ticket for async start/stop, or is this already done as well? Btw, I was testing the v0.7.1 branch.

gronke commented 5 years ago

Right now the only slowdown is that inner processes (nginx etc.) are sometimes "waiting for PIDS", and ioc waits for this, thus blocking other jails from stopping

There is not much we can do about this as a jail manager. You can still set exec_stop to None (ioc set exec_stop= myjail) to skip the default /bin/sh /etc/rc.shutdown.

Should I open a new ticket for async start/stop?

Yes, please. I will tidy and update the Milestones so that we can plan the feature together with some other optimizations. I'm happy, btw, that more and more forked subprocesses are being replaced with libc calls, but missing parallelization will at some point become a bottleneck.

Btw, I was testing the v0.7.1 branch.

That one was just released after merging the refactored stop process.