turnkeylinux / tracker

TurnKey Linux Tracker
https://www.turnkeylinux.org

V15.0 - systemd inithook.service fails to initialize appliance when running in a container #1071

Closed Dude4Linux closed 6 years ago

Dude4Linux commented 6 years ago

@JedMeister - I mentioned earlier that I was having difficulty getting v15.0 appliances to initialize when running systemd in a container. I believe I've tracked down the cause and am looking for advice on how to fix the problem. After starting a container and waiting for initialization, the systemd journalctl shows the following:

# journalctl -xe
May 01 16:09:48 tkldev-test systemd[1]: inithooks.service: Failed to set invocation ID on control group /system.slice/inithooks.service, ignoring
May 01 16:09:48 tkldev-test systemd[523]: inithooks.service: Failed at step STDIN spawning /bin/sh: No such file or directory
-- Subject: Process /bin/sh could not be executed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- The process /bin/sh could not be executed and failed.
-- 
-- The error number returned by this process is 2.
May 01 16:09:48 tkldev-test systemd[1]: Starting inithooks: firstboot and everyboot initialization scripts...
-- Subject: Unit inithooks.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- Unit inithooks.service has begun starting up.
May 01 16:09:48 tkldev-test systemd[1]: inithooks.service: Main process exited, code=exited, status=208/STDIN
May 01 16:09:48 tkldev-test systemd[1]: Failed to start inithooks: firstboot and everyboot initialization scripts.
-- Subject: Unit inithooks.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- Unit inithooks.service has failed.
-- 
-- The result is failed.
May 01 16:09:48 tkldev-test systemd[1]: inithooks.service: Unit entered failed state.
May 01 16:09:48 tkldev-test systemd[1]: inithooks.service: Failed with result 'exit-code'.
May 01 16:17:01 tkldev-test CRON[614]: pam_unix(cron:session): session opened for user root by (uid=0)
May 01 16:17:01 tkldev-test CRON[615]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
May 01 16:17:01 tkldev-test CRON[614]: pam_unix(cron:session): session closed for user root
May 01 17:17:01 tkldev-test CRON[669]: pam_unix(cron:session): session opened for user root by (uid=0)
May 01 17:17:01 tkldev-test CRON[670]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
May 01 17:17:01 tkldev-test CRON[669]: pam_unix(cron:session): session closed for user root
May 01 18:17:01 tkldev-test CRON[725]: pam_unix(cron:session): session opened for user root by (uid=0)
May 01 18:17:01 tkldev-test CRON[726]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
May 01 18:17:01 tkldev-test CRON[725]: pam_unix(cron:session): session closed for user root

At first I couldn't figure out why /bin/sh couldn't be found, as implied by the error message. I finally found a post where someone pointed out that the error message is misleading. Taking a look at the inithooks.service file shows:

# cat /lib/systemd/system/inithooks.service 
[Unit]
Description=inithooks: firstboot and everyboot initialization scripts
After=getty@tty8.service
ConditionKernelCommandLine=!noinithooks

[Service]
Type=oneshot
StandardInput=tty-force
TTYPath=/dev/tty8
TTYReset=yes
TTYVHangup=yes
TTYVTDisallocate=yes
EnvironmentFile=/etc/default/inithooks
ExecStart=/bin/sh -c '\
    FGCONSOLE=$(fgconsole); \
    openvt -f -c 8 -s -w -- ${INITHOOKS_PATH}/run; \
    chvt $FGCONSOLE'

[Install]
WantedBy=basic.target

After doing some checking, I realized that the TTYPath, i.e. /dev/tty8, is what was missing. In doing the updates for the LXC appliance and images for LXD, I tried to clean up and remove as much cruft as possible. I believe that confconsole and the /dev/tty's had already been removed in v14.2, but that was not an issue because we never ran systemd in the containers. So my question is what to do. I see three options:

  1. Restore /dev/tty8 so that inithooks.service can run unmodified.
  2. Modify inithooks.service in containers to use /dev/console which was retained.
  3. Modify inithooks.service in containers so it runs without user input.

I think 3 is the most desirable mode for containers as we are already pre-seeding inithooks and redirecting the output to /var/log/inithooks.log. Looking for a second opinion.

JedMeister commented 6 years ago

Thanks @Dude4Linux. I started testing lxc containers yesterday myself and hit a somewhat similar issue. Your log doesn't look quite the same as mine, but certainly similar.

So your LXC containers don't have any ttys at all?! Does SSH still work? I would have expected that to fail without at least one tty (/dev/console is the kernel console AFAIK, but perhaps there is some LXC trickery to work around that?) FWIW, in my tests (on Proxmox), I limited the ttys to 1, but then I could only run one SSH session (any additional SSH sessions would fail).

ttys aside, I'm fairly sure that at least part of the issue is with the ExecStart line in the SystemD service file. If you look at the SysvInit script, it checks to see if fgconsole works as expected (it won't in a container) and, if it doesn't, it skips the trickery to bind inithooks to the foreground tty. The SystemD script doesn't check that, so when it tries to run the command under SystemD in a container, it'll return a non-zero exit code and fail.
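
Roughly, the SysvInit logic is something like this (paraphrasing, not the exact script):

if fgconsole >/dev/null 2>&1; then
    # working virtual terminals: bind inithooks to a spare VT
    FGCONSOLE=$(fgconsole)
    openvt -f -c 8 -s -w -- ${INITHOOKS_PATH}/run
    chvt $FGCONSOLE
else
    # no usable VTs (e.g. in a container): just run the hooks directly
    ${INITHOOKS_PATH}/run
fi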

As to your suggested fixes, whilst preseeding should obviously be supported and is probably the most desirable, it should also be possible to allow interactive inithooks. Most Proxmox users will just download, boot and log in (without knowing about the pre-seeding). So unless they are confronted with the inithooks, we'll likely be flooded with support requests asking what the default passwords are...

As I said, I was trying to use a single tty but the side effect was only being able to use one SSH session, so I wasn't super keen on that. I'm not sure how it worked under SysvInit, as that was (at least in theory) limited to a single tty, but appears to handle multiple SSH sessions no problem... I haven't dug that deep yet, but will have a bit more of a play today.

JedMeister commented 6 years ago

@Dude4Linux - I was just looking through your lxd-make-image script to see what you do and compare it against what I'm doing. I noticed that you are still writing an inittab file. My understanding is that that file was only used by SysvInit and that SystemD will totally ignore it?! So would be superfluous for Stretch builds. Although perhaps I'm wrong...

Also, I see that you are tweaking getty-static.service. That's an interesting approach; I was doing things a little differently and was using /etc/systemd/logind.conf to limit ttys (but as I said, that had some unwanted SSH session side effects).

Also, not sure if you got that from Ubuntu or not, but on Debian Stretch, by my understanding dbus isn't required. Dbus is a "recommends", not a "depends"; some additional info here.

I might have a go following your steps and see what behaviour I get.

Also, one other consideration is that I'm still running Proxmox v4.x so am probably using a much older version of LXC than you!

JedMeister commented 6 years ago

Oops, just realised that your getty-static.service tweaks are within the Wheezy/Jessie section of your script, so only applied in those cases...

Whereabouts are you tweaking the ttys? Or perhaps, as I noted above, my understanding of /etc/inittab is wrong?! Can you perhaps double check for me when you get a chance and see how many (if any) ttys you have in your container. FWIW, in my testing, they show up under /dev/pts/.

Dude4Linux commented 6 years ago

@JedMeister - Originally lxd-make-image was written for version 14.2. I pulled most of the code from buildtasks/bt-container, the lxc-turnkey template, and Stéphane's lxc-to-lxd script. The ttys were removed by Alon in buildtasks/bt-container/patches/container/conf, leaving /dev/console for remote ssh access. I can't recall ever having a problem with multiple ssh connections to a container. I just checked and I have /dev/console and /dev/tty which was never removed. Like you observed, as connections are made they show up under /dev/pts/. I only checked two connections so I don't know if there is a hard limit.

When I started working with stretch, I had to revise the code and adapt it where necessary. I tried to maintain backward compatibility with 14.x whenever possible. Since we're using systemd-sysv, I assumed that /etc/inittab was still required, but if not, it should be moved to the conditional section.

The tweak to getty-static.service comes from the lxc-turnkey template which took it from the lxc-debian template. You'd have to ask Alon to explain why it was needed.

I discovered another issue related to pre-seeding containers. bt-container sets REDIRECT_OUTPUT=true in /etc/default/inithooks. If any of the firstboot.d scripts prompt for user input, the process hangs. I'm not sure if it will ever time out as I don't have the patience to wait. I got bitten by the recent addition of /usr/lib/inithooks/firstboot.d/85secalerts which prompts the user for an email address. I had to add export SEC_ALERTS=SKIP to the pre-seeded /etc/inithooks.conf.
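
For anyone following along, the pre-seeded /etc/inithooks.conf is just a shell fragment of exported variables, something like the excerpt below (SEC_ALERTS=SKIP is the line I had to add; the other variable name and value are only illustrative):

# excerpt of a pre-seeded /etc/inithooks.conf
export ROOT_PASS=SomePassword123
export SEC_ALERTS=SKIP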

JedMeister commented 6 years ago

Thanks for the background and additional info @Dude4Linux.

I can't recall ever having a problem with multiple ssh connections to a container.

No, I've never had issues either.

I just checked and I have /dev/console and /dev/tty which was never removed. Like you observed, as connections are made they show up under /dev/pts/. I only checked two connections so I don't know if there is a hard limit.

Cool thanks for checking that. 2 is better than I managed when I limited ttys to 1. FWIW /dev/tty should just be something of an alias for whichever tty is active, so in this case, /dev/pts/N

I'm thinking that we don't actually need any of that stuff now. I've done a fair bit more testing today and FWIW, it seems that SystemD is completely aware that it's running in a container.

With no modifications made, instead of the normal getty-static.service (as on other TurnKey v15.0 VMs), I have container-getty@0.service - inactive (dead); start condition failed, and container-getty@1.service - active (running). I'm not sure, but I think it may be getty.target - active both in a container and a VM - that takes care of that...?

I've been playing with the inithooks.service file and with a tweaked one applied via the container patch overlay (i.e. overlay/lib/systemd/system/inithooks.service) I can sort of get it to work (still not quite right, but close). However, that's not ideal as it would be overwritten by a package update.

I've been trying to add tweaks via an overlayed /etc/systemd/system/inithooks.service.d/override.conf, which is a preferable measure as that wouldn't be overwritten by package updates (perhaps a double-edged sword?!). FWIW, removing the service file and using the one that SystemD generates from the init.d script appears to work, but isn't ideal either.
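
For the record, the kind of drop-in I mean looks roughly like this (a sketch only; exactly which keys are needed to neutralise the packaged tty handling is what I'm still fiddling with):

# /etc/systemd/system/inithooks.service.d/override.conf
[Service]
StandardInput=null
ExecStart=
ExecStart=/bin/sh -c '${INITHOOKS_PATH}/run'

(The empty ExecStart= clears the packaged command before replacing it.)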

@OnGle did suggest that, worst case scenario, we could create a separate inithooks-container package (made from the same source code as inithooks, but with a different service file). That should be fairly straightforward I would imagine?! So could be a good option. I still need to get it working reliably though. Unfortunately I don't know enough about SystemD so have been doing lots of reading today...

I hope you don't mind, I moved conversation of your other inithooks point to a separate issue: #1073

Dude4Linux commented 6 years ago

Note: I hadn't refreshed my page when I started composing this, so I hadn't seen your latest reply above. Thanks for opening #1073. I was just about to open an issue and you saved me the trouble.

@JedMeister - Because of the way bt-container redirects output to /var/log/inithooks.log, I don't think it's possible to have interactive inithooks in a container. Keep in mind that managing containers is very different than managing hosts. Typically you would login to the host machine using ssh, and from there use lxc-console or lxc-attach on LXC v1; or lxc exec on LXD v2 to connect to the containers. Seldom if ever, do you want to login to the container using ssh. This is not new behavior, but follows what Alon setup in the first LXC appliance.
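
For reference, the typical command forms are along these lines (container name is just an example):

lxc-console -n mycontainer       # LXC v1: attach to the container's console
lxc-attach -n mycontainer        # LXC v1: get a root shell inside the container
lxc exec mycontainer -- bash     # LXD v2: get a shell inside the container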

In developing lxd-make-image and the 15.0 updates for LXC, I've tried to follow the best-practices from running Debian on LXD v2. They are even more radical and do things like removing /dev/console, disabling root login, and disabling ssh altogether.

I've tested the following change to inithooks.service and verified that it runs the firstboot.d scripts when the container is started for the first time.

# cat /lib/systemd/system/inithooks.service
[Unit]
Description=inithooks: firstboot and everyboot initialization scripts
After=console-getty.service
ConditionKernelCommandLine=!noinithooks

[Service]
Type=oneshot
EnvironmentFile=/etc/default/inithooks
ExecStart=/bin/sh -c '${INITHOOKS_PATH}/run'

[Install]
WantedBy=basic.target

I had to guess on the After= line, but it seems to work. I propose using this only in containers.

Dude4Linux commented 6 years ago

@JedMeister - FWIW, I verified that there are no inittabs in containers running Debian 9 or Ubuntu Xenial. My bad for not noticing. I'll modify lxd-make-image on the next update so that /etc/inittab is created only for Wheezy/Jessie. BTW, Wheezy is included because when I started this project over a year ago, Wheezy was still supported.

JedMeister commented 6 years ago

@Dude4Linux

Apologies on the length of this...

Because of the way bt-container redirects output to /var/log/inithooks.log, I don't think it's possible to have interactive inithooks in a container.

v14.x interactive inithooks on first login (via SSH or directly) within an LXC container work really well. I'm aiming to recreate that experience with v15.x.

As I noted over on #1073: "[assuming no user preseeding] the inithooks run twice, once self-preseeded (which is what brings up the [init]fence). Then they re-run interactively, and sit there waiting for the user to log in."

Keep in mind that managing containers is very different than managing hosts. Typically you would login to the host machine using ssh, and from there use lxc-console or lxc-attach on LXC v1; or lxc exec on LXD v2 to connect to the containers.

Whilst that is clearly a legitimate use case scenario, and may be the way you usually work, I wouldn't say that's always the case. IMO in a multi-tenant type arrangement, it is much better to only allow users access to their own LXC container rather than the host machine as well.

Whilst I'm aware that LXC still doesn't (and may never) provide the same level of separation and security that OpenVZ did, SSH still serves that purpose fairly nicely. E.g. we have a user who is using it in an educational setting where each student has their own LXC container(s) - initially launched by the teacher. Giving all individual students access to the host is highly undesirable IMO.

Seldom if ever, do you want to login to the container using ssh.

Personally, my host machine is a purpose built headless server and I almost always access the individual container I wish to work with, directly via SSH. I rarely log into the host machine (other than to create the initial container). Having to first log into the host, then enter the machine requires additional commands which are essentially redundant in my use case. I would hate to see that added as a requirement and I doubt that I'm the only one.

FWIW, even in their marketing hype, Ubuntu note: "[...] Install SSH and log into them remotely, they [LXD containers] behave just like real machines."

This is not new behavior, but follows what Alon setup in the first LXC appliance.

Not that it matters, but circa 2010, with a little help from Adrian, I did the initial PoC container (OpenVZ) implementation. Alon then partnered with Proxmox and implemented the "official" TurnKey (v12.0 IIRC) OpenVZ implementation (based on the work that Adrian and I had done).

For v14.0, Anton and I did the transition from OpenVZ to LXC. Interactive inithooks within a container via first login (SSH or otherwise) have always worked since Alon's initial OpenVZ implementation. I'm hoping it won't require as much work to get that working this time as it did with the v14.0 transition from OpenVZ, but we'll have to see.

Considering that the current TurnKey "container" build is first and foremost (at least at this point) a Proxmox integration, support for interactive inithooks is a requirement. Otherwise, we'll be flooded with support requests on what the default passwords are (IME that's what people expect unless they are presented with the inithooks, or read the docs).

They are even more radical and do things like removing /dev/console, disabling root login, and disabling ssh altogether.

FWIW, if you wish to disable root, then inithooks provides an easy way to do that via turnkey-sudoadmin. It's set to false by default, but can easily be enabled for firstboot (or at buildtime).

I've tested the following change to inithooks.service and verified that it runs the firstboot.d scripts when the container is started for the first time.

Thanks for sharing this John. That's really handy. I probably should have shared exactly what I'd tried yesterday, but as I hadn't got to the point of reliability (and was trying out a few different things and doing lots of reading) I wasn't sure of the value. I will endeavour to post back how I go later today.

I propose using this only in containers.

As I noted yesterday, that is one way to go, but to do it properly, would require us to create a new inithooks package that we install in containers (otherwise /lib/systemd/system/inithooks.service would be overwritten by future inithooks package updates). I have no fundamental aversion to going that path and it may be the best path. Perhaps it could be called inithooks-container, inithooks-lxc or similar.

One of the other alternatives that I've been looking at, is to use a local systemd override.conf file that is included as part of the overlay. As I say though, that is the double edged sword. On one hand, we wouldn't need to provide an additional/alternative inithooks package. On the other, if it's only included as an overlay, we'd likely need to rebuild the LXC containers if we discover a bug at some point in the future...

BTW, Wheezy is included because when I started this project over a year ago, Wheezy was still supported.

FWIW Debian LTS support for Wheezy ends end of this month: https://wiki.debian.org/LTS

Dude4Linux commented 6 years ago

@JedMeister - You've made some excellent points. I guess I've been viewing LXC and LXD through the lens of how I intended to use it as a testing environment when paired with Ansible. I agree that for the multi-user environment that you describe, having ssh access control on a per-container basis is a must. That can be set up using either LXC or LXD via pre-seeding. It would also be easy to use Ansible to create a complete set of containers with individual user names and passwords.

After thinking about it for a while and doing some checking, it seems to me that we need inithooks.service to be aware of when it is running in a container. Currently, both LXC v1 and LXD v2 set an environment variable in each container, i.e. container=lxc. This should also work for Proxmox, but you probably have to test for Docker as well. After a little googling I came up with the following:

#!/bin/bash
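# detect a containerised environment: Docker shows up in PID 1's cgroup,
# while LXC/LXD set container=lxc in PID 1's environment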
if grep -qa 'docker' /proc/1/cgroup; then
  echo "I'm running on docker."
elif grep -qa 'container=lxc' /proc/1/environ; then
  echo "I'm running in container."
fi 

Of course you will have to adjust the logic accordingly.

JedMeister commented 6 years ago

@Dude4Linux - As you would have noticed, I didn't post back yesterday... Mostly because I didn't really have any real progress... I tried your systemd service file, and whilst it works beautifully when pre-seeding, it doesn't work so well when not pre-seeding. It only runs the first time and brings the fence up, then exits. Trying to restart it does nothing. The only way to work around it is to manually invoke turnkey-init, which is not ideal.

So I'll need to do some more reading and perhaps even reach out to someone who knows more than me for assistance (there is so much to read about SystemD!). One thing I am going to try shortly is to see what happens if I just remove the default service file altogether (and let SystemD generate its own one from the old init.d script).

One thing that I did turn up in my digging through SystemD docs though, was that SystemD can already tell what environment it's running in! The command systemd-detect-virt returns a string matching the virtualisation/containerisation. It's pretty cool!

That functionality can also be leveraged in SystemD service files, either by specific virtualised environment, or simply container or vm. E.g. adding ConditionVirtualization=lxc to the [Unit] section of the service file means the service will only run when running as an LXC container.
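
For instance, inside an LXC container, systemd-detect-virt reports:

# systemd-detect-virt
lxc

and the corresponding condition in a unit file is just:

[Unit]
ConditionVirtualization=lxc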

JedMeister commented 6 years ago

@Dude4Linux - Ok had a fair bit of progress today! :smile:

I've pretty much got it working as intended. As I had other inithook bugs to troubleshoot which I thought may have also been to do with the service file, I did (temporarily) revert to using the autogenerated SystemD service file (i.e. removing the default one supplied with the package and allowing SystemD to generate one from /etc/init.d/inithooks). But after I'd worked through those other issues, I tested with your file and it worked great, so thanks for that.

I'm not 100% sure of the best way to proceed with the SystemD service file. I feel like we could do better than overlaying a file handled by package management. Ideally I think it would be good to leverage ConditionVirtualization=lxc somehow to make it so it works no matter where it's running. So I might see if I can get the attention of someone a little more versed in SystemD. OTOH I guess it's pretty unlikely that a user would update the inithooks package (and thus restore the default file) before a container is initialised!

I haven't merged with TurnKey repo yet as I need to double check against the latest revision of your scripts, to make sure I'm not missing anything. But FWIW, here's a couple of things I've done:

You can see my latest patches/container/conf here. I'll probably rework the commits a little before I merge too, but the file itself should be good (works fine for me on Proxmox v4.x).

JedMeister commented 6 years ago

(please note: no need to read the background/thread above; I'll include all required info in this post)

@lamby - I'm seeking your advice on the best path to provide an alternate systemd service file for the LXC builds of our appliances.

The default SystemD inithooks.service file fails within an LXC container (please ask and/or read above if you'd like more info). @Dude4Linux has developed an alternate inithooks.service file which works as intended within an LXC container.

For development and testing, we're overwriting the packaged inithooks.service file, but obviously that is bad practice (locally overwriting files handled by package management). IMO the "right" path would clearly be to package this alternate service file.

However, I'm not clear on the "best practice" path to do that.

A few options spring to mind (and there are no doubt others I haven't considered):

  1. As the default service fails within a container, we could add something like ConditionVirtualization=lxc (or perhaps just ConditionVirtualization=container) to the [Unit] section of the alternate inithooks.service (inithooks-lxc.service or inithooks-container.service perhaps?) so it wouldn't run anywhere else. We could then have both services enabled by default and let the failure (or not) of the respective services control which one runs.

  2. Include both services within the package, with the default service enabled and the LXC version disabled. Then at LXC build time disable the default and enable the LXC specific version.

  3. Include both service files within the package source and provide 2 binary packages; the default inithooks package (with the default service) and an inithooks-lxc package (with the LXC service). Or perhaps even better, 3 binary packages: an inithooks-common package along with inithooks-default and inithooks-container.

FWIW 3 strikes me as being the "cleanest" option, but am I just complicating things? I'd really appreciate your thoughts.

lamby commented 6 years ago

Silly question without much context - why not just fix the inithooks.service file itself in the package it is currently shipped in? Or do we definitely need two versions..?

JedMeister commented 6 years ago

@lamby - Apologies if there was something missing. I thought I'd provided all the relevant and required info. A single service file is a fine idea but I'm not sure how we can do that?

Can you offer suggestion on how we can merge these 2 SystemD files into one that will work across both platforms?

The one you created (which works great, except within a container):

[Unit]
Description=inithooks: firstboot and everyboot initialization scripts
After=getty@tty8.service
ConditionKernelCommandLine=!noinithooks

[Service]
Type=oneshot
StandardInput=tty-force
TTYPath=/dev/tty8
TTYReset=yes
TTYVHangup=yes
TTYVTDisallocate=yes
EnvironmentFile=/etc/default/inithooks
ExecStart=/bin/sh -c '\
    FGCONSOLE=$(fgconsole); \
    openvt -f -c 8 -s -w -- ${INITHOOKS_PATH}/run; \
    chvt $FGCONSOLE'

[Install]
WantedBy=basic.target

The one that works within a container:


[Unit]
Description=inithooks: firstboot and everyboot initialization scripts
After=console-getty.service
ConditionKernelCommandLine=!noinithooks

[Service]
Type=oneshot
EnvironmentFile=/etc/default/inithooks
ExecStart=/bin/sh -c '${INITHOOKS_PATH}/run'

[Install]
WantedBy=basic.target

FWIW from my brief testing, it appears that if we remove the "proper" SystemD service file you created, then the auto-generated one (that leverages the /etc/init.d/inithooks script) works fine across both platforms. So I guess that's an option? Although I was working on the assumption that you didn't create that file just for the hell of it?! If allowing SystemD to generate a service file that leverages the init.d script is an acceptable way of doing things, why bother creating a new one?!

As a further thought, is using logic somewhere within the Debian install scripts to generate the appropriate service file at install time an acceptable option? If so where would that logic best be inserted?

lamby commented 6 years ago

The one you created

Hang on one sec.. I did?

JedMeister commented 6 years ago

Hang on one sec.. I did?

Oops. My bad...! Looks like Alon created it, you just renamed it... Sorry my bad!

Regardless, can you help out with this? Or recommend someone who might be able to?

And/or answer my other questions/suggestions?

JedMeister commented 6 years ago

@lamby - apologies again on my mistake... That certainly does explain your confusion! :smile:

So perhaps I DO need to give you some additional background! Or perhaps you're not even the best person to give advice...?!

Unfortunately, I don't know a lot about SystemD. I did spend a lot of time reading last week, but I still don't know the best path forward with this... Bottom line is that my research, reading and trial-and-error (that soaked up much of last week), suggests that we do need 2 different service files; one when running in a container, one for everywhere else.

So the options that strike me:

  1. multiple inithooks packages with different service files
  2. somehow merge the 2 service files (as you suggested, but I have no idea how that could be accomplished, or even if it's possible)
  3. generate/choose the required service file via debian install logic (at install time)
  4. remove the SystemD service file altogether and allow SystemD to auto-generate a service file which depends on the existing init.d script
  5. some other better options I'm unaware of...
lamby commented 6 years ago

I have so little LXD experience that I don't think I can speak directly to the merging, but if it's just a case of detecting whether one is in a maintainer and running a different command (is that the case?) then going with option 2 would seem most sensible to me.

Not quite sure how 3 would work. Would that be also detecting a containerised environment at install-time? If so, that would be buggy in the case you are installing the package into a chroot to be subsequently run inside a container. ie. we must do runtime detection if we go down the detection route AFAICT.

JedMeister commented 6 years ago

Thanks as per always for your input @lamby

Not quite sure how 3 would work [...] would be buggy [...] must do runtime detection [...]

Good call - let's rule out 3 altogether then!

Let's also put options 1 & 4 aside for now (i.e. fall back options if need be).

An additional option (6!) is to simply include both service files in the package and just let SystemD work out which to start!? (I'll discuss more below)

[...] if it's just a case of detecting whether one is in a maintainer [container] and running a different command (is that the case?) then going with option 2 would seem most sensible to me.

Different start logic (e.g. ExecStart=/bin/sh -c "if abc; then ghi; else xyz") should be possible. Worst case, via a shell wrapper script - which apparently is completely legitimate, so long as the wrapper execs the final target service process (see this 'nix StackExchange answer).
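
E.g. a wrapper along these lines (a rough sketch only, leaning on systemd-detect-virt as per my earlier comment; INITHOOKS_PATH would come from the EnvironmentFile as it does now):

#!/bin/sh
# pick the behaviour at runtime, exec'ing the final target process
if [ "$(systemd-detect-virt --container)" = "lxc" ]; then
    exec ${INITHOOKS_PATH}/run
else
    FGCONSOLE=$(fgconsole)
    openvt -f -c 8 -s -w -- ${INITHOOKS_PATH}/run
    exec chvt $FGCONSOLE
fi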

The bigger issue (and/or perhaps an answer? - i.e. new option "6" above) is the start condition. getty@tty8.service doesn't run inside a container, so the trigger condition (i.e. After=getty@tty8.service) will never occur. Instead, a container has console-getty.service (disabled by default, but running within a container with no intervention from me - perhaps if I understood the how & why of that, I might have another possibility?).

So as noted above, providing both service files within the inithooks package and letting SystemD sort out which one to start is another possibility?! I.e. if getty@tty8.service is running, the default service will start, if console-getty.service, then the LXC variant will start.

Any thoughts on that potential approach?

It should work, but the service not running will be logged as failed. FWIW within a LXC container, many services fail in that way, so will be just another...

I'll continue to do some more reading on SystemD, as it seems likely that there is something that I'm still missing and perhaps a cleaner path. Unfortunately, it appears that this isn't a particularly common problem, or perhaps I just need to use some alternate search terms (or just RTFM start to finish; instead of trying to cut corners...).

lamby commented 6 years ago

Hmm, indeed, I fear the After trigger condition probably implies that a single service file won't work with just a naive shell conditional/wrapper.

Wait a sec, is this just as simple as shipping one container with ConditionVirtualization=true and ConditionVirtualization=false? I think those are checked prior to everything else thus it won't be marked as failed but rather "not running", or whatever it is.

detecting whether one is in a ~maintainer~ [container]

Haha, well-spotted. \o/

JedMeister commented 6 years ago

Hmm, indeed, I fear the After trigger condition probably implies that a single service file won't work with just a naive shell conditional/wrapper.

That was my conclusion too.

Wait a sec, is this just as simple as shipping one container with ConditionVirtualization=true and ConditionVirtualization=false? I think those are checked prior to everything else thus it won't be marked as failed but rather "not running", or whatever it is.

Unfortunately, that won't quite work either as ConditionVirtualization=true will also catch other virtualisation, e.g. qemu, kvm, vmware, etc. (where we want to use the "normal" service).

ConditionVirtualization=container would work for the "container only" service file though, so that's something... And as I said, the container journal is already full of failed services, so one more is probably not that big a deal.

See http://0pointer.de/blog/projects/detect-virt.html for further details (note that ConditionVirtualization=container looks like it will also detect chroot which may not be an issue, but may perhaps complicate things?)

However, it appears that the console-getty.service (only in a container) is enabled/triggered by a slice!? i.e.:

root@tkl-150rc1 ~# systemctl status console-getty.service
* console-getty.service - Console Getty
   Loaded: loaded (/lib/systemd/system/console-getty.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-05-04 10:29:19 UTC; 4 days ago
     Docs: man:agetty(8)
 Main PID: 197 (agetty)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/console-getty.service
           `-197 /sbin/agetty --noclear --keep-baud console 115200,38400,9600 linux

May 04 10:29:19 tkl-150rc1 systemd[1]: console-getty.service: Failed to reset devices.list: Operation not permitted
May 04 10:29:19 tkl-150rc1 systemd[1]: Started Console Getty.
May 04 10:29:24 tkl-150rc1 systemd[1]: console-getty.service: Failed to reset devices.list: Operation not permitted
May 04 10:29:48 tkl-150rc1 systemd[1]: console-getty.service: Failed to reset devices.list: Operation not permitted
May 04 10:32:29 tkl-150rc1 systemd[1]: console-getty.service: Failed to reset devices.list: Operation not permitted

I still haven't read enough about them yet, but that may be an option for us too?!

lamby commented 6 years ago

Perhaps one of the ConditionalFoo then? ConditionPathExists=/file-only-in-lxc-container ? :) Hm, isn't there even a dynamic Conditional-like, although I don't think it is called that. Oh, hm, there is https://www.freedesktop.org/software/systemd/man/systemd.generator.html. Never used it, mind you...

JedMeister commented 6 years ago

Perhaps one of the ConditionalFoo then? ConditionPathExists=/file-only-in-lxc-container ? :)

Could be a good option.

Hm, isn't there even a dynamic Conditional-like, although I don't think it is called that. Oh, hm, there is https://www.freedesktop.org/software/systemd/man/systemd.generator.html. Never used it, mind you...

You're full of good ideas mate! :smile: That could be an even better option?!

I'll keep reading then do some playing...

lamby commented 6 years ago

Hm, isn't there even a dynamic Conditional-like

Sorry, by that I meant I think I remember running into something (although I don't recall it having a Conditional prefix) that would not execute unless the script returned true...

JedMeister commented 6 years ago

Yes there are a number of Conditions (used within the [Unit] section) which could serve our purpose. So we could have 2 separate service files. The default one would have ConditionVirtualization=!lxc and the LXC one would have the inverse; ConditionVirtualization=lxc (or something like that).

In retrospect, that would have probably been the quickest and easiest way to go and could perhaps still be the best option?! (please share your thoughts.)

But me being me, I spent the afternoon playing with the systemd.generator. I haven't yet double checked that it works as intended in a "normal" VM build, but it should (or at least be close). We'll also need to double check that a build with the package installed into the chroot and then converted into an LXC container adjusts itself as expected too (but I'm quietly confident).

Regardless, this works pretty sweet inside an LXC container!:

root@tkl-150rc1 ~# cat /lib/systemd/system-generators/inithooks-service-generator 
#!/bin/bash -e

serviceName="inithooks"
description="$serviceName: firstboot and everyboot initialization scripts"

string="^container="
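# extract the value of container= from PID 1's environment (LXC and LXD both set container=lxc)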
container=$(cat /proc/1/environ | tr '\0' '\n' | sed -n "/$string/ s|$string||"p)

if [ "$container" = "lxc" ]; then
    after="console-getty.service"
    additionalLines="ExecStart=/bin/sh -c '\${INITHOOKS_PATH}/run'"
else
    after="getty@tty8.service"
    additionalLines="StandardInput=tty-force
TTYPath=/dev/tty8
TTYReset=yes
TTYVHangup=yes
TTYVTDisallocate=yes
ExecStart=/bin/sh -c '\
    FGCONSOLE=\$(fgconsole); \
    openvt -f -c 8 -s -w -- \${INITHOOKS_PATH}/run; \
    chvt \$FGCONSOLE'"
fi

generatorDir=$1 # normal-dir i.e. /run/systemd/generator
unitFile=$generatorDir/$serviceName.service

cat > "$unitFile" <<EOF
[Unit]
Description=$description
After=$after
ConditionKernelCommandLine=!noinithooks

[Service]
Type=oneshot
EnvironmentFile=/etc/default/inithooks
$additionalLines

[Install]
WantedBy=basic.target
EOF

mkdir "$generatorDir/basic.target.wants" 2>/dev/null
ln -s "$unitFile" "$generatorDir/basic.target.wants/$serviceName.service"

And here's the generated service file and symlink, created at boot time:

root@tkl-150rc1 ~# cat /run/systemd/generator/inithooks.service 
[Unit]
Description=inithooks: firstboot and everyboot initialization scripts
After=console-getty.service
ConditionKernelCommandLine=!noinithooks

[Service]
Type=oneshot
EnvironmentFile=/etc/default/inithooks
ExecStart=/bin/sh -c '${INITHOOKS_PATH}/run'

[Install]
WantedBy=basic.target
root@tkl-150rc1 ~# ls -l /run/systemd/generator/basic.target.wants/inithooks.service 
lrwxrwxrwx 1 root root 40 May  9 08:40 /run/systemd/generator/basic.target.wants/inithooks.service -> /run/systemd/generator/inithooks.service

And the service in action:

root@tkl-150rc1 ~# systemctl status inithooks.service                                       
* inithooks.service - inithooks: firstboot and everyboot initialization scripts
   Loaded: loaded (/run/systemd/generator/inithooks.service; generated; vendor preset: enabled)
   Active: inactive (dead) since Wed 2018-05-09 08:41:00 UTC; 21min ago
 Main PID: 207 (code=exited, status=0/SUCCESS)

May 09 08:40:58 tkl-150rc1 inithooks[237]: INFO [turnkey-sudoadmin]: ssh_authorizedkeys_inithook root
May 09 08:40:58 tkl-150rc1 inithooks[237]: INFO [turnkey-sudoadmin]: ssh_authorizedkeys_merge root admin
May 09 08:40:58 tkl-150rc1 inithooks[237]: INFO [turnkey-sudoadmin]: permitrootlogin_ssh yes
May 09 08:40:58 tkl-150rc1 inithooks[237]: INFO [turnkey-sudoadmin]: user_state root unlock
May 09 08:40:58 tkl-150rc1 inithooks[237]: passwd: password expiry information changed.
May 09 08:40:58 tkl-150rc1 inithooks[237]: INFO [turnkey-sudoadmin]: user_state admin lock
May 09 08:40:58 tkl-150rc1 inithooks[237]: INFO [turnkey-sudoadmin]: set_sshrootbanner
May 09 08:40:58 tkl-150rc1 inithooks[237]: INFO [turnkey-sudoadmin]: restart_sshd
May 09 08:40:58 tkl-150rc1 inithooks[237]: Restarting ssh (via systemctl): ssh.service.
May 09 08:41:00 tkl-150rc1 systemd[1]: Started inithooks: firstboot and everyboot initialization scripts.

FWIW it still (auto) generates an inithooks.service file from the init.d script (saved to /run/systemd/generator.late). That had me baffled for a little while, until I realised it was overwriting my generated service file! (I was initially generating a "late" script which was being overwritten by the systemd-sysv-generated one).

So looks like we have 2 workable and relatively clean options now. I must say I quite like the idea of the generator, but @lamby I would really appreciate your input/thoughts.

lamby commented 6 years ago

So we could have 2 separate service files. The default one would have ConditionVirtualization=!lxc and the LXC one would have the inverse; ConditionVirtualization=lxc (or something like that).

In retrospect, that would have probably been the quickest and easiest way to go and could perhaps still be the best option?! (please share your thoughts.)

Oh, I interpreted your message in https://github.com/turnkeylinux/tracker/issues/1071#issuecomment-387596229 as meaning that ConditionVirtualization= only took booleans. Going with =lxc etc. seems obviously the right approach in that light. I only skipped the manual pages, sorry..

Dude4Linux commented 6 years ago

@JedMeister @lamby I've been trying to follow this discussion while working on getting apt-cacher-ng to work. One concern I have re: using ConditionVirtualization=lxc is that Proxmox is also using LXC and that condition is probably set there as well. Unlike LXC, Proxmox provides a VNC console so Inithooks doesn't have to be preseeded, in fact I don't know how to preseed a Proxmox CT.

@JedMeister - That generated file looks awfully familiar. Are you sure that's not the one I gave you? If not, then wow! I'm pretty sure that it won't work in a VM, but I haven't tested it.

I believe the reason the original file doesn't work in LXC containers, is that all the /dev/tty's were removed as unneeded. Perhaps this is a sign that we went too far. I see from testing a Proxmox container, that two tty's were retained, presumably for use by the VNC console. Perhaps we should consider restoring two of the tty's in the LXC containers.

JedMeister commented 6 years ago

@Dude4Linux

One concern I have re: using ConditionVirtualization=lxc is that Proxmox is also using LXC and that condition is probably set there as well. Unlike LXC, Proxmox provides a VNC console so Inithooks doesn't have to be preseeded, in fact I don't know how to preseed a Proxmox CT.

AFAIK Proxmox builds can be pre-seeded the same as they can on vanilla LXC, although I haven't tested that for v15.0 (yet). So I don't anticipate any issue there.

FYI my testing to date has all been on Proxmox v4.x, interactively completing the inithooks via SSH, although they can also be done via the NoVNC terminal.

The plan is now to use that exact same service file (that you created @Dude4Linux ), but with the addition of ConditionVirtualization=lxc in the [Unit] section.

We'll also need to call it something slightly different (I'm planning inithooks-lxc.service to be explicit) to avoid a namespace clash. It will also need to conflict with inithooks.service (to avoid SystemD also trying to start the inithooks.service it will auto generate from the init.d script). I'm not yet 100% certain of what that will require, but my reading from yesterday assures me it's possible.
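
To be explicit, what I have in mind is essentially your file, renamed, with the extra condition and the conflict added - something like this (untested as written):

[Unit]
Description=inithooks (LXC): firstboot and everyboot initialization scripts
After=console-getty.service
Conflicts=inithooks.service
ConditionVirtualization=lxc
ConditionKernelCommandLine=!noinithooks

[Service]
Type=oneshot
EnvironmentFile=/etc/default/inithooks
ExecStart=/bin/sh -c '${INITHOOKS_PATH}/run'

[Install]
WantedBy=basic.target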

As soon as I have something for you to test, I'll share it here.

That generated file looks awfully familiar. Are you sure that's not the one I gave you? If not, then wow! I'm pretty sure that it won't work in a VM, but I haven't tested it.

Yep! :smile: It essentially is the one you gave me, but rather than being a static file, it's generated by the generator script I posted above it (/lib/systemd/system-generators/inithooks-service-generator)! Pretty cool huh!?

Despite the coolness, next up, I'll be testing 2 separate service files (inithooks.service & inithooks-lxc.service) which will be mutually exclusive and only run on the appropriate build (as discussed/noted above).

I believe the reason the original file doesn't work in LXC containers, is that all the /dev/tty's were removed as unneeded. Perhaps this is a sign that we went too far.

You mean the one created by Alon and included in the package? If so, I'm pretty sure there's more to it than that. My tests suggested that on PVE, it still failed unless it was whittled down to what you shared.

FWIW in my tests (on PVE) I'm not adjusting the container ttys at all. I'm just leaving it to SystemD to sort it out.

I see from testing a Proxmox container, that two tty's were retained, presumably for use by the VNC console. Perhaps we should consider restoring two of the tty's in the LXC containers.

Do you mean a v14.x container on Proxmox? Or have you built a v15.0 one to test?

Also, I'm curious to see if you're seeing different behaviour of v15.0 in Proxmox vs vanilla LXC (and/or LXD)?

If so that may potentially cause issue with the interactive inithooks. I was under the impression that the tty setup was just what SystemD was doing itself?! Anyway, I'll cross that bridge when I get to it (unless you beat me to it).

Perhaps what I could do, is once I have things working well, I could upload a full container build somewhere so we can be sure we're both on the same page.

Dude4Linux commented 6 years ago

@JedMeister

Do you mean a v14.x container on Proxmox? Or have you built a v15.0 one to test?

I was referring to a v14.2 container on Proxmox. I've looked but couldn't find a v15.0 proxmox image to test. In a v14.2 zurmo container on a Proxmox 4.4 server

# ls /dev
console  full  null  pts     shm     stdin   tty   tty2     xconsole
fd       log   ptmx  random  stderr  stdout  tty1  urandom  zero

tty1 is used for the VNC console connection.

In a v15.0 core container on LXD

# ls /dev
console  fuse   lxd     null  random  stdin   urandom
fd       log    mqueue  ptmx  shm     stdout  xconsole
full     loop0  net     pts   stderr  tty     zero

Note that /dev/tty1 and /dev/tty2 are missing from the LXC/LXD container, and that /dev/tty8 is missing from both Proxmox and LXC. Earlier you asked if it was possible to use a single image for Proxmox, LXC, and eventually LXD. Toward that goal, I think we should put back (or stop removing) the two ttys and change inithooks.service to use tty2 instead of tty8 for the virtual console. That way, one service file might work for all platforms. I can't speak for other targets, i.e. Docker, Xen, AWS etc.

[Unit]
Description=inithooks: firstboot and everyboot initialization scripts
After=getty@tty2.service
ConditionKernelCommandLine=!noinithooks

[Service]
Type=oneshot
StandardInput=tty-force
TTYPath=/dev/tty2
TTYReset=yes
TTYVHangup=yes
TTYVTDisallocate=yes
EnvironmentFile=/etc/default/inithooks
ExecStart=/bin/sh -c '\
    FGCONSOLE=$(fgconsole); \
    openvt -f -c 2 -s -w -- ${INITHOOKS_PATH}/run; \
    chvt $FGCONSOLE'

[Install]
WantedBy=basic.target

I don't think the LXC containers will care since they work with pre-seeding and inithooks output is being redirected to a log file. Keep in mind that this is just a theory and I haven't yet had time to try it.

JedMeister commented 6 years ago

I've looked but couldn't find a v15.0 proxmox image to test.

No there isn't one yet, but as soon as I have this working reliably (I think I'm close - but am now trying to get the init-fence working a little better), I'll upload one somewhere and post a link and it'd be awesome if you can test if out. I was hoping to get it done today, but have run out of time...

In a v15.0 core container on LXD

# ls /dev
console  fuse   lxd     null  random  stdin   urandom
fd       log    mqueue  ptmx  shm     stdout  xconsole
full     loop0  net     pts   stderr  tty     zero

Note that /dev/tty1 and /dev/tty2 are missing from the LXC/LXD container, and that /dev/tty8 is missing from both Proxmox and LXC. Earlier you asked if it was possible to use a single image for Proxmox, LXC, and eventually LXD. Toward that goal, I think we should put back (or stop removing) the two ttys and change inithooks.service to use tty2 instead of tty8 for the virtual console. That way, one service file might work for all platforms.

Yeah, it seems that, in a container, systemd doesn't spawn any ttys other than those.

From what I gather, it's the preconfiguration of systemd (i.e. by the systemd devs), plus perhaps some OS level tweaking, that is "removing" the ttys. I'm basing that assumption on the fact that I've made no modifications at all and systemd appears to be completely aware that it's running in a container and thus has a limited number of services running. On a v15.0 Core LXC container on PVE 4.x, here's what I get:

root@tkl-150rc1 ~# ls /dev
console  fd    hugepages  log     null  pts     shm     stdin   tty   tty2     zero
core     full  initctl    mqueue  ptmx  random  stderr  stdout  tty1  urandom

It is interesting that you are seeing somewhat different results to me. I wonder if that is something to do with the creation process? Or whether it's something to do with the newer version of LXC, or perhaps the different underlying OS? I guess we'll see once you test a template that I've created...

Regardless, the service file you initially created works a treat on LXC on Proxmox. The interactive inithooks are accessible via the NoVNC terminal window, or when logging in via SSH, so everything is pretty much as it should be! FYI, the fgconsole, openvt and chvt commands all fail within a (LXC) container (AFAIU because virtual terminals are limited/non-existent) so the (simplified) ExecStart line from your script is definitely required.

FWIW for v14.0 we adjusted the init.d script to test whether fgconsole works and, if not (i.e. in a container), simply do ${INITHOOKS_PATH}/run. Because we were still using sysv-init in the v14.x PVE/LXC builds, we weren't seeing this issue as the service file was never being launched. Now we're using systemd, Alon's service file fails under LXC (because the fgconsole command fails).

I did actually try re-adding some of the other tty stuff to the service file yesterday (out of interest), and that only made weird stuff happen and it became unstable.

I've reverted all that today and have thoroughly tested using your initial LXC service file (with the additional condition so that it only runs on LXC). I still need to test the service files within an ISO build (installed to a VM) to ensure that it works there too, but I'm feeling incredibly confident.

I've also been working on the init-fence a bit too and some other bug fixes in the inithooks package. I was hoping to look at some debug facility which would have made my life a lot easier last week, and would make confirming everything is working as intended much easier in the future, but again, I've run out of time. I'll see how I go next week, but so long as everything is working now, I'm inclined to leave that for another day.

Early next week I'll put it altogether (with new inithooks package) and rebuild the ISO and do some more tests in a VM, to ensure that I haven't broken anything. Assuming that all works, I'll rebuild an LXC (from my new ISO) and retest that. Once it all looks ok from my end, I'll upload that so you can have a look too if you want?

Until next week...

JedMeister commented 6 years ago

closed as part of https://github.com/turnkeylinux/inithooks/commit/34136c3b6bde230b49fe2fb7e7bfae6939d8e3e5

@Dude4Linux - Here's a v15.0 Proxmox/LXC build of Core for you to test out: http://dev-apt.jeremydavis.org/builds/debian-9-turnkey-core_15.0rc1-1_amd64.tar.gz http://dev-apt.jeremydavis.org/builds/debian-9-turnkey-core_15.0rc1-1_amd64.tar.gz.hash

(please note the hashfile isn't signed)

Dude4Linux commented 6 years ago

@JedMeister - I downloaded the files and placed them in my LXC image cache, since they can't be downloaded from the TurnKey mirrors. The first thing I noticed is that I need to modify lxc-turnkey to relax the version check and accept the rc1 extension. Secondly I had to disable the gpg verify as it failed with an unsigned hashfile. That's a good thing. Once I got it to create a container, I noted the following:

INFO [lxc-turnkey]: patch container for sysvinit
sed: can't read /var/lib/lxc/core-15-br0/rootfs/etc/init/cgmanager.conf: No such file or directory
sed: can't read /var/lib/lxc/core-15-br0/rootfs/etc/init.d/cgmanager: No such file or directory

The section for patching the container for sysvinit needs to be made conditional on the version and only done for jessie.
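
Something along these lines should do it (a sketch only; the variable holding the release codename depends on how lxc-turnkey already detects the version):

# only apply the sysvinit patching on the old releases
case "$codename" in
    jessie)
        # existing sysvinit patching (cgmanager etc.) goes here
        ;;
esac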

insserv: warning: current start runlevel(s) (empty) of script `checkroot.sh' overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (S) of script `checkroot.sh' overrides LSB defaults (empty).
insserv: warning: current start runlevel(s) (empty) of script `checkroot.sh' overrides LSB defaults (S).
update-rc.d: error: umountfs Default-Start contains no runlevels, aborting.
insserv: warning: current start runlevel(s) (empty) of script `hwclock.sh' overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (0 6 S) of script `hwclock.sh' overrides LSB defaults (0 6).

checkroot.sh, umountfs, and hwclock.sh can't be run in containers. Alon chose to disable them in LSB defaults, but that results in these errors every time a container is created. I recommend removing them entirely from the Proxmox images. I've been in the habit of also removing confconsole from the LXC/LXD images since we are pre-seeding. I see that you have left it installed in the Proxmox image. I have usually used it to initialize Proxmox VMs but I seldom use containers there, so I'm not sure if it's used as an alternative to pre-seeding. Perhaps I should be leaving it installed but disabled in LXC/LXD so it can be used for, say, installing Let's Encrypt.

After starting the v15.0rc1 container, inithooks appears to run as desired. I see you chose to use an additional service, inithooks-lxc, for containers. Here are the contents of syslog after the initial startup. It looks like there are some issues here, so I will have to investigate further:

root@core-15-br0 /var/log# cat syslog 
May 23 19:10:43 core-15-br0 systemd-sysctl[22]: Couldn't write '0' to 'net/ipv6/conf/all/send_redirects', ignoring: No such file or directory
May 23 19:10:43 core-15-br0 systemd-sysctl[22]: Couldn't write '0' to 'net/ipv4/tcp_timestamps', ignoring: No such file or directory
May 23 19:10:43 core-15-br0 systemd[1]: systemd-journal-flush.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Starting Flush Journal to Persistent Storage...
May 23 19:10:43 core-15-br0 systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
May 23 19:10:43 core-15-br0 systemd[1]: Reached target Local File Systems (Pre).
May 23 19:10:43 core-15-br0 systemd[1]: Reached target Local File Systems.
May 23 19:10:43 core-15-br0 systemd[1]: networking.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Starting Raise network interfaces...
May 23 19:10:43 core-15-br0 systemd[1]: lvm2-monitor.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Started Flush Journal to Persistent Storage.
May 23 19:10:43 core-15-br0 systemd[1]: systemd-tmpfiles-setup.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Starting Create Volatile Files and Directories...
May 23 19:10:43 core-15-br0 systemd[1]: systemd-journal-flush.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Started Create Volatile Files and Directories.
May 23 19:10:43 core-15-br0 systemd[1]: Reached target System Time Synchronized.
May 23 19:10:43 core-15-br0 systemd[1]: systemd-update-utmp.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Starting Update UTMP about System Boot/Shutdown...
May 23 19:10:43 core-15-br0 systemd[1]: systemd-tmpfiles-setup.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Started Update UTMP about System Boot/Shutdown.
May 23 19:10:43 core-15-br0 systemd[1]: Reached target System Initialization.
May 23 19:10:43 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:10:43 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:10:43 core-15-br0 systemd[1]: Started ACPI Events Check.
May 23 19:10:43 core-15-br0 systemd[1]: Reached target Paths.
May 23 19:10:43 core-15-br0 systemd[1]: Listening on UUID daemon activation socket.
May 23 19:10:43 core-15-br0 systemd[1]: Started Daily Cleanup of Temporary Directories.
May 23 19:10:43 core-15-br0 systemd[1]: Listening on ACPID Listen Socket.
May 23 19:10:43 core-15-br0 systemd[1]: Reached target Sockets.
May 23 19:10:43 core-15-br0 systemd[1]: Reached target Basic System.
May 23 19:10:43 core-15-br0 systemd[1]: monit.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Starting LSB: service and resource monitoring daemon...
May 23 19:10:43 core-15-br0 systemd[1]: webmin.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Starting LSB: Webmin...
May 23 19:10:43 core-15-br0 systemd[1]: rsyslog.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Starting System Logging Service...
May 23 19:10:43 core-15-br0 systemd[1]: cron.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Started Regular background program processing daemon.
May 23 19:10:43 core-15-br0 systemd[1]: apt-daily.timer: Adding 25min 46.104280s random time.
May 23 19:10:43 core-15-br0 systemd[1]: Started Daily apt download activities.
May 23 19:10:43 core-15-br0 systemd[1]: apt-daily-upgrade.timer: Adding 43min 22.991979s random time.
May 23 19:10:43 core-15-br0 systemd[1]: Started Daily apt upgrade and clean activities.
May 23 19:10:43 core-15-br0 systemd[1]: Reached target Timers.
May 23 19:10:43 core-15-br0 systemd[1]: stunnel4.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Starting LSB: Start or stop stunnel 4.x (TLS tunnel for network daemons)...
May 23 19:10:43 core-15-br0 liblogging-stdlog:  [origin software="rsyslogd" swVersion="8.24.0" x-pid="69" x-info="http://www.rsyslog.com"] start
May 23 19:10:43 core-15-br0 systemd[1]: systemd-update-utmp.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:43 core-15-br0 systemd[1]: Started System Logging Service.
May 23 19:10:43 core-15-br0 cron[74]: (CRON) INFO (pidfile fd = 3)
May 23 19:10:43 core-15-br0 cron[74]: (CRON) INFO (Running @reboot jobs)
May 23 19:10:43 core-15-br0 monit[63]: Starting daemon monitor: monit.
May 23 19:10:43 core-15-br0 systemd[1]: Started LSB: service and resource monitoring daemon.
May 23 19:10:43 core-15-br0 ifup[41]: udhcpc (v1.22.1) started
May 23 19:10:43 core-15-br0 stunnel: LOG5[ui]: stunnel 5.39 on x86_64-pc-linux-gnu platform
May 23 19:10:43 core-15-br0 stunnel: LOG5[ui]: Compiled with OpenSSL 1.1.0c  10 Nov 2016
May 23 19:10:43 core-15-br0 stunnel: LOG5[ui]: Running  with OpenSSL 1.1.0f  25 May 2017
May 23 19:10:43 core-15-br0 stunnel: LOG5[ui]: Update OpenSSL shared libraries or rebuild stunnel
May 23 19:10:43 core-15-br0 stunnel: LOG5[ui]: Threading:PTHREAD Sockets:POLL,IPv6,SYSTEMD TLS:ENGINE,FIPS,OCSP,PSK,SNI Auth:LIBWRAP
May 23 19:10:43 core-15-br0 stunnel: LOG5[ui]: Reading configuration from file /etc/stunnel/stunnel.conf
May 23 19:10:43 core-15-br0 stunnel: LOG5[ui]: UTF-8 byte order mark not detected
May 23 19:10:43 core-15-br0 stunnel: LOG5[ui]: FIPS mode disabled
May 23 19:10:43 core-15-br0 ifup[41]: Sending discover...
May 23 19:10:43 core-15-br0 stunnel4[79]: Starting TLS tunnels: /etc/stunnel/stunnel.conf: started
May 23 19:10:43 core-15-br0 stunnel: LOG5[ui]: Configuration successful
May 23 19:10:43 core-15-br0 systemd[1]: Started LSB: Start or stop stunnel 4.x (TLS tunnel for network daemons).
May 23 19:10:45 core-15-br0 webmin[67]: Starting webmindone.
May 23 19:10:45 core-15-br0 systemd[1]: Started LSB: Webmin.
May 23 19:10:46 core-15-br0 ifup[41]: Sending select for 192.168.1.133...
May 23 19:10:46 core-15-br0 ifup[41]: Lease of 192.168.1.133 obtained, lease time 86400
May 23 19:10:46 core-15-br0 ifup[41]: /etc/udhcpc/default.script: Resetting default routes
May 23 19:10:46 core-15-br0 ifup[41]: SIOCDELRT: No such process
May 23 19:10:46 core-15-br0 ifup[41]: /etc/udhcpc/default.script: Adding DNS 192.168.1.1
May 23 19:10:47 core-15-br0 ntpdate[197]: no servers can be used, exiting
May 23 19:10:47 core-15-br0 systemd[1]: Started Raise network interfaces.
May 23 19:10:47 core-15-br0 systemd[1]: Reached target Network.
May 23 19:10:47 core-15-br0 systemd[1]: ssh.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:47 core-15-br0 systemd[1]: Starting OpenBSD Secure Shell server...
May 23 19:10:47 core-15-br0 systemd[1]: fail2ban.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:47 core-15-br0 systemd[1]: Starting Fail2Ban Service...
May 23 19:10:47 core-15-br0 systemd[1]: systemd-user-sessions.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:47 core-15-br0 systemd[1]: Starting Permit User Sessions...
May 23 19:10:47 core-15-br0 systemd[1]: Reached target Network is Online.
May 23 19:10:47 core-15-br0 systemd[1]: rc-local.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:47 core-15-br0 systemd[1]: Starting /etc/rc.local Compatibility...
May 23 19:10:47 core-15-br0 systemd[1]: hubdns.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:47 core-15-br0 systemd[1]: Starting LSB: HubDNS startup and shutdown init script...
May 23 19:10:47 core-15-br0 systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
May 23 19:10:47 core-15-br0 systemd[1]: shellinabox.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:47 core-15-br0 systemd[1]: Starting LSB: Shell In A Box Daemon...
May 23 19:10:47 core-15-br0 systemd[1]: resolvconf.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:47 core-15-br0 systemd[1]: Started Permit User Sessions.
May 23 19:10:47 core-15-br0 systemd[1]: Started /etc/rc.local Compatibility.
May 23 19:10:47 core-15-br0 systemd[1]: Started Getty on tty4.
May 23 19:10:47 core-15-br0 systemd[1]: Started Container Getty on /dev/pts/0.
May 23 19:10:47 core-15-br0 systemd[1]: Started Container Getty on /dev/pts/1.
May 23 19:10:47 core-15-br0 systemd[1]: console-getty.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:47 core-15-br0 systemd[1]: Started Console Getty.
May 23 19:10:47 core-15-br0 systemd[1]: inithooks-lxc.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:47 core-15-br0 systemd[1]: Starting inithooks-lxc: firstboot and everyboot initialization scripts (lxc)...
May 23 19:10:47 core-15-br0 systemd[1]: Started Container Getty on /dev/pts/2.
May 23 19:10:47 core-15-br0 systemd[1]: Started Getty on tty2.
May 23 19:10:47 core-15-br0 systemd[1]: Started Container Getty on /dev/pts/3.
May 23 19:10:47 core-15-br0 systemd[1]: Started Getty on tty1.
May 23 19:10:47 core-15-br0 systemd[1]: Started Getty on tty3.
May 23 19:10:47 core-15-br0 systemd[1]: Reached target Login Prompts.
May 23 19:10:47 core-15-br0 systemd[1]: rc-local.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:47 core-15-br0 systemd[1]: systemd-user-sessions.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:47 core-15-br0 systemd[1]: Started OpenBSD Secure Shell server.
May 23 19:10:47 core-15-br0 systemd[1]: Started LSB: HubDNS startup and shutdown init script.
May 23 19:10:47 core-15-br0 sh[257]: Redirecting output to /var/log/inithooks.log
May 23 19:10:47 core-15-br0 systemd[1]: Started LSB: Shell In A Box Daemon.
May 23 19:10:47 core-15-br0 inithooks: * Regenerating SSH cryptographic keys
May 23 19:10:48 core-15-br0 systemd[1]: hubdns.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:48 core-15-br0 fail2ban-client[238]: 2018-05-23 19:10:48,419 fail2ban.server         [333]: INFO    Starting Fail2ban v0.9.6
May 23 19:10:48 core-15-br0 fail2ban-client[238]: 2018-05-23 19:10:48,420 fail2ban.server         [333]: INFO    Starting in daemon mode
May 23 19:10:49 core-15-br0 systemd[1]: Started Fail2Ban Service.
May 23 19:10:50 core-15-br0 postfix/postfix-script[518]: starting the Postfix mail system
May 23 19:10:50 core-15-br0 postfix/master[520]: daemon started -- version 3.1.8, configuration /etc/postfix
May 23 19:10:50 core-15-br0 systemd[1]: Started Postfix Mail Transport Agent (instance -).
May 23 19:10:50 core-15-br0 systemd[1]: postfix.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:50 core-15-br0 systemd[1]: Starting Postfix Mail Transport Agent...
May 23 19:10:50 core-15-br0 systemd[1]: Started Postfix Mail Transport Agent.
May 23 19:10:50 core-15-br0 systemd[1]: Reached target Multi-User System.
May 23 19:10:50 core-15-br0 systemd[1]: systemd-update-utmp-runlevel.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:50 core-15-br0 systemd[1]: Starting Update UTMP about System Runlevel Changes...
May 23 19:10:50 core-15-br0 systemd[1]: postfix.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:50 core-15-br0 systemd[1]: Started Update UTMP about System Runlevel Changes.
May 23 19:10:50 core-15-br0 inithooks: Creating SSH2 RSA key; this may take some time ...
May 23 19:10:50 core-15-br0 inithooks: 2048 SHA256:bfopE009zeEubQojIU3N0jsswDI6L7Z/fJAA/Mx+PBg root@core-15-br0 (RSA)
May 23 19:10:50 core-15-br0 inithooks: Creating SSH2 ECDSA key; this may take some time ...
May 23 19:10:50 core-15-br0 inithooks: 256 SHA256:6EVRXa+6W9VLPqMN82eDwb4Ylwbr08EiwTgaXtvvGRU root@core-15-br0 (ECDSA)
May 23 19:10:50 core-15-br0 inithooks: Creating SSH2 ED25519 key; this may take some time ...
May 23 19:10:50 core-15-br0 inithooks: 256 SHA256:MFSSzLCtEIhTaAAG/hGHbAtgeZKAqRZe9FX534NCkIY root@core-15-br0 (ED25519)
May 23 19:10:52 core-15-br0 systemd[1]: Reloading.
May 23 19:10:52 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:10:52 core-15-br0 systemd[1]: apt-daily-upgrade.timer: Adding 57min 50.587747s random time.
May 23 19:10:52 core-15-br0 systemd[1]: Reloading.
May 23 19:10:52 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:10:52 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:10:52 core-15-br0 systemd[1]: apt-daily-upgrade.timer: Adding 10min 15.361192s random time.
May 23 19:10:52 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:10:52 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:10:52 core-15-br0 systemd[1]: Stopping OpenBSD Secure Shell server...
May 23 19:10:52 core-15-br0 systemd[1]: Stopped OpenBSD Secure Shell server.
May 23 19:10:52 core-15-br0 systemd[1]: ssh.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: Starting OpenBSD Secure Shell server...
May 23 19:10:52 core-15-br0 systemd[1]: init.scope: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: rsyslog.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: systemd-journald.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: inithooks-lxc.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: system-postfix.slice: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: lvm2-lvmetad.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: cron.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: monit.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: sys-kernel-debug.mount: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: console-getty.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: postfix.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: systemd-update-utmp.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: system-getty.slice: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: system-container\x2dgetty.slice: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: dev-hugepages.mount: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: rc-local.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: networking.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: systemd-sysctl.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: dev-mqueue.mount: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: dev-tty4.mount: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: shellinabox.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: webmin.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: -.mount: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: dev-tty3.mount: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: systemd-user-sessions.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: dev-tty1.mount: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: lvm2-monitor.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: resolvconf.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: dev-tty2.mount: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: stunnel4.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: systemd-tmpfiles-setup.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: systemd-journal-flush.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: hubdns.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: fail2ban.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: systemd-remount-fs.service: Failed to reset devices.list: Operation not permitted
May 23 19:10:52 core-15-br0 systemd[1]: Started OpenBSD Secure Shell server.
May 23 19:10:53 core-15-br0 inithooks: Regenerating SSL keys and certificates...
May 23 19:10:54 core-15-br0 inithooks: Updating certificates in /etc/ssl/certs...
May 23 19:11:01 core-15-br0 cron[74]: (*system*cron-apt) RELOAD (/etc/cron.d/cron-apt)
May 23 19:11:05 core-15-br0 inithooks: 1 added, 0 removed; done.
May 23 19:11:05 core-15-br0 inithooks: Running hooks in /etc/ca-certificates/update.d...
May 23 19:11:05 core-15-br0 inithooks: done.
May 23 19:11:05 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:11:05 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:11:05 core-15-br0 systemd[1]: Stopping LSB: Start or stop stunnel 4.x (TLS tunnel for network daemons)...
May 23 19:11:05 core-15-br0 stunnel: LOG5[main]: Terminated
May 23 19:11:05 core-15-br0 stunnel4[2194]: Stopping TLS tunnels: /etc/stunnel/stunnel.conf: stopped
May 23 19:11:05 core-15-br0 systemd[1]: Stopped LSB: Start or stop stunnel 4.x (TLS tunnel for network daemons).
May 23 19:11:05 core-15-br0 systemd[1]: Starting LSB: Start or stop stunnel 4.x (TLS tunnel for network daemons)...
May 23 19:11:05 core-15-br0 stunnel: LOG5[ui]: stunnel 5.39 on x86_64-pc-linux-gnu platform
May 23 19:11:05 core-15-br0 stunnel: LOG5[ui]: Compiled with OpenSSL 1.1.0c  10 Nov 2016
May 23 19:11:05 core-15-br0 stunnel: LOG5[ui]: Running  with OpenSSL 1.1.0f  25 May 2017
May 23 19:11:05 core-15-br0 stunnel: LOG5[ui]: Update OpenSSL shared libraries or rebuild stunnel
May 23 19:11:05 core-15-br0 stunnel4[2214]: Starting TLS tunnels: /etc/stunnel/stunnel.conf: started
May 23 19:11:05 core-15-br0 stunnel: LOG5[ui]: Threading:PTHREAD Sockets:POLL,IPv6,SYSTEMD TLS:ENGINE,FIPS,OCSP,PSK,SNI Auth:LIBWRAP
May 23 19:11:05 core-15-br0 systemd[1]: Started LSB: Start or stop stunnel 4.x (TLS tunnel for network daemons).
May 23 19:11:05 core-15-br0 stunnel: LOG5[ui]: Reading configuration from file /etc/stunnel/stunnel.conf
May 23 19:11:05 core-15-br0 stunnel: LOG5[ui]: UTF-8 byte order mark not detected
May 23 19:11:05 core-15-br0 stunnel: LOG5[ui]: FIPS mode disabled
May 23 19:11:05 core-15-br0 stunnel: LOG5[ui]: Configuration successful
May 23 19:11:05 core-15-br0 inithooks: Updating certificates in /etc/ssl/certs...
May 23 19:11:09 core-15-br0 inithooks: 0 added, 0 removed; done.
May 23 19:11:09 core-15-br0 inithooks: Running hooks in /etc/ca-certificates/update.d...
May 23 19:11:09 core-15-br0 inithooks: done.
May 23 19:11:09 core-15-br0 inithooks: INFO [turnkey-sudoadmin]: inithooks_sudoadmin false
May 23 19:11:09 core-15-br0 inithooks: INFO [turnkey-sudoadmin]: setup_initfence root
May 23 19:11:09 core-15-br0 inithooks: INFO [turnkey-sudoadmin]: update_confconsole_services root
May 23 19:11:09 core-15-br0 inithooks: INFO [turnkey-sudoadmin]: ssh_authorizedkeys_inithook root
May 23 19:11:09 core-15-br0 inithooks: INFO [turnkey-sudoadmin]: ssh_authorizedkeys_merge root admin
May 23 19:11:09 core-15-br0 inithooks: INFO [turnkey-sudoadmin]: permitrootlogin_ssh yes
May 23 19:11:09 core-15-br0 inithooks: INFO [turnkey-sudoadmin]: user_state root unlock
May 23 19:11:09 core-15-br0 inithooks: passwd: password expiry information changed.
May 23 19:11:09 core-15-br0 inithooks: INFO [turnkey-sudoadmin]: user_state admin lock
May 23 19:11:09 core-15-br0 inithooks: INFO [turnkey-sudoadmin]: set_sshrootbanner
May 23 19:11:09 core-15-br0 inithooks: INFO [turnkey-sudoadmin]: restart_sshd
May 23 19:11:10 core-15-br0 systemd[1]: Stopping OpenBSD Secure Shell server...
May 23 19:11:10 core-15-br0 systemd[1]: Stopped OpenBSD Secure Shell server.
May 23 19:11:10 core-15-br0 systemd[1]: ssh.service: Failed to reset devices.list: Operation not permitted
May 23 19:11:10 core-15-br0 systemd[1]: Starting OpenBSD Secure Shell server...
May 23 19:11:10 core-15-br0 systemd[1]: Started OpenBSD Secure Shell server.
May 23 19:11:10 core-15-br0 inithooks: Restarting ssh (via systemctl): ssh.service.
May 23 19:11:14 core-15-br0 inithooks: Ign:1 http://archive.turnkeylinux.org/debian stretch-security InRelease
May 23 19:11:14 core-15-br0 inithooks: Hit:2 http://security.debian.org stretch/updates InRelease
May 23 19:11:14 core-15-br0 inithooks: Ign:3 http://httpredir.debian.org/debian stretch InRelease
May 23 19:11:15 core-15-br0 inithooks: Ign:4 http://archive.turnkeylinux.org/debian stretch InRelease
May 23 19:11:15 core-15-br0 inithooks: Get:5 http://httpredir.debian.org/debian stretch Release [118 kB]
May 23 19:11:15 core-15-br0 inithooks: Get:6 http://httpredir.debian.org/debian stretch Release.gpg [2,434 B]
May 23 19:11:15 core-15-br0 inithooks: Hit:7 http://archive.turnkeylinux.org/debian stretch-security Release
May 23 19:11:15 core-15-br0 inithooks: Hit:8 http://archive.turnkeylinux.org/debian stretch Release
May 23 19:11:15 core-15-br0 inithooks: Get:9 http://httpredir.debian.org/debian stretch/main amd64 Packages [7,122 kB]
May 23 19:11:16 core-15-br0 inithooks: Get:10 http://httpredir.debian.org/debian stretch/main Translation-en [5,394 kB]
May 23 19:11:16 core-15-br0 inithooks: Get:11 http://httpredir.debian.org/debian stretch/contrib amd64 Packages [50.9 kB]
May 23 19:11:16 core-15-br0 inithooks: Get:12 http://httpredir.debian.org/debian stretch/contrib Translation-en [45.9 kB]
May 23 19:11:19 core-15-br0 inithooks: Fetched 12.7 MB in 5s (2,341 kB/s)
May 23 19:11:21 core-15-br0 inithooks: Reading package lists...
May 23 19:11:21 core-15-br0 inithooks: Reading package lists...
May 23 19:11:21 core-15-br0 inithooks: Building dependency tree...
May 23 19:11:21 core-15-br0 inithooks: Reading state information...
May 23 19:11:22 core-15-br0 inithooks: Reading package lists...
May 23 19:11:22 core-15-br0 inithooks: Building dependency tree...
May 23 19:11:22 core-15-br0 inithooks: Reading state information...
May 23 19:11:22 core-15-br0 inithooks: Calculating upgrade...
May 23 19:11:22 core-15-br0 inithooks: The following packages will be upgraded:
May 23 19:11:22 core-15-br0 inithooks:   libprocps6 procps
May 23 19:11:22 core-15-br0 inithooks: 2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
May 23 19:11:22 core-15-br0 inithooks: Need to get 309 kB of archives.
May 23 19:11:22 core-15-br0 inithooks: After this operation, 4,096 B of additional disk space will be used.
May 23 19:11:22 core-15-br0 inithooks: Get:1 http://security.debian.org stretch/updates/main amd64 libprocps6 amd64 2:3.3.12-3+deb9u1 [58.5 kB]
May 23 19:11:22 core-15-br0 inithooks: Get:2 http://security.debian.org stretch/updates/main amd64 procps amd64 2:3.3.12-3+deb9u1 [250 kB]
May 23 19:11:23 core-15-br0 inithooks: debconf: delaying package configuration, since apt-utils is not installed
May 23 19:11:23 core-15-br0 inithooks: Fetched 309 kB in 0s (12.0 MB/s)
May 23 19:11:23 core-15-br0 inithooks: (Reading database ... #015(Reading database ... 5%#015(Reading database ... 10%#015(Reading database ... 15%#015(Reading database ... 20%#015(Reading database ... 25%#015(Reading database ... 30%#015(Reading database ... 35%#015(Reading database ... 40%#015(Reading database ... 45%#015(Reading database ... 50%#015(Reading database ... 55%#015(Reading database ... 60%#015(Reading database ... 65%#015(Reading database ... 70%#015(Reading database ... 75%#015(Reading database ... 80%#015(Reading database ... 85%#015(Reading database ... 90%#015(Reading database ... 95%#015(Reading database ... 100%#015(Reading database ... 25703 files and directories currently installed.)
May 23 19:11:23 core-15-br0 inithooks: Preparing to unpack .../libprocps6_2%3a3.3.12-3+deb9u1_amd64.deb ...
May 23 19:11:23 core-15-br0 inithooks: Unpacking libprocps6:amd64 (2:3.3.12-3+deb9u1) over (2:3.3.12-3) ...
May 23 19:11:24 core-15-br0 inithooks: Preparing to unpack .../procps_2%3a3.3.12-3+deb9u1_amd64.deb ...
May 23 19:11:24 core-15-br0 inithooks: Unpacking procps (2:3.3.12-3+deb9u1) over (2:3.3.12-3) ...
May 23 19:11:24 core-15-br0 systemd[1]: Reloading.
May 23 19:11:24 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:11:24 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:11:24 core-15-br0 systemd[1]: apt-daily-upgrade.timer: Adding 48min 18.800872s random time.
May 23 19:11:24 core-15-br0 inithooks: Setting up libprocps6:amd64 (2:3.3.12-3+deb9u1) ...
May 23 19:11:24 core-15-br0 inithooks: Setting up procps (2:3.3.12-3+deb9u1) ...
May 23 19:11:25 core-15-br0 systemd[1]: Reloading.
May 23 19:11:25 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:11:25 core-15-br0 systemd[1]: apt-daily-upgrade.timer: Adding 30min 691.211ms random time.
May 23 19:11:25 core-15-br0 inithooks: Processing triggers for libc-bin (2.24-11+deb9u3) ...
May 23 19:11:25 core-15-br0 inithooks: Processing triggers for systemd (232-25+deb9u2) ...
May 23 19:11:25 core-15-br0 systemd[1]: Reloading.
May 23 19:11:25 core-15-br0 systemd[1]: systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.
May 23 19:11:25 core-15-br0 systemd[1]: apt-daily-upgrade.timer: Adding 7min 46.457941s random time.
May 23 19:11:25 core-15-br0 inithooks: Processing triggers for man-db (2.7.6.1-2) ...
May 23 19:11:32 core-15-br0 systemd[1]: Started inithooks-lxc: firstboot and everyboot initialization scripts (lxc).
May 23 19:11:32 core-15-br0 systemd[1]: Startup finished in 49.751s.
May 23 19:17:01 core-15-br0 CRON[4769]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
May 23 19:26:18 core-15-br0 systemd[1]: systemd-tmpfiles-clean.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: Starting Cleanup of Temporary Directories...
May 23 19:26:18 core-15-br0 systemd[1]: init.scope: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: stunnel4.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: monit.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: system-container\x2dgetty.slice: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: systemd-remount-fs.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: systemd-user-sessions.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: cron.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: fail2ban.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: ssh.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: rc-local.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: systemd-journald.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: dev-mqueue.mount: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: dev-tty4.mount: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: systemd-sysctl.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: systemd-journal-flush.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: postfix.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: -.mount: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: hubdns.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: networking.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: systemd-tmpfiles-setup.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: resolvconf.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: sys-kernel-debug.mount: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: lvm2-lvmetad.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: rsyslog.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: dev-tty3.mount: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: webmin.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: dev-tty2.mount: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: shellinabox.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: system-getty.slice: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: dev-tty1.mount: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: dev-hugepages.mount: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: systemd-update-utmp.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: lvm2-monitor.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: console-getty.service: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: system-postfix.slice: Failed to reset devices.list: Operation not permitted
May 23 19:26:18 core-15-br0 systemd[1]: Started Cleanup of Temporary Directories.
Dude4Linux commented 6 years ago

syslog from v14.2 core bridged container. Clearly there is something very different about v15.0rc1.

root@core-14-br0 /var/log# cat syslog 
May 24 17:00:35 core-14-br0 rsyslogd: [origin software="rsyslogd" swVersion="8.4.2" x-pid="8623" x-info="http://www.rsyslog.com"] start
May 24 17:00:35 core-14-br0 acpid: cannot open input layer
May 24 17:00:35 core-14-br0 acpid: inotify_add_watch() failed: No such file or directory (2)
May 24 17:00:35 core-14-br0 acpid: starting up with netlink and the input layer
May 24 17:00:35 core-14-br0 acpid: 1 rule loaded
May 24 17:00:35 core-14-br0 acpid: waiting for events: event logging is off
May 24 17:00:35 core-14-br0 cron[8710]: (CRON) INFO (pidfile fd = 3)
May 24 17:00:35 core-14-br0 cron[8711]: (CRON) STARTUP (fork ok)
May 24 17:00:35 core-14-br0 cron[8711]: (CRON) INFO (Running @reboot jobs)
May 24 17:00:36 core-14-br0 postfix/master[8803]: daemon started -- version 2.11.3, configuration /etc/postfix
JedMeister commented 6 years ago

Apologies I didn't get to this yesterday. Thanks for your testing.

The section for patching the container for sysvinit needs to be made conditional on the version and only done for jessie.

Good catch!

checkroot.sh, umountfs, and hwclock.sh can't be run in containers. Alon chose to disable them in LSB defaults, but that results in these errors every time a container is created. I recommend removing them entirely from the Proxmox images.

Ok I'll have a look at that.

I've been in the habit of also removing confconsole from the LXC/LXD images since we are pre-seeding. I see that you have left it installed in the Proxmox image.

Yes it's still installed, but disabled as a service.

I have usually used it to initialize Proxmox VM's but I seldom use containers there so I'm not sure if it's used as an alternative to pre-seeding.

No it's (interactive) inithooks that is used to initialise a container in Proxmox.

FWIW, all headless builds (inc Proxmox containers) include an additional inithook; 29preseed. That first checks for /etc/inithooks.conf and, if present, preseeds as per usual. Otherwise, it creates one with a random password and brings up the init-fence. The inithooks then run non-interactively using the randomly configured /etc/inithooks.conf. When the init-fence is enabled, it also enables ~/profile.d/turnkey-init-fence, which launches turnkey-init, so on first login the user is instantly presented with the interactive inithooks. In theory that should still work in vanilla LXC containers, but I'm not 100% sure...
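In rough pseudo-shell, the 29preseed flow described above amounts to something like the following (a sketch only; the real hook's paths, helper names and conf variable names may differ):

    #!/bin/sh
    # Sketch of the 29preseed flow (illustrative only, not the shipped hook).
    CONF=/etc/inithooks.conf
    if [ ! -f "$CONF" ]; then
        # No preseed file supplied: generate a random root password so the
        # non-interactive inithooks can still run to completion.
        RAND_PASS=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)
        echo "export ROOT_PASS=$RAND_PASS" > "$CONF"   # variable name illustrative
        chmod 0600 "$CONF"
        # Bring up the init-fence so first login launches interactive turnkey-init.
        enable_init_fence    # hypothetical helper
    fi
    # Either way, the remaining hooks now run non-interactively using $CONF.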

IIRC there is now an option in confconsole to re-run turnkey-init too.

Perhaps I should be leaving it installed but disabled in LXC/LXD so it can be used for, say, installing Let's Encrypt.

Probably not a bad idea. It also includes options for configuring SMTP email relays. I have plans to add additional functionality to confconsole to do other stuff too, but it's fairly low priority at this point.

After starting the v15.0rc1 container, inithooks appears to run as desired. I see you chose to use an additional service, inithooks-lxc, for containers.

Yep, seemed like the best way to go.

However, it's worth noting that we're now having inithooks issues with the Xen build too, so it seems likely we'll also need to introduce another inithooks-xen.service file for them. Unfortunately, SystemD doesn't seem to be picking up the fact that it's running as a Xen guest, so for consistency we may change the check, although maybe not; it depends on further testing. Regardless, it shouldn't affect the inithooks-lxc.service itself, just how we check for lxc.
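For anyone curious how such a check can be expressed, SystemD has a ConditionVirtualization= directive (backed by systemd-detect-virt) that can gate a unit on running inside an LXC container. A rough sketch of an inithooks-lxc.service along those lines (not necessarily the shipped unit; the condition line is the assumption here, the Description and EnvironmentFile follow the existing inithooks unit):

[Unit]
Description=inithooks-lxc: firstboot and everyboot initialization scripts (lxc)
# Only run inside an LXC container; systemd-detect-virt backs this check.
ConditionVirtualization=lxc
ConditionKernelCommandLine=!noinithooks

[Service]
Type=oneshot
EnvironmentFile=/etc/default/inithooks
# No TTY handling needed in a container; just run the hooks.
ExecStart=/bin/sh -c '${INITHOOKS_PATH}/run'

[Install]
WantedBy=multi-user.target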

Here are the contents of syslog after the initial startup. Looks like some issues here, so will have to investigate further:

Thanks for that. Unfortunately, my research suggests that is pretty much expected behaviour when using SystemD inside an LXC container.

E.g. all the Failed to reset devices.list: Operation not permitted entries (about a third of your log; 101/318 entries) are expected within an LXC container - see https://github.com/lxc/lxd/issues/2004. Also the lines systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked. are expected (because we don't install udev).
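As an aside, those two classes of expected noise can be filtered out when reviewing the log, e.g. with something like:

grep -vE 'Failed to reset devices\.list|Cannot add dependency job' /var/log/syslog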

If those lines are excluded, as well as the info entries, I can only see 2 other errors/warnings:

May 23 19:10:43 core-15-br0 systemd-sysctl[22]: Couldn't write '0' to 'net/ipv6/conf/all/send_redirects', ignoring: No such file or directory
May 23 19:10:43 core-15-br0 systemd-sysctl[22]: Couldn't write '0' to 'net/ipv4/tcp_timestamps', ignoring: No such file or directory

Although perhaps I'm missing something?

syslog from v14.2 core bridged container. Clearly there is something very different about v15.0rc1.

Yep, SystemD! It's much more verbose than SysvInit, and it also seems that the inithooks stuff wasn't being logged (as it probably should have been?!)

Dude4Linux commented 6 years ago

checkroot.sh, umountfs, and hwclock.sh can't be run in containers. Alon chose to disable them in LSB defaults, but that results in these errors every time a container is created. I recommend removing them entirely from the Proxmox images.

Ok I'll have a look at that.

I took another look at disabling vs removing these unneeded services. It turns out that if they are removed, a dist-upgrade will reinstall them, but if they are disabled, they remain disabled after the upgrade. Since I'm tired of looking at the resulting error messages, I decided to just send them to /dev/null. Problem solved.

I've been in the habit of also removing confconsole from the LXC/LXD images since we are pre-seeding. I see that you have left it installed in the Proxmox image.

Yes it's still installed, but disabled as a service.

I'll make sure that lxd-make-image does the same.

JedMeister commented 6 years ago

checkroot.sh, umountfs, and hwclock.sh can't be run in containers. Alon chose to disable them in LSB defaults, but that results in these errors every time a container is created. I recommend removing them entirely from the Proxmox images.

Ok I'll have a look at that.

I took another look at disabling vs removing these unneeded services. It turns out that if they are removed, a dist-upgrade will reinstall them, but if they are disabled, they remain disabled after the upgrade. Since I'm tired of looking at the resulting error messages, I decided to just send them to /dev/null. Problem solved.

Awesome, thanks John, apologies that I hadn't got back regarding those. TBH, I got sidetracked and forgot all about them, so I really appreciate your nudge. However, out of interest, I just had a quick look at them in a Proxmox container and they're masked by default?!:

root@jed-150rc1 ~# service umountfs status
* umountfs.service
   Loaded: masked (/dev/null; bad)
   Active: inactive (dead)
root@jed-150rc1 ~# service checkroot.sh status
* checkroot.service
   Loaded: masked (/dev/null; bad)
   Active: inactive (dead)
root@jed-150rc1 ~# service hwclock.sh status
* hwclock.service
   Loaded: masked (/dev/null; bad)
   Active: inactive (dead)

So I'm guessing that Proxmox must mask them itself?!

FWIW, in case you aren't aware, a "masked" service is like a disabled one, but it can't even be started manually (the unit is just a symlink to /dev/null). For more info please see systemd for Administrators, Part V - The Three Levels of "Off" (note you no longer need to manually make and destroy the symlinks). You can mask a service like this:

systemctl mask umountfs.service
systemctl mask checkroot.service
systemctl mask hwclock.service
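And to double-check that they took effect, something like this should report "masked" for each unit:

systemctl is-enabled checkroot.service umountfs.service hwclock.service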

WRT sending the log entries to /dev/null, how did you configure that? Perhaps we should do that by default in the Proxmox/LXC builds? It would certainly make the logs a bit neater, although we'd need to be careful we don't accidentally redirect something that might be important! Love to hear your thoughts.

Dude4Linux commented 6 years ago

I just added a redirect to /dev/null at the end of each offending line.

    # disable pointless services in a container, only if they are installed
    [ -f ${rootfs}/etc/init.d/checkroot.sh ] && chroot ${rootfs} /usr/sbin/update-rc.d -f checkroot.sh disable > /dev/null 2>&1
    [ -f ${rootfs}/etc/init.d/umountfs ] && chroot ${rootfs} /usr/sbin/update-rc.d -f umountfs disable > /dev/null 2>&1
    [ -f ${rootfs}/etc/init.d/hwclock.sh ] && chroot ${rootfs} /usr/sbin/update-rc.d -f hwclock.sh disable > /dev/null 2>&1
    [ -f ${rootfs}/etc/init.d/hwclockfirst.sh ] && chroot ${rootfs} /usr/sbin/update-rc.d -f hwclockfirst.sh disable > /dev/null 2>&1

I suppose I should add a test and only run these commands in SysV containers and use systemctl mask in SystemD containers. One more thing for the TODO list.
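For what it's worth, a rough sketch of that conditional (assuming the same ${rootfs} chroot context as the snippet above; testing for systemctl in the container's rootfs is just one way of telling the two apart):

    # disable the units under sysvinit; mask them under systemd
    for svc in checkroot.sh umountfs hwclock.sh hwclockfirst.sh; do
        [ -f ${rootfs}/etc/init.d/${svc} ] || continue
        if [ -x ${rootfs}/bin/systemctl ]; then
            # stretch / v15.x: mask the generated unit so nothing can start it
            chroot ${rootfs} systemctl mask ${svc%.sh}.service > /dev/null 2>&1
        else
            # jessie / v14.x: fall back to the existing update-rc.d disable
            chroot ${rootfs} /usr/sbin/update-rc.d -f ${svc} disable > /dev/null 2>&1
        fi
    done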

l-arnold commented 5 years ago

I realize this is a closed issue but I expect some of these changes are related to difficulties I am having installing XEN onto Linode.com with TKL 15 builds.

Even with version 14.2 builds it is more difficult than my last experience with version 14.0 builds. A few issues with 14.2: multiple SSH shells do not seem possible; every way I have tried connecting via SSH shows the same session. In the process of successfully installing even v14.2, I have to do two things to make it work:

1. I have to reset the root password (via Linode Rescue) after any sort of reboot in the install process. (I may have done this with 14.0.)

2. I have to actively log in as root (very quickly, or it fails) in the middle of the firstboot inithooks process. If I do not, the shell session stops displaying any of the setup logins and just drifts off into la-la land. I did not have to do this with 14.0, but I may have logged in via a different shell session that acted differently.

I realize I could probably tweak the system by booting into Debian and mounting the drive, but I far prefer setting up the system the way it is supposed to be scripted.

Anyway, my issue (referenced elsewhere) will be to work on v15. Linode is quite secretive about what sorts of bundled installs can or cannot take place. Unpacked Xen has worked so far. I expect I could get an unpacked ISO to install, but it would be far better to take their networking and kernel settings and simply install the rest.

Great potential to get it working moving forward, though. I expect it will take some time to understand the TKL differences between 14.2 and 15.x; it appears just from this thread that there were very many. Still, I seem only to be hanging on the failure of the TKL install to pick up the Linode networking that is surviving the build process, as evidenced by my ongoing SSH sessions.

All for now.