Closed: asottile closed this issue 3 years ago
Thanks for reaching out!
Podman uses the systemd cgroup manager by default. You can change it to cgroupfs on the CLI via --cgroup-manager=cgroupfs, or tweak /usr/share/containers/containers.conf and set
[engine]
cgroup_manager="cgroupfs"
Please let me know if that works for you.
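A minimal sketch of that config change, written to a temp file here so nothing system-wide is touched (in practice the stanza goes in /etc/containers/containers.conf):

```shell
# Sketch only: write the [engine] override to a temp copy instead of
# /etc/containers/containers.conf, then confirm the key is set.
conf="$(mktemp -d)/containers.conf"
cat > "$conf" <<'EOF'
[engine]
cgroup_manager = "cgroupfs"
EOF
grep '^cgroup_manager' "$conf"
```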
The containers.conf change should be made in /etc/containers/containers.conf
That changes the error message but doesn't fix it:
$ sudo -u pcrunner podman run --rm -ti ubuntu:focal echo
WARN[0000] Failed to add podman to systemd sandbox cgroup: dbus: couldn't determine address of session bus
And actually, if I install dbus-x11 (which I don't want to), it still doesn't work (this is with the default container config):
$ sudo -u pcrunner podman run --rm -ti ubuntu:focal echo
WARN[0000] Failed to add podman to systemd sandbox cgroup: dbus: couldn't determine address of session bus
Can you try with sudo --login -u? It looks like the systemd session of the pcrunner user is not complete.
It's meant to be a service user only -- does podman require a full login session?
(fwiw, this prints -- as I expect for service users):
$ sudo --login -u pcrunner podman run --rm -ti ubuntu:focal echo
This account is currently not available.
I can reproduce the same behavior on Ubuntu 18.04. Starting with podman 3.0.0, one of its dependencies is dbus-user-session, which breaks our CI with the Az.Accounts powershell module. Is it possible to eliminate the installation of the dbus-user-session package?
$ pwsh -command "Import-Module Az.Accounts"
** Message: 17:43:58.624: Remote error from secret service: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.secrets was not provided by any .service files
** Message: 17:43:58.630: Remote error from secret service: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.secrets was not provided by any .service files
** Message: 17:43:58.633: Remote error from secret service: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.secrets was not provided by any .service files
journalctl -e
Feb 15 17:43:58 powershell[16735]: Microsoft.Identity.Client.Extensions.TraceSource Error: 0 :
Feb 15 17:43:58 powershell[16735]: [MSAL.Extension][2021-02-15T17:43:58.6249328Z] An error was encountered while saving secret to keyring in the MsalCacheStorage domain:'95' co
Feb 15 17:43:58 powershell[16735]: Microsoft.Identity.Client.Extensions.TraceSource Error: 0 :
Feb 15 17:43:58 powershell[16735]: [MSAL.Extension][2021-02-15T17:43:58.6308996Z] An error was encountered while reading secret from keyring in the MsalCacheStorage domain:'95'
Feb 15 17:43:58 powershell[16735]: Microsoft.Identity.Client.Extensions.TraceSource Error: 0 :
Feb 15 17:43:58 powershell[16735]: [MSAL.Extension][2021-02-15T17:43:58.6340969Z] An error was encountered while clearing secret from keyring in the MsalCacheStorage domain:'95
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.5 LTS
Release:        18.04
Codename:       bionic
The warnings don't reproduce with sudo --login:
$ sudo --login -u test pwsh -command "Import-Module Az.Accounts;Get-Module"
ModuleType Version PreRelease Name ExportedCommands
---------- ------- ---------- ---- ----------------
Script 2.2.5 Az.Accounts {Add-AzEnvironment, Clear-AzContext, Clear-AzDefault, Connect-AzAccount…}
I'm observing the same behaviour with a fresh podman 3.0.0 install on Ubuntu 20.04...
I never had this issue with podman 2.2.1 on my other Ubuntu 20.04 servers, what has been changed in the latest release?
I believe the reason it is just showing up now is the default log level changed from ERROR => WARN in 3.0.0
This really sucks because now I get false-positive alerts whenever podman is invoked as this gets printed during crontab task executions.
BTW my use case is similar to yours: service account (accessed using sudo -u xxxxx -i) running on headless servers.
I'm currently hacking around the warning with --log-level error (in a commit that I hope to revert once this issue is fixed).
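For cron-driven jobs, the workaround can be wrapped once instead of editing every crontab entry. A hypothetical wrapper (shown with echo instead of exec'ing podman so the sketch is runnable anywhere):

```shell
#!/bin/sh
# Hypothetical cron wrapper: prepend --log-level error so the dbus
# warning doesn't generate false-positive alert mail from cron.
# echo stands in for exec so this sketch runs without podman installed.
run_podman() {
  echo podman --log-level error "$@"
}
run_podman run --rm ubuntu:focal echo
```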
Are you saying that you changed the containers.conf to use cgroupfs for cgroup-manager, but you are still seeing it attempt to use dbus-launch?
yes:
$ grep cgroup_manager /etc/containers/containers.conf
cgroup_manager = "cgroupfs"
$ sudo -u pcrunner podman run ubuntu:focal echo
WARN[0000] Failed to add podman to systemd sandbox cgroup: dbus: couldn't determine address of session bus
What about the log_driver? Is this set to journald?
# events_logger = "journald"
it's set to whatever the default is:
$ grep log_d -- /etc/containers/containers.conf
# log_driver = "k8s-file"
Uncomment that line and see if the problem goes away.
no dice
$ grep -E '(cgroup_|log_d)' /etc/containers/containers.conf
log_driver = "k8s-file"
cgroup_manager = "cgroupfs"
$ sudo -u pcrunner podman run ubuntu:focal echo
WARN[0000] Failed to add podman to systemd sandbox cgroup: dbus: couldn't determine address of session bus
func (ic *ContainerEngine) SetupRootless(_ context.Context, cmd *cobra.Command) error {
	// do it only after podman has already re-execed and running with uid==0.
	if os.Geteuid() == 0 {
		ownsCgroup, err := cgroups.UserOwnsCurrentSystemdCgroup()
		if err != nil {
			logrus.Warnf("Failed to detect the owner for the current cgroup: %v", err)
		}
		if !ownsCgroup {
			conf, err := ic.Config(context.Background())
			if err != nil {
				return err
			}
			initCommand, err := ioutil.ReadFile("/proc/1/comm")
			// On errors, default to systemd
			runsUnderSystemd := err != nil || strings.TrimRight(string(initCommand), "\n") == "systemd"
			unitName := fmt.Sprintf("podman-%d.scope", os.Getpid())
			if runsUnderSystemd || conf.Engine.CgroupManager == config.SystemdCgroupsManager {
				if err := utils.RunUnderSystemdScope(os.Getpid(), "user.slice", unitName); err != nil {
					logrus.Warnf("Failed to add podman to systemd sandbox cgroup: %v", err)
				}
			}
		}
		return nil
	}
	...
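The decisive check in that code path is the name of PID 1. The same test from a shell (this is why the systemd-scope branch is taken even when cgroup_manager is set to cgroupfs, as long as the host init is systemd):

```shell
# Mirror the runsUnderSystemd check from the Go snippet above:
# read /proc/1/comm and compare against "systemd".
init="$(tr -d '\n' < /proc/1/comm)"
if [ "$init" = "systemd" ]; then
  echo "PID 1 is systemd: podman will try to create a podman-<pid>.scope"
else
  echo "PID 1 is '$init': this branch skips the systemd scope attempt"
fi
```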
@giuseppe WDYT? Should we handle the lack of dbus_launch better, IE Not Warn?
Could you please explain why "dbus_launch" is important for podman?
It's meant to be a service user only -- does podman require a full login session?
It'll work without a full login session but the storage paths may differ when logging in. Since you intend it to be service-only user, it's good.
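To illustrate the storage-path point, a sketch assuming stock containers/storage defaults (no storage.conf override): rootless podman keeps its graphroot under $XDG_DATA_HOME/containers/storage, falling back to ~/.local/share when XDG_DATA_HOME is unset, and a full login session is what normally sets the XDG_* variables:

```shell
# Default rootless graphroot resolution (assumption: no storage.conf
# override). XDG_DATA_HOME is unset here to mimic a non-login sudo -u
# environment, which is why the paths can differ between the two cases.
unset XDG_DATA_HOME
echo "${XDG_DATA_HOME:-$HOME/.local/share}/containers/storage"
```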
@giuseppe WDYT? Should we handle the lack of dbus_launch better, IE Not Warn?
I think it should be a debug log.
dbus-launch is used by github.com/godbus/dbus/v5/conn_other.go.
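Roughly, godbus resolves the session bus by consulting DBUS_SESSION_BUS_ADDRESS first and only then shelling out to dbus-launch. A shell approximation of that fallback (the real logic lives in conn_other.go; this is a simplified sketch):

```shell
# Approximation of the godbus session-bus lookup: env var first, then
# dbus-launch autostart, else the error podman surfaces as a warning.
# DBUS_SESSION_BUS_ADDRESS is unset here to mimic a service account.
unset DBUS_SESSION_BUS_ADDRESS
if [ -n "${DBUS_SESSION_BUS_ADDRESS:-}" ]; then
  echo "found: $DBUS_SESSION_BUS_ADDRESS"
elif command -v dbus-launch >/dev/null 2>&1; then
  echo "dbus-launch present: a session bus would be autostarted"
else
  echo "dbus: couldn't determine address of session bus"
fi
```

On a headless box without dbus-x11 installed, neither source is available, which is exactly the warning text seen in this issue.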
I am fine to downgrade the warning to a debug message: opened a PR: https://github.com/containers/podman/pull/9407
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
When I run any podman command as another user, I get this output:
dbus-launch on ubuntu is provided by the dbus-x11 package, which pulls in a full x11 environment which I absolutely do not need on a headless server.
Steps to reproduce the issue:
Describe the results you received:
see above
Describe the results you expected:
no extraneous output
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
Output of podman info --debug:
Package info (e.g. output of rpm -q podman or apt list podman):
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
AWS