Closed: matteo-gambarutti closed this issue 6 months ago
Please provide the full quadlet file and run systemctl cat unifi_network_application.service
to see the full podman command and check whether quadlet may have generated something incorrectly.
Also, if you use RHEL, please contact RHEL support, as upstream really only supports the latest version; alternatively, see if you can reproduce with podman 5.
Here is the full quadlet file:
[Unit]
Description=Unifi Network Application
After=local-fs.target
[Container]
Image=lscr.io/linuxserver/unifi-network-application:latest
ContainerName=unifi_network_application
AutoUpdate=registry
Environment=PUID=${PODMAN_PUID}
Environment=PGID=${PODMAN_PGID}
Environment=TZ=Europe/Amsterdam
Environment=MONGO_USER=unifi
Environment=MONGO_HOST=192.168.1.116
Environment=MONGO_PORT=27017
Environment=MONGO_DBNAME=unifi
Secret=unifi_mongodb_unifi_password,type=env,target=MONGO_PASS
PublishPort=3478:3478/udp
PublishPort=8080:8080
PublishPort=8443:8443
PublishPort=8843:8843
PublishPort=8880:8880
PublishPort=10001:10001/udp
Volume=${CONFIG_DIR}/unifi_network_application:/config:Z
User=$(id -u):$(id -g)
UserNS=keep-id
HealthStartPeriod=2m
HealthCmd=CMD-SHELL curl -f --insecure https://localhost:8443 || exit 1
HealthInterval=30s
HealthRetries=2
HealthOnFailure=kill
[Service]
Restart=on-failure
[Install]
WantedBy=multi-user.target default.target
Here is the full podman command from systemctl status:
ExecStart=/usr/bin/podman run --name=unifi_network_application --cidfile=/run/user/1002/unifi_network_application.cid --replace --rm --cgroups=split --sdnotify=conmon -d --user $(id -u):$(id -g) --userns keep-id -v ${CONFIG_DIR}/unifi_network_application:/config:Z --label io.containers.autoupdate=registry --publish 3478:3478/udp --publish 8080:8080 --publish 8443:8443 --publish 8843:8843 --publish 8880:8880 --publish 10001:10001/udp --env MONGO_DBNAME=unifi --env MONGO_HOST=192.168.1.116 --env MONGO_PORT=27017 --env MONGO_USER=unifi --env PGID=${PODMAN_PGID} --env PUID=${PODMAN_PUID} --env TZ=Europe/Amsterdam --secret unifi_mongodb_unifi_password,type=env,target=MONGO_PASS --health-cmd CMD-SHELL curl -f --insecure https://localhost:8443 || exit 1 --health-interval 30s --health-on-failure kill --health-retries 2 --health-start-period 2m lscr.io/linuxserver/unifi-network-application:latest (code=exited, status=125)
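(For context: podman reserves exit status 125 for failures of the podman command itself, as distinct from 126/127 for failures of the command inside the container, so the container never started here.)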
Please show the systemctl cat output; seeing the actual quoting is really important.
This is the output:
[podman@homeserver ~]$ systemctl --user cat unifi_network_application.service
# /run/user/1002/systemd/generator/unifi_network_application.service
# Automatically generated by /usr/lib/systemd/user-generators/podman-user-generator
#
[Unit]
Description=Unifi Network Application
After=local-fs.target
SourcePath=/home/podman/.config/containers/systemd/unifi_network_application.container
RequiresMountsFor=%t/containers
[X-Container]
Image=lscr.io/linuxserver/unifi-network-application:latest
ContainerName=unifi_network_application
AutoUpdate=registry
Environment=PUID=${PODMAN_PUID}
Environment=PGID=${PODMAN_PGID}
Environment=TZ=Europe/Amsterdam
Environment=MONGO_USER=unifi
Environment=MONGO_HOST=192.168.1.116
Environment=MONGO_PORT=27017
Environment=MONGO_DBNAME=unifi
Secret=unifi_mongodb_unifi_password,type=env,target=MONGO_PASS
PublishPort=3478:3478/udp
PublishPort=8080:8080
PublishPort=8443:8443
PublishPort=8843:8843
PublishPort=8880:8880
PublishPort=10001:10001/udp
Volume=${CONFIG_DIR}/unifi_network_application:/config:Z
User=$(id -u):$(id -g)
UserNS=keep-id
HealthStartPeriod=2m
HealthCmd=CMD-SHELL curl -f --insecure https://localhost:8443 || exit 1
HealthInterval=30s
HealthRetries=2
HealthOnFailure=kill
[Service]
Restart=on-failure
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStop=/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
ExecStopPost=-/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name=unifi_network_application --cidfile=%t/%N.cid --replace --rm --cgroups=split --sdnotify=conmon -d --user "$(id -u):$(id -g)" --userns keep-id -v ${CONFIG_DIR}/unifi_network_application:/config:Z --label io.containers.autoupdate=registry --publish 3478:3478/udp --publish 8080:8080 --publish 8443:8443 --publish 8843:8843 --publish 8880:8880 --publish 10001:10001/udp --env MONGO_DBNAME=unifi --env MONGO_HOST=192.168.1.116 --env MONGO_PORT=27017 --env MONGO_USER=unifi --env PGID=${PODMAN_PGID} --env PUID=${PODMAN_PUID} --env TZ=Europe/Amsterdam --secret unifi_mongodb_unifi_password,type=env,target=MONGO_PASS --health-cmd "CMD-SHELL curl -f --insecure https://localhost:8443 || exit 1" --health-interval 30s --health-on-failure kill --health-retries 2 --health-start-period 2m lscr.io/linuxserver/unifi-network-application:latest
[Install]
WantedBy=multi-user.target default.target
Setting something like "$(id -u):$(id -g)"
is not possible: systemd does not run the command through your shell, so even if the pull works this will never get expanded, and it is invalid syntax for the --user option. I don't see how this could ever have worked, even before the update.
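If the goal is to run the container with the host user's IDs, a minimal sketch of quadlet-compatible alternatives (the 1002 value is illustrative, not taken from this thread) would be:

[Container]
# hardcode the values that "id -u" / "id -g" would print; systemd never expands $(...)
User=1002
Group=1002
# keep-id already maps the host user's UID/GID to the same values inside the container
UserNS=keep-id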
To be clear, my assumption is that the CLI args are wrong somehow, because pulling lscr.io/linuxserver/unifi-network-application:latest
will not result in such an error; rather it would be something like an extra space somewhere, e.g.
podman run --rm -p 8080 8080 lscr.io/linuxserver/unifi-network-application:latest
Here podman would assume 8080 is the image name, although I don't see such a case in your command right now.
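To make that failure mode concrete (an approximation, not output captured from this system): because "8080" is not a fully qualified image reference, podman treats it as a short name, and under a non-interactive systemd unit there is no TTY to prompt for a registry:

$ podman run --rm -p 8080 8080
Error: short-name resolution enforced but cannot prompt without a TTY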
About that "$(id -u):$(id -g)": I went crazy over this for about 4 months, and that small change immediately did the trick, as my directories mounted inside the container no longer changed user/group. It did work, but don't ask me why ;)
Indeed, pulling the image works with podman pull ....
I'm pretty sure it comes from the last podman update, as the issue arose immediately after the system reboot.
I just downgraded to podman version 4.6.1 and all of my containers work as expected. I didn't have to do anything: just a reboot of the system (after the downgrade) and they came back running on their own.
Can you do another systemctl cat to see if there are any differences between the generated units?
For sure:
[podman@homeserver ~]$ systemctl --user cat unifi_network_application.service > output.txt
[podman@homeserver ~]$ cat output.txt
# /run/user/1002/systemd/generator/unifi_network_application.service
# Automatically generated by /usr/lib/systemd/user-generators/podman-user-generator
#
[Unit]
Description=Unifi Network Application
After=local-fs.target
SourcePath=/home/podman/.config/containers/systemd/unifi_network_application.container
RequiresMountsFor=%t/containers
[X-Container]
Image=lscr.io/linuxserver/unifi-network-application:latest
ContainerName=unifi_network_application
AutoUpdate=registry
Environment=PUID=${PODMAN_PUID}
Environment=PGID=${PODMAN_PGID}
Environment=TZ=Europe/Amsterdam
Environment=MONGO_USER=unifi
Environment=MONGO_HOST=192.168.1.116
Environment=MONGO_PORT=27017
Environment=MONGO_DBNAME=unifi
Secret=unifi_mongodb_unifi_password,type=env,target=MONGO_PASS
PublishPort=3478:3478/udp
PublishPort=8080:8080
PublishPort=8443:8443
PublishPort=8843:8843
PublishPort=8880:8880
PublishPort=10001:10001/udp
Volume=${CONFIG_DIR}/unifi_network_application:/config:Z
User=$(id -u):$(id -g)
UserNS=keep-id
HealthStartPeriod=2m
HealthCmd=CMD-SHELL curl -f --insecure https://localhost:8443 || exit 1
HealthInterval=30s
HealthRetries=2
HealthOnFailure=kill
[Service]
Restart=on-failure
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStop=/usr/bin/podman rm -f -i --cidfile=%t/%N.cid
ExecStopPost=-/usr/bin/podman rm -f -i --cidfile=%t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name=unifi_network_application --cidfile=%t/%N.cid --replace --rm --cgroups=split --sdnotify=conmon -d --user 0 --userns keep-id -v ${CONFIG_DIR}/unifi_network_application:/config:Z --label io.containers.autoupdate=registry --publish 3478:3478/udp --publish 8080:8080 --publish 8443:8443 --publish 8843:8843 --publish 8880:8880 --publish 10001:10001/udp --env MONGO_DBNAME=unifi --env MONGO_HOST=192.168.1.116 --env MONGO_PORT=27017 --env MONGO_USER=unifi --env PGID=${PODMAN_PGID} --env PUID=${PODMAN_PUID} --env TZ=Europe/Amsterdam --secret unifi_mongodb_unifi_password,type=env,target=MONGO_PASS --health-cmd "CMD-SHELL curl -f --insecure https://localhost:8443 || exit 1" --health-interval 30s --health-on-failure kill --health-retries 2 --health-start-period 2m lscr.io/linuxserver/unifi-network-application:latest
[Install]
WantedBy=multi-user.target default.target
As I suspected, the difference is --user "$(id -u):$(id -g)" vs. --user 0 (the working one).
Yeah indeed, I just noticed the same! What would be the correct syntax then, so that podman does not change the ownership of mounted directories inside containers? Thanks a lot for your time btw!
Well, if --user 0 worked for you, then I suggest you set User=0 in your quadlet file.
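For reference, a minimal sketch of the relevant [Container] lines after that change (mirroring the fix confirmed below):

[Container]
# replaces User=$(id -u):$(id -g), which systemd never expands
User=0
UserNS=keep-id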
Upgrading again to 4.9.1 and changing to User=0 in the quadlet file fixed the issue. Thanks a lot for your help @Luap99!
Issue Description
After the last system updates on RHEL 9, podman got updated to version 4.9.4. I have quadlet files to manage my containers, and some of them no longer start: only the ones whose image is not from the docker.io registry.
In particular, I get the following error:
Error: short-name resolution enforced but cannot prompt without a TTY
I find it strange to see this, as the image definition in the quadlet file is fully qualified:
Image=lscr.io/linuxserver/unifi-network-application:latest
How can this be fixed? As I said, there are no issues with containers whose image comes from docker.io.
I did not change anything in the quadlet file definitions. Everything was working fine before the updates.
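As a sanity check (and as confirmed in the discussion above), pulling the fully qualified image by hand succeeds, so the short-name error should only appear if the image argument reaching podman run is somehow mangled:

$ podman pull lscr.io/linuxserver/unifi-network-application:latest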
Steps to reproduce the issue
Describe the results you received
Error: short-name resolution enforced but cannot prompt without a TTY
Describe the results you expected
A running container
podman info output
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
No
Additional environment details
Additional information