containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

OS X, podman machine time stops sometime #11541

Closed dm3ch closed 1 year ago

dm3ch commented 3 years ago

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description: The date output is wrong both in containers and in the podman machine itself on OS X.

Steps to reproduce the issue (I'm not 100% sure about this reproduction guide):

  1. Install podman on OSX
  2. Create podman machine
  3. Wait a couple of days
  4. Get date from podman machine

Describe the results you received: OS X date:

$ date
Sun Sep 12 15:55:30 MSK 202

Podman container and podman machine ssh date:

$ date
Fri Sep 10 19:47:08 UTC 2021

I ran the same command again and the time was exactly the same. After stopping and starting the machine, the clock started ticking again.

Describe the results you expected: The date inside the podman machine should be correct.

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md) Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

Client:
Version:      3.3.1
API Version:  3.3.1
Go Version:   go1.17
Built:        Mon Aug 30 22:15:26 2021
OS/Arch:      darwin/amd64

Server:
Version:      3.3.1
API Version:  3.3.1
Go Version:   go1.16.6
Built:        Mon Aug 30 23:46:36 2021
OS/Arch:      linux/amd64

OS X info: (screenshot attached, taken 2021-09-12 at 16:00:11)

lvh commented 2 years ago

Sure, you can do that with hammerspoon, obb, talon... It's not particularly easy to do without 3rd party software (and requires IOKit to access the events; unfortunately launchd doesn't expose them directly). You could do something like:

brew install sleepwatcher

cat << EOF > ~/.wakeup
#!/usr/bin/env bash
podman machine ssh "sudo chronyc -m 'burst 4/4' makestep; date -u"
EOF

chmod +x ~/.wakeup
brew services start sleepwatcher

Via Talon (thanks to lunixbochs):

from talon import ui
def fn():
   ...
ui.register('screen_sleep', fn)
ui.register('screen_wake', fn)

konstruktoid commented 2 years ago

Yes, but as a timer or cron job etc. (see above for an example), so it can be shipped with the actual VM.

lvh commented 2 years ago

No, that approach seemed like a bad idea to me. The problem is the VM doesn't know it's being resumed; running chrony all the time on the off chance a suspend/resume happened seems way less efficient than hooking the host resume event. (I suppose an even more generic solution would be to forward sleep/resume events to VMs, but that sounds complicated.)

github-actions[bot] commented 2 years ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 2 years ago

This seems to be more of a CoreOS issue than a Podman issue.

konstruktoid commented 2 years ago

This seems to be more of a CoreOS issue than a Podman issue.

Yeah, absolutely

ssbarnea commented 2 years ago

Apparently it became impossible to use podman 4.0.2 on macOS, as the connection randomly fails with roughly a 50/50 chance.

The second run below is only about 2 seconds after the first; repeat and you might get either result:

$ podman info
Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman. failed to create sshClient: Connection to bastion host (ssh://core@localhost:56533/run/user/501/podman/podman.sock) failed.: ssh: handshake failed: ssh: disconnect, reason 2: Too many authentication failures
FAIL: 125

$ podman info
host:
  arch: amd64
  buildahVersion: 1.24.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-2.fc35.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: '
  cpus: 1
  distribution:
    distribution: fedora
    variant: coreos
    version: "35"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 501
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 5.15.18-200.fc35.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 205676544
  memTotal: 2061381632
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.4.2-1.fc35.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.4.2
      commit: f6fbc8f840df1a414f31a60953ae514fa497c748
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/501/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-2.fc35.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 36m 24.96s
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 10
  runRoot: /run/user/501/containers
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 4.0.2
  Built: 1646319416
  BuiltTime: Thu Mar  3 14:56:56 2022
  GitCommit: ""
  GoVersion: go1.16.14
  OsArch: linux/amd64
  Version: 4.0.2

I know it is the 1st of April, but I swear this is not a joke.

baude commented 2 years ago

@ssbarnea that error usually means you have a bunch of ssh keys loaded and have exceeded the server's authentication attempt limit. Is that possible?
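That hypothesis can be checked from the host. A minimal sketch (the function names are mine, the limit of six is OpenSSH's default MaxAuthTries rather than anything podman-specific, and the config stanza is only printed, not applied):

```shell
#!/usr/bin/env bash
# Sketch: diagnose "Too many authentication failures" from podman machine.
# OpenSSH servers default to MaxAuthTries 6; if ssh-agent offers more keys
# than that, ssh can exhaust the limit before the right key is tried.

count_agent_keys() {
  # ssh-add -l prints one line per loaded key; this yields 0 when no
  # agent is running (stderr is discarded, grep -c counts matches).
  ssh-add -l 2>/dev/null | grep -c 'SHA256'
}

suggest_fix() {
  # Print a candidate ~/.ssh/config stanza instead of editing the file.
  printf 'Host localhost\n    IdentitiesOnly yes\n'
}

keys=$(count_agent_keys)
echo "keys loaded in agent: ${keys}"
if [ "${keys}" -gt 6 ]; then
  echo "agent holds more than 6 keys; consider adding:"
  suggest_fix
fi
```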

konstruktoid commented 2 years ago

With both timers and systemd:

$ podman machine ssh 'uname -a ; date -u && uptime --pretty && exit' 2>/dev/null; echo ; date -u && podman --version && uname -sri
Linux localhost.localdomain 5.15.18-200.fc35.x86_64 #1 SMP Sat Jan 29 13:54:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Tue Mar 22 05:56:13 UTC 2022
up 4 days, 19 hours, 18 minutes

Sat Apr  2 14:43:43 UTC 2022
podman version 4.0.2
Darwin 21.4.0 MacBookPro13,3

konstruktoid commented 2 years ago

Can’t reproduce, @ssbarnea. No errors in 50 tries. Two keys in ssh-agent.

$ for i in {1..50}; do echo "${i}:" ; podman info -f json | jq -r '.version.Version' || echo "fail"; done 
1:
4.0.2
2:
4.0.2
[...]
49:
4.0.2
50:
4.0.2

github-actions[bot] commented 2 years ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 2 years ago

Is this problem still happening?

ssbarnea commented 2 years ago

Last time I checked it did, looking again.

ssbarnea commented 2 years ago
podman machine ssh 'uname -a ; date -u && uptime --pretty && exit' 2>/dev/null; echo ; date -u && podman --version && uname -sri

That line is invalid here, uname: illegal option -- i (the macOS uname has no -i flag).

ssbarnea commented 2 years ago

Closing as I am unable to reproduce with current versions of podman and macos. If someone can do it please comment and I can test again.

konstruktoid commented 2 years ago
$ podman machine ssh 'uname -a ; date -u && uptime --pretty && exit' 2>/dev/null; echo ; date -u && podman --version && uname -sri
$ podman machine ssh 'uname -a && date -u && uptime --pretty' 2>/dev/null; echo ; date -u && podman --version && uname -sri
Linux localhost.localdomain 5.17.3-300.fc36.x86_64 #1 SMP PREEMPT Wed Apr 13 23:08:09 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Tue May  3 09:27:20 UTC 2022
up 2 minutes

Tue May  3 09:27:20 UTC 2022
podman version 4.0.3
Darwin 21.4.0 unknown
$ uname -sri
Darwin 21.4.0 unknown

That’s a newly initiated podman machine, so it’s pristine. Will check if it starts to drift.

ssbarnea commented 2 years ago
$ podman machine ssh 'uname -a && date -u && uptime --pretty' 2>/dev/null; echo ; date -u && podman --version && uname -sri
Linux localhost.localdomain 5.15.18-200.fc35.aarch64 #1 SMP Sat Jan 29 12:44:33 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
Tue May  3 09:31:22 UTC 2022
up 1 week, 7 hours, 3 minutes

Tue May  3 09:31:22 UTC 2022
podman version 4.0.3
uname: illegal option -- i
usage: uname [-amnprsv]
FAIL: 1

Looking at the command, I am not really surprised it fails, because I assume the remote command is not run through a shell, so you would have to wrap it with bash -c '...'. Still, interestingly, it seems to work for you, so at least sometimes it might work.

$ podman machine ssh 'cat /etc/os-release'
Warning: Permanently added '[localhost]:50691' (ED25519) to the list of known hosts.
bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/usr/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/usr/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/usr/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
NAME="Fedora Linux"
VERSION="35.20220305.dev.0 (CoreOS)"
...
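A sketch of that wrapping, under the assumption that the remote side does not interpret the command line as a shell; build_remote_cmd is a hypothetical helper (not part of podman), and the single quotes keep the host shell from expanding anything before it reaches the VM:

```shell
#!/usr/bin/env bash
# Sketch: wrap an arbitrary command line so that &&, pipes, and flags are
# interpreted by bash *inside* the machine rather than by the host shell
# or a non-shell remote handler. build_remote_cmd is a made-up name.

build_remote_cmd() {
  printf "bash -c '%s'\n" "$1"
}

# On a real host with a running machine you would then do (not run here):
#   podman machine ssh "$(build_remote_cmd 'uname -a && date -u && uptime --pretty')"
build_remote_cmd 'date -u'
```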

konstruktoid commented 2 years ago

What does your machine return?

$ podman machine ssh 'echo $SHELL' 2>/dev/null
/bin/bash

ssbarnea commented 2 years ago

Same as for you.

konstruktoid commented 2 years ago

Drifting started @rhatdan

$ podman machine ssh 'uname -a && date -u && uptime --pretty' 2>/dev/null; echo ; date -u && podman --version && uname -sri
Linux localhost.localdomain 5.17.3-300.fc36.x86_64 #1 SMP PREEMPT Wed Apr 13 23:08:09 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Tue May  3 10:07:44 UTC 2022
up 43 minutes

Tue May  3 10:18:28 UTC 2022
podman version 4.0.3
Darwin 21.4.0 unknown
$ podman machine ssh 'cat /etc/os-release' 2>/dev/null
NAME="Fedora Linux"
VERSION="36.20220421.dev.0 (CoreOS Prerelease)"
ID=fedora
[...]
mrstanwell commented 2 years ago

I can reproduce this issue with current podman, although I'm still running macOS 10.15.7:

$ podman machine ssh 'uname -a && date && uptime --pretty' 2>/dev/null; echo; date && podman --version && uname -sr
Linux localhost.localdomain 5.15.18-200.fc35.x86_64 #1 SMP Sat Jan 29 13:54:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Mon May  2 17:00:40 CDT 2022
up 2 weeks, 4 days, 5 hours, 6 minutes

Tue May  3 07:58:04 CDT 2022
podman version 4.0.3
Darwin 19.6.0
ssbarnea commented 2 years ago

Locally my drift is only 2s, and I suspend overnight. Still, it might have something to do with the version of the server/VM:

$ podman version
Client:       Podman Engine
Version:      4.0.3
API Version:  4.0.3
OS/Arch:      darwin/arm64

Server:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
OS/Arch:      linux/arm64

I suspect this might be specific to the newer fedora-coreos-36 image, which I do not have. I tried installing it with podman machine init new but ran into a weird issue stopping the old one: podman reported it as stopped, but podman machine ls showed it still running for more than a minute after stop reported success. TBH, stop should never report success if the machine's status is not stopped at the end.

After a suspend session, here is the surprise:

$ date && podman machine ssh new date
Wed May  4 11:15:38 BST 2022
Warning: Permanently added '[localhost]:60794' (ED25519) to the list of known hosts.
bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/usr/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/usr/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/usr/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
Wed May  4 10:21:22 BST 2022

I also wonder where these locale warnings come from, they are quite annoying.

konstruktoid commented 2 years ago

We might as well reopen this.

$ podman machine ssh 'uname -a && date -u && uptime --pretty' 2>/dev/null; echo ; date -u && podman --version && uname -sri
Linux localhost.localdomain 5.17.3-300.fc36.x86_64 #1 SMP PREEMPT Wed Apr 13 23:08:09 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Tue May  3 15:50:16 UTC 2022
up 6 hours, 25 minutes

Wed May  4 08:04:46 UTC 2022
podman version 4.0.3
Darwin 21.4.0 unknown

The "up 6 hours, 25 minutes" reading falls within the ~23h period between this comment and https://github.com/containers/podman/issues/11541#issuecomment-1115944086

kennywest commented 2 years ago

Same here:

podman machine ssh 'uname -a && date && uptime --pretty' 2>/dev/null; echo; date && podman --version && uname -sr                                                              
Linux localhost.localdomain 5.15.18-200.fc35.x86_64 #1 SMP Sat Jan 29 13:54:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Wed May  4 13:35:31 CEST 2022
up 4 hours, 46 minutes

Wed May  4 13:48:53 CEST 2022
podman version 4.0.0-dev
Darwin 20.6.0
zeekoe commented 2 years ago

Same here. Glad there are others, that's a start. :) My Mac has slept for a few days since starting podman:

Linux localhost.localdomain 5.15.18-200.fc35.x86_64 #1 SMP Sat Jan 29 13:54:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Thu Apr 28 02:29:17 CEST 2022
up 1 day, 17 hours, 28 minutes

Mon May  9 09:46:14 CEST 2022
podman version 4.0.3
Darwin 21.4.0

If I understand correctly, there's no real workaround yet?

konstruktoid commented 2 years ago

No, no real workaround yet @zeekoe

EdouardMYOB commented 2 years ago

Just spent 2 hours trying to work out why my app using the AWS API was failing when run in podman but worked fine running locally. The culprit was the date shift/freeze: AWS includes the date in its API auth/signing scheme (presumably to inhibit some kinds of replay attacks), and the AWS APIs were responding with "that X-Amz-Date isn't in the valid window".

So just a "me too" on the issue is still happening.

podman machine stop; podman machine start fixed it, albeit only temporarily based on the rest of this thread.
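The stop/start workaround can at least be gated on actual drift so it only runs when needed. A sketch: clock_drift is a made-up helper, the 10-second threshold is an arbitrary choice, and the podman calls appear only in comments since they need a real machine:

```shell
#!/usr/bin/env bash
# Sketch: restart the machine only when its clock has really drifted.
# clock_drift takes two epoch timestamps and prints the absolute delta
# in seconds; it is pure shell arithmetic so it can be tested anywhere.

clock_drift() {
  local host_epoch=$1 vm_epoch=$2
  local delta=$(( host_epoch - vm_epoch ))
  [ "${delta}" -lt 0 ] && delta=$(( -delta ))
  echo "${delta}"
}

# On a real host (not run here):
#   vm_now=$(podman machine ssh 'date +%s' 2>/dev/null)
#   drift=$(clock_drift "$(date +%s)" "${vm_now}")
#   if [ "${drift}" -gt 10 ]; then
#     podman machine stop && podman machine start
#   fi
echo "example drift: $(clock_drift 1000 940)s"   # prints "example drift: 60s"
```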

lvh commented 2 years ago

@EdouardMYOB @konstruktoid Did the workarounds I provided upthread not work for you? https://github.com/containers/podman/issues/11541#issuecomment-1038436498

konstruktoid commented 2 years ago

@lvh, I haven’t tried anything that requires additional tools

shoeffner commented 2 years ago

It's happening for me especially after resuming work after weekends. (The symptom I always see is that package repositories are "not valid yet", e.g., E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/focal-updates/InRelease is not valid yet (invalid for another 3d 6h 54min 17s). Updates for this repository will not be applied.)

sudo systemctl restart systemd-timesyncd.service (mentioned in https://github.com/containers/podman/issues/11541#issuecomment-990998454) worked to fix it for me, @zeekoe, maybe you can try it:

[core@localhost ~]$ timedatectl
               Local time: Fri 2022-05-06 05:27:47 CEST
           Universal time: Fri 2022-05-06 03:27:47 UTC
                 RTC time: Mon 2022-05-09 10:28:46
                Time zone: Europe/Berlin (CEST, +0200)
System clock synchronized: no
              NTP service: active
          RTC in local TZ: no
[core@localhost ~]$ sudo systemctl restart systemd-timesyncd.service 
[core@localhost ~]$ timedatectl
               Local time: Mon 2022-05-09 12:46:51 CEST
           Universal time: Mon 2022-05-09 10:46:51 UTC
                 RTC time: Mon 2022-05-09 10:46:52
                Time zone: Europe/Berlin (CEST, +0200)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
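That check can be scripted: a sketch that scrapes timedatectl's usual output format to decide whether a resync is worth triggering (needs_resync is my name for the helper; nothing here is a podman or systemd API):

```shell
#!/usr/bin/env bash
# Sketch: read timedatectl-style output on stdin and succeed when the
# clock reports itself unsynchronized, so the restart can be conditional.

needs_resync() {
  grep -q 'System clock synchronized: no'
}

# Inside the machine you would run (not executed here):
#   if timedatectl | needs_resync; then
#     sudo systemctl restart systemd-timesyncd.service
#   fi
if printf 'System clock synchronized: no\n' | needs_resync; then
  echo "resync needed"   # prints "resync needed" for this sample input
fi
```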

kennywest commented 2 years ago

Upgraded to a more recent version and in 4.1.0 the same problem exists:

Linux localhost.localdomain 5.17.3-300.fc36.x86_64 #1 SMP PREEMPT Wed Apr 13 23:08:09 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Mon May  9 12:30:25 CEST 2022
up 5 hours, 32 minutes

Mon May  9 13:00:10 CEST 2022
podman version 4.1.0
Darwin 20.6.0
cgwalters commented 2 years ago

We actually use chrony, not systemd-timesyncd. I think there are two choices/paths here:

1. Ensure chrony resyncs immediately after suspend; I don't think it does that today.
   1a. Block ssh access until time sync is complete.

2. Query the hypervisor (host system) for the time. I think there's some qemu support for this (xref this article), but we don't currently install qemu-guest-agent. We may need to revisit either doing that or shipping whatever bits it does as part of afterburn or chrony.
   2a. Block ssh access until time sync is complete.

dustymabe commented 2 years ago

AIUI FCOS isn't actually getting suspended. It's the host (laptop). I think this means FCOS doesn't have any way to know when it should try to trigger a sync. Ultimately we can't rely on chrony either, since people may be using their systems offline.

I think we need to figure out a way to get this information from the host and then have a service (specific to podman-machine+FCOS) poll and update.

EdouardMYOB commented 2 years ago

@lvh Tried the chronyc command in https://github.com/containers/podman/issues/11541#issuecomment-1038258396 and that puts the time back in sync.

rhatdan commented 2 years ago

Does the Mac have a way to launch a service when it wakes up? If so, we could add a script that runs podman machine ssh chronyc or something like that.

dustymabe commented 2 years ago

Does the Mac have a way to launch a service when it wakes up? If so, we could add a script that runs podman machine ssh chronyc or something like that.

chronyc still relies on the net. It would be best if we could figure out a way to just sync up with the host OS time (which hopefully is correct).
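One way to realize that host-to-VM sync without any network: render the host clock as UTC and push it in with date --set. A sketch only; host_epoch_to_utc is a made-up helper that papers over the BSD/GNU date flag difference, and the podman invocation is shown in a comment:

```shell
#!/usr/bin/env bash
# Sketch: copy the *host* clock into the VM, which works offline, instead
# of asking the VM to reach an NTP server. Going through epoch seconds and
# UTC avoids host/VM timezone mismatches.

host_epoch_to_utc() {
  # Render an epoch as an unambiguous UTC timestamp that GNU `date --set`
  # accepts. BSD/macOS date takes -r EPOCH; GNU date takes -d @EPOCH, so
  # try the BSD form first and fall back.
  local epoch=$1
  date -u -r "${epoch}" '+%Y-%m-%d %H:%M:%S' 2>/dev/null \
    || date -u -d "@${epoch}" '+%Y-%m-%d %H:%M:%S'
}

# On a real host (not run here):
#   podman machine ssh "sudo date -u --set '$(host_epoch_to_utc "$(date +%s)")'"
host_epoch_to_utc 0   # prints "1970-01-01 00:00:00"
```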

lvh commented 2 years ago

@rhatdan I provided exactly that upthread.

rhatdan commented 2 years ago

Do you know of a way to do that automatically?

lvh commented 2 years ago

Yes, I provided several mechanisms for doing it automatically on machine wake upthread.

rhatdan commented 2 years ago

Ok, I see them; sadly they are not native. I am not thrilled with making podman require this package to be installed. Is there nothing qemu does to discover it has been restarted?

PiotrKlimczak commented 2 years ago

Devs are using Podman in our organisation. We all have MacBook Pros and we all see frequent time drifts on all recent versions (in fact, all versions we tried), including the latest 4.1, causing frequent random issues. It would be good if podman had some built-in solution to the problem. It wouldn't be anything unusual, since there are other OS X-specific fixes available already.

ssbarnea commented 2 years ago

How about making podman machine start implicit? It seems linked to this subject.

alexanderankin commented 2 years ago

Devs are using Podman in our organisation.

Same here, and it would be nice to have even a couple of commands in this thread to run to resolve the issue and continue as before (I, for instance, am not an expert on timedatectl).

shanesmith commented 2 years ago

Might be a little ham-fisted, but whenever I notice the machine's date is out of sync I just set it to the same as the host with podman machine ssh date --set $(date -Iseconds)

alexanderankin commented 2 years ago

@shanesmith lifesaver! ty

ssbarnea commented 2 years ago

I ended up writing this script and including it in our build scripts, which run both locally and on CI. It attempts to restore podman: https://github.com/ansible/ansible-language-server/blob/main/tools/test-setup.sh#L128-L135

jeffam commented 2 years ago

Might be a little ham-fisted, but whenever I notice the machine's date is out of sync I just set it to the same as the host with podman machine ssh date --set $(date -Iseconds)

On macOS (Catalina) I had to modify the command a bit:

podman machine ssh sudo date --set $(date +'%Y-%m-%dT%H:%M:%S')

choval commented 2 years ago

Might be a little ham-fisted, but whenever I notice the machine's date is out of sync I just set it to the same as the host with podman machine ssh date --set $(date -Iseconds)

On macOS (Catalina) I had to modify the command a bit:

podman machine ssh sudo date --set $(date +'%Y-%m-%dT%H:%M:%S')

You could also run it as root directly, without sudo (Monterey & podman 4.1.1):

podman machine ssh --username root date --set $(date -Iseconds)
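Pulling these variants together: a sketch that builds the appropriate resync command for rootless vs rootful machines. build_resync_cmd is my name, and the printed string still has to be executed (e.g. via eval or copy-paste) so the date substitution happens at run time:

```shell
#!/usr/bin/env bash
# Sketch: pick between the sudo (rootless) and --username root (rootful)
# resync one-liners from this thread. The timestamp format matches the
# Catalina-friendly variant above; the $(...) is kept literal on purpose.

build_resync_cmd() {
  local rootful=$1
  local stamp='$(date +%Y-%m-%dT%H:%M:%S)'
  if [ "${rootful}" = "true" ]; then
    echo "podman machine ssh --username root date --set ${stamp}"
  else
    echo "podman machine ssh sudo date --set ${stamp}"
  fi
}

build_resync_cmd true
build_resync_cmd false
```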
benoitf commented 2 years ago

I'm also reproducing it with the current podman/podman machine on macOS 12.5, and I am connected to the internet.

machine is Rootful

  "if=virtio,file=/Users/benoitf/.local/share/containers/podman/machine/qemu/podman-machine-default_fedora-coreos-36.20220723.2.2-qemu.x86_64.qcow2"
 ],
 "Rootful": true,

Date on my computer: Wed Aug 10 12:06:43 CEST 2022; in the VM: Tue 2022-08-09 23:50:35 CEST

timedatectl output:

[root@localhost ~]# timedatectl
               Local time: Tue 2022-08-09 23:50:35 CEST
           Universal time: Tue 2022-08-09 21:50:35 UTC
                 RTC time: Wed 2022-08-10 10:06:32
                Time zone: Europe/Paris (CEST, +0200)
System clock synchronized: no
              NTP service: active
          RTC in local TZ: no

systemctl status chronyd:

● chronyd.service - NTP client/server
     Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2022-08-09 17:22:35 CEST; 6h ago
       Docs: man:chronyd(8)
             man:chrony.conf(5)
    Process: 794 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
   Main PID: 857 (chronyd)
      Tasks: 1 (limit: 2254)
     Memory: 1.6M
        CPU: 2.086s
     CGroup: /system.slice/chronyd.service
             └─857 /usr/sbin/chronyd -F 2

Aug 09 19:30:19 localhost.localdomain chronyd[857]: Selected source 37.187.104.44 (2.fedora.pool.ntp.org)
Aug 09 19:30:19 localhost.localdomain chronyd[857]: System clock wrong by 7129.523632 seconds
Aug 09 19:31:24 localhost.localdomain chronyd[857]: Can't synchronise: no selectable sources
Aug 09 19:42:10 localhost.localdomain chronyd[857]: Selected source 37.187.104.44 (2.fedora.pool.ntp.org)
Aug 09 19:42:10 localhost.localdomain chronyd[857]: System clock wrong by 6313.512300 seconds
Aug 09 19:44:24 localhost.localdomain chronyd[857]: Selected source 162.159.200.1 (2.fedora.pool.ntp.org)
Aug 09 21:09:30 localhost.localdomain chronyd[857]: Source 194.177.34.116 replaced with 51.195.120.107 (2.fedora.pool.ntp.org)
Aug 09 21:12:45 localhost.localdomain chronyd[857]: Can't synchronise: no majority
Aug 09 21:13:50 localhost.localdomain chronyd[857]: Selected source 51.195.120.107 (2.fedora.pool.ntp.org)
Aug 09 21:13:50 localhost.localdomain chronyd[857]: System clock wrong by 85.880138 seconds

sudo journalctl -b 0 -u chronyd:

Aug 09 17:22:34 localhost systemd[1]: Starting chronyd.service - NTP client/server...
Aug 09 17:22:35 localhost chronyd[857]: chronyd version 4.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Aug 09 17:22:35 localhost chronyd[857]: Frequency 9.495 +/- 23.975 ppm read from /var/lib/chrony/drift
Aug 09 17:22:35 localhost chronyd[857]: Using right/UTC timezone to obtain leap second data
Aug 09 17:22:35 localhost chronyd[857]: Loaded seccomp filter (level 2)
Aug 09 17:22:35 localhost systemd[1]: Started chronyd.service - NTP client/server.
Aug 09 17:22:41 localhost.localdomain chronyd[857]: Selected source 54.38.222.63 (2.fedora.pool.ntp.org)
Aug 09 17:22:41 localhost.localdomain chronyd[857]: System clock wrong by 1.368469 seconds
Aug 09 17:22:42 localhost.localdomain chronyd[857]: System clock was stepped by 1.368469 seconds
Aug 09 17:22:42 localhost.localdomain chronyd[857]: System clock TAI offset set to 37 seconds
Aug 09 17:47:32 localhost.localdomain chronyd[857]: Received KoD RATE from 195.154.220.89
Aug 09 18:13:31 localhost.localdomain chronyd[857]: Selected source 188.165.236.162 (2.fedora.pool.ntp.org)
Aug 09 18:13:31 localhost.localdomain chronyd[857]: Source 54.38.222.63 replaced with 162.159.200.1 (2.fedora.pool.ntp.org)
Aug 09 18:13:32 localhost.localdomain chronyd[857]: Can't synchronise: no selectable sources
Aug 09 18:16:45 localhost.localdomain chronyd[857]: Selected source 162.159.200.1 (2.fedora.pool.ntp.org)
Aug 09 18:16:45 localhost.localdomain chronyd[857]: System clock wrong by 1761.019063 seconds
Aug 09 18:18:55 localhost.localdomain chronyd[857]: Selected source 188.165.236.162 (2.fedora.pool.ntp.org)
Aug 09 18:18:56 localhost.localdomain chronyd[857]: Can't synchronise: no selectable sources
Aug 09 18:36:02 localhost.localdomain chronyd[857]: Selected source 5.196.181.37 (2.fedora.pool.ntp.org)
Aug 09 18:36:02 localhost.localdomain chronyd[857]: System clock wrong by 7387.229919 seconds
Aug 09 18:37:08 localhost.localdomain chronyd[857]: Can't synchronise: no selectable sources
Aug 09 18:43:36 localhost.localdomain chronyd[857]: Source 5.196.181.37 replaced with 194.177.34.116 (2.fedora.pool.ntp.org)
Aug 09 18:46:50 localhost.localdomain chronyd[857]: Selected source 194.177.34.116 (2.fedora.pool.ntp.org)
Aug 09 18:46:50 localhost.localdomain chronyd[857]: System clock wrong by 6227.637024 seconds
Aug 09 18:49:09 localhost.localdomain chronyd[857]: Can't synchronise: no selectable sources
Aug 09 19:14:03 localhost.localdomain chronyd[857]: Source 195.154.220.89 replaced with 37.187.104.44 (2.fedora.pool.ntp.org)
Aug 09 19:17:17 localhost.localdomain chronyd[857]: Selected source 37.187.104.44 (2.fedora.pool.ntp.org)
Aug 09 19:17:17 localhost.localdomain chronyd[857]: System clock wrong by 16793.065160 seconds
Aug 09 19:19:31 localhost.localdomain chronyd[857]: Can't synchronise: no selectable sources
Aug 09 19:30:19 localhost.localdomain chronyd[857]: Selected source 37.187.104.44 (2.fedora.pool.ntp.org)
Aug 09 19:30:19 localhost.localdomain chronyd[857]: System clock wrong by 7129.523632 seconds
Aug 09 19:31:24 localhost.localdomain chronyd[857]: Can't synchronise: no selectable sources
Aug 09 19:42:10 localhost.localdomain chronyd[857]: Selected source 37.187.104.44 (2.fedora.pool.ntp.org)
Aug 09 19:42:10 localhost.localdomain chronyd[857]: System clock wrong by 6313.512300 seconds
Aug 09 19:44:24 localhost.localdomain chronyd[857]: Selected source 162.159.200.1 (2.fedora.pool.ntp.org)
Aug 09 21:09:30 localhost.localdomain chronyd[857]: Source 194.177.34.116 replaced with 51.195.120.107 (2.fedora.pool.ntp.org)
Aug 09 21:12:45 localhost.localdomain chronyd[857]: Can't synchronise: no majority
Aug 09 21:13:50 localhost.localdomain chronyd[857]: Selected source 51.195.120.107 (2.fedora.pool.ntp.org)
Aug 09 21:13:50 localhost.localdomain chronyd[857]: System clock wrong by 85.880138 seconds

Pinging from the podman machine

[root@localhost ~]# ping 2.fedora.pool.ntp.org
PING 2.fedora.pool.ntp.org (5.196.160.139) 56(84) bytes of data.
64 bytes from 5.196.160.139 (5.196.160.139): icmp_seq=1 ttl=64 time=0.740 ms
64 bytes from 5.196.160.139 (5.196.160.139): icmp_seq=2 ttl=64 time=0.564 ms

b-thiswatch commented 2 years ago

Had the same problem. With the latest podman update, 4.2.0, I realised there is a message regarding firewall access. I hadn't seen that before and was now able to allow access. It seems to be working even after sleep now; at least that's what timedatectl indicates...

EDIT: nope, still stops randomly or after a longer sleep cycle...