Issue Description

I'm running MongoDB as a podman container and it exits about 3 minutes after it has started.

Update: this turned out to be an issue with a specific virtual machine when forcing AVX on a specific CPU virtualization. Nothing wrong with podman and/or the mongo image.

Steps to reproduce the issue

1. Run MongoDB as a container (tried versions 4.4.25 and 7.0.12)
2. Wait a couple of minutes
3. Inspect the logs; systemd shows that the container has been restarted
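For reference, a rootless setup matching the service name seen in the logs (unifi_mongodb.service) can be reproduced with a Quadlet unit; the image tag, unit description, and file location here are assumptions, not necessarily the reporter's actual configuration:

```ini
# ~/.config/containers/systemd/unifi_mongodb.container
# Quadlet user unit (podman >= 4.4); names and tag are examples
[Unit]
Description=MongoDB container

[Container]
Image=docker.io/library/mongo:7.0.12
ContainerName=unifi_mongodb

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, the unit shows up as unifi_mongodb.service, and `journalctl --user -u unifi_mongodb.service -f` shows the restarts described in the steps above.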
Describe the results you received
I can see this as the last line in the logs (before the container gets restarted):
conmon cc85f5451522c54da6b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/user.slice/user-1002.slice/user@1002.service/app.slice/unifi_mongodb.service/libpod-payload-cc85f5451522c54da6b2b0c9971d99f13f706814c1e2ef4abb8aa5d6c013816f/memory.events
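The warning is conmon failing to open the container cgroup's memory.events file. As a diagnostic aid, the equivalent file for your own cgroup can be inspected on the host to see whether OOM events are being recorded; a minimal sketch, assuming cgroup v2 (unified hierarchy):

```shell
# Resolve this shell's cgroup v2 path and dump its memory.events,
# which counts low/high/max/oom/oom_kill events for the cgroup
cg=$(awk -F: '$1=="0"{print $3}' /proc/self/cgroup)
cat "/sys/fs/cgroup${cg}/memory.events" 2>/dev/null \
  || echo "no memory.events (cgroup v1 or not accessible)"
```

A nonzero oom_kill counter in the container's own memory.events would point at a memory limit rather than a crash inside mongod.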
Describe the results you expected
MongoDB to run without being restarted
podman info output
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
openSUSE Tumbleweed, SELinux in enforcing mode
Additional information
The issue happens consistently on this machine, roughly 3 minutes after each container start.
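Since the root cause turned out to involve AVX under CPU virtualization, and MongoDB 5.0+ x86_64 builds require AVX, a quick sanity check on the guest is whether the virtualized CPU advertises the flag at all (note that a VM that mis-exposes AVX can still crash mongod even when the flag is present):

```shell
# Check whether the CPU seen by the guest advertises AVX
if grep -q -m1 avx /proc/cpuinfo 2>/dev/null; then
    echo "avx advertised"
else
    echo "avx not advertised"
fi
```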